[jira] [Created] (FLINK-34086) Native k8s session cannot specify port in nodeport mode

2024-01-15 Thread waywtdcc (Jira)
waywtdcc created FLINK-34086:


 Summary: Native k8s session cannot specify port in nodeport mode
 Key: FLINK-34086
 URL: https://issues.apache.org/jira/browse/FLINK-34086
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.18.0
Reporter: waywtdcc






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-22793) HybridSource Table Implementation

2023-12-25 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800414#comment-17800414
 ] 

waywtdcc commented on FLINK-22793:
--

How is this going?

> HybridSource Table Implementation
> -
>
> Key: FLINK-22793
> URL: https://issues.apache.org/jira/browse/FLINK-22793
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HybridSource
>Reporter: Nicholas Jiang
>Assignee: Ran Tao
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33579) Join sql error

2023-11-16 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787057#comment-17787057
 ] 

waywtdcc commented on FLINK-33579:
--

After I turned on 'table.optimizer.join-reorder-enabled' = 'true', the 
execution time was still longer than that of the manually reordered query 
below, which is much more efficient. This is the TPC-H statement:

select
    *
from
    orders,
    customer,
    supplier
where
    c_custkey = o_custkey and
    c_nationkey = s_nationkey
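
For context, the planner error in the quoted issue occurs because, with NestedLoopJoin disabled, the first join pairs orders with supplier, which share no equi-join condition, so no hash or sort-merge join can be generated. A hedged sketch of a rewrite that gives every join an equi-join key (table and column names are from the report; the rewrite is an illustration, not a verified fix):

```sql
-- Sketch: order the joins so each adjacent pair shares an equi-join key.
-- orders joins customer on c_custkey = o_custkey,
-- then customer joins supplier on c_nationkey = s_nationkey.
explain plan for
select *
from orders
join customer on c_custkey = o_custkey
join supplier on c_nationkey = s_nationkey;
```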

> Join sql error
> --
>
> Key: FLINK-33579
> URL: https://issues.apache.org/jira/browse/FLINK-33579
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.17.1
>Reporter: waywtdcc
>Priority: Major
>
>  
> {code:sql}
> set pipeline.operator-chaining=true;
> set execution.runtime-mode=BATCH;
> set table.exec.disabled-operators = NestedLoopJoin;
> explain plan for
> select
>     *
> from
>     orders,
>     supplier,
>     customer
> where
>     c_custkey = o_custkey and
>     c_nationkey = s_nationkey
> {code}
>  
>  
>  
> error:
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
>  
> FlinkLogicalJoin(condition=[AND(=($21, $2), =($24, $15))], joinType=[inner])
> :- FlinkLogicalJoin(condition=[true], joinType=[inner])
> :  :- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, orders]], 
> fields=[uuid, o_orderkey, o_custkey, o_orderstatus, o_totalprice, 
> o_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment, ts])
> :  +- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, 
> supplier]], fields=[uuid, s_suppkey, s_name, s_address, s_nationkey, s_phone, 
> s_acctbal, s_comment, ts])
> +- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, customer]], 
> fields=[uuid, c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, 
> c_mktsegment, c_comment, ts])
>  
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL 
> features.
>  
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
> at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
> at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
> at scala.collection.Iterator.foreach(Iterator.scala:937)
> at scala.collection.Iterator.foreach$(Iterator.scala:937)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
> at scala.collection.IterableLike.foreach(IterableLike.scala:70)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
> at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
> at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
> at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:93)
> at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
> at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:45)
> at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:45)
> at scala.collection.immutable.List.foreach(List.scala:388)
> at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:45)
> at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:329)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.getExplainGraphs(PlannerBase.scala:541)
> at 
> org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:115)
> at 
> org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:47)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.explainInternal(TableEnvironmentImpl.java:620)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentInternal.explainInternal(TableEnvironmentInternal.java:96)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1296)
> at 

[jira] [Updated] (FLINK-33579) Join sql error

2023-11-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-33579:
-
Description: 
 
{code:sql}
set pipeline.operator-chaining=true;
set execution.runtime-mode=BATCH;
set table.exec.disabled-operators = NestedLoopJoin;
explain plan for
select
    *
from
    orders,
    supplier,
    customer
where
    c_custkey = o_custkey and
    c_nationkey = s_nationkey
{code}
 

 

 

error:
{code:java}
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 
 
FlinkLogicalJoin(condition=[AND(=($21, $2), =($24, $15))], joinType=[inner])
:- FlinkLogicalJoin(condition=[true], joinType=[inner])
:  :- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, orders]], 
fields=[uuid, o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, 
o_orderpriority, o_clerk, o_shippriority, o_comment, ts])
:  +- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, supplier]], 
fields=[uuid, s_suppkey, s_name, s_address, s_nationkey, s_phone, s_acctbal, 
s_comment, ts])
+- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, customer]], 
fields=[uuid, c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, 
c_mktsegment, c_comment, ts])
 
This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
 
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:93)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:45)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:45)
at scala.collection.immutable.List.foreach(List.scala:388)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:45)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:329)
at 
org.apache.flink.table.planner.delegation.PlannerBase.getExplainGraphs(PlannerBase.scala:541)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:115)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:47)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.explainInternal(TableEnvironmentImpl.java:620)
at 
org.apache.flink.table.api.internal.TableEnvironmentInternal.explainInternal(TableEnvironmentInternal.java:96)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1296)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:658)
at 
org.grg_banking.flink.sqlexecute.FlinkUtils.exeucteSqlFile2(FlinkUtils.java:262)
at org.apache.flink.catalog.test.TestCatalog.testBatchDev(TestCatalog.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 

[jira] [Updated] (FLINK-33579) Join sql error

2023-11-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-33579:
-
Description: 
explain plan for
select
    *
from
    orders,
    supplier,
    customer
where
    c_custkey = o_custkey and
    c_nationkey = s_nationkey

error:

```
 
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 
 
FlinkLogicalJoin(condition=[AND(=($21, $2), =($24, $15))], joinType=[inner])
:- FlinkLogicalJoin(condition=[true], joinType=[inner])
:  :- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, orders]], 
fields=[uuid, o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, 
o_orderpriority, o_clerk, o_shippriority, o_comment, ts])
:  +- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, supplier]], 
fields=[uuid, s_suppkey, s_name, s_address, s_nationkey, s_phone, s_acctbal, 
s_comment, ts])
+- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, customer]], 
fields=[uuid, c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, 
c_mktsegment, c_comment, ts])
 
This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
 
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:93)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:45)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:45)
at scala.collection.immutable.List.foreach(List.scala:388)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:45)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:329)
at 
org.apache.flink.table.planner.delegation.PlannerBase.getExplainGraphs(PlannerBase.scala:541)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:115)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:47)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.explainInternal(TableEnvironmentImpl.java:620)
at 
org.apache.flink.table.api.internal.TableEnvironmentInternal.explainInternal(TableEnvironmentInternal.java:96)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1296)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:658)
at 
org.grg_banking.flink.sqlexecute.FlinkUtils.exeucteSqlFile2(FlinkUtils.java:262)
at org.apache.flink.catalog.test.TestCatalog.testBatchDev(TestCatalog.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 

[jira] [Created] (FLINK-33579) Join sql error

2023-11-16 Thread waywtdcc (Jira)
waywtdcc created FLINK-33579:


 Summary: Join sql error
 Key: FLINK-33579
 URL: https://issues.apache.org/jira/browse/FLINK-33579
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.17.1
Reporter: waywtdcc


explain plan for
select
    *
from
    orders,
    supplier,
    customer
where
    c_custkey = o_custkey and
    c_nationkey = s_nationkey

error:

```
 
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 
 
FlinkLogicalJoin(condition=[AND(=($21, $2), =($24, $15))], joinType=[inner])
:- FlinkLogicalJoin(condition=[true], joinType=[inner])
:  :- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, orders]], 
fields=[uuid, o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, 
o_orderpriority, o_clerk, o_shippriority, o_comment, ts])
:  +- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, supplier]], 
fields=[uuid, s_suppkey, s_name, s_address, s_nationkey, s_phone, s_acctbal, 
s_comment, ts])
+- FlinkLogicalTableSourceScan(table=[[paimon, tpch100g_paimon, customer]], 
fields=[uuid, c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, 
c_mktsegment, c_comment, ts])
 
This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
 
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:93)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:45)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:45)
at scala.collection.immutable.List.foreach(List.scala:388)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:45)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:329)
at 
org.apache.flink.table.planner.delegation.PlannerBase.getExplainGraphs(PlannerBase.scala:541)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:115)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.explain(BatchPlanner.scala:47)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.explainInternal(TableEnvironmentImpl.java:620)
at 
org.apache.flink.table.api.internal.TableEnvironmentInternal.explainInternal(TableEnvironmentInternal.java:96)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1296)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:658)
at 
org.grg_banking.flink.sqlexecute.FlinkUtils.exeucteSqlFile2(FlinkUtils.java:262)
at org.apache.flink.catalog.test.TestCatalog.testBatchDev(TestCatalog.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 

[jira] [Updated] (FLINK-33069) Mysql and Postgre catalog support url extra parameters

2023-09-10 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-33069:
-
External issue URL:   (was: 
https://github.com/apache/flink-connector-jdbc/pull/74)

> Mysql and Postgre catalog support url extra parameters
> --
>
> Key: FLINK-33069
> URL: https://issues.apache.org/jira/browse/FLINK-33069
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: waywtdcc
>Priority: Major
>
>  
>  
> Mysql and Postgres catalog support url extra parameters
> CREATE CATALOG mymysql WITH(
> 'type' = 'jdbc',
> 'username' = 'root',
> 'password' = 'xxx',
> 'base-url' = 'jdbc:mysql://xxx:53309',
> 'extra-url-param' = '?characterEncoding=utf8'
> );
> If used in this way, the URLs of all tables obtained from this catalog are: 
> jdbc:mysql://xxx:53309?characterEncoding=utf8



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33069) Mysql and Postgre catalog support url extra parameters

2023-09-10 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-33069:
-
External issue URL: https://github.com/apache/flink-connector-jdbc/pull/74

> Mysql and Postgre catalog support url extra parameters
> --
>
> Key: FLINK-33069
> URL: https://issues.apache.org/jira/browse/FLINK-33069
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: waywtdcc
>Priority: Major
>
>  
>  
> Mysql and Postgres catalog support url extra parameters
> CREATE CATALOG mymysql WITH(
> 'type' = 'jdbc',
> 'username' = 'root',
> 'password' = 'xxx',
> 'base-url' = 'jdbc:mysql://xxx:53309',
> 'extra-url-param' = '?characterEncoding=utf8'
> );
> If used in this way, the URLs of all tables obtained from this catalog are: 
> jdbc:mysql://xxx:53309?characterEncoding=utf8



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33069) Mysql and Postgre catalog support url extra parameters

2023-09-10 Thread waywtdcc (Jira)
waywtdcc created FLINK-33069:


 Summary: Mysql and Postgre catalog support url extra parameters
 Key: FLINK-33069
 URL: https://issues.apache.org/jira/browse/FLINK-33069
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / JDBC
Reporter: waywtdcc


 

 

Support extra URL parameters in the MySQL and Postgres catalogs:

CREATE CATALOG mymysql WITH(
'type' = 'jdbc',
'username' = 'root',
'password' = 'xxx',
'base-url' = 'jdbc:mysql://xxx:53309',
'extra-url-param' = '?characterEncoding=utf8'
);

With this configuration, the URL of every table obtained from this catalog 
becomes: jdbc:mysql://xxx:53309?characterEncoding=utf8
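
The behavior proposed above amounts to appending the 'extra-url-param' value to 'base-url' when composing table URLs. A minimal sketch of that composition (the helper name `compose_jdbc_url` is hypothetical, not part of flink-connector-jdbc):

```python
def compose_jdbc_url(base_url: str, extra_url_param: str = "") -> str:
    # Hypothetical helper: append the proposed 'extra-url-param' value to
    # 'base-url', reproducing the URL shown in the issue description.
    return base_url + extra_url_param

print(compose_jdbc_url("jdbc:mysql://xxx:53309", "?characterEncoding=utf8"))
# jdbc:mysql://xxx:53309?characterEncoding=utf8
```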



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29138) Project pushdown not work for lookup source

2023-07-04 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739831#comment-17739831
 ] 

waywtdcc commented on FLINK-29138:
--

Hello, can this PR be merged into 1.14.5? Which PR does it still depend on? 
[~lzljs3620320] 

 

> Project pushdown not work for lookup source
> ---
>
> Key: FLINK-29138
> URL: https://issues.apache.org/jira/browse/FLINK-29138
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.14.6, 1.15.3
>
> Attachments: image-2022-08-30-20-33-24-105.png
>
>
> Current tests: LookupJoinTest#testJoinTemporalTableWithProjectionPushDown
> {code:java}
> @Test
> def testJoinTemporalTableWithProjectionPushDown(): Unit = {
>   val sql =
>     """
>       |SELECT T.*, D.id
>       |FROM MyTable AS T
>       |JOIN LookupTable FOR SYSTEM_TIME AS OF T.proctime AS D
>       |ON T.a = D.id
>     """.stripMargin
>   util.verifyExecPlan(sql)
> }
> {code}
> the optimized plan doesn't print the selected columns from lookup source, but 
> actually it didn't push the project into lookup source (still select all 
> columns from source), this is not as expected
> {code:java}
> 
> 
> 
> {code}
>  
> incorrect intermediate optimization result
> {code:java}
> =  logical_rewrite 
>  optimize result: 
> FlinkLogicalJoin(condition=[=($0, $5)], joinType=[inner])
> :- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, 
> MyTable]], fields=[a, b, c, proctime, rowtime])
> +- FlinkLogicalSnapshot(period=[$cor0.proctime])
>    +- FlinkLogicalCalc(select=[id])
>       +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, LookupTable]], fields=[id, name, age])
> =  time_indicator 
>  optimize result: 
> FlinkLogicalCalc(select=[a, b, c, PROCTIME_MATERIALIZE(proctime) AS proctime, 
> rowtime, id])
> +- FlinkLogicalJoin(condition=[=($0, $5)], joinType=[inner])
>    :- FlinkLogicalDataStreamTableScan(table=[[default_catalog, 
> default_database, MyTable]], fields=[a, b, c, proctime, rowtime])
>    +- FlinkLogicalSnapshot(period=[$cor0.proctime])
>       +- FlinkLogicalCalc(select=[id])
>          +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, LookupTable]], fields=[id, name, age])
> {code}
>  
> plan comparison after fix
> !image-2022-08-30-20-33-24-105.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31115) Support a task to specify multiple slots

2023-03-02 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1769#comment-1769
 ] 

waywtdcc commented on FLINK-31115:
--

[~Weijie Guo]

FLINK-31267

> Support a task to specify multiple slots
> 
>
> Key: FLINK-31115
> URL: https://issues.apache.org/jira/browse/FLINK-31115
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Support specifying multiple slots for one task.
> Different tasks require different CPU cores and memory per slot, similar to 
> Spark's spark.task.cpus parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31115) Support a task to specify multiple slots

2023-03-02 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695554#comment-17695554
 ] 

waywtdcc commented on FLINK-31115:
--

[~Weijie Guo]  Yes, I think so too; I have raised a new issue.

> Support a task to specify multiple slots
> 
>
> Key: FLINK-31115
> URL: https://issues.apache.org/jira/browse/FLINK-31115
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Support specifying multiple slots for one task.
> Different tasks require different CPU cores and memory per slot, similar to 
> Spark's spark.task.cpus parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31267) Fine-Grained Resource Management supports table and sql levels

2023-02-28 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-31267:
-
Fix Version/s: 1.18.0

> Fine-Grained Resource Management supports table and sql levels
> --
>
> Key: FLINK-31267
> URL: https://issues.apache.org/jira/browse/FLINK-31267
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Fine-Grained Resource Management should support the Table and SQL levels. 
> Currently, fine-grained resources can only be used at the DataStream API 
> level; Table- and SQL-level settings are not supported.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31267) Fine-Grained Resource Management supports table and sql levels

2023-02-28 Thread waywtdcc (Jira)
waywtdcc created FLINK-31267:


 Summary: Fine-Grained Resource Management supports table and sql 
levels
 Key: FLINK-31267
 URL: https://issues.apache.org/jira/browse/FLINK-31267
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Affects Versions: 1.16.1
Reporter: waywtdcc


Fine-Grained Resource Management should support the Table and SQL levels. 
Currently, fine-grained resources can only be used at the DataStream API 
level; Table- and SQL-level settings are not supported.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31115) Support a task to specify multiple slots

2023-02-19 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690993#comment-17690993
 ] 

waywtdcc commented on FLINK-31115:
--

Fine-Grained Resource Management is not enough here: it works at the task 
level through the API, whereas my request is at the job level and for SQL.

> Support a task to specify multiple slots
> 
>
> Key: FLINK-31115
> URL: https://issues.apache.org/jira/browse/FLINK-31115
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Support specifying multiple slots for one task.
> Different tasks require different CPU cores and memory per slot, similar to 
> Spark's spark.task.cpus parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-02-19 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690992#comment-17690992
 ] 

waywtdcc commented on FLINK-31116:
--

This is used for real-time synchronization in YARN session mode. Each table 
synchronization is its own job, and the memory each job needs varies widely, 
but I cannot set job-level resources.

> Support taskmanager related parameters in session mode Support job 
> granularity setting
> --
>
> Key: FLINK-31116
> URL: https://issues.apache.org/jira/browse/FLINK-31116
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Support setting TaskManager-related parameters at job granularity in session 
> mode.
> For example, if a YARN session is started with 
> taskmanager.numberOfTaskSlots=2, most jobs can run with that configuration, 
> but occasionally, when submitting job2, I want the TaskManager to use 
> taskmanager.numberOfTaskSlots=1.





[jira] [Commented] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-02-19 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690991#comment-17690991
 ] 

waywtdcc commented on FLINK-31116:
--

Fine-Grained Resource Management is not enough: it works at the task level and 
only through the API, while what I need is at the job level and through SQL.

> Support taskmanager related parameters in session mode Support job 
> granularity setting
> --
>
> Key: FLINK-31116
> URL: https://issues.apache.org/jira/browse/FLINK-31116
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> In session mode, taskmanager related parameters are supported and job 
> granularity settings are supported.
> If the yarn session is submitted, taskmanager.numberOfTaskSlots is set
> =2, most jobs can be configured according to this. But occasionally when 
> submitting job2, I want taskmanager to be set to 
> taskmanager.numberOfTaskSlots=1





[jira] [Updated] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-31116:
-
Component/s: Runtime / Task

> Support taskmanager related parameters in session mode Support job 
> granularity setting
> --
>
> Key: FLINK-31116
> URL: https://issues.apache.org/jira/browse/FLINK-31116
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> In session mode, taskmanager related parameters are supported and job 
> granularity settings are supported.
> If the yarn session is submitted, taskmanager.numberOfTaskSlots is set
> =2, most jobs can be configured according to this. But occasionally when 
> submitting job2, I want taskmanager to be set to 
> taskmanager.numberOfTaskSlots=1





[jira] [Created] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-02-16 Thread waywtdcc (Jira)
waywtdcc created FLINK-31116:


 Summary: Support taskmanager related parameters in session mode 
Support job granularity setting
 Key: FLINK-31116
 URL: https://issues.apache.org/jira/browse/FLINK-31116
 Project: Flink
  Issue Type: New Feature
Reporter: waywtdcc


In session mode, taskmanager related parameters are supported and job 
granularity settings are supported.
If the yarn session is submitted, taskmanager.numberOfTaskSlots is set
=2, most jobs can be configured according to this. But occasionally when 
submitting job2, I want taskmanager to be set to taskmanager.numberOfTaskSlots=1





[jira] [Updated] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-31116:
-
Affects Version/s: 1.16.1

> Support taskmanager related parameters in session mode Support job 
> granularity setting
> --
>
> Key: FLINK-31116
> URL: https://issues.apache.org/jira/browse/FLINK-31116
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> In session mode, taskmanager related parameters are supported and job 
> granularity settings are supported.
> If the yarn session is submitted, taskmanager.numberOfTaskSlots is set
> =2, most jobs can be configured according to this. But occasionally when 
> submitting job2, I want taskmanager to be set to 
> taskmanager.numberOfTaskSlots=1





[jira] [Updated] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-31116:
-
Fix Version/s: 1.18.0

> Support taskmanager related parameters in session mode Support job 
> granularity setting
> --
>
> Key: FLINK-31116
> URL: https://issues.apache.org/jira/browse/FLINK-31116
> Project: Flink
>  Issue Type: New Feature
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> In session mode, taskmanager related parameters are supported and job 
> granularity settings are supported.
> If the yarn session is submitted, taskmanager.numberOfTaskSlots is set
> =2, most jobs can be configured according to this. But occasionally when 
> submitting job2, I want taskmanager to be set to 
> taskmanager.numberOfTaskSlots=1





[jira] [Updated] (FLINK-31115) Support a task to specify multiple slots

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-31115:
-
Component/s: Runtime / Task

> Support a task to specify multiple slots
> 
>
> Key: FLINK-31115
> URL: https://issues.apache.org/jira/browse/FLINK-31115
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Supports specifying multiple slots for one task.
> Different tasks require different slot cpu cores and memory. Like the 
> spark.task.cpus parameter of the spark engine.





[jira] [Updated] (FLINK-31115) Support a task to specify multiple slots

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-31115:
-
Issue Type: Improvement  (was: Bug)

> Support a task to specify multiple slots
> 
>
> Key: FLINK-31115
> URL: https://issues.apache.org/jira/browse/FLINK-31115
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.18.0
>
>
> Supports specifying multiple slots for one task.
> Different tasks require different slot cpu cores and memory. Like the 
> spark.task.cpus parameter of the spark engine.





[jira] [Created] (FLINK-31115) Support a task to specify multiple slots

2023-02-16 Thread waywtdcc (Jira)
waywtdcc created FLINK-31115:


 Summary: Support a task to specify multiple slots
 Key: FLINK-31115
 URL: https://issues.apache.org/jira/browse/FLINK-31115
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.16.1
Reporter: waywtdcc
 Fix For: 1.18.0


Supports specifying multiple slots for one task.
Different tasks require different slot cpu cores and memory. Like the 
spark.task.cpus parameter of the spark engine.





[jira] (FLINK-28985) support create table like view

2023-02-16 Thread waywtdcc (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28985 ]


waywtdcc deleted comment on FLINK-28985:
--

was (Author: waywtdcc):
hi [~jark]  [~martijnvisser] 

 

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> At present, to create a table based on table like, you can only use the table 
> type table, not the view type.
>  
> create table  like ;
> Only like table type can be used before. This is similar to create table as < 
> querysql >, but some scenarios use views more flexibly and can reuse a single 
> view in multiple places.





[jira] [Commented] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690115#comment-17690115
 ] 

waywtdcc commented on FLINK-29699:
--

I have already edited it

> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.16.1
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> When I use flink cdc to read oracle, the time type data returned by cdc is a 
> long type timestamp. I want to convert it to timestamp type, but it is not 
> supported.
>  
> 1. JSON parsing supports converting long timestamps into flink timestamp 
> types, for example, supporting JSON parsing of 166625530 numbers into 
> timestamp
> 2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
> into flink timestamp type, for example, it supports 
> 1990-10-14T12:12:43.123456789Z into timestamp type
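The two conversions requested above — an epoch number and an ISO-8601 string ending in Z — can be illustrated outside Flink. The following is a minimal Python sketch of the desired parsing behaviour, not the JSON format's actual implementation; the function name and the millisecond assumption for long values are mine:

```python
from datetime import datetime, timezone

def parse_json_timestamp(value):
    """Convert a JSON value to a UTC datetime.

    Accepts either an epoch timestamp in milliseconds (int) or an
    ISO-8601 string ending in 'Z' — the two cases the issue asks the
    Flink JSON format to support. Illustrative only.
    """
    if isinstance(value, int):
        # Long timestamps from CDC sources are assumed to be epoch milliseconds.
        return datetime.fromtimestamp(value / 1000, tz=timezone.utc)
    if isinstance(value, str) and value.endswith("Z"):
        # datetime.fromisoformat() rejects the 'Z' suffix on older Pythons,
        # so replace it with an explicit UTC offset first.
        return datetime.fromisoformat(value.replace("Z", "+00:00"))
    raise ValueError(f"unsupported timestamp value: {value!r}")

# Both input shapes yield a timezone-aware UTC datetime.
ts_from_long = parse_json_timestamp(1666255300000)
ts_from_string = parse_json_timestamp("2022-10-19T19:38:43Z")
```

Note that nanosecond-precision strings such as 1990-10-14T12:12:43.123456789Z would need truncation to microseconds before `fromisoformat` accepts them.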





[jira] [Updated] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Affects Version/s: 1.16.1

> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.16.1
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> When I use flink cdc to read oracle, the time type data returned by cdc is a 
> long type timestamp. I want to convert it to timestamp type, but it is not 
> supported.
>  
> 1. JSON parsing supports converting long timestamps into flink timestamp 
> types, for example, supporting JSON parsing of 166625530 numbers into 
> timestamp
> 2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
> into flink timestamp type, for example, it supports 
> 1990-10-14T12:12:43.123456789Z into timestamp type





[jira] [Updated] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: 
When I use flink cdc to read oracle, the time type data returned by cdc is a 
long type timestamp. I want to convert it to timestamp type, but it is not 
supported.

 

1. JSON parsing supports converting long timestamps into flink timestamp types, 
for example, supporting JSON parsing of 166625530 numbers into timestamp
2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
into flink timestamp type, for example, it supports 
1990-10-14T12:12:43.123456789Z into timestamp type

  was:
1. JSON parsing supports converting long timestamps into flink timestamp types, 
for example, supporting JSON parsing of 166625530 numbers into timestamp
2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
into flink timestamp type, for example, it supports 
1990-10-14T12:12:43.123456789Z into timestamp type


> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
>
> When I use flink cdc to read oracle, the time type data returned by cdc is a 
> long type timestamp. I want to convert it to timestamp type, but it is not 
> supported.
>  
> 1. JSON parsing supports converting long timestamps into flink timestamp 
> types, for example, supporting JSON parsing of 166625530 numbers into 
> timestamp
> 2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
> into flink timestamp type, for example, it supports 
> 1990-10-14T12:12:43.123456789Z into timestamp type





[jira] [Updated] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Fix Version/s: 1.18.0

> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> When I use flink cdc to read oracle, the time type data returned by cdc is a 
> long type timestamp. I want to convert it to timestamp type, but it is not 
> supported.
>  
> 1. JSON parsing supports converting long timestamps into flink timestamp 
> types, for example, supporting JSON parsing of 166625530 numbers into 
> timestamp
> 2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
> into flink timestamp type, for example, it supports 
> 1990-10-14T12:12:43.123456789Z into timestamp type





[jira] [Updated] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: 
1. JSON parsing supports converting long timestamps into flink timestamp types, 
for example, supporting JSON parsing of 166625530 numbers into timestamp
2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
into flink timestamp type, for example, it supports 
1990-10-14T12:12:43.123456789Z into timestamp type

  was:
1. JSON parsing supports converting long timestamps into flink timestamp types, 
for example, supporting JSON parsing of 166625530 numbers into timestamps
2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
into flink timestamp type, for example, it supports 
1990-10-14T12:12:43.123456789Z into timestamp type


> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
>
> 1. JSON parsing supports converting long timestamps into flink timestamp 
> types, for example, supporting JSON parsing of 166625530 numbers into 
> timestamp
> 2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
> into flink timestamp type, for example, it supports 
> 1990-10-14T12:12:43.123456789Z into timestamp type





[jira] [Updated] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: 
1. JSON parsing supports converting long timestamps into flink timestamp types, 
for example, supporting JSON parsing of 166625530 numbers into timestamps
2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
into flink timestamp type, for example, it supports 
1990-10-14T12:12:43.123456789Z into timestamp type

  was:
Debezium format parsing supports converting strings at the end with Z and 
numbers  to timestamp

 

1. Previously, debezium could not parse the long timestamp to timestamp type. 
For example, 166625530.
2. The time format string with Z suffix cannot be parsed to timestamp type, 
such as 2022-10-19T19:38:43Z format


> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
>
> 1. JSON parsing supports converting long timestamps into flink timestamp 
> types, for example, supporting JSON parsing of 166625530 numbers into 
> timestamps
> 2. JSON analysis supports the conversion of WITH_LOCAL_TIMEZONE string data 
> into flink timestamp type, for example, it supports 
> 1990-10-14T12:12:43.123456789Z into timestamp type





[jira] [Updated] (FLINK-29699) Json format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Summary: Json format parsing supports converting strings  at the end with Z 
and numbers to timestamp  (was: Debezium format parsing supports converting 
strings  at the end with Z and numbers to timestamp)

> Json format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
>
> Debezium format parsing supports converting strings at the end with Z and 
> numbers  to timestamp
>  
> 1. Previously, debezium could not parse the long timestamp to timestamp type. 
> For example, 166625530.
> 2. The time format string with Z suffix cannot be parsed to timestamp type, 
> such as 2022-10-19T19:38:43Z format





[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings at the end with Z and numbers to timestamp

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Issue Type: Improvement  (was: New Feature)

> Debezium format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
>
> Debezium format parsing supports converting strings at the end with Z and 
> numbers  to timestamp
>  
> 1. Previously, debezium could not parse the long timestamp to timestamp type. 
> For example, 166625530.
> 2. The time format string with Z suffix cannot be parsed to timestamp type, 
> such as 2022-10-19T19:38:43Z format





[jira] [Commented] (FLINK-28985) support create table like view

2023-02-16 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690076#comment-17690076
 ] 

waywtdcc commented on FLINK-28985:
--

hi [~jark]  [~martijnvisser] 

 

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> At present, to create a table based on table like, you can only use the table 
> type table, not the view type.
>  
> create table  like ;
> Only like table type can be used before. This is similar to create table as < 
> querysql >, but some scenarios use views more flexibly and can reuse a single 
> view in multiple places.





[jira] [Updated] (FLINK-28985) support create table like view

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Issue Type: Improvement  (was: New Feature)

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> At present, to create a table based on table like, you can only use the table 
> type table, not the view type.
>  
> create table  like ;
> Only like table type can be used before. This is similar to create table as < 
> querysql >, but some scenarios use views more flexibly and can reuse a single 
> view in multiple places.





[jira] [Updated] (FLINK-28985) support create table like view

2023-02-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Fix Version/s: 1.18.0
   (was: 1.17.0)

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> At present, to create a table based on table like, you can only use the table 
> type table, not the view type.
>  
> create table  like ;
> Only like table type can be used before. This is similar to create table as < 
> querysql >, but some scenarios use views more flexibly and can reuse a single 
> view in multiple places.





[jira] [Updated] (FLINK-29912) jdbc scan.partition.column can specify any type of field

2022-11-07 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29912:
-
Description: scan.partition. column can specify any type of field.  At 
present, scan.partition. column must be a numeric, date, or timestamp column 
from the table in question.  You can specify any type of field, such as string 
type, which can satisfy all high concurrent read scenarios  (was: 
scan.partition.column can specify any type of field. At present, 
scan.partition.column must be a numeric, date, or timestamp column from the 
table in question. You can specify any type of field, which can satisfy all 
high concurrent read scenarios.)

> jdbc scan.partition.column can specify any type of field
> 
>
> Key: FLINK-29912
> URL: https://issues.apache.org/jira/browse/FLINK-29912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> scan.partition. column can specify any type of field.  At present, 
> scan.partition. column must be a numeric, date, or timestamp column from the 
> table in question.  You can specify any type of field, such as string type, 
> which can satisfy all high concurrent read scenarios
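The numeric/date/timestamp restriction exists because the connector splits the configured [lower-bound, upper-bound] interval into per-subtask range predicates. A Python sketch of that splitting, plus one possible way to generalize to arbitrary column types by hashing on the database side (the direction this issue suggests) — function names are illustrative and `HASH` stands in for a database-specific hash function, not a standard SQL one:

```python
def numeric_partition_predicates(column, lower, upper, num_partitions):
    """Mimic how a partitioned JDBC scan splits a numeric column's
    value range into per-subtask WHERE clauses (illustrative only)."""
    step = (upper - lower + 1) // num_partitions
    preds = []
    for i in range(num_partitions):
        start = lower + i * step
        # The last partition absorbs any remainder so the full range is covered.
        end = upper if i == num_partitions - 1 else start + step - 1
        preds.append(f"{column} BETWEEN {start} AND {end}")
    return preds

def hashed_partition_predicates(column, num_partitions):
    """One conceivable way to partition on a column of any type (e.g. a
    string): hash it in the database. 'HASH' is a placeholder for a
    vendor-specific function such as MySQL's CRC32 or Oracle's ORA_HASH."""
    return [
        f"MOD(ABS(HASH({column})), {num_partitions}) = {i}"
        for i in range(num_partitions)
    ]

ranges = numeric_partition_predicates("id", 1, 100, 4)
hashed = hashed_partition_predicates("name", 4)
```

The hashed variant trades index-friendly range scans for type independence, which matches the "any type of field" goal at the cost of full scans on most databases.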





[jira] [Updated] (FLINK-29912) jdbc scan.partition.column can specify any type of field

2022-11-06 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29912:
-
Description: scan.partition.column can specify any type of field. At 
present, scan.partition.column must be a numeric, date, or timestamp column 
from the table in question. You can specify any type of field, which can 
satisfy all high concurrent read scenarios.  (was: scan.partition.column can 
specify any type of field. At present, scan. partition Column must be a 
numeric, date, or timestamp column from the table in question. You can specify 
any type of field.)

> jdbc scan.partition.column can specify any type of field
> 
>
> Key: FLINK-29912
> URL: https://issues.apache.org/jira/browse/FLINK-29912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> scan.partition.column can specify any type of field. At present, 
> scan.partition.column must be a numeric, date, or timestamp column from the 
> table in question. You can specify any type of field, which can satisfy all 
> high concurrent read scenarios.





[jira] [Updated] (FLINK-29912) jdbcscan.partition.column can specify any type of field

2022-11-06 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29912:
-
Summary: jdbcscan.partition.column can specify any type of field  (was: 
scan.partition.column can specify any type of field)

> jdbcscan.partition.column can specify any type of field
> ---
>
> Key: FLINK-29912
> URL: https://issues.apache.org/jira/browse/FLINK-29912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> scan.partition.column can specify any type of field. At present, scan. 
> partition Column must be a numeric, date, or timestamp column from the table 
> in question. You can specify any type of field.





[jira] [Updated] (FLINK-29912) scan.partition.column can specify any type of field

2022-11-06 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29912:
-
Summary: scan.partition.column can specify any type of field  (was: 
scan.partition. column can specify any type of field)

> scan.partition.column can specify any type of field
> ---
>
> Key: FLINK-29912
> URL: https://issues.apache.org/jira/browse/FLINK-29912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> scan.partition.column can specify any type of field. At present, scan. 
> partition Column must be a numeric, date, or timestamp column from the table 
> in question. You can specify any type of field.





[jira] [Updated] (FLINK-29912) jdbc scan.partition.column can specify any type of field

2022-11-06 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29912:
-
Summary: jdbc scan.partition.column can specify any type of field  (was: 
jdbcscan.partition.column can specify any type of field)

> jdbc scan.partition.column can specify any type of field
> 
>
> Key: FLINK-29912
> URL: https://issues.apache.org/jira/browse/FLINK-29912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> scan.partition.column can specify any type of field. At present, scan. 
> partition Column must be a numeric, date, or timestamp column from the table 
> in question. You can specify any type of field.





[jira] [Updated] (FLINK-29912) scan.partition. column can specify any type of field

2022-11-06 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29912:
-
Description: scan.partition.column can specify any type of field. At 
present, scan. partition Column must be a numeric, date, or timestamp column 
from the table in question. You can specify any type of field.  (was: 
scan.partition. Column can specify any type of field. At present, scan. 
partition Column must be a numeric, date, or timestamp column from the table in 
question. You can specify any type of field.)

> scan.partition. column can specify any type of field
> 
>
> Key: FLINK-29912
> URL: https://issues.apache.org/jira/browse/FLINK-29912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> scan.partition.column can specify any type of field. At present, scan. 
> partition Column must be a numeric, date, or timestamp column from the table 
> in question. You can specify any type of field.





[jira] [Created] (FLINK-29912) scan.partition. column can specify any type of field

2022-11-06 Thread waywtdcc (Jira)
waywtdcc created FLINK-29912:


 Summary: scan.partition. column can specify any type of field
 Key: FLINK-29912
 URL: https://issues.apache.org/jira/browse/FLINK-29912
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / JDBC
Affects Versions: 1.16.0
Reporter: waywtdcc
 Fix For: 1.17.0


scan.partition. Column can specify any type of field. At present, scan. 
partition Column must be a numeric, date, or timestamp column from the table in 
question. You can specify any type of field.





[jira] [Updated] (FLINK-29858) Jdbc reading supports setting multiple queryTemplates

2022-11-06 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29858:
-
Labels: jdbc_connector  (was: )

> Jdbc reading supports setting multiple queryTemplates
> -
>
> Key: FLINK-29858
> URL: https://issues.apache.org/jira/browse/FLINK-29858
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>  Labels: jdbc_connector
> Fix For: 1.17.0
>
> Attachments: image-2022-11-03-13-53-12-593.png
>
>
> Jdbc reading supports setting multiple queryTemplates. Currently, jdbc 
> reading only supports one query template (the queryTemplate field in 
> JdbcRowDataInputFormat), which is sometimes not enough. You may need both 
> select * from table where col1 >= ? and col1 < ? and select * from table 
> where col1 >= ? and col1 <= ?, so both templates should be usable.
> !image-2022-11-03-13-53-12-593.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
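The two range templates described above can be written out for a concrete split. A sketch over four partitions of col1 in [0, 100] (table name and bounds are illustrative):

```sql
-- The first partitions use an exclusive upper bound...
SELECT * FROM t WHERE col1 >= 0  AND col1 < 25;
SELECT * FROM t WHERE col1 >= 25 AND col1 < 50;
SELECT * FROM t WHERE col1 >= 50 AND col1 < 75;
-- ...while the last partition needs an inclusive upper bound,
-- otherwise rows with col1 = 100 would fall into no partition.
SELECT * FROM t WHERE col1 >= 75 AND col1 <= 100;
```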


[jira] [Commented] (FLINK-28985) support create table like view

2022-11-03 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17628645#comment-17628645
 ] 

waywtdcc commented on FLINK-28985:
--

hello? [~godfreyhe] 

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
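The proposal above can be sketched in Flink SQL; the second statement is hypothetical syntax that the issue asks for, not something that works at the time of writing, and the names and connector are illustrative:

```sql
-- Supported today: derive a new table from an existing *table*.
CREATE TABLE orders_copy
WITH ('connector' = 'print')
LIKE orders_table;

-- Proposed (hypothetical, not yet supported): let the LIKE source be a view.
CREATE TABLE orders_from_view
WITH ('connector' = 'print')
LIKE orders_view;
```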


[jira] [Closed] (FLINK-29672) Support oracle catalog

2022-11-03 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc closed FLINK-29672.

Resolution: Later

> Support oracle catalog 
> ---
>
> Key: FLINK-29672
> URL: https://issues.apache.org/jira/browse/FLINK-29672
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>
> Support oracle catalog 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29673) Support sqlserver catalog

2022-11-03 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc closed FLINK-29673.

Resolution: Later

> Support sqlserver catalog
> -
>
> Key: FLINK-29673
> URL: https://issues.apache.org/jira/browse/FLINK-29673
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29858) Jdbc reading supports setting multiple queryTemplates

2022-11-02 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29858:
-
Attachment: image-2022-11-03-13-53-12-593.png

> Jdbc reading supports setting multiple queryTemplates
> -
>
> Key: FLINK-29858
> URL: https://issues.apache.org/jira/browse/FLINK-29858
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
> Attachments: image-2022-11-03-13-53-12-593.png
>
>
> Jdbc reading supports setting multiple queryTemplates. Currently, jdbc 
> reading only supports one query template (the queryTemplate field in 
> JdbcRowDataInputFormat), which is sometimes not enough.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29858) Jdbc reading supports setting multiple queryTemplates

2022-11-02 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29858:
-
Description: 
Jdbc reading supports setting multiple queryTemplates. Currently, jdbc reading 
only supports reading one query template. Sometimes it is not enough. The 
queryTemplate in the JdbcRowDataInputFormat. Sometimes you may need to select * 
from table where col1>=? and col1 < ?  And select * from table where col1>=? 
and col1 <= ?  Both templates should be used

!image-2022-11-03-13-53-12-593.png!

  was:
Jdbc reading supports setting multiple queryTemplates. Currently, jdbc reading 
only supports reading one query template. Sometimes it is not enough.

queryTemplate 

in JdbcRowDataInputFormat


> Jdbc reading supports setting multiple queryTemplates
> -
>
> Key: FLINK-29858
> URL: https://issues.apache.org/jira/browse/FLINK-29858
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
> Attachments: image-2022-11-03-13-53-12-593.png
>
>
> Jdbc reading supports setting multiple queryTemplates. Currently, jdbc 
> reading only supports one query template (the queryTemplate field in 
> JdbcRowDataInputFormat), which is sometimes not enough. You may need both 
> select * from table where col1 >= ? and col1 < ? and select * from table 
> where col1 >= ? and col1 <= ?, so both templates should be usable.
> !image-2022-11-03-13-53-12-593.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29858) Jdbc reading supports setting multiple queryTemplates

2022-11-02 Thread waywtdcc (Jira)
waywtdcc created FLINK-29858:


 Summary: Jdbc reading supports setting multiple queryTemplates
 Key: FLINK-29858
 URL: https://issues.apache.org/jira/browse/FLINK-29858
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / JDBC
Affects Versions: 1.16.0
Reporter: waywtdcc
 Fix For: 1.17.0


Jdbc reading supports setting multiple queryTemplates. Currently, jdbc reading 
only supports one query template (the queryTemplate field in 
JdbcRowDataInputFormat), which is sometimes not enough.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29764) Automatic judgment of parallelism of source

2022-10-26 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc closed FLINK-29764.

Resolution: Not A Problem

> Automatic judgment of parallelism of source
> ---
>
> Key: FLINK-29764
> URL: https://issues.apache.org/jira/browse/FLINK-29764
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.16.0, 1.15.2
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> The parallelism of the source should be judged automatically rather than 
> fixed by the jobmanager. With the adaptive batch scheduler, the default 
> source parallelism is currently derived from the 
> jobmanager.adaptive-batch-scheduler.min-parallelism and 
> jobmanager.adaptive-batch-scheduler.max-parallelism settings and the number 
> of partitions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29764) Automatic judgment of parallelism of source

2022-10-26 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17624796#comment-17624796
 ] 

waywtdcc commented on FLINK-29764:
--

Oh, sorry, I misunderstood. 

> Automatic judgment of parallelism of source
> ---
>
> Key: FLINK-29764
> URL: https://issues.apache.org/jira/browse/FLINK-29764
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.16.0, 1.15.2
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> The parallelism of the source should be judged automatically rather than 
> fixed by the jobmanager. With the adaptive batch scheduler, the default 
> source parallelism is currently derived from the 
> jobmanager.adaptive-batch-scheduler.min-parallelism and 
> jobmanager.adaptive-batch-scheduler.max-parallelism settings and the number 
> of partitions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29764) Automatic judgment of parallelism of source

2022-10-25 Thread waywtdcc (Jira)
waywtdcc created FLINK-29764:


 Summary: Automatic judgment of parallelism of source
 Key: FLINK-29764
 URL: https://issues.apache.org/jira/browse/FLINK-29764
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Planner
Affects Versions: 1.15.2, 1.16.0
Reporter: waywtdcc
 Fix For: 1.17.0


The parallelism of the source should be judged automatically rather than fixed 
by the jobmanager. With the adaptive batch scheduler, the default source 
parallelism is currently derived from the 
jobmanager.adaptive-batch-scheduler.min-parallelism and 
jobmanager.adaptive-batch-scheduler.max-parallelism settings and the number of 
partitions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings at the end with Z and numbers to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: 
Debezium format parsing supports converting strings at the end with Z and 
numbers  to timestamp

 

1. Previously, debezium could not parse the long timestamp to timestamp type. 
For example, 166625530.
2. The time format string with Z suffix cannot be parsed to timestamp type, 
such as 2022-10-19T19:38:43Z format

  was:
Debezium format parsing supports converting strings at the end with Z and 
numbers  to timestamp

 

1. Previously, debezium could not parse the timestamp to timestamp type. For 
example, 166625530.
2. The time format string with Z suffix cannot be parsed to timestamp type, 
such as 2022-10-19T19:38:43Z format


> Debezium format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.1
>
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
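The two conversions the issue asks for can be sketched outside Flink. A minimal Python model (not Flink's implementation; the epoch-millis sample value is hypothetical, since the number in the issue is truncated):

```python
from datetime import datetime, timezone

def to_timestamp(value):
    """Convert either epoch milliseconds (int or digit string) or an
    ISO-8601 string with a trailing 'Z' into a UTC datetime."""
    s = str(value)
    if s.isdigit():
        # Case 1 from the issue: a long epoch value, interpreted as millis.
        return datetime.fromtimestamp(int(s) / 1000, tz=timezone.utc)
    if s.endswith("Z"):
        # Case 2 from the issue: a Z-suffixed time string.
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    raise ValueError(f"unsupported timestamp format: {s!r}")

print(to_timestamp("2022-10-19T19:38:43Z"))  # 2022-10-19 19:38:43+00:00
print(to_timestamp(1666250000000))           # hypothetical epoch-millis sample
```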


[jira] [Updated] (FLINK-28985) support create table like view

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Issue Type: New Feature  (was: Improvement)

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Assignee: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings at the end with Z and numbers to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Summary: Debezium format parsing supports converting strings  at the end 
with Z and numbers to timestamp  (was: Debezium format parsing supports 
converting strings  with Z and numbers at the end to timestamp)

> Debezium format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.1
>
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings with Z and numbers at the end to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Summary: Debezium format parsing supports converting strings  with Z and 
numbers at the end to timestamp  (was: Debezium format parsing supports 
converting strings and numbers with Z at the end to timestamp)

> Debezium format parsing supports converting strings  with Z and numbers at 
> the end to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.1
>
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings at the end with Z and numbers to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: 
Debezium format parsing supports converting strings at the end with Z and 
numbers  to timestamp

 

1. Previously, debezium could not parse the timestamp to timestamp type. For 
example, 166625530.
2. The time format string with Z suffix cannot be parsed to timestamp type, 
such as 2022-10-19T19:38:43Z format

  was:
Debezium format parsing supports converting strings  with Z and numbers at the 
end to timestamp

 

1. Previously, debezium could not parse the timestamp to timestamp type. For 
example, 166625530.
2. The time format string with Z suffix cannot be parsed to timestamp type, 
such as 2022-10-19T19:38:43Z format


> Debezium format parsing supports converting strings  at the end with Z and 
> numbers to timestamp
> ---
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.1
>
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29699) Debezium format parsing supports converting strings and numbers with Z at the end to timestamp

2022-10-20 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17620860#comment-17620860
 ] 

waywtdcc commented on FLINK-29699:
--

[~martijnvisser] Thank you. Updating now.

> Debezium format parsing supports converting strings and numbers with Z at the 
> end to timestamp
> --
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.16.1
>
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings and numbers with Z at the end to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Fix Version/s: 1.16.1

> Debezium format parsing supports converting strings and numbers with Z at the 
> end to timestamp
> --
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.16.1
>
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings and numbers with Z at the end to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: 
Debezium format parsing supports converting strings  with Z and numbers at the 
end to timestamp

 

1. Previously, debezium could not parse the timestamp to timestamp type. For 
example, 166625530.
2. The time format string with Z suffix cannot be parsed to timestamp type, 
such as 2022-10-19T19:38:43Z format

  was:Debezium format parsing supports converting strings  with Z and numbers 
at the end to timestamp


> Debezium format parsing supports converting strings and numbers with Z at the 
> end to timestamp
> --
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.
>  
> 1. Previously, debezium could not parse a long epoch value to the timestamp 
> type. For example, 166625530.
> 2. A time string with a Z suffix, such as 2022-10-19T19:38:43Z, could not be 
> parsed to the timestamp type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29699) Debezium format parsing supports converting strings and numbers with Z at the end to timestamp

2022-10-20 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29699:
-
Description: Debezium format parsing supports converting strings  with Z 
and numbers at the end to timestamp  (was: Debezium format parsing supports 
converting strings and numbers with Z at the end to timestamp)

> Debezium format parsing supports converting strings and numbers with Z at the 
> end to timestamp
> --
>
> Key: FLINK-29699
> URL: https://issues.apache.org/jira/browse/FLINK-29699
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
>
> Debezium format parsing supports converting Z-suffixed strings and numeric 
> epoch values to timestamp.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29699) Debezium format parsing supports converting strings and numbers with Z at the end to timestamp

2022-10-20 Thread waywtdcc (Jira)
waywtdcc created FLINK-29699:


 Summary: Debezium format parsing supports converting strings and 
numbers with Z at the end to timestamp
 Key: FLINK-29699
 URL: https://issues.apache.org/jira/browse/FLINK-29699
 Project: Flink
  Issue Type: New Feature
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.16.0
Reporter: waywtdcc
 Fix For: 1.16.1


Debezium format parsing supports converting Z-suffixed strings and numeric 
epoch values to timestamp.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29673) Support sqlserver catalog

2022-10-17 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619181#comment-17619181
 ] 

waywtdcc commented on FLINK-29673:
--

please assign this to me.

> Support sqlserver catalog
> -
>
> Key: FLINK-29673
> URL: https://issues.apache.org/jira/browse/FLINK-29673
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29673) Support sqlserver catalog

2022-10-17 Thread waywtdcc (Jira)
waywtdcc created FLINK-29673:


 Summary: Support sqlserver catalog
 Key: FLINK-29673
 URL: https://issues.apache.org/jira/browse/FLINK-29673
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.16.0
Reporter: waywtdcc
 Fix For: 1.17.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28985) support create table like view

2022-10-17 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619177#comment-17619177
 ] 

waywtdcc commented on FLINK-28985:
--

Please assign this to me.

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29672) Support oracle catalog

2022-10-17 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619176#comment-17619176
 ] 

waywtdcc commented on FLINK-29672:
--

please assign this to me

> Support oracle catalog 
> ---
>
> Key: FLINK-29672
> URL: https://issues.apache.org/jira/browse/FLINK-29672
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0, 1.16.1
>
>
> Support oracle catalog 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29672) Support oracle catalog

2022-10-17 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29672:
-
Fix Version/s: (was: 1.16.1)

> Support oracle catalog 
> ---
>
> Key: FLINK-29672
> URL: https://issues.apache.org/jira/browse/FLINK-29672
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> Support oracle catalog 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29672) Support oracle catalog

2022-10-17 Thread waywtdcc (Jira)
waywtdcc created FLINK-29672:


 Summary: Support oracle catalog 
 Key: FLINK-29672
 URL: https://issues.apache.org/jira/browse/FLINK-29672
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.16.0
Reporter: waywtdcc
 Fix For: 1.17.0


Support oracle catalog 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29672) Support oracle catalog

2022-10-17 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-29672:
-
Fix Version/s: 1.16.1

> Support oracle catalog 
> ---
>
> Key: FLINK-29672
> URL: https://issues.apache.org/jira/browse/FLINK-29672
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0, 1.16.1
>
>
> Support oracle catalog 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28985) support create table like view

2022-10-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Fix Version/s: 1.17.0

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0, 1.16.1
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28985) support create table like view

2022-10-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Fix Version/s: 1.16.1
   (was: 1.17.0)

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.16.1
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28985) support create table like view

2022-10-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Affects Version/s: 1.16.0

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1, 1.16.0
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.16.1
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-3033) Redis Source Connector

2022-10-11 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-3033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616199#comment-17616199
 ] 

waywtdcc commented on FLINK-3033:
-

[~pramod] Hello, where is the Redis connector you wrote? We are also planning 
to develop a Redis source.

> Redis Source Connector
> --
>
> Key: FLINK-3033
> URL: https://issues.apache.org/jira/browse/FLINK-3033
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Matthias J. Sax
>Assignee: Subhankar Biswas
>Priority: Not a Priority
>  Labels: stale-assigned
>
> Flink does not provide a source connector for Redis.
> See FLINK-3034



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28985) support create table like view

2022-08-22 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Description: 
At present, to create a table based on table like, you can only use the table 
type table, not the view type.

 

create table  like ;

Only like table type can be used before. This is similar to create table as < 
querysql >, but some scenarios use views more flexibly and can reuse a single 
view in multiple places.

  was:At present, to create a table based on table like, you can only use the 
table type table, not the view type.


> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.17.0
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not a view.
>  
> create table <table_name> like <view_name>;
> Previously only a table could follow LIKE. This is similar to create table as 
> <query_sql>, but some scenarios use views more flexibly and can reuse a 
> single view in multiple places.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-21283) Support sql extension for flink sql

2022-08-22 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583265#comment-17583265
 ] 

waywtdcc commented on FLINK-21283:
--

Hello, are there any follow-up plans for this? We would like to use it and try 
calling stored procedures from SQL. [~jark]

> Support sql extension for flink sql 
> 
>
> Key: FLINK-21283
> URL: https://issues.apache.org/jira/browse/FLINK-21283
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Jun Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> I think we should add sql extension for flink sql so that users can customize 
> sql parsing, sql optimization, etc. we can refer to [spark sql extension 
> |https://issues.apache.org/jira/browse/SPARK-18127]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28985) support create table like view

2022-08-21 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17582684#comment-17582684
 ] 

waywtdcc commented on FLINK-28985:
--

Hi, [~jark]

 

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.16.0
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not on a view.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28985) support create table like view

2022-08-19 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-28985:
-
Affects Version/s: 1.15.1
   (was: 1.16.0)

> support create table like view
> --
>
> Key: FLINK-28985
> URL: https://issues.apache.org/jira/browse/FLINK-28985
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.16.0
>
>
> At present, a table created with CREATE TABLE ... LIKE can only be based on a 
> table, not on a view.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28985) support create table like view

2022-08-15 Thread waywtdcc (Jira)
waywtdcc created FLINK-28985:


 Summary: support create table like view
 Key: FLINK-28985
 URL: https://issues.apache.org/jira/browse/FLINK-28985
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.16.0
Reporter: waywtdcc
 Fix For: 1.16.0


At present, a table created with CREATE TABLE ... LIKE can only be based on a 
table, not on a view.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26942) Support SELECT clause in CREATE TABLE(CTAS)

2022-06-06 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550743#comment-17550743
 ] 

waywtdcc commented on FLINK-26942:
--

CREATE TABLE [ IF NOT EXISTS ] table_name 
[ WITH ( table_properties ) ]
[ AS query_expression ] 
[ WITH [ NO ] DATA ]




I think [ WITH [ NO ] DATA ] should be added, to indicate whether to only create 
the table without writing any data.
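As an illustration of the proposal above: the [ WITH [ NO ] DATA ] clause is hypothetical (it is not implemented Flink SQL), and the table and connector names are made up:

{code:java}
// Assuming an existing TableEnvironment tEnv.

// Hypothetical, following the proposed grammar: create the table and its
// schema from the query, but do not actually write any rows into it.
tEnv.executeSql(
    "CREATE TABLE orders_copy WITH ('connector' = 'blackhole') "
        + "AS SELECT * FROM orders WITH NO DATA");
{code}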

> Support SELECT clause in CREATE TABLE(CTAS)
> ---
>
> Key: FLINK-26942
> URL: https://issues.apache.org/jira/browse/FLINK-26942
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: tartarus
>Priority: Major
> Fix For: 1.16.0
>
>
> Support CTAS(CREATE TABLE AS SELECT) syntax
> {code:java}
> CREATE TABLE [ IF NOT EXISTS ] table_name 
> [ WITH ( table_properties ) ]
> [ AS query_expression ] {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27631) Datastream job combined with table job

2022-05-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-27631:
-
Affects Version/s: 1.13.6
   (was: 1.14.4)

> Datastream job combined with table job
> --
>
> Key: FLINK-27631
> URL: https://issues.apache.org/jira/browse/FLINK-27631
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Table SQL / API
>Affects Versions: 1.13.6
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2022-05-16-14-57-09-836.png
>
>
> Datastream job combined with table job
> One DataStream writes to two sinks: one uses the DataStream API 
> (dataStream.addSink(..)); the other is registered as a table and written with 
> an SQL INSERT from table1. If only StreamExecutionEnvironment.execute() is 
> called, the addSink operator task runs but the SQL task does not; if only 
> StreamTableEnvironment.executeSql() is called, the SQL task runs but the 
> addSink operator task does not. If both execute() and executeSql() are 
> called, two separate jobs are executed. Is there any way to put the two 
> sinks in one job?
>  
> !image-2022-05-16-14-57-09-836.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27631) Datastream job combined with table job

2022-05-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-27631:
-
Affects Version/s: 1.14.4
   (was: 1.13.6)

> Datastream job combined with table job
> --
>
> Key: FLINK-27631
> URL: https://issues.apache.org/jira/browse/FLINK-27631
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Table SQL / API
>Affects Versions: 1.14.4
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2022-05-16-14-57-09-836.png
>
>
> Datastream job combined with table job
> One DataStream writes to two sinks: one uses the DataStream API 
> (dataStream.addSink(..)); the other is registered as a table and written with 
> an SQL INSERT from table1. If only StreamExecutionEnvironment.execute() is 
> called, the addSink operator task runs but the SQL task does not; if only 
> StreamTableEnvironment.executeSql() is called, the SQL task runs but the 
> addSink operator task does not. If both execute() and executeSql() are 
> called, two separate jobs are executed. Is there any way to put the two 
> sinks in one job?
>  
> !image-2022-05-16-14-57-09-836.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27631) Datastream job combined with table job

2022-05-16 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-27631:
-
Description: 
Datastream job combined with table job

One DataStream writes to two sinks: one uses the DataStream API 
(dataStream.addSink(..)); the other is registered as a table and written with an 
SQL INSERT from table1. If only StreamExecutionEnvironment.execute() is called, 
the addSink operator task runs but the SQL task does not; if only 
StreamTableEnvironment.executeSql() is called, the SQL task runs but the addSink 
operator task does not. If both execute() and executeSql() are called, two 
separate jobs are executed. Is there any way to put the two sinks in one job?

 

!image-2022-05-16-14-57-09-836.png!

  was:
Datastream job combined with table job

One datastream, write two sink, one uses datastream API: datastream 
addSink(..); After another SQL: insert is used, it is converted to another 
table_ table from table1; Can these two writes be put into one task?

 

!image-2022-05-16-14-57-09-836.png!


> Datastream job combined with table job
> --
>
> Key: FLINK-27631
> URL: https://issues.apache.org/jira/browse/FLINK-27631
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Table SQL / API
>Affects Versions: 1.13.6
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2022-05-16-14-57-09-836.png
>
>
> Datastream job combined with table job
> One DataStream writes to two sinks: one uses the DataStream API 
> (dataStream.addSink(..)); the other is registered as a table and written with 
> an SQL INSERT from table1. If only StreamExecutionEnvironment.execute() is 
> called, the addSink operator task runs but the SQL task does not; if only 
> StreamTableEnvironment.executeSql() is called, the SQL task runs but the 
> addSink operator task does not. If both execute() and executeSql() are 
> called, two separate jobs are executed. Is there any way to put the two 
> sinks in one job?
>  
> !image-2022-05-16-14-57-09-836.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27631) Datastream job combined with table job

2022-05-16 Thread waywtdcc (Jira)
waywtdcc created FLINK-27631:


 Summary: Datastream job combined with table job
 Key: FLINK-27631
 URL: https://issues.apache.org/jira/browse/FLINK-27631
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Table SQL / API
Affects Versions: 1.13.6
Reporter: waywtdcc
 Attachments: image-2022-05-16-14-57-09-836.png

Datastream job combined with table job

One DataStream writes to two sinks: one uses the DataStream API 
(dataStream.addSink(..)); the other is registered as a table and written with an 
SQL INSERT from table1. Can these two writes be put into one job?

 

!image-2022-05-16-14-57-09-836.png!
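For reference, from Flink 1.14 onwards the Table API offers StreamStatementSet.attachAsDataStream(), which folds the SQL pipeline into the DataStream topology so that a single execute() submits one job containing both sinks. A minimal sketch, assuming Flink 1.14+ and made-up table names (sink_table would need to be defined separately):

{code:java}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

DataStream<String> stream = env.fromElements("a", "b", "c");
stream.print();                                  // first sink via the DataStream API

tEnv.createTemporaryView("table1", stream);      // expose the stream to SQL
StreamStatementSet stmtSet = tEnv.createStatementSet();
stmtSet.addInsertSql("INSERT INTO sink_table SELECT * FROM table1");
stmtSet.attachAsDataStream();                    // SQL insert becomes part of the topology

env.execute();                                   // one job containing both sinks
{code}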



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Description: 
* Scene: use classloader.resolve-order to configure parent-first, modify the 
source code, and then print the output of the modified source code.
* Command:

 
{code:java}
/home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
-Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
-Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" \ 
-Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
-Dexecution.checkpointing.snapshot-compression="true" \ 
-Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 \ 
-Djobmanager.memory.process.size=1g \ -Dtaskmanager.memory.process.size=1024m \ 
-Dclassloader.resolve-order="parent-first" \ 
/home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
 \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
{code}
 

 

*   result

!image-2021-08-25-11-16-34-432.png!

!image-2021-08-25-11-16-39-960.png!!image-2021-08-25-11-16-34-397.png!
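For background on what the setting is expected to do: parent-first delegation means classes on the parent (Flink/JDK) classpath shadow identically named classes bundled in the user jar, while child-first (Flink's default) prefers the user-jar copies. A toy sketch of the delegation order, which is not Flink's real classloader implementation:

```java
import java.util.Map;

// Toy model of classloader delegation order, for illustration only.
public class DelegationSketch {

    // Each "classpath" is modeled as a map from class name to the jar providing it.
    static String resolve(Map<String, String> parent, Map<String, String> child,
                          String name, boolean parentFirst) {
        String first  = parentFirst ? parent.get(name) : child.get(name);
        String second = parentFirst ? child.get(name)  : parent.get(name);
        return first != null ? first : second;
    }

    public static void main(String[] args) {
        Map<String, String> parentCp = Map.of("org.apache.flink.SomeClass", "flink-dist");
        Map<String, String> userJar  = Map.of("org.apache.flink.SomeClass", "user-jar");

        // parent-first: the copy shipped with Flink wins, so a modified copy
        // bundled into the user jar is never loaded.
        System.out.println(resolve(parentCp, userJar, "org.apache.flink.SomeClass", true));

        // child-first: the user-jar copy wins.
        System.out.println(resolve(parentCp, userJar, "org.apache.flink.SomeClass", false));
    }
}
```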

  was:
*  Scene
Use classloader.resolve-order to configure parent-first, and modify the source 
code, then print out the modified source code output * command

 
{code:java}
/home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
-Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
-Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" \ 
-Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
-Dexecution.checkpointing.snapshot-compression="true" \ 
-Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 \ 
-Djobmanager.memory.process.size=1g \ -Dtaskmanager.memory.process.size=1024m \ 
-Dclassloader.resolve-order="parent-first" \ 
/home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
 \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
{code}
 

 

*   result

!image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!


> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-16-34-432.png!
> !image-2021-08-25-11-16-39-960.png!!image-2021-08-25-11-16-34-397.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-15-56-855.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-15-39-771.png, 
> image-2021-08-25-11-15-39-805.png, image-2021-08-25-11-15-39-839.png, 
> image-2021-08-25-11-15-56-823.png, image-2021-08-25-11-15-56-855.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Description: 
* Scene: use classloader.resolve-order to configure parent-first, modify the 
source code, and then print the output of the modified source code.
* Command:

 
{code:java}
/home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
-Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
-Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" \ 
-Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
-Dexecution.checkpointing.snapshot-compression="true" \ 
-Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 \ 
-Djobmanager.memory.process.size=1g \ -Dtaskmanager.memory.process.size=1024m \ 
-Dclassloader.resolve-order="parent-first" \ 
/home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
 \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
{code}
 

 

* Result:

!image-2021-08-25-11-16-34-432.png!

!image-2021-08-25-11-16-39-960.png!!image-2021-08-25-11-16-34-397.png!

  was:
* Scene
 Use classloader.resolve-order to configure parent-first, and modify the source 
code, then print out the modified source code output * command

 
{code:java}
/home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
-Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
-Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" \ 
-Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
-Dexecution.checkpointing.snapshot-compression="true" \ 
-Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 \ 
-Djobmanager.memory.process.size=1g \ -Dtaskmanager.memory.process.size=1024m \ 
-Dclassloader.resolve-order="parent-first" \ 
/home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
 \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
{code}
 

 

*   result

!image-2021-08-25-11-16-34-432.png!

!image-2021-08-25-11-16-39-960.png!!image-2021-08-25-11-16-34-397.png!


> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
>    * result
>  
>  
> !image-2021-08-25-11-16-34-432.png!
> !image-2021-08-25-11-16-39-960.png!!image-2021-08-25-11-16-34-397.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: (was: image-2021-08-25-11-15-56-855.png)

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-16-39-960.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Description: 
* Scene: use classloader.resolve-order to configure parent-first, modify the 
source code, and then print the output of the modified source code.
* Command:

 
{code:java}
/home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
-Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
-Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" \ 
-Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
-Dexecution.checkpointing.snapshot-compression="true" \ 
-Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 \ 
-Djobmanager.memory.process.size=1g \ -Dtaskmanager.memory.process.size=1024m \ 
-Dclassloader.resolve-order="parent-first" \ 
/home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
 \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
{code}
 

 

*   result

!image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-16-34-432.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-16-34-397.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: (was: image-2021-08-25-11-15-39-805.png)

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>
> * Scene: use classloader.resolve-order to configure parent-first, modify the 
> source code, and then print the output of the modified source code.
> * Command:
>  
> {code:java}
> /home/aicore/opt/apps/soft/flink-1.13.2/bin/flink \ run-application \ -t 
> yarn-application \ -Dyarn.application.name="TestInnerJoinTable2" \ 
> -Dyarn.application.queue="default" \ -Dparallelism.default=1 \ 
> -Dstate.checkpoints.dir="hdfs:///flink-data/checkpoints/TestInnerJoinTable2" 
> \ -Dyarn.containers.vcores=1 \ -Dexecution.checkpointing.interval="5min" \ 
> -Dexecution.checkpointing.snapshot-compression="true" \ 
> -Dexecution.checkpointing.timeout="50min" \ -Dtaskmanager.numberOfTaskSlots=1 
> \ -Djobmanager.memory.process.size=1g \ 
> -Dtaskmanager.memory.process.size=1024m \ 
> -Dclassloader.resolve-order="parent-first" \ 
> /home/aicore/opt/apps/soft/flink-1.13.2/user_jars/bigdata-flink/flink-job/target/flink-job-1.0-SNAPSHOT-jar-with-dependencies.jar
>  \ com.cc.flink.test.join_sql_test.TestInnerJoinTable2
> {code}
>  
>  
> *   result
> !image-2021-08-25-11-15-56-855.png!!image-2021-08-25-11-16-02-217.png!!image-2021-08-25-11-15-56-823.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: (was: image-2021-08-25-11-16-02-217.png)

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: (was: image-2021-08-25-11-15-56-823.png)

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: (was: image-2021-08-25-11-15-39-771.png)

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-16-02-217.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: (was: image-2021-08-25-11-15-39-839.png)

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-16-34-397.png, 
> image-2021-08-25-11-16-34-432.png, image-2021-08-25-11-16-39-960.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-15-39-805.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-15-39-771.png, 
> image-2021-08-25-11-15-39-805.png, image-2021-08-25-11-15-39-839.png, 
> image-2021-08-25-11-15-56-823.png, image-2021-08-25-11-15-56-855.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23958) parent-first class orders configuration not effect

2021-08-24 Thread waywtdcc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

waywtdcc updated FLINK-23958:
-
Attachment: image-2021-08-25-11-15-56-823.png

> parent-first class orders configuration not effect
> --
>
> Key: FLINK-23958
> URL: https://issues.apache.org/jira/browse/FLINK-23958
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.2
>Reporter: waywtdcc
>Priority: Major
> Attachments: image-2021-08-25-11-15-39-771.png, 
> image-2021-08-25-11-15-39-805.png, image-2021-08-25-11-15-39-839.png, 
> image-2021-08-25-11-15-56-823.png, image-2021-08-25-11-15-56-855.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

