[jira] [Created] (FLINK-35230) Split FlinkSqlParserImplTest to reduce the code lines.

2024-04-24 Thread Feng Jin (Jira)
Feng Jin created FLINK-35230:


 Summary: Split FlinkSqlParserImplTest to reduce the code lines.
 Key: FLINK-35230
 URL: https://issues.apache.org/jira/browse/FLINK-35230
 Project: Flink
  Issue Type: Technical Debt
  Components: Table SQL / Planner
Reporter: Feng Jin


With the ongoing extension of Calcite syntax, FlinkSqlParserImplTest has grown 
to over 3,100 lines of code.

Once the file exceeds the configured limit, the code style check fails:

{code:log}
08:33:19.679 [ERROR] 
src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java:[1] 
(sizes) FileLength: File length is 3,166 lines (max allowed is 3,100).
{code}

To make future syntax extensions easier, I suggest we split 
FlinkSqlParserImplTest so that each category of syntax lives in its own Java 
test class, avoiding the continuous growth of the original test class.

My current idea is: 
since *FlinkSqlParserImplTest* inherits *SqlParserTest*, and *SqlParserTest* 
itself contains many unit tests, we should introduce a base *ParserTestBase* 
that extends *SqlParserTest* and disables the inherited unit tests.

This makes it easy to write the relevant unit tests during the subsequent 
split, without repeatedly executing the unit tests inside SqlParserTest.
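The pattern above can be sketched in plain Java (the stand-in class names below are illustrative, not the actual Flink or Calcite types; in the real patch the inherited cases would be overridden and annotated with JUnit's @Disabled):

```java
// Stand-in for Calcite's SqlParserTest: a base class whose inherited
// test cases we do not want to re-run in every split test class.
class SqlParserTestStandIn {
    public int executed = 0;

    public void testCalciteOnlyCase() {
        executed++; // represents an expensive inherited Calcite test
    }
}

// Stand-in for the proposed ParserTestBase: overrides the inherited
// case as a no-op, so subclasses only run their own Flink-specific tests.
class ParserTestBaseStandIn extends SqlParserTestStandIn {
    @Override
    public void testCalciteOnlyCase() {
        // disabled: already covered by Calcite's own test suite
    }
}

public class SplitTestSketch {
    public static void main(String[] args) {
        SqlParserTestStandIn base = new ParserTestBaseStandIn();
        base.testCalciteOnlyCase();
        System.out.println(base.executed); // 0: inherited case is skipped
    }
}
```

Each split test class (DDL tests, DML tests, etc.) would then extend the base class and add only its own cases.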






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34312) Improve the handling of default node types when using named parameters.

2024-01-30 Thread Feng Jin (Jira)
Feng Jin created FLINK-34312:


 Summary: Improve the handling of default node types when using 
named parameters.
 Key: FLINK-34312
 URL: https://issues.apache.org/jira/browse/FLINK-34312
 Project: Flink
  Issue Type: Sub-task
Reporter: Feng Jin


Currently, we support named parameters with optional arguments.

By adapting Calcite's interfaces, we fill in a DEFAULT operator when a 
parameter is missing. Both during validation and during the SqlToRel 
conversion phase, we must special-case this by rewriting the DEFAULT 
operator's return type based on the operator's argument type. Several places 
need to handle the DEFAULT operator's type, including SqlCallBinding, 
SqlOperatorBinding, and SqlToRelConverter.


The improved solution is as follows: 

before SqlToRel, we can construct the DEFAULT node with a return type that 
matches the argument type. This way, the SqlToRel phase needs no special 
handling of the DEFAULT node's type.
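As a toy model of this idea (all types and names below are illustrative, not Calcite's actual API), the binder fills each missing named argument with a DEFAULT node that already carries the parameter's declared type, so later phases need no special-casing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DefaultNodeSketch {
    // Toy stand-ins for a function parameter and a SQL expression node.
    record Param(String name, String type) {}
    record Node(String kind, String type) {}

    // Bind named arguments to the parameter list; any missing argument
    // becomes a DEFAULT node typed with the parameter's declared type.
    public static List<Node> bind(List<Param> params, Map<String, Node> named) {
        List<Node> out = new ArrayList<>();
        for (Param p : params) {
            out.add(named.getOrDefault(p.name(), new Node("DEFAULT", p.type())));
        }
        return out;
    }

    public static void main(String[] args) {
        var params = List.of(new Param("in1", "STRING"), new Param("in2", "INT"));
        var named = Map.of("in1", new Node("LITERAL", "STRING"));
        // The missing "in2" argument is filled with a DEFAULT node typed INT.
        System.out.println(bind(params, named));
    }
}
```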





[jira] [Created] (FLINK-34265) Add the doc of named parameters

2024-01-29 Thread Feng Jin (Jira)
Feng Jin created FLINK-34265:


 Summary: Add the doc of named parameters
 Key: FLINK-34265
 URL: https://issues.apache.org/jira/browse/FLINK-34265
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table SQL / Planner
Reporter: Feng Jin








[jira] [Created] (FLINK-34058) Support optional parameters for named parameters

2024-01-10 Thread Feng Jin (Jira)
Feng Jin created FLINK-34058:


 Summary: Support optional parameters for named parameters
 Key: FLINK-34058
 URL: https://issues.apache.org/jira/browse/FLINK-34058
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Feng Jin
 Fix For: 1.19.0








[jira] [Created] (FLINK-34057) Support named parameters for functions

2024-01-10 Thread Feng Jin (Jira)
Feng Jin created FLINK-34057:


 Summary: Support named parameters for functions
 Key: FLINK-34057
 URL: https://issues.apache.org/jira/browse/FLINK-34057
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Feng Jin
 Fix For: 1.19.0








[jira] [Created] (FLINK-34056) Support named parameters for procedures

2024-01-10 Thread Feng Jin (Jira)
Feng Jin created FLINK-34056:


 Summary: Support named parameters for procedures
 Key: FLINK-34056
 URL: https://issues.apache.org/jira/browse/FLINK-34056
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Feng Jin
 Fix For: 1.19.0








[jira] [Created] (FLINK-34055) Introduce a new annotation for named parameters.

2024-01-10 Thread Feng Jin (Jira)
Feng Jin created FLINK-34055:


 Summary: Introduce a new annotation for named parameters.
 Key: FLINK-34055
 URL: https://issues.apache.org/jira/browse/FLINK-34055
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Feng Jin
 Fix For: 1.19.0


Introduce a new annotation that specifies the parameter name, indicates 
whether it is optional, and can potentially support default values in the 
future.

Deprecate the argumentNames method in FunctionHint, as it is not 
user-friendly for specifying argument names together with optional 
configuration.

 
{code:java}
public @interface ArgumentHint {

    /**
     * The name of the parameter, default is an empty string.
     */
    String name() default "";

    /**
     * Whether the parameter is optional, default is false.
     */
    boolean isOptional() default false;

    /**
     * The data type hint for the parameter.
     */
    DataTypeHint type() default @DataTypeHint();
}
{code}



{code:java}
public @interface FunctionHint {

    /**
     * Deprecated attribute for specifying the names of the arguments.
     * It is no longer recommended to use this attribute.
     */
    @Deprecated
    String[] argumentNames() default {""};

    /**
     * Attribute for specifying the hints and additional information for
     * function arguments.
     */
    ArgumentHint[] arguments() default {};
}
{code}
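For illustration, here is a hypothetical sketch of how a UDF author might apply the proposed annotation. The annotation stubs are reproduced from this issue as nested types so the example is self-contained; the function itself and its parameter names are invented:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ArgumentHintUsage {
    // Stubs reproduced from the proposal; the real types would live in
    // flink-table-common.
    @Retention(RetentionPolicy.RUNTIME)
    @interface DataTypeHint { String value() default ""; }

    @Retention(RetentionPolicy.RUNTIME)
    @interface ArgumentHint {
        String name() default "";
        boolean isOptional() default false;
        DataTypeHint type() default @DataTypeHint();
    }

    // A hypothetical UDF body: `in2` is declared optional, so a call like
    // my_func(in1 => 'a') could leave it to be filled with DEFAULT (null).
    public static String eval(
            @ArgumentHint(name = "in1") String a,
            @ArgumentHint(name = "in2", isOptional = true) String b) {
        return a + (b == null ? "" : b);
    }

    public static void main(String[] args) throws Exception {
        var m = ArgumentHintUsage.class.getMethod("eval", String.class, String.class);
        var hint = (ArgumentHint) m.getParameters()[1].getAnnotations()[0];
        // The planner could read the optionality flag via reflection:
        System.out.println(hint.name() + ":" + hint.isOptional()); // in2:true
    }
}
```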







[jira] [Created] (FLINK-34054) FLIP-387: Support named parameters for functions and call procedures

2024-01-10 Thread Feng Jin (Jira)
Feng Jin created FLINK-34054:


 Summary: FLIP-387: Support named parameters for functions and call 
procedures
 Key: FLINK-34054
 URL: https://issues.apache.org/jira/browse/FLINK-34054
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: Feng Jin
 Fix For: 1.19.0


Umbrella issue for 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-387%3A+Support+named+parameters+for+functions+and+call+procedures





[jira] [Created] (FLINK-33996) Support disabling project rewrite when multiple exprs in the project reference the same project.

2024-01-04 Thread Feng Jin (Jira)
Feng Jin created FLINK-33996:


 Summary: Support disabling project rewrite when multiple exprs in 
the project reference the same project.
 Key: FLINK-33996
 URL: https://issues.apache.org/jira/browse/FLINK-33996
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Affects Versions: 1.18.0
Reporter: Feng Jin


When multiple top-level projects reference the same bottom project, the 
project rewrite rules may cause complex expressions to be computed multiple 
times.

Take the following SQL as an example:

{code:sql}
create table test_source (a varchar) with ('connector' = 'datagen');

explain plan for
select a || 'a' as a, a || 'b' as b
FROM (select REGEXP_REPLACE(a, 'aaa', 'bbb') as a FROM test_source);
{code}


The final SQL plan is as follows:


{code:sql}
== Abstract Syntax Tree ==
LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
+- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
   +- LogicalTableScan(table=[[default_catalog, default_database, test_source]])

== Optimized Physical Plan ==
Calc(select=[||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
_UTF-16LE'a') AS a, ||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
_UTF-16LE'b') AS b])
+- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
fields=[a])

== Optimized Execution Plan ==
Calc(select=[||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'a') AS a, 
||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'b') AS b])
+- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
fields=[a])
{code}

It can be observed that after project rewrite, REGEXP_REPLACE is computed 
twice. Regular expression matching is generally expensive, and we usually do 
not want it evaluated multiple times. For this scenario, we can therefore 
support disabling project rewrite.

After disabling the relevant rules, the final plan we obtain is as follows:


{code:sql}
== Abstract Syntax Tree ==
LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
+- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
   +- LogicalTableScan(table=[[default_catalog, default_database, test_source]])

== Optimized Physical Plan ==
Calc(select=[||(a, _UTF-16LE'a') AS a, ||(a, _UTF-16LE'b') AS b])
+- Calc(select=[REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb') AS a])
   +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
fields=[a])

== Optimized Execution Plan ==
Calc(select=[||(a, 'a') AS a, ||(a, 'b') AS b])
+- Calc(select=[REGEXP_REPLACE(a, 'aaa', 'bbb') AS a])
   +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
fields=[a])
{code}


After testing, we probably need to modify these rules:

org.apache.flink.table.planner.plan.rules.logical.FlinkProjectMergeRule

org.apache.flink.table.planner.plan.rules.logical.FlinkCalcMergeRule










[jira] [Created] (FLINK-33936) Mini-batch should output the result when the result is the same as the last one if TTL is set.

2023-12-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-33936:


 Summary: Mini-batch should output the result when the result is 
the same as the last one if TTL is set.
 Key: FLINK-33936
 URL: https://issues.apache.org/jira/browse/FLINK-33936
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.18.0
Reporter: Feng Jin


Currently, if mini-batch is enabled and the aggregated result is the same as 
the previous output, the new aggregation result is not sent downstream (the 
specific logic is shown below). Downstream nodes therefore receive no updated 
data, and if a state TTL is set, the TTL of downstream state is not refreshed 
either.

https://github.com/hackergin/flink/blob/a18c0cd3f0cdfd7e0acb53283f40cd2033a86472/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/aggregate/MiniBatchGroupAggFunction.java#L224

{code:java}
if (!equaliser.equals(prevAggValue, newAggValue)) {
    // new row is not same with prev row
    if (generateUpdateBefore) {
        // prepare UPDATE_BEFORE message for previous row
        resultRow
                .replace(currentKey, prevAggValue)
                .setRowKind(RowKind.UPDATE_BEFORE);
        out.collect(resultRow);
    }
    // prepare UPDATE_AFTER message for new row
    resultRow.replace(currentKey, newAggValue).setRowKind(RowKind.UPDATE_AFTER);
    out.collect(resultRow);
}
// new row is same with prev row, no need to output
{code}



When mini-batch is not enabled, new results are still sent even if this 
aggregation result is the same as the previous one, provided a TTL is set.

https://github.com/hackergin/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/aggregate/GroupAggFunction.java#L170

{code:java}
if (stateRetentionTime <= 0 && equaliser.equals(prevAggValue, newAggValue)) {
    // newRow is the same as before and state cleaning is not enabled.
    // We do not emit retraction and acc message.
    // If state cleaning is enabled, we have to emit messages to prevent
    // too early state eviction of downstream operators.
    return;
} else {
    // retract previous result
    if (generateUpdateBefore) {
        // prepare UPDATE_BEFORE message for previous row
        resultRow
                .replace(currentKey, prevAggValue)
                .setRowKind(RowKind.UPDATE_BEFORE);
        out.collect(resultRow);
    }
    // prepare UPDATE_AFTER message for new row
    resultRow.replace(currentKey, newAggValue).setRowKind(RowKind.UPDATE_AFTER);
}
{code}


Therefore, considering TTL scenarios, I believe that when mini-batch 
aggregation is enabled, new results should also be emitted when the 
aggregated result is the same as the previous one.
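To make the proposal concrete, here is a minimal, self-contained sketch of the proposed emission rule (the names are illustrative, not Flink's actual code):

```java
import java.util.Objects;

public class MiniBatchEmitSketch {
    // Decide whether the mini-batch aggregation should emit a new result.
    public static boolean shouldEmit(Object prev, Object next, long stateRetentionMs) {
        // Rows differ: always emit an update.
        if (!Objects.equals(prev, next)) {
            return true;
        }
        // Rows are equal: emit only when state cleaning (TTL) is enabled,
        // mirroring GroupAggFunction's non-mini-batch behavior, so that
        // downstream operators' state TTL is refreshed.
        return stateRetentionMs > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldEmit("a", "a", 0));      // false: no TTL, suppress
        System.out.println(shouldEmit("a", "a", 60_000)); // true: TTL set, emit
        System.out.println(shouldEmit("a", "b", 0));      // true: value changed
    }
}
```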





[jira] [Created] (FLINK-33070) Add doc for 'unnest'

2023-09-11 Thread Feng Jin (Jira)
Feng Jin created FLINK-33070:


 Summary: Add doc for 'unnest' 
 Key: FLINK-33070
 URL: https://issues.apache.org/jira/browse/FLINK-33070
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Reporter: Feng Jin


Row-to-column and column-to-row transformation is a common need. In Flink SQL, 
we can use UNNEST for this purpose.

However, the usage and support of UNNEST are not explained in the 
documentation.

I think we should at least add it to the built-in functions section 
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/#scalar-functions),
or provide some examples.

 





[jira] [Created] (FLINK-32976) NullPointerException when starting a Flink cluster

2023-08-28 Thread Feng Jin (Jira)
Feng Jin created FLINK-32976:


 Summary: NullPointerException when starting a Flink cluster
 Key: FLINK-32976
 URL: https://issues.apache.org/jira/browse/FLINK-32976
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration
Affects Versions: 1.17.1
Reporter: Feng Jin


The error message is as follows: 

{code:java}
Caused by: java.lang.NullPointerException
	at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.getFileSystemsToAccess(HadoopFSDelegationTokenProvider.java:173) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.lambda$obtainDelegationTokens$1(HadoopFSDelegationTokenProvider.java:113) ~[flink-dist-1.17.1.jar:1.17.1]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_281]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_281]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) ~[flink-shaded-hadoop-3-uber-3.1.1.7.2.1.0-327-9.0.jar:3.1.1.7.2.1.0-327-9.0]
	at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.obtainDelegationTokens(HadoopFSDelegationTokenProvider.java:108) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.security.token.DefaultDelegationTokenManager.lambda$obtainDelegationTokensAndGetNextRenewal$1(DefaultDelegationTokenManager.java:264) ~[flink-dist-1.17.1.jar:1.17.1]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_281]
	at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1628) ~[?:1.8.0_281]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_281]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_281]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_281]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_281]
	at java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:479) ~[?:1.8.0_281]
	at java.util.stream.ReferencePipeline.min(ReferencePipeline.java:520) ~[?:1.8.0_281]
	at org.apache.flink.runtime.security.token.DefaultDelegationTokenManager.obtainDelegationTokensAndGetNextRenewal(DefaultDelegationTokenManager.java:286) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.security.token.DefaultDelegationTokenManager.obtainDelegationTokens(DefaultDelegationTokenManager.java:242) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:232) ~[flink-dist-1.17.1.jar:1.17.1]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_281]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_281]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) ~[flink-shaded-hadoop-3-uber-3.1.1.7.2.1.0-327-9.0.jar:3.1.1.7.2.1.0-327-9.0]
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist-1.17.1.jar:1.17.1]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:229) ~[flink-dist-1.17.1.jar:1.17.1]
	... 2 more
{code}
 

 





[jira] [Created] (FLINK-32653) Add doc for catalog store

2023-07-23 Thread Feng Jin (Jira)
Feng Jin created FLINK-32653:


 Summary: Add doc for catalog store
 Key: FLINK-32653
 URL: https://issues.apache.org/jira/browse/FLINK-32653
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Feng Jin








[jira] [Created] (FLINK-32647) Support config catalog store in python table environment

2023-07-22 Thread Feng Jin (Jira)
Feng Jin created FLINK-32647:


 Summary: Support config catalog store in python table environment
 Key: FLINK-32647
 URL: https://issues.apache.org/jira/browse/FLINK-32647
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Feng Jin








[jira] [Created] (FLINK-32569) Fix the incomplete serialization of ResolvedCatalogTable caused by the newly introduced time travel interface

2023-07-10 Thread Feng Jin (Jira)
Feng Jin created FLINK-32569:


 Summary: Fix the incomplete serialization of ResolvedCatalogTable 
caused by the newly introduced  time travel interface
 Key: FLINK-32569
 URL: https://issues.apache.org/jira/browse/FLINK-32569
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Feng Jin








[jira] [Created] (FLINK-32475) Add doc for time travel

2023-06-28 Thread Feng Jin (Jira)
Feng Jin created FLINK-32475:


 Summary: Add doc for time travel
 Key: FLINK-32475
 URL: https://issues.apache.org/jira/browse/FLINK-32475
 Project: Flink
  Issue Type: Sub-task
Reporter: Feng Jin








[jira] [Created] (FLINK-32474) Support time travel in table planner

2023-06-28 Thread Feng Jin (Jira)
Feng Jin created FLINK-32474:


 Summary: Support time travel in table planner 
 Key: FLINK-32474
 URL: https://issues.apache.org/jira/browse/FLINK-32474
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Feng Jin








[jira] [Created] (FLINK-32473) Introduce base interfaces for time travel

2023-06-28 Thread Feng Jin (Jira)
Feng Jin created FLINK-32473:


 Summary: Introduce base interfaces for time travel
 Key: FLINK-32473
 URL: https://issues.apache.org/jira/browse/FLINK-32473
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Feng Jin








[jira] [Created] (FLINK-32472) FLIP-308: Support Time Travel

2023-06-28 Thread Feng Jin (Jira)
Feng Jin created FLINK-32472:


 Summary: FLIP-308: Support Time Travel
 Key: FLINK-32472
 URL: https://issues.apache.org/jira/browse/FLINK-32472
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Feng Jin


Umbrella issue for 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-308%3A+Support+Time+Travel





[jira] [Created] (FLINK-32433) Add built-in FileCatalogStore

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32433:


 Summary: Add built-in FileCatalogStore 
 Key: FLINK-32433
 URL: https://issues.apache.org/jira/browse/FLINK-32433
 Project: Flink
  Issue Type: Sub-task
Reporter: Feng Jin








[jira] [Created] (FLINK-32432) Support CatalogStore in Flink SQL gateway

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32432:


 Summary: Support CatalogStore in Flink SQL gateway
 Key: FLINK-32432
 URL: https://issues.apache.org/jira/browse/FLINK-32432
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Gateway
Reporter: Feng Jin








[jira] [Created] (FLINK-32431) Support configuring CatalogStore in Table API

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32431:


 Summary: Support configuring CatalogStore in Table API
 Key: FLINK-32431
 URL: https://issues.apache.org/jira/browse/FLINK-32431
 Project: Flink
  Issue Type: Sub-task
Reporter: Feng Jin








[jira] [Created] (FLINK-32430) Support configuring CatalogStore through flink conf

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32430:


 Summary: Support configuring CatalogStore through flink conf
 Key: FLINK-32430
 URL: https://issues.apache.org/jira/browse/FLINK-32430
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Feng Jin








[jira] [Created] (FLINK-32429) Introduce CatalogStore in CatalogManager to support lazy initialization of catalogs and persistence of catalog configurations

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32429:


 Summary: Introduce CatalogStore in CatalogManager to support lazy 
initialization of catalogs and persistence of catalog configurations
 Key: FLINK-32429
 URL: https://issues.apache.org/jira/browse/FLINK-32429
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Feng Jin








[jira] [Created] (FLINK-32428) Introduce base interfaces for CatalogStore

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32428:


 Summary: Introduce base interfaces for CatalogStore
 Key: FLINK-32428
 URL: https://issues.apache.org/jira/browse/FLINK-32428
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Feng Jin








[jira] [Created] (FLINK-32427) FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-06-25 Thread Feng Jin (Jira)
Feng Jin created FLINK-32427:


 Summary: FLIP-295: Support lazy initialization of catalogs and 
persistence of catalog configurations
 Key: FLINK-32427
 URL: https://issues.apache.org/jira/browse/FLINK-32427
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Feng Jin


Umbrella issue for 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-295%3A+Support+lazy+initialization+of+catalogs+and+persistence+of+catalog+configurations





[jira] [Created] (FLINK-31822) Support configuring maxRows when fetching results

2023-04-17 Thread Feng Jin (Jira)
Feng Jin created FLINK-31822:


 Summary: Support configuring maxRows when fetching results 
 Key: FLINK-31822
 URL: https://issues.apache.org/jira/browse/FLINK-31822
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Gateway
Affects Versions: 1.16.1
Reporter: Feng Jin


The default value of maxRows when fetching results is 5000. When results are 
requested from a web page, returning too many rows in a single request may 
cause the page to freeze.

Therefore, we should support configuring the maximum number of rows returned 
per request.
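A hypothetical sketch of the clamping logic (the method and parameter names are invented for illustration and are not the gateway's actual API):

```java
public class FetchCap {
    // Resolve the effective page size: fall back to a configured cap
    // (default 5000) and never exceed it, even if the client asks for more.
    public static int effectiveMaxRows(Integer requested, int configuredMax) {
        int cap = configuredMax > 0 ? configuredMax : 5000; // current default
        return requested == null ? cap : Math.min(requested, cap);
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxRows(null, 0));   // 5000: defaults apply
        System.out.println(effectiveMaxRows(100, 500));  // 100: request honored
        System.out.println(effectiveMaxRows(9000, 500)); // 500: clamped to cap
    }
}
```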





[jira] [Created] (FLINK-31788) Add back support for emitValueWithRetract for TableAggregateFunction

2023-04-12 Thread Feng Jin (Jira)
Feng Jin created FLINK-31788:


 Summary: Add back support for emitValueWithRetract for 
TableAggregateFunction
 Key: FLINK-31788
 URL: https://issues.apache.org/jira/browse/FLINK-31788
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Feng Jin


This feature was originally implemented in the old planner: 
[https://github.com/apache/flink/pull/8550/files]

However, the logic was never implemented in the new (Blink) planner.

With the removal of the old planner in version 1.14 
[https://github.com/apache/flink/pull/16080], this code was removed entirely.

We should add it back. 





[jira] [Created] (FLINK-30924) Conversion issues between timestamp and bigint

2023-02-06 Thread Feng Jin (Jira)
Feng Jin created FLINK-30924:


 Summary: Conversion issues between timestamp and bigint
 Key: FLINK-30924
 URL: https://issues.apache.org/jira/browse/FLINK-30924
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.16.1
Reporter: Feng Jin


When casting between timestamp and bigint, the following exception is 
thrown: 
{code:java}
org.apache.flink.table.api.ValidationException: The cast from NUMERIC type to 
TIMESTAMP type is not allowed. It's recommended to use 
TO_TIMESTAMP(FROM_UNIXTIME(numeric_col)) instead, note the numeric is in 
seconds.
{code}
However, FROM_UNIXTIME converts using the local time zone, while TO_TIMESTAMP 
does not; it parses the string as if it were in UTC. As a result, the 
conversion produces a wrong value.

 

The following shows the result of a test:
{code:java}
Flink SQL> SET 'table.local-time-zone' = 'Asia/Shanghai';
Flink SQL> select TO_TIMESTAMP(FROM_UNIXTIME(0));

// result
                 EXPR$0
 1970-01-01 08:00:00.000
{code}

UNIX_TIMESTAMP(CAST(timestamp_col AS STRING)) has the same problem. 
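The mismatch can be reproduced outside Flink with plain java.time; the two helpers below model FROM_UNIXTIME (formats the epoch in the session time zone) and TO_TIMESTAMP (parses the string with no zone attached):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TzMismatch {
    static final DateTimeFormatter F = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Models FROM_UNIXTIME: epoch seconds rendered in the session time zone.
    public static String fromUnixtime(long epochSeconds, String zone) {
        return Instant.ofEpochSecond(epochSeconds).atZone(ZoneId.of(zone)).format(F);
    }

    // Models TO_TIMESTAMP: the string is parsed with no zone, so interpreting
    // it as UTC no longer yields the original epoch.
    public static long toTimestampEpochUtc(String s) {
        return LocalDateTime.parse(s, F).toEpochSecond(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        String s = fromUnixtime(0, "Asia/Shanghai");
        System.out.println(s);                      // 1970-01-01 08:00:00
        System.out.println(toTimestampEpochUtc(s)); // 28800, not 0: 8h leaked in
    }
}
```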





[jira] [Created] (FLINK-28345) Flink JDBC connector should check batch count before flush

2022-07-01 Thread Feng Jin (Jira)
Feng Jin created FLINK-28345:


 Summary: Flink JDBC connector should check batch count before flush
 Key: FLINK-28345
 URL: https://issues.apache.org/jira/browse/FLINK-28345
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Affects Versions: 1.14.5, 1.15.0
Reporter: Feng Jin


org.apache.flink.connector.jdbc.internal.JdbcOutputFormat#flush
{code:java}
@Override
public synchronized void flush() throws IOException {
    checkFlushException();

    for (int i = 0; i <= executionOptions.getMaxRetries(); i++) {
        try {
            attemptFlush();
            batchCount = 0;
            break;
            // ...
{code}
When flushing the batch, we should check that batchCount is greater than 0. 
Otherwise it can cause problems with drivers that do not support empty 
batches, such as the ClickHouse JDBC driver. 
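A minimal sketch of the proposed guard, with invented names standing in for JdbcOutputFormat's internals:

```java
public class FlushGuardSketch {
    public int batchCount = 0; // buffered rows, as in JdbcOutputFormat
    public int flushes = 0;    // counts stand-in attemptFlush() calls

    public void addToBatch() {
        batchCount++;
    }

    public void flush() {
        // Proposed check: skip the flush entirely when nothing is buffered,
        // so drivers that reject executing an empty batch are never called.
        if (batchCount == 0) {
            return;
        }
        flushes++;      // stands in for attemptFlush()
        batchCount = 0;
    }

    public static void main(String[] args) {
        FlushGuardSketch f = new FlushGuardSketch();
        f.flush();           // empty batch: skipped, no driver call
        f.addToBatch();
        f.flush();           // one buffered row: flushed
        System.out.println(f.flushes); // 1
    }
}
```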


