[jira] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-06-27 Thread jackylau (Jira)


[ https://issues.apache.org/jira/browse/FLINK-31664 ]


jackylau deleted comment on FLINK-31664:
--

was (Author: jackylau):
[~Sergey Nuyanzin] ok, thanks for your explanation. I will contact the author first.

> Add ARRAY_INTERSECT supported in SQL & Table API
> 
>
> Key: FLINK-31664
> URL: https://issues.apache.org/jira/browse/FLINK-31664
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-06-26 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17737181#comment-17737181
 ] 

jackylau commented on FLINK-31377:
--

hi [~Sergey Nuyanzin] [~twalthr], could anyone help review the PR again?

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
>
> you can reproduce this error as below; the reason is in ARRAY_CONTAINS
> {code:java}
> If the needle is a MAP NOT NULL and the array has a null element,
> getElementOrNull(ArrayData array, int pos) can only handle non-null
> elements, so it throws an exception:
> /* elementGetter =
> ArrayData.createElementGetter(needleDataType.getLogicalType()); */
> {code}
>  
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
>                     .onFieldsWithData(
>                             new Map[] {
>                                 null,
>                                 CollectionUtil.map(entry(1, "a"), entry(2, "b")),
>                                 CollectionUtil.map(entry(3, "c"), entry(4, "d")),
>                             },
>                             null)
>                     .andDataTypes(
>                             DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), DataTypes.STRING())),
>                             DataTypes.STRING())
>                     .testResult(
>                             $("f0").arrayContains(
>                                     CollectionUtil.map(entry(3, "c"), entry(4, "d"))),
>                             "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
>                             true,
>                             DataTypes.BOOLEAN()));
> }
> {code}
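The root cause quoted above — an element getter created for a non-nullable element type being applied to an array that contains nulls — can be sketched in plain Java. All names here are hypothetical stand-ins, not Flink's actual API:

```java
import java.util.function.IntFunction;

public class ElementGetterSketch {
    // Hypothetical stand-in for Flink's ArrayData: a boxed Integer array
    // where null marks a null element.
    static final Integer[] ARRAY = {null, 3, 4};

    // Getter generated for a NOT NULL element type: it reads the slot
    // unconditionally, so a null element blows up on unboxing.
    static final IntFunction<Integer> NOT_NULL_GETTER =
            pos -> Integer.valueOf(ARRAY[pos].intValue());

    // Null-aware getter, mirroring what a nullable-element getter must do:
    // check for null before reading the value.
    static final IntFunction<Integer> NULL_AWARE_GETTER =
            pos -> ARRAY[pos] == null ? null : ARRAY[pos];

    public static void main(String[] args) {
        System.out.println(NULL_AWARE_GETTER.apply(0)); // null
        System.out.println(NULL_AWARE_GETTER.apply(1)); // 3
        try {
            NOT_NULL_GETTER.apply(0);
        } catch (NullPointerException e) {
            System.out.println("null element breaks the non-null getter");
        }
    }
}
```

The fix in the linked PR is essentially to pick the getter based on the array's element nullability rather than the needle's type.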





[jira] [Commented] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-06-26 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17737180#comment-17737180
 ] 

jackylau commented on FLINK-31691:
--

hi [~Sergey Nuyanzin], if you review this pr 
[https://github.com/apache/flink/pull/22745] you will find I commented on the 
reason why a new pr is needed.

The first pr just aligns with Spark, but Spark has two behaviors 
(last_win/exception), while Calcite and Flink maps both only support last_win. 
So I chose not to break the current behavior for now. When we need to support 
it, I will start a mailing-list discussion about supporting the exception 
behavior for all map functions.
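The two duplicate-key behaviors mentioned above can be illustrated with plain Java (a sketch of the semantics, not Flink's implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapFromEntriesSketch {
    // last_win: a later entry with the same key silently overwrites the
    // earlier one, which is exactly what Map#put already does.
    static Map<Integer, String> lastWin(Object[][] entries) {
        Map<Integer, String> m = new LinkedHashMap<>();
        for (Object[] e : entries) {
            m.put((Integer) e[0], (String) e[1]);
        }
        return m;
    }

    // exception: Spark's alternative mode rejects duplicate keys instead.
    static Map<Integer, String> failOnDuplicate(Object[][] entries) {
        Map<Integer, String> m = new LinkedHashMap<>();
        for (Object[] e : entries) {
            if (m.putIfAbsent((Integer) e[0], (String) e[1]) != null) {
                throw new IllegalArgumentException("duplicate key: " + e[0]);
            }
        }
        return m;
    }

    public static void main(String[] args) {
        Object[][] dup = {{1, "a"}, {1, "b"}};
        System.out.println(lastWin(dup)); // {1=b}
    }
}
```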

> Add MAP_FROM_ENTRIES supported in SQL & Table API
> -
>
> Key: FLINK-31691
> URL: https://issues.apache.org/jira/browse/FLINK-31691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> map_from_entries(array_of_rows) - Returns a map created from an array of 
> rows with two fields. Note that the number of fields in a row should be 2 
> and the key of a row should not be null.
> Syntax:
> map_from_entries(array_of_rows)
> Arguments:
> array_of_rows: an array of rows with two fields.
> Returns:
> A map created from an array of rows with two fields. The number of fields 
> in a row should be 2 and the key of a row should not be null.
> Returns null if the argument is null.
> {code:sql}
> > SELECT map_from_entries(array(struct(1, 'a'), struct(2, 'b')));
>  {1:"a",2:"b"}{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries





[jira] [Comment Edited] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-06-26 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17737109#comment-17737109
 ] 

jackylau edited comment on FLINK-31691 at 6/26/23 11:49 AM:


hi [~snuyanzin], do you have time to help review this pr 
https://github.com/apache/flink/pull/22745?


was (Author: jackylau):
hi [~snuyanzin], do you have time to help review?

> Add MAP_FROM_ENTRIES supported in SQL & Table API
> -
>
> Key: FLINK-31691
> URL: https://issues.apache.org/jira/browse/FLINK-31691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> map_from_entries(array_of_rows) - Returns a map created from an array of 
> rows with two fields. Note that the number of fields in a row should be 2 
> and the key of a row should not be null.
> Syntax:
> map_from_entries(array_of_rows)
> Arguments:
> array_of_rows: an array of rows with two fields.
> Returns:
> A map created from an array of rows with two fields. The number of fields 
> in a row should be 2 and the key of a row should not be null.
> Returns null if the argument is null.
> {code:sql}
> > SELECT map_from_entries(array(struct(1, 'a'), struct(2, 'b')));
>  {1:"a",2:"b"}{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries





[jira] [Commented] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-06-26 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17737109#comment-17737109
 ] 

jackylau commented on FLINK-31691:
--

hi [~snuyanzin], do you have time to help review?

> Add MAP_FROM_ENTRIES supported in SQL & Table API
> -
>
> Key: FLINK-31691
> URL: https://issues.apache.org/jira/browse/FLINK-31691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> map_from_entries(array_of_rows) - Returns a map created from an array of 
> rows with two fields. Note that the number of fields in a row should be 2 
> and the key of a row should not be null.
> Syntax:
> map_from_entries(array_of_rows)
> Arguments:
> array_of_rows: an array of rows with two fields.
> Returns:
> A map created from an array of rows with two fields. The number of fields 
> in a row should be 2 and the key of a row should not be null.
> Returns null if the argument is null.
> {code:sql}
> > SELECT map_from_entries(array(struct(1, 'a'), struct(2, 'b')));
>  {1:"a",2:"b"}{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries





[jira] [Commented] (FLINK-32363) calcite 1.21 supports type coercion but flink don't enable it in validate

2023-06-16 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733367#comment-17733367
 ] 

jackylau commented on FLINK-32363:
--

[~libenchao] Thanks for your explanation.

> calcite 1.21 supports type coercion but flink don't enable it in validate
> -
>
> Key: FLINK-32363
> URL: https://issues.apache.org/jira/browse/FLINK-32363
> Project: Flink
>  Issue Type: Improvement
>Reporter: jackylau
>Priority: Major
>
> 1) Calcite 1.21 supports type coercion and enables it by default, while 
> Flink disables it.
> 2) Spark/MySQL can run the query below.
> 3) Although we can make it run via select count(distinct `if`(1>5, 'x', 
> cast(null as varchar))), I think we should enable coercion or offer a 
> config option to enable it.
>  
> {code:java}
> Flink SQL> select count(distinct `if`(1>5, 'x', null));
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.calcite.sql.validate.SqlValidatorException: Illegal use of 
> 'NULL'{code}
> {code:java}
> // it can run in spark
> spark-sql (default)> select count(distinct `if`(1>5, 'x', null)); 
> 0
> {code}
>  
> {code:java}
> private def createSqlValidator(catalogReader: CalciteCatalogReader) = {
>   val validator = new FlinkCalciteSqlValidator(
> operatorTable,
> catalogReader,
> typeFactory,
> SqlValidator.Config.DEFAULT
>   .withIdentifierExpansion(true)
>   .withDefaultNullCollation(FlinkPlannerImpl.defaultNullCollation)
>   .withTypeCoercionEnabled(false)
>   ) // Disable implicit type coercion for now.
>   validator
> } {code}





[jira] [Created] (FLINK-32363) calcite 1.21 supports type coercion but flink don't enable it in validate

2023-06-15 Thread jackylau (Jira)
jackylau created FLINK-32363:


 Summary: calcite 1.21 supports type coercion but flink don't 
enable it in validate
 Key: FLINK-32363
 URL: https://issues.apache.org/jira/browse/FLINK-32363
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


1) Calcite 1.21 supports type coercion and enables it by default, while Flink 
disables it.

2) Spark/MySQL can run the query below.

3) Although we can make it run via select count(distinct `if`(1>5, 'x', 
cast(null as varchar))), I think we should enable coercion or offer a config 
option to enable it.

 
{code:java}
Flink SQL> select count(distinct `if`(1>5, 'x', null));
[ERROR] Could not execute SQL statement. Reason:
org.apache.calcite.sql.validate.SqlValidatorException: Illegal use of 
'NULL'{code}
{code:java}
// it can run in spark
spark-sql (default)> select count(distinct `if`(1>5, 'x', null)); 
0
{code}
 
{code:java}
private def createSqlValidator(catalogReader: CalciteCatalogReader) = {
  val validator = new FlinkCalciteSqlValidator(
operatorTable,
catalogReader,
typeFactory,
SqlValidator.Config.DEFAULT
  .withIdentifierExpansion(true)
  .withDefaultNullCollation(FlinkPlannerImpl.defaultNullCollation)
  .withTypeCoercionEnabled(false)
  ) // Disable implicit type coercion for now.
  validator
} {code}
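If coercion were enabled, the change would center on the flag shown in the snippet above. A sketch, not a tested patch — and wiring the flag to a new Flink config option (rather than hard-coding `true`) is hypothetical:

```scala
// Sketch: let Calcite insert implicit casts such as NULL -> VARCHAR
// during validation by flipping the coercion flag.
SqlValidator.Config.DEFAULT
  .withIdentifierExpansion(true)
  .withDefaultNullCollation(FlinkPlannerImpl.defaultNullCollation)
  // previously false; ideally driven by a table config option so existing
  // plans keep their behavior unless users opt in
  .withTypeCoercionEnabled(true)
```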





[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-06-08 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17730507#comment-17730507
 ] 

jackylau commented on FLINK-26945:
--

hi [~twalthr], this pr has gone a long time without review. Will you have time 
to help review it?

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative, abs(numDays) days are added to {{startDate}}.
> If the result date overflows the date range, the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]
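The semantics quoted above map directly onto `java.time`; a minimal sketch (the helper name is hypothetical, not Flink's implementation):

```java
import java.time.LocalDate;

public class DateSubSketch {
    // date_sub(startDate, numDays): minusDays already handles a negative
    // numDays by adding days, matching the spec quoted above, and throws
    // DateTimeException on overflow of the supported date range.
    static LocalDate dateSub(String startDate, int numDays) {
        return LocalDate.parse(startDate).minusDays(numDays);
    }

    public static void main(String[] args) {
        System.out.println(dateSub("2016-07-30", 1));  // 2016-07-29
        System.out.println(dateSub("2016-07-30", -1)); // 2016-07-31
    }
}
```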





[jira] [Comment Edited] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-06-06 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729616#comment-17729616
 ] 

jackylau edited comment on FLINK-31664 at 6/6/23 7:01 AM:
--

[~Sergey Nuyanzin] ok, thanks for your explanation. I will contact the author first.


was (Author: jackylau):
[~Sergey Nuyanzin] ok, thanks for your explanation.

> Add ARRAY_INTERSECT supported in SQL & Table API
> 
>
> Key: FLINK-31664
> URL: https://issues.apache.org/jira/browse/FLINK-31664
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Commented] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-06-06 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729616#comment-17729616
 ] 

jackylau commented on FLINK-31664:
--

[~Sergey Nuyanzin] ok, thanks for your explanation.

> Add ARRAY_INTERSECT supported in SQL & Table API
> 
>
> Key: FLINK-31664
> URL: https://issues.apache.org/jira/browse/FLINK-31664
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Commented] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-06-06 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729606#comment-17729606
 ] 

jackylau commented on FLINK-31664:
--

hi [~Sergey Nuyanzin], thanks for your suggestion.

1) I created this issue earlier, but I do not have permission to assign it to myself.

2) The pr submitted by the other author has an implementation problem: we 
cannot use a set directly, so I submitted another pr.

> Add ARRAY_INTERSECT supported in SQL & Table API
> 
>
> Key: FLINK-31664
> URL: https://issues.apache.org/jira/browse/FLINK-31664
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Commented] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-05-30 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727389#comment-17727389
 ] 

jackylau commented on FLINK-31902:
--

I will fix it in Calcite first: 
https://issues.apache.org/jira/browse/CALCITE-5731

> cast expr to type with not null should throw exception like calcite
> ---
>
> Key: FLINK-31902
> URL: https://issues.apache.org/jira/browse/FLINK-31902
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // In Calcite, casting to a NOT NULL type throws a parse exception:
> expr("cast(x as int ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int ^not^ null array)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int array ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> // while Flink parses it successfully:
> expr("cast(x as array)")
> .ok("(?s).*Encountered \"not\" at .*");
> expr("cast(x as array not null)")
> .ok("(?s).*Encountered \"not\" at .*");{code}
> the reason is that Flink adds extended types, which support NOT NULL:
> {code:java}
> // code placeholder
> <#-- additional types are included here -->
> <#-- put custom data types in front of Calcite core data types -->
> <#list (parser.dataTypeParserMethods!default.parser.dataTypeParserMethods) as 
> method>
> LOOKAHEAD(2)
> typeNameSpec = ${method}
> |
>  {code}





[jira] [Commented] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-05-25 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726140#comment-17726140
 ] 

jackylau commented on FLINK-31908:
--

I fixed it in Calcite; it will be fixed here when Calcite is upgraded to 1.35.

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[] {1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE>)",
>                             new Double[] {1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
> Calcite bug





[jira] [Commented] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-05-04 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719236#comment-17719236
 ] 

jackylau commented on FLINK-31908:
--

I found this while supporting Flink built-in functions like array_union 
[https://github.com/apache/flink/pull/22483/files]

For example:
f0 is [1, 2, null] and its type is INTEGER ARRAY
array_union(f0, [1.0E0, NULL, 4.0E0]) should return [1.0, 2.0, NULL, 4.0], 
but it does not.

The reason is the following:
{code:java}
[1, 2, null] => INTEGER ARRAY
[1.0E0, NULL, 4.0E0] => DOUBLE NOT NULL ARRAY NOT NULL

The common type of the two arguments is DOUBLE ARRAY, so Flink inserts an
implicit cast for f0. The cast should be cast(f0 as DOUBLE ARRAY), but it
returns cast(f0 as DOUBLE NOT NULL ARRAY).

This casts [1, 2, null] -> [1.0, 2.0, 0.0] in the runtime eval method,
making the result incorrect. {code}
Why does it return cast(f0 as DOUBLE NOT NULL ARRAY)?
The root cause is that Calcite does not propagate the nullability of the array element.
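The corruption described above can be sketched in plain Java, comparing what a nullable-element cast should produce against what a (wrongly) non-null-element cast produces:

```java
import java.util.Arrays;

public class NullableCastSketch {
    // cast(f0 as DOUBLE ARRAY): element type stays nullable, null survives.
    static Double[] castNullable(Integer[] f0) {
        return Arrays.stream(f0)
                .map(x -> x == null ? null : x.doubleValue())
                .toArray(Double[]::new);
    }

    // cast(f0 as DOUBLE NOT NULL ARRAY): generated code for a non-null
    // element writes into a primitive slot, so null silently becomes 0.0.
    static double[] castNotNull(Integer[] f0) {
        double[] out = new double[f0.length];
        for (int i = 0; i < f0.length; i++) {
            out[i] = f0[i] == null ? 0.0 : f0[i];
        }
        return out;
    }

    public static void main(String[] args) {
        Integer[] f0 = {1, 2, null};
        System.out.println(Arrays.toString(castNullable(f0))); // [1.0, 2.0, null]
        System.out.println(Arrays.toString(castNotNull(f0)));  // [1.0, 2.0, 0.0]
    }
}
```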

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[] {1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE>)",
>                             new Double[] {1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
> Calcite bug





[jira] [Comment Edited] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-05-04 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719236#comment-17719236
 ] 

jackylau edited comment on FLINK-31908 at 5/4/23 9:40 AM:
--

[~jark] 

I found this while supporting Flink built-in functions like array_union 
[https://github.com/apache/flink/pull/22483/files]

For example:
f0 is [1, 2, null] and its type is INTEGER ARRAY
array_union(f0, [1.0E0, NULL, 4.0E0]) should return [1.0, 2.0, NULL, 4.0], 
but it does not.

The reason is the following:
{code:java}
[1, 2, null] => INTEGER ARRAY
[1.0E0, NULL, 4.0E0] => DOUBLE NOT NULL ARRAY NOT NULL

The common type of the two arguments is DOUBLE ARRAY, so Flink inserts an
implicit cast for f0. The cast should be cast(f0 as DOUBLE ARRAY), but it
returns cast(f0 as DOUBLE NOT NULL ARRAY).

This casts [1, 2, null] -> [1.0, 2.0, 0.0] in the runtime eval method,
making the result incorrect. {code}
Why does it return cast(f0 as DOUBLE NOT NULL ARRAY)?
The root cause is that Calcite does not propagate the nullability of the array element.


was (Author: jackylau):
I found this while supporting Flink built-in functions like array_union 
[https://github.com/apache/flink/pull/22483/files]

For example:
f0 is [1, 2, null] and its type is INTEGER ARRAY
array_union(f0, [1.0E0, NULL, 4.0E0]) should return [1.0, 2.0, NULL, 4.0], 
but it does not.

The reason is the following:
{code:java}
[1, 2, null] => INTEGER ARRAY
[1.0E0, NULL, 4.0E0] => DOUBLE NOT NULL ARRAY NOT NULL

The common type of the two arguments is DOUBLE ARRAY, so Flink inserts an
implicit cast for f0. The cast should be cast(f0 as DOUBLE ARRAY), but it
returns cast(f0 as DOUBLE NOT NULL ARRAY).

This casts [1, 2, null] -> [1.0, 2.0, 0.0] in the runtime eval method,
making the result incorrect. {code}
Why does it return cast(f0 as DOUBLE NOT NULL ARRAY)?
The root cause is that Calcite does not propagate the nullability of the array element.

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[] {1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE>)",
>                             new Double[] {1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
> Calcite bug





[jira] [Commented] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716062#comment-17716062
 ] 

jackylau commented on FLINK-31908:
--

hi [~jark], I will fix it in Calcite first: 
https://issues.apache.org/jira/browse/CALCITE-5674

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[] {1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE>)",
>                             new Double[] {1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
> Calcite bug





[jira] [Comment Edited] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715705#comment-17715705
 ] 

jackylau edited comment on FLINK-31908 at 4/24/23 10:20 AM:


hi [~jark], cast should only change the type; nullability should follow the 
original expr's nullability.


was (Author: jackylau):
cast should only change the type; nullability should follow the original 
expr's nullability.

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[] {1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE>)",
>                             new Double[] {1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
> Calcite bug





[jira] [Commented] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715705#comment-17715705
 ] 

jackylau commented on FLINK-31908:
--

cast should only change the type; nullability should follow the original 
expr's nullability.

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[] {1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE>)",
>                             new Double[] {1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
> Calcite bug





[jira] [Commented] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715691#comment-17715691
 ] 

jackylau commented on FLINK-31902:
--

the standard grammar is here, from ISO SQL:2016:
{code:java}
<cast specification> ::=
  CAST <left paren> <cast operand> AS <cast target> <right paren>

<cast operand> ::=
  <value expression>
  | <implicitly typed value specification>

<cast target> ::=
  <domain name>
  | <data type>

<data type> ::=
  <predefined type>
  | <row type>
  | <path-resolved user-defined type name>
  | <reference type>
  | <collection type>

<collection type> ::=
  <array type>
  | <multiset type>

<array type> ::=
  <data type> ARRAY
  [ <left bracket or trigraph> <maximum cardinality> <right bracket or trigraph> ] {code}

> cast expr to type with not null should throw exception like calcite
> ---
>
> Key: FLINK-31902
> URL: https://issues.apache.org/jira/browse/FLINK-31902
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // In Calcite, casting to a NOT NULL type throws a parse exception:
> expr("cast(x as int ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int ^not^ null array)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int array ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> // while Flink parses it successfully:
> expr("cast(x as array)")
> .ok("(?s).*Encountered \"not\" at .*");
> expr("cast(x as array not null)")
> .ok("(?s).*Encountered \"not\" at .*");{code}
> the reason is that Flink adds extended types, which support NOT NULL:
> {code:java}
> // code placeholder
> <#-- additional types are included here -->
> <#-- put custom data types in front of Calcite core data types -->
> <#list (parser.dataTypeParserMethods!default.parser.dataTypeParserMethods) as 
> method>
> LOOKAHEAD(2)
> typeNameSpec = ${method}
> |
>  {code}





[jira] [Commented] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715688#comment-17715688
 ] 

jackylau commented on FLINK-31902:
--

hi [~jark], thanks for your response. It is not SQL standard: NOT NULL is 
constraint information in DDL and should not be changed by CAST under the SQL 
standard.

It leads to strange behavior, as in https://issues.apache.org/jira/browse/FLINK-31908

> cast expr to type with not null should throw exception like calcite
> ---
>
> Key: FLINK-31902
> URL: https://issues.apache.org/jira/browse/FLINK-31902
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // In Calcite, casting to a NOT NULL type throws a parse exception:
> expr("cast(x as int ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int ^not^ null array)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int array ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> // while Flink parses it successfully:
> expr("cast(x as array)")
> .ok("(?s).*Encountered \"not\" at .*");
> expr("cast(x as array not null)")
> .ok("(?s).*Encountered \"not\" at .*");{code}
> the reason is that Flink adds extended types, which support NOT NULL:
> {code:java}
> // code placeholder
> <#-- additional types are included here -->
> <#-- put custom data types in front of Calcite core data types -->
> <#list (parser.dataTypeParserMethods!default.parser.dataTypeParserMethods) as 
> method>
> LOOKAHEAD(2)
> typeNameSpec = ${method}
> |
>  {code}





[jira] [Commented] (FLINK-31904) fix current several flink nullable type handle

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715626#comment-17715626
 ] 

jackylau commented on FLINK-31904:
--

hi [~danny0405] [~twalthr] [~snuyanzin], I created three subtasks to fix these 
problems.

Do you have time to take a look?

> fix current several flink nullable type handle
> ---
>
> Key: FLINK-31904
> URL: https://issues.apache.org/jira/browse/FLINK-31904
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>






[jira] [Updated] (FLINK-31906) typeof should only return type exclude nullable

2023-04-24 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31906:
-
Description: 
nullability is a table-level constraint, which can only be shown by displaying 
the schema

 

pg [https://www.postgresql.org/docs/9.3/functions-info.html]

spark: https://spark.apache.org/docs/latest/api/sql/index.html#typeof
{code:java}
select typeof(1Y), typeof(1S), typeof(1), typeof(1L)
-- !query schema
struct<typeof(1Y):string,typeof(1S):string,typeof(1):string,typeof(1L):string>
-- !query output
tinyint  smallint  int  bigint


-- !query
select typeof(cast(1.0 as float)), typeof(1.0D), typeof(1.2)
-- !query schema
struct<typeof(CAST(1.0 AS FLOAT)):string,typeof(1.0):string,typeof(1.2):string>
-- !query output
float  double  decimal(2,1) {code}

  was:
nullable is table level constraint, which can only show by showing schema

 

pg [https://www.postgresql.org/docs/9.3/functions-info.html]

spark :
{code:java}
// code placeholder
select typeof(1Y), typeof(1S), typeof(1), typeof(1L)
-- !query schema
struct
-- !query output
tinyintsmallint   intbigint


-- !query
select typeof(cast(1.0 as float)), typeof(1.0D), typeof(1.2)
-- !query schema
struct
-- !query output
float  double decimal(2,1) {code}


> typeof should only return type exclude nullable 
> 
>
> Key: FLINK-31906
> URL: https://issues.apache.org/jira/browse/FLINK-31906
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> nullability is a table-level constraint, which can only be shown via the table schema
>  
> pg [https://www.postgresql.org/docs/9.3/functions-info.html]
> spark :https://spark.apache.org/docs/latest/api/sql/index.html#typeof
> {code:java}
> // code placeholder
> select typeof(1Y), typeof(1S), typeof(1), typeof(1L)
> -- !query schema
> struct<typeof(1Y):string,typeof(1S):string,typeof(1):string,typeof(1L):string>
> -- !query output
> tinyint   smallint   int   bigint
> -- !query
> select typeof(cast(1.0 as float)), typeof(1.0D), typeof(1.2)
> -- !query schema
> struct<typeof(CAST(1.0 AS FLOAT)):string,typeof(1.0):string,typeof(1.2):string>
> -- !query output
> float   double   decimal(2,1) {code}
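A minimal sketch of the proposed behaviour (a hypothetical Python illustration, not Flink code): TYPEOF should render only the logical type and drop any NOT NULL constraint, since nullability is a schema-level property.

```python
def typeof(declared_type: str) -> str:
    """Sketch of the proposed TYPEOF semantics: report the logical type only.

    Nullability is a table-level constraint, so any NOT NULL suffix is
    stripped before the type string is returned.
    """
    return declared_type.replace(" NOT NULL", "")

# The constraint disappears at every nesting level.
print(typeof("INT NOT NULL"))                  # INT
print(typeof("ARRAY<INT NOT NULL> NOT NULL"))  # ARRAY<INT>
```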



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31908:


 Summary: cast expr to type with not null  should not change 
nullable of expr
 Key: FLINK-31908
 URL: https://issues.apache.org/jira/browse/FLINK-31908
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau


{code:java}
Stream<TestSetSpec> getTestSetSpecs() {
    return Stream.of(
            TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
                    .onFieldsWithData(new Integer[] {1, 2}, 3)
                    .andDataTypes(DataTypes.ARRAY(INT()), INT())
                    .testSqlResult(
                            "CAST(f0 AS ARRAY<DOUBLE NOT NULL>)",
                            new Double[] {1.0d, 2.0d},
                            DataTypes.ARRAY(DOUBLE().notNull())));
} {code}
but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a 
Calcite bug



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31906) typeof should only return type exclude nullable

2023-04-24 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31906:
-
Description: 
nullability is a table-level constraint, which can only be shown via the table schema

 

pg [https://www.postgresql.org/docs/9.3/functions-info.html]

spark :
{code:java}
// code placeholder
select typeof(1Y), typeof(1S), typeof(1), typeof(1L)
-- !query schema
struct<typeof(1Y):string,typeof(1S):string,typeof(1):string,typeof(1L):string>
-- !query output
tinyint   smallint   int   bigint


-- !query
select typeof(cast(1.0 as float)), typeof(1.0D), typeof(1.2)
-- !query schema
struct<typeof(CAST(1.0 AS FLOAT)):string,typeof(1.0):string,typeof(1.2):string>
-- !query output
float   double   decimal(2,1) {code}

> typeof should only return type exclude nullable 
> 
>
> Key: FLINK-31906
> URL: https://issues.apache.org/jira/browse/FLINK-31906
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> nullability is a table-level constraint, which can only be shown via the table schema
>  
> pg [https://www.postgresql.org/docs/9.3/functions-info.html]
> spark :
> {code:java}
> // code placeholder
> select typeof(1Y), typeof(1S), typeof(1), typeof(1L)
> -- !query schema
> struct<typeof(1Y):string,typeof(1S):string,typeof(1):string,typeof(1L):string>
> -- !query output
> tinyint   smallint   int   bigint
> -- !query
> select typeof(cast(1.0 as float)), typeof(1.0D), typeof(1.2)
> -- !query schema
> struct<typeof(CAST(1.0 AS FLOAT)):string,typeof(1.0):string,typeof(1.2):string>
> -- !query output
> float   double   decimal(2,1) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31906) typeof should only return type exclude nullable

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31906:


 Summary: typeof should only return type exclude nullable 
 Key: FLINK-31906
 URL: https://issues.apache.org/jira/browse/FLINK-31906
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31904) fix several current Flink nullable type handling issues

2023-04-24 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31904:
-
Description: (was: 1. https://issues.apache.org/jira/browse/FLINK-31902)

> fix several current Flink nullable type handling issues
> ---
>
> Key: FLINK-31904
> URL: https://issues.apache.org/jira/browse/FLINK-31904
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31904) fix several current Flink nullable type handling issues

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31904:


 Summary: fix several current Flink nullable type handling issues
 Key: FLINK-31904
 URL: https://issues.apache.org/jira/browse/FLINK-31904
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31904) fix several current Flink nullable type handling issues

2023-04-24 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31904:
-
Description: 1. https://issues.apache.org/jira/browse/FLINK-31902

> fix several current Flink nullable type handling issues
> ---
>
> Key: FLINK-31904
> URL: https://issues.apache.org/jira/browse/FLINK-31904
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> 1. https://issues.apache.org/jira/browse/FLINK-31902



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715605#comment-17715605
 ] 

jackylau commented on FLINK-31902:
--

hi [~danny0405] [~snuyanzin], please take a look

> cast expr to type with not null should throw exception like calcite
> ---
>
> Key: FLINK-31902
> URL: https://issues.apache.org/jira/browse/FLINK-31902
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // casting to a type with NOT NULL in Calcite will throw an exception
> expr("cast(x as int ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int ^not^ null array)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int array ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*"); 
> // while Flink does not
> expr("cast(x as array)")
> .ok("(?s).*Encountered \"not\" at .*");
> expr("cast(x as array not null)")
> .ok("(?s).*Encountered \"not\" at .*");{code}
> the reason is that Flink adds extended types, which support NOT NULL
> {code:java}
> // code placeholder
> <#-- additional types are included here -->
> <#-- put custom data types in front of Calcite core data types -->
> <#list (parser.dataTypeParserMethods!default.parser.dataTypeParserMethods) as 
> method>
> LOOKAHEAD(2)
> typeNameSpec = ${method}
> |
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-04-24 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31902:
-
Description: 
{code:java}
// casting to a type with NOT NULL in Calcite will throw an exception

expr("cast(x as int ^not^ null)")
.fails("(?s).*Encountered \"not\" at .*");
expr("cast(x as int ^not^ null array)")
.fails("(?s).*Encountered \"not\" at .*");
expr("cast(x as int array ^not^ null)")
.fails("(?s).*Encountered \"not\" at .*"); 

// while Flink does not
expr("cast(x as array)")
.ok("(?s).*Encountered \"not\" at .*");
expr("cast(x as array not null)")
.ok("(?s).*Encountered \"not\" at .*");{code}
the reason is that Flink adds extended types, which support NOT NULL
{code:java}
// code placeholder
<#-- additional types are included here -->
<#-- put custom data types in front of Calcite core data types -->
<#list (parser.dataTypeParserMethods!default.parser.dataTypeParserMethods) as 
method>
LOOKAHEAD(2)
typeNameSpec = ${method}
|
 {code}

> cast expr to type with not null should throw exception like calcite
> ---
>
> Key: FLINK-31902
> URL: https://issues.apache.org/jira/browse/FLINK-31902
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // casting to a type with NOT NULL in Calcite will throw an exception
> expr("cast(x as int ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int ^not^ null array)")
> .fails("(?s).*Encountered \"not\" at .*");
> expr("cast(x as int array ^not^ null)")
> .fails("(?s).*Encountered \"not\" at .*"); 
> // while Flink does not
> expr("cast(x as array)")
> .ok("(?s).*Encountered \"not\" at .*");
> expr("cast(x as array not null)")
> .ok("(?s).*Encountered \"not\" at .*");{code}
> the reason is that Flink adds extended types, which support NOT NULL
> {code:java}
> // code placeholder
> <#-- additional types are included here -->
> <#-- put custom data types in front of Calcite core data types -->
> <#list (parser.dataTypeParserMethods!default.parser.dataTypeParserMethods) as 
> method>
> LOOKAHEAD(2)
> typeNameSpec = ${method}
> |
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31902:


 Summary: cast expr to type with not null should throw exception 
like calcite
 Key: FLINK-31902
 URL: https://issues.apache.org/jira/browse/FLINK-31902
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31677) Add MAP_ENTRIES supported in SQL & Table API

2023-04-20 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17714539#comment-17714539
 ] 

jackylau commented on FLINK-31677:
--

hi [~Sergey Nuyanzin], do you have time to review it?

 

> Add MAP_ENTRIES supported in SQL & Table API
> 
>
> Key: FLINK-31677
> URL: https://issues.apache.org/jira/browse/FLINK-31677
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> map_entries(map) - Returns an unordered array of all entries in the given map.
> Syntax:
> map_entries(map)
> Arguments:
> map: A MAP to be handled.
> Returns:
> Returns an unordered array of all entries in the given map.
> Returns null if the argument is null
> {code:sql}
> > SELECT map_entries(map[1, 'a', 2, 'b']);
>  [(1,"a"),(2,"b")]{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_entries
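The contract described above can be sketched in Python (a hypothetical model of the function's semantics, not the Flink implementation):

```python
def map_entries(m):
    """Model of MAP_ENTRIES: return the entries of a map as an array of
    (key, value) rows; NULL input yields NULL."""
    if m is None:
        return None
    return [(k, v) for k, v in m.items()]

print(map_entries({1: "a", 2: "b"}))  # [(1, 'a'), (2, 'b')]
```

Note that Python dicts preserve insertion order, while the SQL function explicitly makes no ordering guarantee.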



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31682) map_from_arrays should take whether allow duplicate keys and null key into consideration

2023-04-20 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17714538#comment-17714538
 ] 

jackylau commented on FLINK-31682:
--

hi [~Sergey Nuyanzin], I added support for this in 
https://issues.apache.org/jira/browse/FLINK-31691; once it is merged, I will fix 
this.

do you have time to review it?

> map_from_arrays should take whether allow duplicate keys and null key into 
> consideration
> 
>
> Key: FLINK-31682
> URL: https://issues.apache.org/jira/browse/FLINK-31682
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> after researching Spark/Presto/MaxCompute's map_from_arrays/map_from_entries, 
> they all support duplicate keys and, for the most part, null keys
>  
> spark https://github.com/apache/spark/pull/21258/files
> maxcompute 
> [https://www.alibabacloud.com/help/en/maxcompute/latest/complex-type-functions#section-7ue-e91-m0s]
> presto https://prestodb.io/docs/current/functions/map.html
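The duplicate-key behaviour under discussion can be sketched as follows (a hypothetical Python model of a last-wins deduplication policy, such as Spark's LAST_WIN mode; not Flink's actual implementation):

```python
def map_from_arrays(keys, values):
    """Model of MAP_FROM_ARRAYS with duplicate keys allowed (last one wins)."""
    if keys is None or values is None:
        return None
    if len(keys) != len(values):
        raise ValueError("keys and values must have the same length")
    result = {}
    for k, v in zip(keys, values):
        result[k] = v  # a later duplicate key overwrites the earlier entry
    return result

print(map_from_arrays([1, 1, 2], ["a", "b", "c"]))  # {1: 'b', 2: 'c'}
```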



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31751) array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is not correct

2023-04-14 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712262#comment-17712262
 ] 

jackylau commented on FLINK-31751:
--

The inputTypeStrategy/outputTypeStrategy infers the function argument types, and 
the return type is derived from the types inferred by the inputTypeStrategy. So 
it is not a problem.

 

Closing it.

> array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is 
> not correct
> 
>
> Key: FLINK-31751
> URL: https://issues.apache.org/jira/browse/FLINK-31751
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> like array return type
> Type strategy that returns a \{@link DataTypes#ARRAY(DataType)} with element 
> type equal to the type of the first argument, which does not match Calcite 
> semantics.
> for example
> {code:java}
> ARRAY<INT> and ARRAY<INT> NOT NULL
> it should return  ARRAY<INT> instead of ARRAY<INT NOT NULL>{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31751) array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is not correct

2023-04-14 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau closed FLINK-31751.

Resolution: Not A Problem

> array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is 
> not correct
> 
>
> Key: FLINK-31751
> URL: https://issues.apache.org/jira/browse/FLINK-31751
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> like array return type
> Type strategy that returns a \{@link DataTypes#ARRAY(DataType)} with element 
> type equal to the type of the first argument, which does not match Calcite 
> semantics.
> for example
> {code:java}
> ARRAY<INT> and ARRAY<INT> NOT NULL
> it should return  ARRAY<INT> instead of ARRAY<INT NOT NULL>{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31751) array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is not correct

2023-04-07 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709612#comment-17709612
 ] 

jackylau commented on FLINK-31751:
--

hi [~twalthr] [~jark] what do you think?

> array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is 
> not correct
> 
>
> Key: FLINK-31751
> URL: https://issues.apache.org/jira/browse/FLINK-31751
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> like array return type
> Type strategy that returns a \{@link DataTypes#ARRAY(DataType)} with element 
> type equal to the type of the first argument, which does not match Calcite 
> semantics.
> for example
> {code:java}
> ARRAY<INT> and ARRAY<INT> NOT NULL
> it should return  ARRAY<INT> instead of ARRAY<INT NOT NULL>{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31751) array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is not correct

2023-04-07 Thread jackylau (Jira)
jackylau created FLINK-31751:


 Summary: array return type SpecificTypeStrategies.ARRAY and 
ifThenElse return type is not correct
 Key: FLINK-31751
 URL: https://issues.apache.org/jira/browse/FLINK-31751
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


like array return type

Type strategy that returns a \{@link DataTypes#ARRAY(DataType)} with element 
type equal to the type of the first argument, which does not match Calcite 
semantics.

for example
{code:java}
ARRAY<INT> and ARRAY<INT> NOT NULL
it should return  ARRAY<INT> instead of ARRAY<INT NOT NULL>{code}
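The nullability rule the ticket argues for — the merged element type should be nullable if any input element type is nullable, rather than blindly copying the first argument — can be sketched as a hypothetical Python model (not the planner's actual type strategy code):

```python
def merge_element_type(types):
    """Least-restrictive merge of (name, nullable) element types: keep the
    common type name, and make the result nullable if any input is nullable."""
    names = {name for name, _ in types}
    if len(names) != 1:
        raise ValueError("no common type in this simplified sketch")
    nullable = any(n for _, n in types)
    return (names.pop(), nullable)

print(merge_element_type([("INT", True), ("INT", False)]))  # ('INT', True)
```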



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31691:
-
Description: 
map_from_entries(array_of_rows) - Returns a map created from an array of rows 
with two fields. Note that the number of fields in each row should be 2 and the 
key of a row should not be null.

Syntax:
map_from_entries(array_of_rows)

Arguments:
array_of_rows: an array of rows with two fields.

Returns:

Returns a map created from an array of rows with two fields. Note that the 
number of fields in each row should be 2 and the key of a row should 
not be null.

Returns null if the argument is null
{code:sql}
> SELECT map_from_entries(ARRAY[(1, 'a'), (2, 'b')]);
 {1 -> 'a', 2 -> 'b'}{code}
See also
presto [https://prestodb.io/docs/current/functions/map.html]

spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries

> Add MAP_FROM_ENTRIES supported in SQL & Table API
> -
>
> Key: FLINK-31691
> URL: https://issues.apache.org/jira/browse/FLINK-31691
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> map_from_entries(array_of_rows) - Returns a map created from an array of rows 
> with two fields. Note that the number of fields in each row should be 2 and the 
> key of a row should not be null.
> Syntax:
> map_from_entries(array_of_rows)
> Arguments:
> array_of_rows: an array of rows with two fields.
> Returns:
> Returns a map created from an array of rows with two fields. Note that the 
> number of fields in each row should be 2 and the key of a row should 
> not be null.
> Returns null if the argument is null
> {code:sql}
> > SELECT map_from_entries(ARRAY[(1, 'a'), (2, 'b')]);
>  {1 -> 'a', 2 -> 'b'}{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries
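The stated constraints — two fields per row, no null keys, NULL in yields NULL out — can be sketched in Python (a hypothetical model of the contract, not the Flink implementation):

```python
def map_from_entries(entries):
    """Model of MAP_FROM_ENTRIES: build a map from an array of 2-field rows.

    NULL input yields NULL; a row with a NULL key, or a field count other
    than 2, is rejected.
    """
    if entries is None:
        return None
    result = {}
    for row in entries:
        if len(row) != 2:
            raise ValueError("each row must have exactly 2 fields")
        key, value = row
        if key is None:
            raise ValueError("map key must not be null")
        result[key] = value
    return result

print(map_from_entries([(1, "a"), (2, "b")]))  # {1: 'a', 2: 'b'}
```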



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)
jackylau created FLINK-31691:


 Summary: Add MAP_FROM_ENTRIES supported in SQL & Table API
 Key: FLINK-31691
 URL: https://issues.apache.org/jira/browse/FLINK-31691
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707755#comment-17707755
 ] 

jackylau commented on FLINK-31602:
--

Merged to master as 
[a1e4ba2a0ac39a667b9c3169f254253bd98330b8|https://github.com/apache/flink/commit/a1e4ba2a0ac39a667b9c3169f254253bd98330b8]
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13524990]

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) index of the first 
> element of the array as long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns the position of the first occurrence of element in the given array as 
> long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments are null
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]
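The three return cases described above (1-based hit, 0 on miss, NULL on NULL arguments) can be sketched in Python (a hypothetical model of the semantics, not the Flink implementation):

```python
def array_position(array, element):
    """Model of ARRAY_POSITION: 1-based index of the first occurrence,
    0 if the element is absent, NULL if either argument is NULL."""
    if array is None or element is None:
        return None
    for index, item in enumerate(array, start=1):
        if item == element:
            return index
    return 0

print(array_position([3, 2, 1], 1))  # 3
print(array_position([3, 2, 1], 5))  # 0
```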



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707755#comment-17707755
 ] 

jackylau edited comment on FLINK-31602 at 4/3/23 4:06 AM:
--

Merged to master as 
[a1e4ba2a0ac39a667b9c3169f254253bd98330b8|https://github.com/apache/flink/commit/a1e4ba2a0ac39a667b9c3169f254253bd98330b8]


was (Author: jackylau):
Merged to master as 
[a1e4ba2a0ac39a667b9c3169f254253bd98330b8|https://github.com/apache/flink/commit/a1e4ba2a0ac39a667b9c3169f254253bd98330b8]
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13524990]

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) index of the first 
> element of the array as long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns the position of the first occurrence of element in the given array as 
> long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments are null
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau closed FLINK-31602.

Resolution: Fixed

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) index of the first 
> element of the array as long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns the position of the first occurrence of element in the given array as 
> long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments are null
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31682) map_from_arrays should take whether allow duplicate keys and null key into consideration

2023-03-31 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31682:
-
Description: 
after researching Spark/Presto/MaxCompute's map_from_arrays/map_from_entries, 
they all support duplicate keys and, for the most part, null keys

 

spark https://github.com/apache/spark/pull/21258/files

maxcompute 
[https://www.alibabacloud.com/help/en/maxcompute/latest/complex-type-functions#section-7ue-e91-m0s]

presto https://prestodb.io/docs/current/functions/map.html

  was:after research the spark/presto/maxcompute about 
map_from_arrays/map_from_entries, there all support duplicate keys and null key 
 for the most part


> map_from_arrays should take whether allow duplicate keys and null key into 
> consideration
> 
>
> Key: FLINK-31682
> URL: https://issues.apache.org/jira/browse/FLINK-31682
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> after researching Spark/Presto/MaxCompute's map_from_arrays/map_from_entries, 
> they all support duplicate keys and, for the most part, null keys
>  
> spark https://github.com/apache/spark/pull/21258/files
> maxcompute 
> [https://www.alibabacloud.com/help/en/maxcompute/latest/complex-type-functions#section-7ue-e91-m0s]
> presto https://prestodb.io/docs/current/functions/map.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31682) map_from_arrays should take whether allow duplicate keys and null key into consideration

2023-03-31 Thread jackylau (Jira)
jackylau created FLINK-31682:


 Summary: map_from_arrays should take whether allow duplicate keys 
and null key into consideration
 Key: FLINK-31682
 URL: https://issues.apache.org/jira/browse/FLINK-31682
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


after researching Spark/Presto/MaxCompute's map_from_arrays/map_from_entries, 
they all support duplicate keys and, for the most part, null keys



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31677) Add MAP_ENTRIES supported in SQL & Table API

2023-03-30 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31677:
-
Description: 
map_entries(map) - Returns an unordered array of all entries in the given map.

Syntax:
map_entries(map)

Arguments:
map: A MAP to be handled.

Returns:

Returns an unordered array of all entries in the given map.

Returns null if the argument is null
{code:sql}

> SELECT map_entries(map[1, 'a', 2, 'b']);
 [(1,"a"),(2,"b")]{code}
See also
presto [https://prestodb.io/docs/current/functions/map.html]

spark https://spark.apache.org/docs/latest/api/sql/index.html#map_entries

> Add MAP_ENTRIES supported in SQL & Table API
> 
>
> Key: FLINK-31677
> URL: https://issues.apache.org/jira/browse/FLINK-31677
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> map_entries(map) - Returns an unordered array of all entries in the given map.
> Syntax:
> map_entries(map)
> Arguments:
> map: A MAP to be handled.
> Returns:
> Returns an unordered array of all entries in the given map.
> Returns null if the argument is null
> {code:sql}
> > SELECT map_entries(map[1, 'a', 2, 'b']);
>  [(1,"a"),(2,"b")]{code}
> See also
> presto [https://prestodb.io/docs/current/functions/map.html]
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_entries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31677) Add MAP_ENTRIES supported in SQL & Table API

2023-03-30 Thread jackylau (Jira)
jackylau created FLINK-31677:


 Summary: Add MAP_ENTRIES supported in SQL & Table API
 Key: FLINK-31677
 URL: https://issues.apache.org/jira/browse/FLINK-31677
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31663) Add ARRAY_EXCEPT supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17706693#comment-17706693
 ] 

jackylau commented on FLINK-31663:
--

should link to https://issues.apache.org/jira/browse/FLINK-22484

> Add ARRAY_EXCEPT supported in SQL & Table API
> -
>
> Key: FLINK-31663
> URL: https://issues.apache.org/jira/browse/FLINK-31663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31118) Add ARRAY_UNION supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31118:
-
Description: 
Returns an array of the elements in the union of the two input arrays, without 
duplicates.

Syntax:
array_union(array1, array2)

Arguments:
array1, array2: the ARRAYs to be combined.

Returns:

An ARRAY. If value is NULL, the result is NULL. 
Examples:
{code:sql}
> SELECT array_union(array(1, 2, 3), array(1, 3, 5));
 [1,2,3,5] {code}
See also
spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_union]

presto [https://prestodb.io/docs/current/functions/array.html]

  was:
Remove all elements that equal to element from array.

Syntax:
array_remove(array)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If value is NULL, the result is NULL. 
Examples:
{code:sql}
> SELECT array_union(array(1, 2, 3), array(1, 3, 5));
 [1,2,3,5] {code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#array_union

presto https://prestodb.io/docs/current/functions/array.html


> Add ARRAY_UNION supported in SQL & Table API
> 
>
> Key: FLINK-31118
> URL: https://issues.apache.org/jira/browse/FLINK-31118
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Returns an array of the elements in the union of the two input arrays, without 
> duplicates.
> Syntax:
> array_union(array1, array2)
> Arguments:
> array1, array2: the ARRAYs to be combined.
> Returns:
> An ARRAY. If value is NULL, the result is NULL. 
> Examples:
> {code:sql}
> > SELECT array_union(array(1, 2, 3), array(1, 3, 5));
>  [1,2,3,5] {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_union]
> presto [https://prestodb.io/docs/current/functions/array.html]
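The union-without-duplicates semantics can be sketched in Python (a hypothetical model preserving first-seen order, not the Flink implementation):

```python
def array_union(array1, array2):
    """Model of ARRAY_UNION: elements of both arrays with duplicates removed,
    first-seen order preserved; NULL input yields NULL."""
    if array1 is None or array2 is None:
        return None
    seen = set()
    result = []
    for item in array1 + array2:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(array_union([1, 2, 3], [1, 3, 5]))  # [1, 2, 3, 5]
```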



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31666) Add ARRAY_OVERLAP supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)
jackylau created FLINK-31666:


 Summary: Add ARRAY_OVERLAP supported in SQL & Table API
 Key: FLINK-31666
 URL: https://issues.apache.org/jira/browse/FLINK-31666
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31665) Add ARRAY_CONCAT supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)
jackylau created FLINK-31665:


 Summary: Add ARRAY_CONCAT supported in SQL & Table API
 Key: FLINK-31665
 URL: https://issues.apache.org/jira/browse/FLINK-31665
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)
jackylau created FLINK-31664:


 Summary: Add ARRAY_INTERSECT supported in SQL & Table API
 Key: FLINK-31664
 URL: https://issues.apache.org/jira/browse/FLINK-31664
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Updated] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-03-27 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31602:
-
Description: 
array_position(array, element) - Returns the (1-based) position of the first 
occurrence of element in the array, as a BIGINT.

Syntax:
array_position(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Returns the position of the first occurrence of element in the given array as 
long.

Returns 0 if the given value could not be found in the array.

Returns null if either of the arguments is null
{code:sql}
> SELECT array_position(array(3, 2, 1), 1);
 3 {code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]
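A plain-Java sketch of the lookup semantics above (the name arrayPosition is illustrative, not Flink code; 1-based position, 0 when absent, null on null arguments):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class ArrayPositionSketch {
    // 1-based position of the first occurrence; 0 if absent; null if either argument is null.
    public static Long arrayPosition(List<?> array, Object element) {
        if (array == null || element == null) {
            return null;
        }
        for (int i = 0; i < array.size(); i++) {
            if (Objects.equals(array.get(i), element)) {
                return (long) (i + 1);
            }
        }
        return 0L;
    }

    public static void main(String[] args) {
        // Mirrors: SELECT array_position(array(3, 2, 1), 1) -> 3
        System.out.println(arrayPosition(Arrays.asList(3, 2, 1), 1));
    }
}
```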

  was:
array_position(array, element) - Returns the (1-based) position of the first 
occurrence of element in the array, as a BIGINT.

Syntax:
array_position(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Returns the position of the first occurrence of element in the given array as 
long.

Returns 0 if the given value could not be found in the array.

Returns null if either of the arguments is null
{code:sql}
> SELECT array_position(array(3, 2, 1), 1);
 3 {code}
See also
spark 
[https://spark.apache.org/docs/latest/api/sql/index.html#array_remove|https://spark.apache.org/docs/latest/api/sql/index.html#array_position]

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]


> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) position of the first 
> occurrence of element in the array, as a BIGINT.
> Syntax:
> array_position(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns the position of the first occurrence of element in the given array as 
> long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments is null
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_position
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]





[jira] [Commented] (FLINK-31622) Add ARRAY_APPEND supported in SQL & Table API

2023-03-27 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17705296#comment-17705296
 ] 

jackylau commented on FLINK-31622:
--

[~Sergey Nuyanzin] OK, thanks for the tip.

> Add ARRAY_APPEND supported in SQL & Table API
> -
>
> Key: FLINK-31622
> URL: https://issues.apache.org/jira/browse/FLINK-31622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_append(array, element) - Append the element at the end of the array.
> Syntax:
> array_append(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Append the element at the end of the array.
> This function does not return null when the elements are null. It appends 
> null at the end of the array. But returns null if the array is null.
> {code:sql}
> > SELECT array_append(array(3, 2, 1), 1);
>  [3, 2, 1, 1] {code}
> See also
> spark not in docs 
> https://spark.apache.org/docs/latest/api/sql/index.html#array but in code. 
> [https://github.com/apache/spark/blob/c55c7ea6fc92c3733543d5f3d99eb00921cbe564/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L5059]
> snowflake [https://docs.snowflake.com/en/sql-reference/functions/array_append]
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]





[jira] [Commented] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-27 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17705261#comment-17705261
 ] 

jackylau commented on FLINK-31621:
--

Hi [~Sergey Nuyanzin], could you please help review it?

> Add ARRAY_REVERSE supported in SQL & Table API
> --
>
> Key: FLINK-31621
> URL: https://issues.apache.org/jira/browse/FLINK-31621
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_reverse(array) - Returns an array in reverse order.
> Syntax:
> array_reverse(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns an array in reverse order.
> Returns null if the argument is null
> {code:sql}
> > SELECT array_reverse(array(1, 2, 2, NULL));
>  [NULL, 2, 2, 1] {code}
> See also
> bigquery 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse]





[jira] [Updated] (FLINK-31622) Add ARRAY_APPEND supported in SQL & Table API

2023-03-27 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31622:
-
Description: 
array_append(array, element) - Append the element at the end of the array.

Syntax:
array_append(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Append the element at the end of the array.

This function does not return null when the elements are null. It appends null 
at the end of the array. But returns null if the array is null.
{code:sql}
> SELECT array_append(array(3, 2, 1), 1);
 [3, 2, 1, 1] {code}
See also
spark not in docs https://spark.apache.org/docs/latest/api/sql/index.html#array 
but in code. 
[https://github.com/apache/spark/blob/c55c7ea6fc92c3733543d5f3d99eb00921cbe564/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L5059]

snowflake [https://docs.snowflake.com/en/sql-reference/functions/array_append]

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]
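The null-handling rules above (a null element is appended; only a null array yields null) can be sketched in plain Java. This is illustrative, not Flink's implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ArrayAppendSketch {
    // Appends element (including null) at the end; returns null only for a null array.
    public static <T> List<T> arrayAppend(List<T> array, T element) {
        if (array == null) {
            return null;
        }
        List<T> out = new ArrayList<>(array);
        out.add(element);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(arrayAppend(Arrays.asList(3, 2, 1), 1));    // [3, 2, 1, 1]
        System.out.println(arrayAppend(Arrays.asList(3, 2, 1), null)); // [3, 2, 1, null]
    }
}
```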

  was:
array_append(array, element) - Append the element at the end of the array.

Syntax:
array_append(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Append the element at the end of the array.

This function does not return null when the elements are null. It appends null 
at the end of the array. But returns null if the array is null.
{code:sql}
> SELECT array_append(array(3, 2, 1), 1);
 [3, 2, 1, 1] {code}
See also
spark not in docs but in code. 
https://github.com/apache/spark/blob/c55c7ea6fc92c3733543d5f3d99eb00921cbe564/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L5059

snowflake [https://docs.snowflake.com/en/sql-reference/functions/array_append]

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]


> Add ARRAY_APPEND supported in SQL & Table API
> -
>
> Key: FLINK-31622
> URL: https://issues.apache.org/jira/browse/FLINK-31622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_append(array, element) - Append the element at the end of the array.
> Syntax:
> array_append(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Append the element at the end of the array.
> This function does not return null when the elements are null. It appends 
> null at the end of the array. But returns null if the array is null.
> {code:sql}
> > SELECT array_append(array(3, 2, 1), 1);
>  [3, 2, 1, 1] {code}
> See also
> spark not in docs 
> https://spark.apache.org/docs/latest/api/sql/index.html#array but in code. 
> [https://github.com/apache/spark/blob/c55c7ea6fc92c3733543d5f3d99eb00921cbe564/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L5059]
> snowflake [https://docs.snowflake.com/en/sql-reference/functions/array_append]
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]





[jira] [Updated] (FLINK-31622) Add ARRAY_APPEND supported in SQL & Table API

2023-03-27 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31622:
-
Description: 
array_append(array, element) - Append the element at the end of the array.

Syntax:
array_append(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Append the element at the end of the array.

This function does not return null when the elements are null. It appends null 
at the end of the array. But returns null if the array is null.
{code:sql}
> SELECT array_append(array(3, 2, 1), 1);
 [3, 2, 1, 1] {code}
See also
spark not in docs but in code. 
https://github.com/apache/spark/blob/c55c7ea6fc92c3733543d5f3d99eb00921cbe564/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L5059

snowflake [https://docs.snowflake.com/en/sql-reference/functions/array_append]

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]

  was:
array_append(array, element) - Append the element at the end of the array.

Syntax:
array_append(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Append the element at the end of the array.

This function does not return null when the elements are null. It appends null 
at the end of the array. But returns null if the array is null.
{code:sql}
> SELECT array_append(array(3, 2, 1), 1);
 [3, 2, 1, 1] {code}
See also
spark not in docs but in code.

snowflake https://docs.snowflake.com/en/sql-reference/functions/array_append

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]


> Add ARRAY_APPEND supported in SQL & Table API
> -
>
> Key: FLINK-31622
> URL: https://issues.apache.org/jira/browse/FLINK-31622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_append(array, element) - Append the element at the end of the array.
> Syntax:
> array_append(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Append the element at the end of the array.
> This function does not return null when the elements are null. It appends 
> null at the end of the array. But returns null if the array is null.
> {code:sql}
> > SELECT array_append(array(3, 2, 1), 1);
>  [3, 2, 1, 1] {code}
> See also
> spark not in docs but in code. 
> https://github.com/apache/spark/blob/c55c7ea6fc92c3733543d5f3d99eb00921cbe564/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L5059
> snowflake [https://docs.snowflake.com/en/sql-reference/functions/array_append]
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]





[jira] [Updated] (FLINK-31622) Add ARRAY_APPEND supported in SQL & Table API

2023-03-27 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31622:
-
Description: 
array_append(array, element) - Append the element at the end of the array.

Syntax:
array_append(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Append the element at the end of the array.

This function does not return null when the elements are null. It appends null 
at the end of the array. But returns null if the array is null.
{code:sql}
> SELECT array_append(array(3, 2, 1), 1);
 [3, 2, 1, 1] {code}
See also
spark not in docs but in code.

snowflake https://docs.snowflake.com/en/sql-reference/functions/array_append

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]

> Add ARRAY_APPEND supported in SQL & Table API
> -
>
> Key: FLINK-31622
> URL: https://issues.apache.org/jira/browse/FLINK-31622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_append(array, element) - Append the element at the end of the array.
> Syntax:
> array_append(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Append the element at the end of the array.
> This function does not return null when the elements are null. It appends 
> null at the end of the array. But returns null if the array is null.
> {code:sql}
> > SELECT array_append(array(3, 2, 1), 1);
>  [3, 2, 1, 1] {code}
> See also
> spark not in docs but in code.
> snowflake https://docs.snowflake.com/en/sql-reference/functions/array_append
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]





[jira] [Created] (FLINK-31622) Add ARRAY_APPEND supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)
jackylau created FLINK-31622:


 Summary: Add ARRAY_APPEND supported in SQL & Table API
 Key: FLINK-31622
 URL: https://issues.apache.org/jira/browse/FLINK-31622
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Updated] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31621:
-
Description: 
array_reverse(array) - Returns an array in reverse order.

Syntax:
array_reverse(array)

Arguments:
array: An ARRAY to be handled.

Returns:

Returns an array in reverse order.

Returns null if the argument is null
{code:sql}
> SELECT array_reverse(array(1, 2, 2, NULL));
 [NULL, 2, 2, 1] {code}
See also
bigquery 
[https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse]
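A minimal plain-Java sketch of the behavior above (illustrative only; note that null elements are preserved and a null array yields null):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ArrayReverseSketch {
    // Reverses element order; null elements are kept, a null array yields null.
    public static <T> List<T> arrayReverse(List<T> array) {
        if (array == null) {
            return null;
        }
        List<T> out = new ArrayList<>(array);
        Collections.reverse(out);
        return out;
    }

    public static void main(String[] args) {
        // Mirrors: SELECT array_reverse(array(1, 2, 2, NULL)) -> [NULL, 2, 2, 1]
        System.out.println(arrayReverse(Arrays.asList(1, 2, 2, null)));
    }
}
```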

  was:
array_reverse(array) - Returns the input {{ARRAY}} with elements in reverse 
order.

Syntax:
array_reverse(array)

Arguments:
array: An ARRAY to be handled.

Returns:

Returns the input array with elements in reverse order.

Returns null if the argument is null
{code:sql}
> SELECT array_reverse(array(1, 2, 2, NULL));
 [NULL, 2, 2, 1] {code}
See also
bigquery 
https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse


> Add ARRAY_REVERSE supported in SQL & Table API
> --
>
> Key: FLINK-31621
> URL: https://issues.apache.org/jira/browse/FLINK-31621
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_reverse(array) - Returns an array in reverse order.
> Syntax:
> array_reverse(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns an array in reverse order.
> Returns null if the argument is null
> {code:sql}
> > SELECT array_reverse(array(1, 2, 2, NULL));
>  [NULL, 2, 2, 1] {code}
> See also
> bigquery 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse]





[jira] [Updated] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31621:
-
Description: 
array_reverse(array) - Returns the input {{ARRAY}} with elements in reverse 
order.

Syntax:
array_reverse(array)

Arguments:
array: An ARRAY to be handled.

Returns:

Returns the input array with elements in reverse order.

Returns null if the argument is null
{code:sql}
> SELECT array_reverse(array(1, 2, 2, NULL));
 [NULL, 2, 2, 1] {code}
See also
bigquery 
https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse

> Add ARRAY_REVERSE supported in SQL & Table API
> --
>
> Key: FLINK-31621
> URL: https://issues.apache.org/jira/browse/FLINK-31621
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_reverse(array) - Returns the input {{ARRAY}} with elements in reverse 
> order.
> Syntax:
> array_reverse(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns the input array with elements in reverse order.
> Returns null if the argument is null
> {code:sql}
> > SELECT array_reverse(array(1, 2, 2, NULL));
>  [NULL, 2, 2, 1] {code}
> See also
> bigquery 
> https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#array_reverse





[jira] [Created] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)
jackylau created FLINK-31621:


 Summary: Add ARRAY_REVERSE supported in SQL & Table API
 Key: FLINK-31621
 URL: https://issues.apache.org/jira/browse/FLINK-31621
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17705173#comment-17705173
 ] 

jackylau commented on FLINK-26945:
--

Hi [~twalthr], do you have time to help review it?

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative abs(num_days) are added to {{{}startDate{}}}.
> If the result date overflows the date range the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]
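The semantics quoted above, including the negative-numDays case, can be sketched with java.time. This is illustrative only, not the Flink implementation (the method name dateSub is an assumption):

```java
import java.time.LocalDate;

public class DateSubSketch {
    // date_sub(startDate, numDays): the date numDays before startDate.
    // For negative numDays, abs(numDays) days are added, as described above.
    public static LocalDate dateSub(LocalDate startDate, int numDays) {
        return startDate.minusDays(numDays); // minusDays(-n) adds n days
    }

    public static void main(String[] args) {
        // Mirrors: SELECT date_sub('2016-07-30', 1) -> 2016-07-29
        System.out.println(dateSub(LocalDate.parse("2016-07-30"), 1));
        System.out.println(dateSub(LocalDate.parse("2016-07-30"), -1)); // 2016-07-31
    }
}
```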





[jira] [Updated] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-03-23 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31602:
-
Description: 
array_position(array, element) - Returns the (1-based) position of the first 
occurrence of element in the array, as a BIGINT.

Syntax:
array_position(array, element)

Arguments:
array: An ARRAY to be handled.

Returns:

Returns the position of the first occurrence of element in the given array as 
long.

Returns 0 if the given value could not be found in the array.

Returns null if either of the arguments is null
{code:sql}
> SELECT array_position(array(3, 2, 1), 1);
 3 {code}
See also
spark 
[https://spark.apache.org/docs/latest/api/sql/index.html#array_remove|https://spark.apache.org/docs/latest/api/sql/index.html#array_position]

postgresql 
[https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) position of the first 
> occurrence of element in the array, as a BIGINT.
> Syntax:
> array_position(array, element)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> Returns the position of the first occurrence of element in the given array as 
> long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments is null
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_remove|https://spark.apache.org/docs/latest/api/sql/index.html#array_position]
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]





[jira] [Created] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-03-23 Thread jackylau (Jira)
jackylau created FLINK-31602:


 Summary: Add ARRAY_POSITION supported in SQL & Table API
 Key: FLINK-31602
 URL: https://issues.apache.org/jira/browse/FLINK-31602
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-23 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703975#comment-17703975
 ] 

jackylau commented on FLINK-26945:
--

Hi [~twalthr], thanks, I understand what the conversion class means now: the 
conversion class (bridgedTo) can be used to convert DateType to its 
internal/external Java class.

DataTypes.DATE().notNull().bridgedTo(int.class) -> it converts to an external 
int instead of LocalDate (DateType's default conversion class).

For example, with a String input type, the cast logic will be:
 # StringToDateCast (string -> int)
 # internal to external (int -> int) via DataStructureConverter.

 
{code:java}
// code placeholder
castEvaluator =
        context.createEvaluator(
                $("startDate").cast(DataTypes.DATE().notNull()),
                DataTypes.DATE().notNull(),
                DataTypes.FIELD("startDate", startDate.notNull().toInternal())); {code}
 

 

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative abs(num_days) are added to {{{}startDate{}}}.
> If the result date overflows the date range the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]





[jira] [Comment Edited] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-22 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703653#comment-17703653
 ] 

jackylau edited comment on FLINK-26945 at 3/22/23 1:05 PM:
---

I have also tried a ResolverRule that, like Spark, casts to DATE in advance. It 
works in the Table API but not in SQL, because SQL does not go through this logic.
{code:java}
final class DateTimeOperationsResolverRule implements ResolverRule {

private static final DateTypeExtractor DATE_TYPE_EXTRACTOR = new 
DateTypeExtractor();

@Override
public List<Expression> apply(
List<Expression> expression, ResolutionContext context) {
return expression.stream()
.map(expr -> expr.accept(new 
ExpressionResolverVisitor(context)))
.collect(Collectors.toList());
}

private static class ExpressionResolverVisitor extends 
RuleExpressionVisitor<Expression> {

ExpressionResolverVisitor(ResolutionContext context) {
super(context);
}

@Override
public Expression visit(UnresolvedCallExpression unresolvedCall) {
if (unresolvedCall.getFunctionDefinition() == 
BuiltInFunctionDefinitions.DATE_SUB) {
List<Expression> children = unresolvedCall.getChildren();
Expression date = children.get(0);
resolutionContext.getOutputDataType();

if (date.accept(DATE_TYPE_EXTRACTOR)) {
Expression castedDate =
unresolvedCall(
BuiltInFunctionDefinitions.CAST,
date,
typeLiteral(DataTypes.DATE()));
return unresolvedCall(
BuiltInFunctionDefinitions.DATE_SUB, castedDate, 
children.get(1));
}
}

return unresolvedCall;
}

@Override
protected Expression defaultMethod(Expression expression) {
return expression;
}
}

private static class DateTypeExtractor extends 
ApiExpressionDefaultVisitor<Boolean> {

@Override
protected Boolean defaultMethod(Expression expression) {
return false;
}

/** for table api. */
@Override
public Boolean visit(FieldReferenceExpression fieldReference) {
final LogicalType literalType = 
fieldReference.getOutputDataType().getLogicalType();
if (literalType.isAnyOf(LogicalTypeFamily.CHARACTER_STRING)
|| literalType.isAnyOf(
LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE,
LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
return true;
}

return false;
}

@Override
public Boolean visit(ValueLiteralExpression valueLiteral) {
final LogicalType literalType = 
valueLiteral.getOutputDataType().getLogicalType();
if (literalType.isAnyOf(LogicalTypeFamily.CHARACTER_STRING)
|| literalType.isAnyOf(
LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE,
LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
return true;
}

return false;
}
}
} {code}
I have also tried FlinkConvertletTable, but it only works in SQL, not in the 
Table API; the Table API does not go through that logic, it only converts 
expressions to RexNodes.

So I have not found a good way, like Spark's, to convert the string/timestamp 
type to int in advance.

Do you have a better implementation?


was (Author: jackylau):
I have also tried a ResolverRule that, like Spark, casts to DATE in advance. It 
works in the Table API but not in SQL, because SQL does not go through this logic.
{code:java}
final class DateTimeOperationsResolverRule implements ResolverRule {

private static final DateTypeExtractor DATE_TYPE_EXTRACTOR = new 
DateTypeExtractor();

@Override
public List<Expression> apply(
List<Expression> expression, ResolutionContext context) {
return expression.stream()
.map(expr -> expr.accept(new 
ExpressionResolverVisitor(context)))
.collect(Collectors.toList());
}

private static class ExpressionResolverVisitor extends 
RuleExpressionVisitor<Expression> {

ExpressionResolverVisitor(ResolutionContext context) {
super(context);
}

@Override
public Expression visit(UnresolvedCallExpression unresolvedCall) {
if (unresolvedCall.getFunctionDefinition() == 
BuiltInFunctionDefinitions.DATE_SUB) {
List<Expression> children = unresolvedCall.getChildren();
Expression date = children.get(0);
resolutionContext.getOutputDataType();

if (date.accept(DATE_TYPE_EXTRACTOR)) {
Expression castedDate =

[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-22 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703653#comment-17703653
 ] 

jackylau commented on FLINK-26945:
--

I have also tried a ResolverRule that, like Spark, casts to DATE in advance. It 
works in the Table API but not in SQL, because SQL does not go through this logic.
{code:java}
final class DateTimeOperationsResolverRule implements ResolverRule {

private static final DateTypeExtractor DATE_TYPE_EXTRACTOR = new 
DateTypeExtractor();

@Override
public List<Expression> apply(
List<Expression> expression, ResolutionContext context) {
return expression.stream()
.map(expr -> expr.accept(new 
ExpressionResolverVisitor(context)))
.collect(Collectors.toList());
}

private static class ExpressionResolverVisitor extends 
RuleExpressionVisitor<Expression> {

ExpressionResolverVisitor(ResolutionContext context) {
super(context);
}

@Override
public Expression visit(UnresolvedCallExpression unresolvedCall) {
if (unresolvedCall.getFunctionDefinition() == 
BuiltInFunctionDefinitions.DATE_SUB) {
List<Expression> children = unresolvedCall.getChildren();
Expression date = children.get(0);
resolutionContext.getOutputDataType();

if (date.accept(DATE_TYPE_EXTRACTOR)) {
Expression castedDate =
unresolvedCall(
BuiltInFunctionDefinitions.CAST,
date,
typeLiteral(DataTypes.DATE()));
return unresolvedCall(
BuiltInFunctionDefinitions.DATE_SUB, castedDate, 
children.get(1));
}
}

return unresolvedCall;
}

@Override
protected Expression defaultMethod(Expression expression) {
return expression;
}
}

private static class DateTypeExtractor extends 
ApiExpressionDefaultVisitor<Boolean> {

@Override
protected Boolean defaultMethod(Expression expression) {
return false;
}

/** for table api. */
@Override
public Boolean visit(FieldReferenceExpression fieldReference) {
final LogicalType literalType = 
fieldReference.getOutputDataType().getLogicalType();
if (literalType.isAnyOf(LogicalTypeFamily.CHARACTER_STRING)
|| literalType.isAnyOf(
LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE,
LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
return true;
}

return false;
}

@Override
public Boolean visit(ValueLiteralExpression valueLiteral) {
final LogicalType literalType = 
valueLiteral.getOutputDataType().getLogicalType();
if (literalType.isAnyOf(LogicalTypeFamily.CHARACTER_STRING)
|| literalType.isAnyOf(
LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE,
LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
return true;
}

return false;
}
}
} {code}
I have also tried FlinkConvertletTable, but it only works in SQL, not in the 
Table API.

So I have not found a good way, like Spark's, to convert the string/timestamp 
type to int in advance.

Do you have a better implementation?

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative abs(num_days) are added to {{{}startDate{}}}.
> If the result date overflows the date range the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-22 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703649#comment-17703649
 ] 

jackylau commented on FLINK-26945:
--

I tried what you said, but it also didn't work. It throws ValidationException: Invalid argument type at position 0. Data type DATE expected but STRING passed.

The reason is that the implicit cast requires supportsImplicitCast, and the STRING-to-DATE cast is only supported as an explicit cast.

 

 
{code:java}
// code placeholder
class DateTimeArgumentTypeStrategy implements InputTypeStrategy {

@Override
public Optional<List<DataType>> inferInputTypes(
CallContext callContext, boolean throwOnFailure) {
List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
List<LogicalType> argumentTypes =
argumentDataTypes.stream()
.map(DataType::getLogicalType)
.collect(Collectors.toList());

LogicalType dateType = argumentTypes.get(0);
if (dateType.isAnyOf(LogicalTypeFamily.CHARACTER_STRING)
|| dateType.isAnyOf(
LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE,
LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
argumentTypes.set(0, new DateType());
}

LogicalType daysType = argumentTypes.get(1);
if (daysType.isAnyOf(
LogicalTypeRoot.TINYINT,
LogicalTypeRoot.SMALLINT,
LogicalTypeRoot.INTEGER)) {
argumentTypes.set(1, new IntType());
}

// TODO fail other types.
return Optional.of(
argumentTypes.stream()
.map(TypeConversions::fromLogicalToDataType)
.collect(Collectors.toList()));
}
}



{code}
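To make the intended coercion concrete, here is a minimal plain-Java sketch of the same logic (the type names are illustrative strings, not Flink LogicalTypes, and this is not the actual Flink API):

```java
import java.util.Arrays;
import java.util.List;

public class CoercionSketch {
    // Mirrors the strategy above: the first argument is coerced to DATE when it
    // is a character string or a timestamp; the second is widened to INT.
    static List<String> coerce(String startDateType, String numDaysType) {
        String first = startDateType;
        if (first.equals("STRING") || first.startsWith("TIMESTAMP")) {
            first = "DATE";
        }
        String second = numDaysType;
        if (second.equals("TINYINT") || second.equals("SMALLINT")) {
            second = "INT";
        }
        return Arrays.asList(first, second);
    }

    public static void main(String[] args) {
        System.out.println(coerce("STRING", "SMALLINT")); // [DATE, INT]
    }
}
```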
 

 

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative abs(num_days) are added to {{{}startDate{}}}.
> If the result date overflows the date range the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-22 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703636#comment-17703636
 ] 

jackylau commented on FLINK-26945:
--

hi [~twalthr] , I want to do the cast ahead of time like Spark does, through rules: 
[https://github.com/apache/spark/blob/d679dabdd1b5ad04b8c7deb1c06ce886a154a928/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala#L1172]
 . The DateSubFunction then only needs to treat the input type as an int at runtime.

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative abs(num_days) are added to {{{}startDate{}}}.
> If the result date overflows the date range the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31200) Add MAP_VALUES supported in SQL & Table API

2023-03-21 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau closed FLINK-31200.

Resolution: Duplicate

> Add MAP_VALUES supported in SQL & Table API
> ---
>
> Key: FLINK-31200
> URL: https://issues.apache.org/jira/browse/FLINK-31200
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.18.0
>
>
> Returns an unordered array containing the values of the map.
> Syntax:
> map_values(map)
> Arguments:
> map An Map to be handled.
> Returns:
> An Map. If value is NULL, the result is NULL. 
> Examples:
> {code:sql}
> > SELECT map_values(map(1, 'a', 2, 'b'));
>  - ["a","b"]{code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#map_values



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-15 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17700971#comment-17700971
 ] 

jackylau edited comment on FLINK-26945 at 3/16/23 5:34 AM:
---

hi [~jark] [~twalthr] [~snuyanzin] , while implementing the date_sub built-in function I found that the current built-in function framework lacks examples and is not developer friendly. I have three different implementations; which one is best?

The type definition looks like this:
{code:java}
// code placeholder
public static final BuiltInFunctionDefinition DATE_SUB =
BuiltInFunctionDefinition.newBuilder()
.name("DATE_SUB")
.kind(SCALAR)
.inputTypeStrategy(
sequence(
or(
logical(LogicalTypeRoot.DATE),

logical(LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE),

logical(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE),

logical(LogicalTypeFamily.CHARACTER_STRING)),
or(
InputTypeStrategies.explicit(TINYINT()),

InputTypeStrategies.explicit(SMALLINT()),
InputTypeStrategies.explicit(INT()))))
.outputTypeStrategy(nullableIfArgs(explicit(DATE())))

.runtimeClass("org.apache.flink.table.runtime.functions.scalar.DateSubFunction")
.build();
 {code}
1) Hive style; I think this is not good.
{code:java}
    public @Nullable Object eval(Object startDate, Object days) {
        if (startDate == null || days == null) {
            return null;
        }        int start = 0;
        if (startDateLogicalType.is(LogicalTypeFamily.CHARACTER_STRING)) {
            start = BinaryStringDataUtil.toDate((BinaryStringData) startDate);
        } else if 
(startDateLogicalType.is(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE)) {
            start =
                    (int)
                            (((TimestampData) startDate).getMillisecond()
                                    / DateTimeUtils.MILLIS_PER_DAY);
        } else if (startDateLogicalType.is(LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
            start = DateTimeUtils.timestampWithLocalZoneToDate(
                    (TimestampData) startDate, LOCAL_TZ);
        } else if (startDateLogicalType.is(LogicalTypeRoot.DATE)) {
            start = (int) startDate;
        } else {
            throw new FlinkRuntimeException(
                    "DATE_SUB() don't support argument startDate type " + 
startDate);
        }        return start - ((Number) days).intValue();
    } {code}
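The arithmetic both styles rely on (a DATE stored as days since the epoch) can be sketched in self-contained Java, with no Flink dependency:

```java
import java.time.LocalDate;

public class DateSubSketch {
    // Flink (like Spark) represents DATE internally as an int counting days
    // since 1970-01-01, so DATE_SUB reduces to integer subtraction on that value.
    static int dateSub(String startDate, int numDays) {
        int start = (int) LocalDate.parse(startDate).toEpochDay();
        return start - numDays;
    }

    public static void main(String[] args) {
        // Matches the example from the issue description: date_sub('2016-07-30', 1)
        System.out.println(LocalDate.ofEpochDay(dateSub("2016-07-30", 1))); // 2016-07-29
    }
}
```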
2) Spark style.

Spark converts the String/Timestamp type to Date in advance, and then simply treats it as an int type inside the DateSub function, like this:

 
{code:java}
// think it is int type in DateSub function, so start.asInstanceOf[Int]

case class DateSub(startDate: Expression, days: Expression)
  extends BinaryExpression with ExpectsInputTypes with NullIntolerant {
  override def left: Expression = startDate
  override def right: Expression = days

  override def inputTypes: Seq[AbstractDataType] =
Seq(DateType, TypeCollection(IntegerType, ShortType, ByteType))

  override def dataType: DataType = DateType

  override def nullSafeEval(start: Any, d: Any): Any = {
start.asInstanceOf[Int] - d.asInstanceOf[Number].intValue()
  }

  override def prettyName: String = "date_sub"
}


// converts the String/Timestamp type to Date in advance
object DateTimeOperations extends TypeCoercionRule {
  override val transform: PartialFunction[Expression, Expression] = {
// Skip nodes who's children have not been resolved yet.
case e if !e.childrenResolved => e
case d @ DateAdd(AnyTimestampType(), _) => d.copy(startDate = 
Cast(d.startDate, DateType))
case d @ DateAdd(StringType(), _) => d.copy(startDate = Cast(d.startDate, 
DateType))
case d @ DateSub(AnyTimestampType(), _) => d.copy(startDate = 
Cast(d.startDate, DateType))
case d @ DateSub(StringType(), _) => d.copy(startDate = Cast(d.startDate, 
DateType))
  }
}  {code}
 

If I follow Spark's implementation, it needs to be implemented via 
{color:#ff}*org.apache.flink.table.planner.expressions.converter.converters.CustomizedConverter*{color}
 like this.

However, the CustomizedConverter will only be called in 
*{color:#ff}ExpressionEvaluatorFactory.createEvaluator{color}*
{code:java}
class DateSubConverter extends CustomizedConverter {
@Override
public RexNode convert(CallExpression call, 
CallExpressionConvertRule.ConvertContext context) {
checkArgumentNumber(call, 2);

final RexNode child = context.toRexNode(call.getChildren().get(0));

final RelDataType targetRelDataType =
 

[jira] [Comment Edited] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-15 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17700971#comment-17700971
 ] 

jackylau edited comment on FLINK-26945 at 3/16/23 4:34 AM:
---

hi [~jark] [~twalthr] [~snuyanzin] , while implementing the date_sub built-in function I found that the current built-in function framework lacks examples and is not developer friendly. I have three different implementations; which one is best?

The type definition looks like this:
{code:java}
// code placeholder
public static final BuiltInFunctionDefinition DATE_SUB =
BuiltInFunctionDefinition.newBuilder()
.name("DATE_SUB")
.kind(SCALAR)
.inputTypeStrategy(
sequence(
or(
logical(LogicalTypeRoot.DATE),

logical(LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE),

logical(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE),

logical(LogicalTypeFamily.CHARACTER_STRING)),
or(
InputTypeStrategies.explicit(TINYINT()),

InputTypeStrategies.explicit(SMALLINT()),
InputTypeStrategies.explicit(INT()))))
.outputTypeStrategy(nullableIfArgs(explicit(DATE())))

.runtimeClass("org.apache.flink.table.runtime.functions.scalar.DateSubFunction")
.build();
 {code}
1) Hive style; I think this is not good.
{code:java}
    public @Nullable Object eval(Object startDate, Object days) {
        if (startDate == null || days == null) {
            return null;
        }        int start = 0;
        if (startDateLogicalType.is(LogicalTypeFamily.CHARACTER_STRING)) {
            start = BinaryStringDataUtil.toDate((BinaryStringData) startDate);
        } else if 
(startDateLogicalType.is(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE)) {
            start =
                    (int)
                            (((TimestampData) startDate).getMillisecond()
                                    / DateTimeUtils.MILLIS_PER_DAY);
        } else if (startDateLogicalType.is(LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
            start = DateTimeUtils.timestampWithLocalZoneToDate(
                    (TimestampData) startDate, LOCAL_TZ);
        } else if (startDateLogicalType.is(LogicalTypeRoot.DATE)) {
            start = (int) startDate;
        } else {
            throw new FlinkRuntimeException(
                    "DATE_SUB() don't support argument startDate type " + 
startDate);
        }        return start - ((Number) days).intValue();
    } {code}
2) Spark style.

Spark converts the String/Timestamp type to Date in advance, and then simply treats it as an int type inside the DateSub function, like this:

 
{code:java}
// think it is int type in DateSub function, so start.asInstanceOf[Int]

case class DateSub(startDate: Expression, days: Expression)
  extends BinaryExpression with ExpectsInputTypes with NullIntolerant {
  override def left: Expression = startDate
  override def right: Expression = days

  override def inputTypes: Seq[AbstractDataType] =
Seq(DateType, TypeCollection(IntegerType, ShortType, ByteType))

  override def dataType: DataType = DateType

  override def nullSafeEval(start: Any, d: Any): Any = {
start.asInstanceOf[Int] - d.asInstanceOf[Number].intValue()
  }

  override def prettyName: String = "date_sub"
}


// converts the String/Timestamp type to Date in advance
object DateTimeOperations extends TypeCoercionRule {
  override val transform: PartialFunction[Expression, Expression] = {
// Skip nodes who's children have not been resolved yet.
case e if !e.childrenResolved => e
case d @ DateAdd(AnyTimestampType(), _) => d.copy(startDate = 
Cast(d.startDate, DateType))
case d @ DateAdd(StringType(), _) => d.copy(startDate = Cast(d.startDate, 
DateType))
case d @ DateSub(AnyTimestampType(), _) => d.copy(startDate = 
Cast(d.startDate, DateType))
case d @ DateSub(StringType(), _) => d.copy(startDate = Cast(d.startDate, 
DateType))
  }
}  {code}
 

If I follow Spark's implementation, it needs to be implemented via 
{color:#ff}*org.apache.flink.table.planner.expressions.converter.converters.CustomizedConverter*{color}
 like this.

However, the CustomizedConverter will only be called in 
*{color:#ff}ExpressionEvaluatorFactory.createEvaluator{color}*
{code:java}
class DateSubConverter extends CustomizedConverter {
@Override
public RexNode convert(CallExpression call, 
CallExpressionConvertRule.ConvertContext context) {
checkArgumentNumber(call, 2);

final RexNode child = context.toRexNode(call.getChildren().get(0));

final RelDataType targetRelDataType =
 

[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-15 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17700971#comment-17700971
 ] 

jackylau commented on FLINK-26945:
--

hi [~jark] [~twalthr] , while implementing the date_sub built-in function I found that the current built-in function framework lacks examples and is not developer friendly. I have three different implementations; which one is best?

The type definition looks like this:
{code:java}
// code placeholder
public static final BuiltInFunctionDefinition DATE_SUB =
BuiltInFunctionDefinition.newBuilder()
.name("DATE_SUB")
.kind(SCALAR)
.inputTypeStrategy(
sequence(
or(
logical(LogicalTypeRoot.DATE),

logical(LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE),

logical(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE),

logical(LogicalTypeFamily.CHARACTER_STRING)),
or(
InputTypeStrategies.explicit(TINYINT()),

InputTypeStrategies.explicit(SMALLINT()),
InputTypeStrategies.explicit(INT()))))
.outputTypeStrategy(nullableIfArgs(explicit(DATE())))

.runtimeClass("org.apache.flink.table.runtime.functions.scalar.DateSubFunction")
.build();
 {code}
1) Hive style; I think this is not good.
{code:java}
    public @Nullable Object eval(Object startDate, Object days) {
        if (startDate == null || days == null) {
            return null;
        }        int start = 0;
        if (startDateLogicalType.is(LogicalTypeFamily.CHARACTER_STRING)) {
            start = BinaryStringDataUtil.toDate((BinaryStringData) startDate);
        } else if 
(startDateLogicalType.is(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE)) {
            start =
                    (int)
                            (((TimestampData) startDate).getMillisecond()
                                    / DateTimeUtils.MILLIS_PER_DAY);
        } else if (startDateLogicalType.is(LogicalTypeRoot.TIMESTAMP_WITH_LOCAL_TIME_ZONE)) {
            start = DateTimeUtils.timestampWithLocalZoneToDate(
                    (TimestampData) startDate, LOCAL_TZ);
        } else if (startDateLogicalType.is(LogicalTypeRoot.DATE)) {
            start = (int) startDate;
        } else {
            throw new FlinkRuntimeException(
                    "DATE_SUB() don't support argument startDate type " + 
startDate);
        }        return start - ((Number) days).intValue();
    } {code}
2) Spark style.

Spark converts the String/Timestamp type to Date in advance, and then simply treats it as an int type inside the DateSub function, like this:

 
{code:java}
// think it is int type in DateSub function, so start.asInstanceOf[Int]

case class DateSub(startDate: Expression, days: Expression)
  extends BinaryExpression with ExpectsInputTypes with NullIntolerant {
  override def left: Expression = startDate
  override def right: Expression = days

  override def inputTypes: Seq[AbstractDataType] =
Seq(DateType, TypeCollection(IntegerType, ShortType, ByteType))

  override def dataType: DataType = DateType

  override def nullSafeEval(start: Any, d: Any): Any = {
start.asInstanceOf[Int] - d.asInstanceOf[Number].intValue()
  }

  override def prettyName: String = "date_sub"
}


// converts the String/Timestamp type to Date in advance
object DateTimeOperations extends TypeCoercionRule {
  override val transform: PartialFunction[Expression, Expression] = {
// Skip nodes who's children have not been resolved yet.
case e if !e.childrenResolved => e
case d @ DateAdd(AnyTimestampType(), _) => d.copy(startDate = 
Cast(d.startDate, DateType))
case d @ DateAdd(StringType(), _) => d.copy(startDate = Cast(d.startDate, 
DateType))
case d @ DateSub(AnyTimestampType(), _) => d.copy(startDate = 
Cast(d.startDate, DateType))
case d @ DateSub(StringType(), _) => d.copy(startDate = Cast(d.startDate, 
DateType))
  }
}  {code}
 

If I follow Spark's implementation, it needs to be implemented via 
{color:#FF}*org.apache.flink.table.planner.expressions.converter.converters.CustomizedConverter*{color}
 like this.

However, the CustomizedConverter will only be called in 
*{color:#FF}ExpressionEvaluatorFactory.createEvaluator{color}*
{code:java}
class DateSubConverter extends CustomizedConverter {
@Override
public RexNode convert(CallExpression call, 
CallExpressionConvertRule.ConvertContext context) {
checkArgumentNumber(call, 2);

final RexNode child = context.toRexNode(call.getChildren().get(0));

final RelDataType targetRelDataType =
context.getTypeFactory()
   

[jira] [Updated] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-03-10 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31102:
-
Description: 
Remove all elements that equal to element from array.

Syntax:
array_remove(array, needle)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If array is NULL, the result is NULL. 
Examples:
{code:sql}
SELECT array_remove(array[1, 2, 3, null, 3], 3); 
-- [1,2,null]
{code}
See also
spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]

presto [https://prestodb.io/docs/current/functions/array.html]

postgresql 
https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE

  was:
Remove all elements that equal to element from array.

Syntax:
array_remove(array, needle)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If array is NULL, the result is NULL. 
Examples:
{code:sql}
SELECT array_remove(array[1, 2, 3, null, 3], 3); 
-- [1,2,null]
{code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#array_remove

presto [https://prestodb.io/docs/current/functions/array.html]

postgresql 
[https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 


> Add ARRAY_REMOVE supported in SQL & Table API
> -
>
> Key: FLINK-31102
> URL: https://issues.apache.org/jira/browse/FLINK-31102
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Remove all elements that equal to element from array.
> Syntax:
> array_remove(array, needle)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> An ARRAY. If array is NULL, the result is NULL. 
> Examples:
> {code:sql}
> SELECT array_remove(array[1, 2, 3, null, 3], 3); 
> -- [1,2,null]
> {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]
> presto [https://prestodb.io/docs/current/functions/array.html]
> postgresql 
> https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-03-10 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31102:
-
Description: 
Remove all elements that equal to element from array.

Syntax:
array_remove(array, needle)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If array is NULL, the result is NULL. 
Examples:
{code:sql}
SELECT array_remove(array[1, 2, 3, null, 3], 3); 
-- [1,2,null]
{code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#array_remove

presto [https://prestodb.io/docs/current/functions/array.html]

postgresql 
[https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 

  was:
Remove all elements that equal to element from array.

Syntax:
array_remove(array, needle)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If value is NULL, the result is NULL. 
Examples:
{code:sql}
SELECT array_remove(array[1, 2, 3, null, 3], 3); 
-- [1,2,null]
{code}
See also
spark 
[[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]

presto [https://prestodb.io/docs/current/functions/array.html]

postgresql 
[https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 


> Add ARRAY_REMOVE supported in SQL & Table API
> -
>
> Key: FLINK-31102
> URL: https://issues.apache.org/jira/browse/FLINK-31102
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Remove all elements that equal to element from array.
> Syntax:
> array_remove(array, needle)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> An ARRAY. If array is NULL, the result is NULL. 
> Examples:
> {code:sql}
> SELECT array_remove(array[1, 2, 3, null, 3], 3); 
> -- [1,2,null]
> {code}
> See also
> spark https://spark.apache.org/docs/latest/api/sql/index.html#array_remove
> presto [https://prestodb.io/docs/current/functions/array.html]
> postgresql 
> [https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-03-10 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17699136#comment-17699136
 ] 

jackylau commented on FLINK-31102:
--

[~Sergey Nuyanzin] , thanks for your suggestion; it is fixed now. I will keep these problems in mind when supporting other array functions later.

> Add ARRAY_REMOVE supported in SQL & Table API
> -
>
> Key: FLINK-31102
> URL: https://issues.apache.org/jira/browse/FLINK-31102
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Remove all elements that equal to element from array.
> Syntax:
> array_remove(array, needle)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> An ARRAY. If value is NULL, the result is NULL. 
> Examples:
> {code:sql}
> SELECT array_remove(array[1, 2, 3, null, 3], 3); 
> -- [1,2,null]
> {code}
> See also
> spark 
> [[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]
> presto [https://prestodb.io/docs/current/functions/array.html]
> postgresql 
> [https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-03-10 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698956#comment-17698956
 ] 

jackylau edited comment on FLINK-31102 at 3/10/23 2:48 PM:
---

[~Sergey Nuyanzin] yep, it was a misspelling and I have fixed the description


was (Author: jackylau):
[~jackylau] yeap, it is a misspelling and i have fixed description

> Add ARRAY_REMOVE supported in SQL & Table API
> -
>
> Key: FLINK-31102
> URL: https://issues.apache.org/jira/browse/FLINK-31102
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Remove all elements that equal to element from array.
> Syntax:
> array_remove(array, needle)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> An ARRAY. If value is NULL, the result is NULL. 
> Examples:
> {code:sql}
> SELECT array_remove(array[1, 2, 3, null, 3], 3); 
> -- [1,2,null]
> {code}
> See also
> spark 
> [[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]
> presto [https://prestodb.io/docs/current/functions/array.html]
> postgresql 
> [https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-03-10 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698956#comment-17698956
 ] 

jackylau commented on FLINK-31102:
--

[~jackylau] yep, it was a misspelling and I have fixed the description

> Add ARRAY_REMOVE supported in SQL & Table API
> -
>
> Key: FLINK-31102
> URL: https://issues.apache.org/jira/browse/FLINK-31102
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Remove all elements that equal to element from array.
> Syntax:
> array_remove(array, needle)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> An ARRAY. If value is NULL, the result is NULL. 
> Examples:
> {code:sql}
> SELECT array_remove(array[1, 2, 3, null, 3], 3); 
> -- [1,2,null]
> {code}
> See also
> spark 
> [[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]
> presto [https://prestodb.io/docs/current/functions/array.html]
> postgresql 
> [https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-03-10 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31102:
-
Description: 
Remove all elements that equal to element from array.

Syntax:
array_remove(array, needle)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If value is NULL, the result is NULL. 
Examples:
{code:sql}
SELECT array_remove(array[1, 2, 3, null, 3], 3); 
-- [1,2,null]
{code}
See also
spark 
[[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]

presto [https://prestodb.io/docs/current/functions/array.html]

postgresql 
[https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 

  was:
Remove all elements that equal to element from array.

Syntax:
array_remove(array)

Arguments:
array: An ARRAY to be handled.

Returns:

An ARRAY. If value is NULL, the result is NULL. 
Examples:
{code:sql}
SELECT array_remove(array(1, 2, 3, null, 3), 3); 
-- [1,2,null]
{code}
See also
spark 
[[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]

presto [https://prestodb.io/docs/current/functions/array.html]

postgresql 
[https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 


> Add ARRAY_REMOVE supported in SQL & Table API
> -
>
> Key: FLINK-31102
> URL: https://issues.apache.org/jira/browse/FLINK-31102
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Remove all elements that equal to element from array.
> Syntax:
> array_remove(array, needle)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> An ARRAY. If value is NULL, the result is NULL. 
> Examples:
> {code:sql}
> SELECT array_remove(array[1, 2, 3, null, 3], 3); 
> -- [1,2,null]
> {code}
> See also
> spark 
> [[https://spark.apache.org/docs/latest/api/sql/index.html#array_size]|https://spark.apache.org/docs/latest/api/sql/index.html#array_remove]
> presto [https://prestodb.io/docs/current/functions/array.html]
> postgresql 
> [https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-10 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698812#comment-17698812
 ] 

jackylau commented on FLINK-31377:
--

[~Sergey Nuyanzin] but it impacts the Table API and the Python API, and the incorrect null-handling logic in the ARRAY_CONTAINS example will mislead others implementing other functions.
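The null-handling pitfall can be illustrated with a plain-Java sketch (the ElementGetter interface here is a hypothetical stand-in for Flink's ArrayData.ElementGetter, not the real API):

```java
public class NullSafeGetterSketch {
    /** Hypothetical stand-in for Flink's ArrayData.ElementGetter. */
    interface ElementGetter {
        Object getElementOrNull(Object[] array, int pos);
    }

    // A getter derived from a NOT NULL element type (e.g. the needle's type)
    // dereferences the slot unconditionally and fails on a null entry.
    static final ElementGetter NOT_NULL_GETTER =
            (array, pos) -> array[pos].toString();

    // A getter derived from the haystack's nullable element type checks for
    // null before reading, which is the behavior ARRAY_CONTAINS needs.
    static final ElementGetter NULL_SAFE_GETTER =
            (array, pos) -> array[pos] == null ? null : array[pos].toString();

    public static void main(String[] args) {
        Object[] haystack = {null, "a"};
        System.out.println(NULL_SAFE_GETTER.getElementOrNull(haystack, 0)); // null
        System.out.println(NULL_SAFE_GETTER.getElementOrNull(haystack, 1)); // a
        // NOT_NULL_GETTER.getElementOrNull(haystack, 0) would throw NullPointerException
    }
}
```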

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698780#comment-17698780
 ] 

jackylau commented on FLINK-31377:
--

this https://issues.apache.org/jira/browse/FLINK-27438 is purely at the SQL 
level, so it is not a blocker.

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698777#comment-17698777
 ] 

jackylau commented on FLINK-31377:
--

[~Sergey Nuyanzin] this unit test can reproduce it: 
{code:java}
// code placeholder
Stream getTestSetSpecs() {
return Stream.of(
TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
.onFieldsWithData(
new Map[] {
null,
CollectionUtil.map(entry(1, "a"), entry(2, 
"b")),
CollectionUtil.map(entry(3, "c"), entry(4, 
"d")),
},
null)
.andDataTypes(
DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
DataTypes.STRING())),
DataTypes.STRING())
.testResult(
$("f0").arrayContains(
CollectionUtil.map(entry(3, "c"), 
entry(4, "d"))),
"ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
true,
DataTypes.BOOLEAN()));
} {code}
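The AssertionError in the title can be understood with a toy model. This is purely illustrative and is not BinaryArrayData's actual binary layout: a reader that skips the null bitmap interprets a null slot's placeholder bytes as a size and ends up with the negative value from the error message.

```java
// Toy model of the failure mode (not BinaryArrayData's actual layout):
// a null-aware reader consults the null bitmap first; a reader built for
// NOT NULL elements skips it and treats the null slot's garbage bytes as
// a size, producing the negative value seen in the AssertionError.
public class NullSlotSketch {
    static long readValueArraySize(long[] slots, boolean[] nullBitmap, int pos, boolean checkNulls) {
        if (checkNulls && nullBitmap[pos]) {
            return 0; // null element: nothing to read
        }
        long size = slots[pos];
        if (size < 0) {
            throw new IllegalStateException("valueArraySize (" + size + ") should >= 0");
        }
        return size;
    }

    public static void main(String[] args) {
        long[] slots = {-6L, 8L};             // slot 0 holds a null marker's garbage
        boolean[] nullBitmap = {true, false};
        System.out.println(readValueArraySize(slots, nullBitmap, 0, true)); // 0
        try {
            readValueArraySize(slots, nullBitmap, 0, false); // skips the bitmap
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // valueArraySize (-6) should >= 0
        }
    }
}
```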

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698710#comment-17698710
 ] 

jackylau commented on FLINK-31377:
--

hi [~snuyanzin], array_contains has another bug, could you also have a look? 

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698701#comment-17698701
 ] 

jackylau commented on FLINK-31377:
--

hi [~twalthr], I don't think the needle and element types are identical, 
because "a NOT NULL type can be stored in a NULL type but not vice versa." 
After digging into the code in TypeInferenceOperandChecker.insertImplicitCasts, 
you can see supportsAvoidingCast there.
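
The nullability point can be illustrated with a toy example. These names are purely illustrative and are not Flink's API: an element getter generated for a NOT NULL type skips the null check, so it breaks on arrays that do contain null elements.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.BiFunction;

// Illustration (not Flink's actual codegen) of why building the element
// getter from the needle's NOT NULL type is unsafe: a getter that assumes
// non-null elements blindly dereferences a null slot.
public class ElementGetterSketch {
    // Getter built from a nullable element type: checks for null first.
    static BiFunction<List<String>, Integer, Object> nullableGetter() {
        return (array, pos) -> array.get(pos) == null ? null : array.get(pos).toUpperCase();
    }

    // Getter built from a NOT NULL type (like the needle's): skips the check.
    static BiFunction<List<String>, Integer, Object> notNullGetter() {
        return (array, pos) -> array.get(pos).toUpperCase(); // NPE on null slots
    }

    public static void main(String[] args) {
        List<String> haystack = Arrays.asList("a", null, "b");
        System.out.println(nullableGetter().apply(haystack, 1)); // null, handled
        try {
            notNullGetter().apply(haystack, 1);
        } catch (NullPointerException e) {
            System.out.println("NOT NULL getter failed on a null element");
        }
    }
}
```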

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Updated] (FLINK-31166) array_contains does NOT work when haystack elements are not nullable and needle is nullable

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31166:
-
Summary: array_contains does NOT work when haystack elements are not 
nullable and needle is nullable  (was: array_contains element type error)

> array_contains does NOT work when haystack elements are not nullable and 
> needle is nullable
> ---
>
> Key: FLINK-31166
> URL: https://issues.apache.org/jira/browse/FLINK-31166
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-02-21-18-37-45-202.png, 
> image-2023-02-21-18-41-19-385.png, image-2023-02-22-09-56-59-257.png
>
>
> {{ARRAY_CONTAINS}} works ok for the case when both haystack elements and 
> needle are not nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1], 0);{code}
> it works ok when both haystack elements and needle are nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1, NULL], CAST(NULL AS INT));{code}
> it works ok when haystack elements are nullable and needle is not nullable 
> e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1, NULL], 1);{code}
> and it does NOT work when haystack elements are not nullable and needle is 
> nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1], CAST(NULL AS INT));{code}
>  
> !image-2023-02-22-09-56-59-257.png!
>  
> !image-2023-02-21-18-41-19-385.png!
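
The four cases above can be modeled in plain Java under SQL's three-valued logic. This is a hypothetical model following Spark-like array_contains semantics, not Flink's implementation: TRUE when the needle is found, NULL when the needle is NULL or when it is absent but a null element exists, FALSE otherwise.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Hypothetical model of ARRAY_CONTAINS under three-valued logic
// (Spark-like semantics; not Flink's implementation).
public class ArrayContainsSketch {
    static Boolean arrayContains(List<Integer> haystack, Integer needle) {
        if (haystack == null || needle == null) {
            return null; // comparisons with NULL are unknown
        }
        boolean sawNull = false;
        for (Integer element : haystack) {
            if (element == null) {
                sawNull = true;
            } else if (Objects.equals(element, needle)) {
                return true;
            }
        }
        // needle absent, but null elements make the answer unknown
        return sawNull ? null : false;
    }

    public static void main(String[] args) {
        System.out.println(arrayContains(Arrays.asList(0, 1), 0));          // true
        System.out.println(arrayContains(Arrays.asList(0, 1, null), null)); // null
        System.out.println(arrayContains(Arrays.asList(0, 1, null), 1));    // true
        System.out.println(arrayContains(Arrays.asList(0, 1), null));       // null
    }
}
```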





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698364#comment-17698364
 ] 

jackylau commented on FLINK-31377:
--

hi [~twalthr] the fix is simple. I guess you accidentally used the needle 
type where the element type should be used.

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698352#comment-17698352
 ] 

jackylau commented on FLINK-31377:
--

and I found another problem here: 
https://issues.apache.org/jira/browse/FLINK-31381 

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Updated] (FLINK-31381) UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31381:
-
Description: 
While fixing https://issues.apache.org/jira/browse/FLINK-31377, I found 
another bug, which https://github.com/apache/flink/pull/18967/files did not 
fix completely:
{code:java}
SELECT array_contains(ARRAY[CAST(null AS MAP), MAP[1, 2]], MAP[1, 2]); {code}
{code:java}
Caused by: java.lang.UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1069)
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1091)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.castTo(SqlValidatorUtils.java:82)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForMultisetConstructor(SqlValidatorUtils.java:74)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForArrayConstructor(SqlValidatorUtils.java:39)
    at org.apache.flink.table.planner.functions.sql.SqlArrayConstructor.inferReturnType(SqlArrayConstructor.java:44)
    at org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:504)
    at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:605)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6218)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6203)
    at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1861)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1852)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:74)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:69)
    at org.apache.flink.table.types.inference.strategies.RootArgumentTypeStrategy.inferArgumentType(RootArgumentTypeStrategy.java:58)
    at org.apache.flink.table.types.inference.strategies.SequenceInputTypeStrategy.inferInputTypes(SequenceInputTypeStrategy.java:76)
    at org.apache.flink.table.planner.functions.inference.TypeInferenceOperandInference.inferOperandTypesOrError(TypeInferenceOperandInference.java:91)
    at org.apache.flink.table. {code}

  was:
when i fix this https://issues.apache.org/jira/browse/FLINK-31377, and find 
another bug.

which is not fixed completely
{code:java}
SELECT array_contains(ARRAY[CAST(null AS MAP), MAP[1, 2]], MAP[1, 2]); {code}
{code:java}
Caused by: java.lang.UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1069)
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1091)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.castTo(SqlValidatorUtils.java:82)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForMultisetConstructor(SqlValidatorUtils.java:74)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForArrayConstructor(SqlValidatorUtils.java:39)
    at org.apache.flink.table.planner.functions.sql.SqlArrayConstructor.inferReturnType(SqlArrayConstructor.java:44)
    at org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:504)
    at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:605)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6218)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6203)
    at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1861)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1852)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:74)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:69)
    at org.apache.flink.table.types.inference.strategies.RootArgumentTypeStrategy.inferArgumentType(RootArgumentTypeStrategy.java:58)
    at org.apache.flink.table.types.inference.strategies.SequenceInputTypeStrategy.inferInputTypes(SequenceInputTypeStrategy.java:76)
    at org.apache.flink.table.planner.functions.inference.TypeInferenceOperandInference.inferOperandTypesOrError(TypeInferenceOperandInference.java:91)
    at org.apache.flink.table. {code}


> UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP
> 

[jira] [Created] (FLINK-31381) UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP

2023-03-09 Thread jackylau (Jira)
jackylau created FLINK-31381:


 Summary: UnsupportedOperationException: Unsupported type when 
convertTypeToSpec: MAP
 Key: FLINK-31381
 URL: https://issues.apache.org/jira/browse/FLINK-31381
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


While fixing https://issues.apache.org/jira/browse/FLINK-31377, I found 
another bug, which is not fixed completely:
{code:java}
SELECT array_contains(ARRAY[CAST(null AS MAP), MAP[1, 2]], MAP[1, 2]); {code}
{code:java}
Caused by: java.lang.UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1069)
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1091)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.castTo(SqlValidatorUtils.java:82)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForMultisetConstructor(SqlValidatorUtils.java:74)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForArrayConstructor(SqlValidatorUtils.java:39)
    at org.apache.flink.table.planner.functions.sql.SqlArrayConstructor.inferReturnType(SqlArrayConstructor.java:44)
    at org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:504)
    at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:605)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6218)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6203)
    at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1861)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1852)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:74)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:69)
    at org.apache.flink.table.types.inference.strategies.RootArgumentTypeStrategy.inferArgumentType(RootArgumentTypeStrategy.java:58)
    at org.apache.flink.table.types.inference.strategies.SequenceInputTypeStrategy.inferInputTypes(SequenceInputTypeStrategy.java:76)
    at org.apache.flink.table.planner.functions.inference.TypeInferenceOperandInference.inferOperandTypesOrError(TypeInferenceOperandInference.java:91)
    at org.apache.flink.table. {code}





[jira] [Updated] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31377:
-
Description: 
You can reproduce this error with the test below. The root cause is in 
ARRAY_CONTAINS:
{code:java}
// If the needle is a MAP NOT NULL and the array contains a null element,
// the line below creates an element getter whose
// getElementOrNull(ArrayData array, int pos) can only handle non-null
// elements, so it throws an exception.
elementGetter = ArrayData.createElementGetter(needleDataType.getLogicalType());
{code}
 
{code:java}
// code placeholder
Stream getTestSetSpecs() {
return Stream.of(
TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
.onFieldsWithData(
new Map[] {
null,
CollectionUtil.map(entry(1, "a"), entry(2, 
"b")),
CollectionUtil.map(entry(3, "c"), entry(4, 
"d")),
},
null)
.andDataTypes(
DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
DataTypes.STRING())),
DataTypes.STRING())
.testResult(
$("f0").arrayContains(
CollectionUtil.map(entry(3, "c"), 
entry(4, "d"))),
"ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
true,
DataTypes.BOOLEAN()));
}

{code}

  was:
you can reproduce this error below. and reason is in ARRAY_CONTAINS
{code:java}
if the needle is a Map NOT NULL,and the array has null element.

this will cause getElementOrNull(ArrayData array, int pos) only can handle not 
null. so it throw exception
/*elementGetter = 
ArrayData.createElementGetter(needleDataType.getLogicalType());*/,

{code}
 
{code:java}
// code placeholder
Stream getTestSetSpecs() {
return Stream.of(
TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
.onFieldsWithData(
new Map[] {
null,
CollectionUtil.map(entry(1, "a"), entry(2, 
"b")),
CollectionUtil.map(entry(3, "c"), entry(4, 
"d")),
},
null)
.andDataTypes(
DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
DataTypes.STRING())),
DataTypes.STRING())
.testResult(
$("f0").arrayContains(
CollectionUtil.map(entry(3, "c"), 
entry(4, "d"))),
"ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
true,
DataTypes.BOOLEAN()));
}

{code}


> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this bellowing will cause getElementOrNull(ArrayData array, int pos) only can 
> handle not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}




[jira] [Updated] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31377:
-
Description: 
you can reproduce this error below. and reason is in ARRAY_CONTAINS
{code:java}
if the needle is a Map NOT NULL,and the array has null element.

this will cause getElementOrNull(ArrayData array, int pos) only can handle not 
null. so it throw exception
/*elementGetter = 
ArrayData.createElementGetter(needleDataType.getLogicalType());*/,

{code}
 
{code:java}
// code placeholder
Stream getTestSetSpecs() {
return Stream.of(
TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
.onFieldsWithData(
new Map[] {
null,
CollectionUtil.map(entry(1, "a"), entry(2, 
"b")),
CollectionUtil.map(entry(3, "c"), entry(4, 
"d")),
},
null)
.andDataTypes(
DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
DataTypes.STRING())),
DataTypes.STRING())
.testResult(
$("f0").arrayContains(
CollectionUtil.map(entry(3, "c"), 
entry(4, "d"))),
"ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
true,
DataTypes.BOOLEAN()));
}

{code}

  was:
you can reproduce this error below. and reason is in ARRAY_CONTAINS
{code:java}
if the needle is a Map NOT NULL,and the array has null.

this will cause getElementOrNull(ArrayData array, int pos) only can handle not 
null. so it throw exception
/*elementGetter = 
ArrayData.createElementGetter(needleDataType.getLogicalType());*/,

{code}
 
{code:java}
// code placeholder
Stream getTestSetSpecs() {
return Stream.of(
TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
.onFieldsWithData(
new Map[] {
null,
CollectionUtil.map(entry(1, "a"), entry(2, 
"b")),
CollectionUtil.map(entry(3, "c"), entry(4, 
"d")),
},
null)
.andDataTypes(
DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
DataTypes.STRING())),
DataTypes.STRING())
.testResult(
$("f0").arrayContains(
CollectionUtil.map(entry(3, "c"), 
entry(4, "d"))),
"ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
true,
DataTypes.BOOLEAN()));
}

{code}


> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null element.
> this will cause getElementOrNull(ArrayData array, int pos) only can handle 
> not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Updated] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31377:
-
Description: 
you can reproduce this error below. and reason is in ARRAY_CONTAINS
{code:java}
if the needle is a Map NOT NULL,and the array has null.

this will cause getElementOrNull(ArrayData array, int pos) only can handle not 
null. so it throw exception
/*elementGetter = 
ArrayData.createElementGetter(needleDataType.getLogicalType());*/,

{code}
 
{code:java}
// code placeholder
Stream getTestSetSpecs() {
return Stream.of(
TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
.onFieldsWithData(
new Map[] {
null,
CollectionUtil.map(entry(1, "a"), entry(2, 
"b")),
CollectionUtil.map(entry(3, "c"), entry(4, 
"d")),
},
null)
.andDataTypes(
DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
DataTypes.STRING())),
DataTypes.STRING())
.testResult(
$("f0").arrayContains(
CollectionUtil.map(entry(3, "c"), 
entry(4, "d"))),
"ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
true,
DataTypes.BOOLEAN()));
}

{code}

  was:
{code:java}
// code placeholder
when i use  
/*elementGetter = 
ArrayData.createElementGetter(needleDataType.getLogicalType());*/, 

if the element has map which is null
Object getElementOrNull(ArrayData array, int pos);

{code}


> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> you can reproduce this error below. and reason is in ARRAY_CONTAINS
> {code:java}
> if the needle is a Map NOT NULL,and the array has null.
> this will cause getElementOrNull(ArrayData array, int pos) only can handle 
> not null. so it throw exception
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/,
> {code}
>  
> {code:java}
> // code placeholder
> Stream getTestSetSpecs() {
> return Stream.of(
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.ARRAY_CONTAINS)
> .onFieldsWithData(
> new Map[] {
> null,
> CollectionUtil.map(entry(1, "a"), entry(2, 
> "b")),
> CollectionUtil.map(entry(3, "c"), entry(4, 
> "d")),
> },
> null)
> .andDataTypes(
> DataTypes.ARRAY(DataTypes.MAP(DataTypes.INT(), 
> DataTypes.STRING())),
> DataTypes.STRING())
> .testResult(
> $("f0").arrayContains(
> CollectionUtil.map(entry(3, "c"), 
> entry(4, "d"))),
> "ARRAY_CONTAINS(f0, MAP[3, 'c', 4, 'd'])",
> true,
> DataTypes.BOOLEAN()));
> }
> {code}





[jira] [Comment Edited] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698318#comment-17698318
 ] 

jackylau edited comment on FLINK-31377 at 3/9/23 11:19 AM:
---

hi [~twalthr], this is a bug in array_contains that I found while developing 
array_remove.
{code:java}
ArrayData.createElementGetter(needleDataType.getLogicalType()) {code}
When the needle is MAP NOT NULL, ArrayData.createElementGetter does not 
process nulls in arrays and throws an exception.

 


was (Author: jackylau):
hi [~twalthr] ,this is a bug from array_contains, when i develop array_remove i 
found.
{code:java}
ArrayData.createElementGetter(needleDataType.getLogicalType()) {code}
when the needle is MAP NOT NULL. then ArrayData.createElementGetter will not 
process null is arrays. and will throw expcetion

 

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // code placeholder
> when i use  
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/, 
> if the element has map which is null
> Object getElementOrNull(ArrayData array, int pos);
> {code}





[jira] [Updated] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31377:
-
Description: 
{code:java}
// code placeholder
when i use  
/*elementGetter = 
ArrayData.createElementGetter(needleDataType.getLogicalType());*/, 

if the element has map which is null
Object getElementOrNull(ArrayData array, int pos);

{code}

  was:
{code:java}
// code placeholder
when i use , if the element has map which is null
Object getElementOrNull(ArrayData array, int pos);{code}


> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // code placeholder
> when i use  
> /*elementGetter = 
> ArrayData.createElementGetter(needleDataType.getLogicalType());*/, 
> if the element has map which is null
> Object getElementOrNull(ArrayData array, int pos);
> {code}





[jira] [Commented] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698318#comment-17698318
 ] 

jackylau commented on FLINK-31377:
--

hi [~twalthr], this is a bug in array_contains that I found while developing 
array_remove.
{code:java}
ArrayData.createElementGetter(needleDataType.getLogicalType()) {code}
When the needle is MAP NOT NULL, ArrayData.createElementGetter does not 
process nulls in arrays and throws an exception.

 

> BinaryArrayData getArray/getMap should Handle null correctly AssertionError: 
> valueArraySize (-6) should >= 0 
> -
>
> Key: FLINK-31377
> URL: https://issues.apache.org/jira/browse/FLINK-31377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> // code placeholder
> when i use , if the element has map which is null
> Object getElementOrNull(ArrayData array, int pos);{code}





[jira] [Commented] (FLINK-31166) array_contains element type error

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698315#comment-17698315
 ] 

jackylau commented on FLINK-31166:
--

I have fixed this in the PR, [~Sergey Nuyanzin]; thanks very much for your 
detailed review, I learned a lot.

> array_contains element type error
> -
>
> Key: FLINK-31166
> URL: https://issues.apache.org/jira/browse/FLINK-31166
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-02-21-18-37-45-202.png, 
> image-2023-02-21-18-41-19-385.png, image-2023-02-22-09-56-59-257.png
>
>
> {{ARRAY_CONTAINS}} works ok for the case when both haystack elements and 
> needle are not nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1], 0);{code}
> it works ok when both haystack elements and needle are nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1, NULL], CAST(NULL AS INT));{code}
> it works ok when haystack elements are nullable and needle is not nullable 
> e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1, NULL], 1);{code}
> and it does NOT work when haystack elements are not nullable and needle is 
> nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1], CAST(NULL AS INT));{code}
>  
> !image-2023-02-22-09-56-59-257.png!
>  
> !image-2023-02-21-18-41-19-385.png!





[jira] [Updated] (FLINK-31166) array_contains element type error

2023-03-09 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-31166:
-
Description: 
{{ARRAY_CONTAINS}} works ok for the case when both haystack elements and needle 
are not nullable e.g.
{code:sql}
SELECT array_contains(ARRAY[0, 1], 0);{code}
it works ok when both haystack elements and needle are nullable e.g.
{code:sql}
SELECT array_contains(ARRAY[0, 1, NULL], CAST(NULL AS INT));{code}
it works ok when haystack elements are nullable and needle is not nullable e.g.
{code:sql}
SELECT array_contains(ARRAY[0, 1, NULL], 1);{code}
and it does NOT work when haystack elements are not nullable and needle is 
nullable e.g.
{code:sql}
SELECT array_contains(ARRAY[0, 1], CAST(NULL AS INT));{code}
 

!image-2023-02-22-09-56-59-257.png!

 

!image-2023-02-21-18-41-19-385.png!
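The four nullability cases above can be emulated in a few lines of Python (a sketch, not the planner's code), with `None` standing in for SQL NULL; the last case is the one that failed:

```python
def array_contains(haystack, needle):
    # SQL-style containment: a NULL needle only matches a NULL element,
    # and a NULL element never matches a non-NULL needle.
    return any(e == needle if e is not None and needle is not None
               else e is None and needle is None
               for e in haystack)

# The four cases from the issue description:
print(array_contains([0, 1], 0))           # True
print(array_contains([0, 1, None], None))  # True
print(array_contains([0, 1, None], 1))     # True
print(array_contains([0, 1], None))        # False (the case that crashed)
```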

  was:
!image-2023-02-22-09-56-59-257.png!

 

!image-2023-02-21-18-41-19-385.png!


> array_contains element type error
> -
>
> Key: FLINK-31166
> URL: https://issues.apache.org/jira/browse/FLINK-31166
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-02-21-18-37-45-202.png, 
> image-2023-02-21-18-41-19-385.png, image-2023-02-22-09-56-59-257.png
>
>
> {{ARRAY_CONTAINS}} works ok for the case when both haystack elements and 
> needle are not nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1], 0);{code}
> it works ok when both haystack elements and needle are nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1, NULL], CAST(NULL AS INT));{code}
> it works ok when haystack elements are nullable and needle is not nullable 
> e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1, NULL], 1);{code}
> and it does NOT work when haystack elements are not nullable and needle is 
> nullable e.g.
> {code:sql}
> SELECT array_contains(ARRAY[0, 1], CAST(NULL AS INT));{code}
>  
> !image-2023-02-22-09-56-59-257.png!
>  
> !image-2023-02-21-18-41-19-385.png!





[jira] [Commented] (FLINK-31166) array_contains element type error

2023-03-09 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698314#comment-17698314
 ] 

jackylau commented on FLINK-31166:
--

hi [~Sergey Nuyanzin], thanks for your description. Yes, you are right, and I 
will update the Jira description later.

> array_contains element type error
> -
>
> Key: FLINK-31166
> URL: https://issues.apache.org/jira/browse/FLINK-31166
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-02-21-18-37-45-202.png, 
> image-2023-02-21-18-41-19-385.png, image-2023-02-22-09-56-59-257.png
>
>
> !image-2023-02-22-09-56-59-257.png!
>  
> !image-2023-02-21-18-41-19-385.png!





[jira] [Created] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)
jackylau created FLINK-31377:


 Summary: BinaryArrayData getArray/getMap should Handle null 
correctly AssertionError: valueArraySize (-6) should >= 0 
 Key: FLINK-31377
 URL: https://issues.apache.org/jira/browse/FLINK-31377
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


{code:java}
// code placeholder
when i use , if the element has map which is null
Object getElementOrNull(ArrayData array, int pos);{code}





[jira] [Commented] (FLINK-26945) Add DATE_SUB supported in SQL & Table API

2023-03-08 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697823#comment-17697823
 ] 

jackylau commented on FLINK-26945:
--

[~jark], yes. date_sub can be completely replaced by minus, and date_add can be 
completely replaced by plus.
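The claimed equivalence is easy to sketch: date_sub(startDate, numDays) behaves like subtracting an interval of numDays days, with a negative count adding abs(numDays) days. A minimal Python illustration of those semantics (not Flink code):

```python
from datetime import date, timedelta

def date_sub(start_date: date, num_days: int) -> date:
    # Equivalent to start_date - INTERVAL 'num_days' DAY: a negative
    # num_days adds abs(num_days) days, per the Spark/Hive semantics.
    return start_date - timedelta(days=num_days)

print(date_sub(date(2016, 7, 30), 1))   # 2016-07-29
print(date_sub(date(2016, 7, 30), -1))  # 2016-07-31
```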

> Add DATE_SUB supported in SQL & Table API
> -
>
> Key: FLINK-26945
> URL: https://issues.apache.org/jira/browse/FLINK-26945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Returns the date {{numDays}} before {{{}startDate{}}}.
> Syntax:
> {code:java}
> date_sub(startDate, numDays) {code}
> Arguments:
>  * {{{}startDate{}}}: A DATE expression.
>  * {{{}numDays{}}}: An INTEGER expression.
> Returns:
> A DATE.
> If {{numDays}} is negative abs(num_days) are added to {{{}startDate{}}}.
> If the result date overflows the date range the function raises an error.
> Examples:
> {code:java}
> > SELECT date_sub('2016-07-30', 1);
>  2016-07-29 {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]




