[jira] [Resolved] (FLINK-35437) BlockStatementGrouper uses lots of memory

2024-05-24 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz resolved FLINK-35437.
--
Resolution: Fixed

Fixed in 3d40bd7dd197b12b7b156bd758b4129148e885d1

> BlockStatementGrouper uses lots of memory
> -
>
> Key: FLINK-35437
> URL: https://issues.apache.org/jira/browse/FLINK-35437
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> For deeply nested {{if else}} statements, {{BlockStatementGrouper}} uses 
> excessive amounts of memory and quickly fails with an OOM error.
> When running JobManagers with around 400 MB of memory, a query like the 
> following triggers the problem:
> {code}
> select case
>   when orderid = 0 then 1
>   when orderid = 1 then 2
>   when orderid = 2 then 3
>   -- ... the same pattern repeated for every consecutive value of orderid ...
>   when orderid = 135 then 136
>   ...
> {code}

[jira] [Closed] (FLINK-35216) Support for RETURNING clause of JSON_QUERY

2024-05-23 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-35216.

Resolution: Implemented

Implemented in 0737220959fe52ee22535e7db55b015a46a6294e

> Support for RETURNING clause of JSON_QUERY
> --
>
> Key: FLINK-35216
> URL: https://issues.apache.org/jira/browse/FLINK-35216
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The SQL standard says JSON_QUERY should support a RETURNING clause, similar 
> to JSON_VALUE. Calcite already supports the clause for JSON_VALUE, but not 
> for JSON_QUERY.
> {code}
> <JSON query> ::=
>   JSON_QUERY <left paren>
>   <JSON API common syntax>
>   [ <JSON output clause> ]
>   [ <JSON query wrapper behavior> WRAPPER ]
>   [ <JSON query quotes behavior> QUOTES [ ON SCALAR STRING ] ]
>   [ <JSON query empty behavior> ON EMPTY ]
>   [ <JSON query error behavior> ON ERROR ]
>   <right paren>
>
> <JSON output clause> ::=
>   RETURNING <data type>
>   [ FORMAT <JSON representation> ]
> {code}
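For illustration, the implemented clause would allow queries along these lines (a sketch; the JSON paths and return types used here are assumptions, not taken from the ticket):

{code}
-- hypothetical examples of JSON_QUERY with a RETURNING clause
SELECT JSON_QUERY('{"items": [1, 2, 3]}', '$.items' RETURNING ARRAY<STRING>);
SELECT JSON_QUERY('{"a": {"b": 1}}', '$.a');  -- without RETURNING: a JSON string
{code}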



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35437) BlockStatementGrouper uses lots of memory

2024-05-23 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-35437:
-
Description: 
For deeply nested {{if else}} statements, {{BlockStatementGrouper}} uses 
excessive amounts of memory and quickly fails with an OOM error.

When running JobManagers with around 400 MB of memory, a query like the 
following triggers the problem:
{code}
select case
  when orderid = 0 then 1
  when orderid = 1 then 2
  when orderid = 2 then 3
  -- ... the same pattern repeated for every consecutive value of orderid ...
  when orderid = 161 then 162
  ...
{code}

[jira] [Created] (FLINK-35437) BlockStatementGrouper uses lots of memory

2024-05-23 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35437:


 Summary: BlockStatementGrouper uses lots of memory
 Key: FLINK-35437
 URL: https://issues.apache.org/jira/browse/FLINK-35437
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


For deeply nested {{if else}} statements, {{BlockStatementGrouper}} uses 
excessive amounts of memory and quickly fails with an OOM error.

When running JobManagers with around 400 MB of memory, a query like the 
following triggers the problem:
{code}
select case
  when orderid = 0 then 1
  when orderid = 1 then 2
  when orderid = 2 then 3
  -- ... the same pattern repeated for every consecutive value of orderid ...
  when orderid = 152 then 153
  ...
{code}

[jira] [Closed] (FLINK-32706) Add SPLIT(STRING) support in SQL & Table API

2024-05-22 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32706.

Fix Version/s: 1.20.0
   Resolution: Implemented

Implemented in 4c6571d075b1d1ff5e7b9d7ec3bf625329155fbf

> Add SPLIT(STRING) support in SQL & Table API
> 
>
> Key: FLINK-32706
> URL: https://issues.apache.org/jira/browse/FLINK-32706
> Project: Flink
>  Issue Type: Improvement
>Reporter: Hanyu Zheng
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>
> SPLIT Function
> Description
> Splits a string into an array of substrings, based on a delimiter.
> Syntax
> The syntax for the SPLIT function is:
> {code:java}
> SPLIT(col1, delimiter){code}
> Splits a string into an array of substrings based on a delimiter. If the 
> delimiter is not found, the original string is returned as the only 
> element in the array. If the delimiter is empty, the string is split into 
> its individual characters. If either the string or the delimiter is NULL, 
> then NULL is returned.
> If the delimiter is found at the beginning or end of the string, or there are 
> contiguous delimiters, then an empty string is added to the array.
> Example
> Let's look at an example of how to use the SPLIT function:
> 
> {code:java}
> SELECT SPLIT('abcdefg', 'c');
> Result: ['ab', 'defg']
> {code}
> see also:
> 1. ksqlDB Split function
> ksqlDB provides a scalar function named {{SPLIT}} which splits a string into 
> an array of substrings based on a delimiter.
> Syntax: {{SPLIT(string, delimiter)}}
> For example: {{SPLIT('a,b,c', ',')}} will return {{['a', 'b', 'c']}}.
> [https://docs.ksqldb.io/en/0.8.1-ksqldb/developer-guide/ksqldb-reference/scalar-functions/#split]
> 2. Apache Hive Split function
> Hive offers a function named {{split}} which splits a string around a 
> specified delimiter and returns an array of strings.
> Syntax: {{array split(string str, string pat)}}
> For example: {{split('a,b,c', ',')}} will return {{["a", "b", "c"]}}.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
> 3. Spark SQL Split function
> Spark SQL also offers a function named {{split}}, similar to the one in 
> Hive.
> Syntax: {{split(str, pattern[, limit])}}
> Here, {{limit}} is an optional parameter to specify the maximum length of the 
> returned array.
> For example: {{split('oneAtwoBthreeC', '[ABC]', 2)}} will return {{["one", 
> "twoBthreeC"]}}.
> [https://spark.apache.org/docs/latest/api/sql/index.html#split]
> 4. Presto Split function
> Presto offers a function named {{split}} which splits a string around a 
> regular expression and returns an array of strings.
> Syntax: {{array split(string str, string regex)}}
> For example: {{split('a.b.c', '\.')}} will return {{["a", "b", "c"]}}.
> [https://prestodb.io/docs/current/functions/string.html]
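The edge cases described above can be sketched as queries (the expected results follow the semantics stated in this ticket; the final Flink behavior may differ):

{code}
SELECT SPLIT('abcdefg', 'c');            -- ['ab', 'defg']
SELECT SPLIT(',a,,b,', ',');             -- delimiters at the edges and contiguous
                                         -- delimiters yield empty strings:
                                         -- ['', 'a', '', 'b', '']
SELECT SPLIT('abc', '');                 -- empty delimiter splits every character:
                                         -- ['a', 'b', 'c']
SELECT SPLIT(CAST(NULL AS STRING), ','); -- NULL input yields NULL
{code}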



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35155) Introduce TableRuntimeException

2024-05-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-35155.

Resolution: Implemented

Implemented in 1904b215e36e4fd48e48ece7ffdf2f1470653130

> Introduce TableRuntimeException
> ---
>
> Key: FLINK-35155
> URL: https://issues.apache.org/jira/browse/FLINK-35155
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The {{throwException}} internal function throws a plain {{RuntimeException}}. 
> It would be nice to throw a more specific kind of exception from there, so 
> that such failures are easier to classify.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35216) Support for RETURNING clause of JSON_QUERY

2024-04-23 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35216:


 Summary: Support for RETURNING clause of JSON_QUERY
 Key: FLINK-35216
 URL: https://issues.apache.org/jira/browse/FLINK-35216
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


The SQL standard says JSON_QUERY should support a RETURNING clause, similar to 
JSON_VALUE. Calcite already supports the clause for JSON_VALUE, but not for 
JSON_QUERY.

{code}
<JSON query> ::=
  JSON_QUERY <left paren>
  <JSON API common syntax>
  [ <JSON output clause> ]
  [ <JSON query wrapper behavior> WRAPPER ]
  [ <JSON query quotes behavior> QUOTES [ ON SCALAR STRING ] ]
  [ <JSON query empty behavior> ON EMPTY ]
  [ <JSON query error behavior> ON ERROR ]
  <right paren>

<JSON output clause> ::=
  RETURNING <data type>
  [ FORMAT <JSON representation> ]
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35155) Introduce TableRuntimeException

2024-04-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35155:


 Summary: Introduce TableRuntimeException
 Key: FLINK-35155
 URL: https://issues.apache.org/jira/browse/FLINK-35155
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


The {{throwException}} internal function throws a plain {{RuntimeException}}. It 
would be nice to throw a more specific kind of exception from there, so that 
such failures are easier to classify.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33676) Implement restore tests for WindowAggregate node

2024-04-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33676.

Fix Version/s: 1.20.0
   Resolution: Implemented

Implemented in 
0dc2a0f74cf8d8ac10a54fa6c4f0c1f32dc2e273..68cc61a86187021c61e7f51ccff8c5912125d013

> Implement restore tests for WindowAggregate node
> 
>
> Key: FLINK-33676
> URL: https://issues.apache.org/jira/browse/FLINK-33676
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35021) AggregateQueryOperations produces wrong asSerializableString representation

2024-04-05 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-35021.

Resolution: Fixed

Fixed in 77215aaf6ca7ccbff7bd3752e59068ac9956d549

> AggregateQueryOperations produces wrong asSerializableString representation
> ---
>
> Key: FLINK-35021
> URL: https://issues.apache.org/jira/browse/FLINK-35021
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> A Table API query:
> {code}
> env.fromValues(1, 2, 3)
> .as("number")
> .select(col("number").count())
> .insertInto(TEST_TABLE_API)
> {code}
> produces
> {code}
> INSERT INTO `default`.`timo_eu_west_1`.`table_api_basic_api` SELECT `EXPR$0` 
> FROM (
> SELECT (COUNT(`number`)) AS `EXPR$0` FROM (
> SELECT `f0` AS `number` FROM (
> SELECT `f0` FROM (VALUES 
> (1),
> (2),
> (3)
> ) VAL$0(`f0`)
> )
> )
> GROUP BY 
> )
> {code}
> which is missing the grouping expression after {{GROUP BY}} and is therefore 
> not valid SQL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35021) AggregateQueryOperations produces wrong asSerializableString representation

2024-04-05 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35021:


 Summary: AggregateQueryOperations produces wrong 
asSerializableString representation
 Key: FLINK-35021
 URL: https://issues.apache.org/jira/browse/FLINK-35021
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


A Table API query:
{code}
env.fromValues(1, 2, 3)
.as("number")
.select(col("number").count())
.insertInto(TEST_TABLE_API)
{code}

produces

{code}
INSERT INTO `default`.`timo_eu_west_1`.`table_api_basic_api` SELECT `EXPR$0` 
FROM (
SELECT (COUNT(`number`)) AS `EXPR$0` FROM (
SELECT `f0` AS `number` FROM (
SELECT `f0` FROM (VALUES 
(1),
(2),
(3)
) VAL$0(`f0`)
)
)
GROUP BY 
)
{code}

which is missing the grouping expression after {{GROUP BY}} and is therefore 
not valid SQL.
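For a global aggregate like this one, a well-formed serialization would simply omit the empty {{GROUP BY}} clause (a hypothetical corrected form, not taken from the actual fix):

{code}
INSERT INTO `default`.`timo_eu_west_1`.`table_api_basic_api` SELECT `EXPR$0`
FROM (
    SELECT (COUNT(`number`)) AS `EXPR$0` FROM (
        SELECT `f0` AS `number` FROM (
            SELECT `f0` FROM (VALUES (1), (2), (3)) VAL$0(`f0`)
        )
    )
)
{code}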



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33805) Implement restore tests for OverAggregate node

2024-03-28 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33805.

Fix Version/s: 1.20.0
   Resolution: Fixed

Implemented in 
b3334d1527aab6c196752b63c3139ff5529598cc..bf60c8813598d3119375cec057930240642699d4

> Implement restore tests for OverAggregate node
> --
>
> Key: FLINK-33805
> URL: https://issues.apache.org/jira/browse/FLINK-33805
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34938) Incorrect behaviour for comparison functions

2024-03-27 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34938.

Resolution: Fixed

Implemented in 7ea5bcce6a58b69543b571e9746d7374ded028c5

> Incorrect behaviour for comparison functions
> 
>
> Key: FLINK-34938
> URL: https://issues.apache.org/jira/browse/FLINK-34938
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> There are a few issues with comparison functions.
> Some combinations of operand types throw:
> {code}
> Incomparable types: TIMESTAMP_LTZ(3) NOT NULL and TIMESTAMP(3)
> {code}
> The results of others depend on the order of the operands, because the least 
> restrictive precision is not calculated correctly.
> E.g.
> {code}
> final Instant ltz3 = Instant.ofEpochMilli(1_123);
> final Instant ltz0 = Instant.ofEpochMilli(1_000);
> TestSetSpec.forFunction(BuiltInFunctionDefinitions.EQUALS)
> .onFieldsWithData(ltz3, ltz0)
> .andDataTypes(TIMESTAMP_LTZ(3), TIMESTAMP_LTZ(0))
> // compare same type, but different precision, should 
> always adjust to the higher precision
> .testResult($("f0").isEqual($("f1")), "f0 = f1", 
> false, DataTypes.BOOLEAN())
> .testResult($("f1").isEqual($("f0")), "f1 = f0", true 
> /* but should be false */, DataTypes.BOOLEAN())
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34938) Incorrect behaviour for comparison functions

2024-03-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34938:


 Summary: Incorrect behaviour for comparison functions
 Key: FLINK-34938
 URL: https://issues.apache.org/jira/browse/FLINK-34938
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


There are a few issues with comparison functions.

Some combinations of operand types throw:
{code}
Incomparable types: TIMESTAMP_LTZ(3) NOT NULL and TIMESTAMP(3)
{code}

The results of others depend on the order of the operands, because the least 
restrictive precision is not calculated correctly.

E.g.

{code}
final Instant ltz3 = Instant.ofEpochMilli(1_123);
final Instant ltz0 = Instant.ofEpochMilli(1_000);

TestSetSpec.forFunction(BuiltInFunctionDefinitions.EQUALS)
.onFieldsWithData(ltz3, ltz0)
.andDataTypes(TIMESTAMP_LTZ(3), TIMESTAMP_LTZ(0))
// compare same type, but different precision, should 
always adjust to the higher precision
.testResult($("f0").isEqual($("f1")), "f0 = f1", false, 
DataTypes.BOOLEAN())
.testResult($("f1").isEqual($("f0")), "f1 = f0", true 
/* but should be false */, DataTypes.BOOLEAN())
{code}
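The expectation can also be stated as a plain SQL sketch (assuming a hypothetical table {{t}} with columns {{f0 TIMESTAMP_LTZ(3)}} holding 1.123 s and {{f1 TIMESTAMP_LTZ(0)}} holding 1 s):

{code}
-- with correct precision widening both operands are compared at precision 3,
-- so the comparison must be symmetric:
SELECT f0 = f1 AS a,  -- false
       f1 = f0 AS b   -- should also be false, but was true before the fix
FROM t;
{code}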



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34910) Can not plan window join without projections

2024-03-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34910.

Resolution: Fixed

Fixed in 709bf93534fcdfd2b4452667af450f1748bf1ccc

> Can not plan window join without projections
> 
>
> Key: FLINK-34910
> URL: https://issues.apache.org/jira/browse/FLINK-34910
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When running:
> {code}
>   @Test
>   def testWindowJoinWithoutProjections(): Unit = {
> val sql =
>   """
> |SELECT *
> |FROM
> |  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
> MINUTE)) AS L
> |JOIN
> |  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' 
> MINUTE)) AS R
> |ON L.window_start = R.window_start AND L.window_end = R.window_end 
> AND L.a = R.a
>   """.stripMargin
> util.verifyRelPlan(sql)
>   }
> {code}
> It fails with:
> {code}
> FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME_MATERIALIZE(proctime) AS 
> proctime, window_start, window_end, window_time, a0, b0, c0, rowtime0, 
> PROCTIME_MATERIALIZE(proctime0) AS proctime0, window_start0, window_end0, 
> window_time0])
> +- FlinkLogicalCorrelate(correlation=[$cor0], joinType=[inner], 
> requiredColumns=[{}])
>:- FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR($3), 
> 90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
> b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
> proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
> *ROWTIME* window_time)])
>:  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
> 1000:INTERVAL SECOND)])
>: +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS 
> proctime])
>:+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, MyTable]], fields=[a, b, c, rowtime])
>+- 
> FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR(CAST($3):TIMESTAMP(3)),
>  90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
> b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
> proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
> *ROWTIME* window_time)])
>   +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
> 1000:INTERVAL SECOND)])
>  +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS 
> proctime])
> +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, MyTable2]], fields=[a, b, c, rowtime])
> Failed to get time attribute index from DESCRIPTOR(CAST($3):TIMESTAMP(3)). 
> This is a bug, please file a JIRA issue.
> Please check the documentation for the set of currently supported SQL 
> features.
> {code}
> In prior versions this query had the additional problem of an ambiguous 
> {{rowtime}} column, but that has been fixed by [FLINK-32648]. In versions < 
> 1.19, WindowTableFunctions were incorrectly scoped, because they did not 
> extend Calcite's SqlWindowTableFunction, and the scoping implemented in 
> SqlValidatorImpl#convertFrom was incorrect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34910) Can not plan window join without projections

2024-03-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34910:


 Summary: Can not plan window join without projections
 Key: FLINK-34910
 URL: https://issues.apache.org/jira/browse/FLINK-34910
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


When running:
{code}
  @Test
  def testWindowJoinWithoutProjections(): Unit = {
    val sql =
      """
        |SELECT *
        |FROM
        |  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' MINUTE)) AS L
        |JOIN
        |  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' MINUTE)) AS R
        |ON L.window_start = R.window_start AND L.window_end = R.window_end AND L.a = R.a
      """.stripMargin
    util.verifyRelPlan(sql)
  }
{code}

It fails with:
{code}
FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME_MATERIALIZE(proctime) AS 
proctime, window_start, window_end, window_time, a0, b0, c0, rowtime0, 
PROCTIME_MATERIALIZE(proctime0) AS proctime0, window_start0, window_end0, 
window_time0])
+- FlinkLogicalCorrelate(correlation=[$cor0], joinType=[inner], 
requiredColumns=[{}])
   :- FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR($3), 
90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) b, 
BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* proctime, 
TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) *ROWTIME* 
window_time)])
   :  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
1000:INTERVAL SECOND)])
   : +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS proctime])
   :+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, MyTable]], fields=[a, b, c, rowtime])
   +- 
FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR(CAST($3):TIMESTAMP(3)),
 90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
*ROWTIME* window_time)])
  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
1000:INTERVAL SECOND)])
 +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS proctime])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, MyTable2]], fields=[a, b, c, rowtime])

Failed to get time attribute index from DESCRIPTOR(CAST($3):TIMESTAMP(3)). This 
is a bug, please file a JIRA issue.
Please check the documentation for the set of currently supported SQL features.
{code}

In prior versions this had another problem of an ambiguous {{rowtime}} column, 
but that has been fixed by [FLINK-32648]. In versions < 1.19, 
WindowTableFunctions were incorrectly scoped because they did not extend 
Calcite's SqlWindowTableFunction, and the scoping implemented in 
SqlValidatorImpl#convertFrom was incorrect. 
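The issue title suggests the same join plans successfully once explicit projections are present. A hedged sketch of such a variant, reusing the column names from the plan above (whether this avoids the planner bug is an assumption, not verified against the planner):

```sql
-- Hypothetical rewrite with an explicit select list instead of SELECT *.
-- Column names (a, b, window_start, window_end) come from the plan above;
-- that this variant plans successfully is inferred from the issue title only.
SELECT L.a, L.b, R.a, R.b, L.window_start, L.window_end
FROM
  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' MINUTE)) AS L
JOIN
  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' MINUTE)) AS R
ON L.window_start = R.window_start AND L.window_end = R.window_end AND L.a = R.a
```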





[jira] [Closed] (FLINK-32701) Potential Memory Leak in Flink CEP due to Persistent Starting States in NFAState

2024-03-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32701.

Fix Version/s: 1.20.0
   Resolution: Fixed

Fixed in a9cde49118bab4b32b2d1ae1f97beb94eb967f9b

> Potential Memory Leak in Flink CEP due to Persistent Starting States in 
> NFAState
> 
>
> Key: FLINK-32701
> URL: https://issues.apache.org/jira/browse/FLINK-32701
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.17.0, 1.16.1, 1.16.2, 1.17.1
>Reporter: Puneet Duggal
>Assignee: Puneet Duggal
>Priority: Major
>  Labels: CEP, auto-deprioritized-critical, cep
> Fix For: 1.20.0
>
> Attachments: Screenshot 2023-07-26 at 11.45.06 AM.png, Screenshot 
> 2023-07-26 at 11.50.28 AM.png
>
>
> Our team has encountered a potential memory leak issue while working with the 
> Complex Event Processing (CEP) library in Flink v1.17.
> h2. Context
> The CEP Operator maintains a keyed state called NFAState, which holds two 
> queues: one for partial matches and one for completed matches. When a key is 
> first encountered, the CEP creates a starting computation state and stores it 
> in the partial matches queue. As more events occur that match the defined 
> conditions (e.g., a TAKE condition), additional computation states get added 
> to the queue, with their specific type (normal, pending, end) depending on 
> the pattern sequence.
> However, I have noticed that the starting computation state remains in the 
> partial matches queue even after the pattern sequence has been completely 
> matched. This is also the case for keys that have already timed out. As a 
> result, the state gets stored for all keys that the CEP ever encounters, 
> leading to a continual increase in the checkpoint size.
> h2. How to reproduce this
>  # Pattern Sequence - A not_followed_by B within 5 mins
>  # Time Characteristic - EventTime
>  # StateBackend - HashMapStateBackend
> On my local machine, I started this pipeline and sent events at a rate of 10 
> events per second (only A). As expected, after 5 minutes CEP started emitting 
> pattern-matched output at the same rate. The issue was that with every 
> checkpoint (2-minute interval) the checkpoint size kept increasing. The 
> expectation was that after 5 minutes (2-3 checkpoints) the checkpoint size 
> would remain constant, since any 5-minute window consists of the same number 
> of unique keys (older ones get matched or time out and are hence removed from 
> state). But as the attached images show, the checkpoint size kept increasing 
> through 40 checkpoints (around 1.5 hours).
> P.S. - After 3 checkpoints (6 minutes), the checkpoint size was around 
> 1.78 MB. Hence the assumption is that the ideal checkpoint size for a 
> 5-minute window should be less than 1.78 MB.
> After 39 checkpoints, I triggered a savepoint for this pipeline. I then used 
> a savepoint reader to investigate what is stored in the CEP state. The code 
> below inspects the NFAState of the CEPOperator for a potential memory leak.
> {code:java}
> import lombok.AllArgsConstructor;
> import lombok.Data;
> import lombok.NoArgsConstructor;
> import org.apache.flink.api.common.state.ValueState;
> import org.apache.flink.api.common.state.ValueStateDescriptor;
> import org.apache.flink.cep.nfa.NFAState;
> import org.apache.flink.cep.nfa.NFAStateSerializer;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.runtime.state.filesystem.FsStateBackend;
> import org.apache.flink.state.api.OperatorIdentifier;
> import org.apache.flink.state.api.SavepointReader;
> import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.util.Collector;
> import org.junit.jupiter.api.Test;
> import java.io.Serializable;
> import java.util.Objects;
> public class NFAStateReaderTest {
> private static final String NFA_STATE_NAME = "nfaStateName";
> @Test
> public void testNfaStateReader() throws Exception {
> StreamExecutionEnvironment environment = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> SavepointReader savepointReader =
> SavepointReader.read(environment, 
> "file:///opt/flink/savepoints/savepoint-093404-9bc0a38654df", new 
> FsStateBackend("file:///abc"));
> DataStream stream = 
> savepointReader.readKeyedState(OperatorIdentifier.forUid("select_pattern_events"),
>  new NFAStateReaderTest.NFAStateReaderFunction());
> stream.print();
> environment.execute();
> }
> static class 

[jira] [Assigned] (FLINK-32701) Potential Memory Leak in Flink CEP due to Persistent Starting States in NFAState

2024-03-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-32701:


Assignee: Puneet Duggal

> Potential Memory Leak in Flink CEP due to Persistent Starting States in 
> NFAState
> 
>
> Key: FLINK-32701
> URL: https://issues.apache.org/jira/browse/FLINK-32701
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.17.0, 1.16.1, 1.16.2, 1.17.1
>Reporter: Puneet Duggal
>Assignee: Puneet Duggal
>Priority: Major
>  Labels: CEP, auto-deprioritized-critical, cep
> Attachments: Screenshot 2023-07-26 at 11.45.06 AM.png, Screenshot 
> 2023-07-26 at 11.50.28 AM.png
>
>
> Our team has encountered a potential memory leak issue while working with the 
> Complex Event Processing (CEP) library in Flink v1.17.
> h2. Context
> The CEP Operator maintains a keyed state called NFAState, which holds two 
> queues: one for partial matches and one for completed matches. When a key is 
> first encountered, the CEP creates a starting computation state and stores it 
> in the partial matches queue. As more events occur that match the defined 
> conditions (e.g., a TAKE condition), additional computation states get added 
> to the queue, with their specific type (normal, pending, end) depending on 
> the pattern sequence.
> However, I have noticed that the starting computation state remains in the 
> partial matches queue even after the pattern sequence has been completely 
> matched. This is also the case for keys that have already timed out. As a 
> result, the state gets stored for all keys that the CEP ever encounters, 
> leading to a continual increase in the checkpoint size.
> h2. How to reproduce this
>  # Pattern Sequence - A not_followed_by B within 5 mins
>  # Time Characteristic - EventTime
>  # StateBackend - HashMapStateBackend
> On my local machine, I started this pipeline and sent events at a rate of 10 
> events per second (only A). As expected, after 5 minutes CEP started emitting 
> pattern-matched output at the same rate. The issue was that with every 
> checkpoint (2-minute interval) the checkpoint size kept increasing. The 
> expectation was that after 5 minutes (2-3 checkpoints) the checkpoint size 
> would remain constant, since any 5-minute window consists of the same number 
> of unique keys (older ones get matched or time out and are hence removed from 
> state). But as the attached images show, the checkpoint size kept increasing 
> through 40 checkpoints (around 1.5 hours).
> P.S. - After 3 checkpoints (6 minutes), the checkpoint size was around 
> 1.78 MB. Hence the assumption is that the ideal checkpoint size for a 
> 5-minute window should be less than 1.78 MB.
> After 39 checkpoints, I triggered a savepoint for this pipeline. I then used 
> a savepoint reader to investigate what is stored in the CEP state. The code 
> below inspects the NFAState of the CEPOperator for a potential memory leak.
> {code:java}
> import lombok.AllArgsConstructor;
> import lombok.Data;
> import lombok.NoArgsConstructor;
> import org.apache.flink.api.common.state.ValueState;
> import org.apache.flink.api.common.state.ValueStateDescriptor;
> import org.apache.flink.cep.nfa.NFAState;
> import org.apache.flink.cep.nfa.NFAStateSerializer;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.runtime.state.filesystem.FsStateBackend;
> import org.apache.flink.state.api.OperatorIdentifier;
> import org.apache.flink.state.api.SavepointReader;
> import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.util.Collector;
> import org.junit.jupiter.api.Test;
> import java.io.Serializable;
> import java.util.Objects;
> public class NFAStateReaderTest {
> private static final String NFA_STATE_NAME = "nfaStateName";
> @Test
> public void testNfaStateReader() throws Exception {
> StreamExecutionEnvironment environment = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> SavepointReader savepointReader =
> SavepointReader.read(environment, 
> "file:///opt/flink/savepoints/savepoint-093404-9bc0a38654df", new 
> FsStateBackend("file:///abc"));
> DataStream stream = 
> savepointReader.readKeyedState(OperatorIdentifier.forUid("select_pattern_events"),
>  new NFAStateReaderTest.NFAStateReaderFunction());
> stream.print();
> environment.execute();
> }
> static class NFAStateReaderFunction extends 
> KeyedStateReaderFunction {
> private ValueState 

[jira] [Closed] (FLINK-34745) Parsing temporal table join throws cryptic exceptions

2024-03-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34745.

Resolution: Fixed

Fixed in 4142c4386a92f1ec5016583f4832f8869782765e

> Parsing temporal table join throws cryptic exceptions
> -
>
> Key: FLINK-34745
> URL: https://issues.apache.org/jira/browse/FLINK-34745
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> 1. Wrong expression type in {{AS OF}}:
> {code}
> SELECT *
> FROM Orders AS o JOIN
>   RatesHistoryWithPK FOR SYSTEM_TIME AS OF 'o.rowtime' AS r
>   ON o.currency = r.currency
> {code}
> throws: 
> {code}
> java.lang.AssertionError: cannot convert CHAR literal to class 
> org.apache.calcite.util.TimestampString
> {code}
> 2. Not a simple table reference in {{AS OF}}
> {code}
> SELECT *
> FROM Orders AS o JOIN
>   RatesHistoryWithPK FOR SYSTEM_TIME AS OF o.rowtime + INTERVAL '1' SECOND AS r
>   ON o.currency = r.currency
> {code}
> throws:
> {code}
> java.lang.AssertionError: no unique expression found for {id: o.rowtime, 
> prefix: 1}; count is 0
> {code}





[jira] [Updated] (FLINK-34745) Parsing temporal table join throws cryptic exceptions

2024-03-19 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-34745:
-
Description: 
1. Wrong expression type in {{AS OF}}:
{code}
SELECT *
FROM Orders AS o JOIN
  RatesHistoryWithPK FOR SYSTEM_TIME AS OF 'o.rowtime' AS r
  ON o.currency = r.currency
{code}

throws: 

{code}
java.lang.AssertionError: cannot convert CHAR literal to class 
org.apache.calcite.util.TimestampString
{code}

2. Not a simple table reference in {{AS OF}}
{code}
SELECT *
FROM Orders AS o JOIN
  RatesHistoryWithPK FOR SYSTEM_TIME AS OF o.rowtime + INTERVAL '1' SECOND AS r
  ON o.currency = r.currency
{code}

throws:
{code}
java.lang.AssertionError: no unique expression found for {id: o.rowtime, 
prefix: 1}; count is 0
{code}

  was:
1. Wrong expression type in `AS OF`:
{code}
SELECT * " +
  "FROM Orders AS o JOIN " +
  "RatesHistoryWithPK FOR SYSTEM_TIME AS OF 'o.rowtime' AS r " +
  "ON o.currency = r.currency
{code}

throws: 

{code}
java.lang.AssertionError: cannot convert CHAR literal to class 
org.apache.calcite.util.TimestampString
{code}

2. Not a table simple table reference
{code}
SELECT * " +
  "FROM Orders AS o JOIN " +
  "RatesHistoryWithPK FOR SYSTEM_TIME AS OF o.rowtime + INTERVAL '1' SECOND 
AS r " +
  "ON o.currency = r.currency
{code}

throws:
{code}
java.lang.AssertionError: no unique expression found for {id: o.rowtime, 
prefix: 1}; count is 0
{code}


> Parsing temporal table join throws cryptic exceptions
> -
>
> Key: FLINK-34745
> URL: https://issues.apache.org/jira/browse/FLINK-34745
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.20.0
>
>
> 1. Wrong expression type in {{AS OF}}:
> {code}
> SELECT *
> FROM Orders AS o JOIN
>   RatesHistoryWithPK FOR SYSTEM_TIME AS OF 'o.rowtime' AS r
>   ON o.currency = r.currency
> {code}
> throws: 
> {code}
> java.lang.AssertionError: cannot convert CHAR literal to class 
> org.apache.calcite.util.TimestampString
> {code}
> 2. Not a simple table reference in {{AS OF}}
> {code}
> SELECT *
> FROM Orders AS o JOIN
>   RatesHistoryWithPK FOR SYSTEM_TIME AS OF o.rowtime + INTERVAL '1' SECOND AS r
>   ON o.currency = r.currency
> {code}
> throws:
> {code}
> java.lang.AssertionError: no unique expression found for {id: o.rowtime, 
> prefix: 1}; count is 0
> {code}





[jira] [Created] (FLINK-34745) Parsing temporal table join throws cryptic exceptions

2024-03-19 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34745:


 Summary: Parsing temporal table join throws cryptic exceptions
 Key: FLINK-34745
 URL: https://issues.apache.org/jira/browse/FLINK-34745
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


1. Wrong expression type in `AS OF`:
{code}
SELECT *
FROM Orders AS o JOIN
  RatesHistoryWithPK FOR SYSTEM_TIME AS OF 'o.rowtime' AS r
  ON o.currency = r.currency
{code}

throws: 

{code}
java.lang.AssertionError: cannot convert CHAR literal to class 
org.apache.calcite.util.TimestampString
{code}

2. Not a simple table reference in `AS OF`
{code}
SELECT *
FROM Orders AS o JOIN
  RatesHistoryWithPK FOR SYSTEM_TIME AS OF o.rowtime + INTERVAL '1' SECOND AS r
  ON o.currency = r.currency
{code}

throws:
{code}
java.lang.AssertionError: no unique expression found for {id: o.rowtime, 
prefix: 1}; count is 0
{code}





[jira] [Commented] (FLINK-31663) Add ARRAY_EXCEPT supported in SQL & Table API

2024-03-08 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824688#comment-17824688
 ] 

Dawid Wysakowicz commented on FLINK-31663:
--

I don't see how `array_union` is related to `array_except`; those are two 
separate functions. Yes, we should have investigated this further when 
implementing `ARRAY_UNION`, but in my opinion it is too late by now. Unless 
[~MartijnVisser] thinks otherwise and we should reinvestigate `array_union`.

> Add ARRAY_EXCEPT supported in SQL & Table API
> -
>
> Key: FLINK-31663
> URL: https://issues.apache.org/jira/browse/FLINK-31663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Commented] (FLINK-31663) Add ARRAY_EXCEPT supported in SQL & Table API

2024-03-08 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824656#comment-17824656
 ] 

Dawid Wysakowicz commented on FLINK-31663:
--

[~jackylau] Please see the discussion here: 
https://github.com/apache/flink/pull/23173#discussion_r1491044219

> Add ARRAY_EXCEPT supported in SQL & Table API
> -
>
> Key: FLINK-31663
> URL: https://issues.apache.org/jira/browse/FLINK-31663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-31663) Add ARRAY_EXCEPT supported in SQL & Table API

2024-03-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-31663.

Fix Version/s: 1.20.0
   Resolution: Fixed

Implemented in 2429c296a60bf0a0e8a4acebc04a059008708d1f

> Add ARRAY_EXCEPT supported in SQL & Table API
> -
>
> Key: FLINK-31663
> URL: https://issues.apache.org/jira/browse/FLINK-31663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34268) Add a test to verify if restore test exists for ExecNode

2024-03-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34268.

Fix Version/s: 1.20.0
   Resolution: Implemented

Implemented in https://issues.apache.org/jira/browse/FLINK-34268

> Add a test to verify if restore test exists for ExecNode
> 
>
> Key: FLINK-34268
> URL: https://issues.apache.org/jira/browse/FLINK-34268
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34493) Migrate ReplaceMinusWithAntiJoinRule

2024-03-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34493.

Resolution: Implemented

Implemented in 6a12668bcfe651fa938517eb2da4d537ce6ce668

> Migrate ReplaceMinusWithAntiJoinRule
> 
>
> Key: FLINK-34493
> URL: https://issues.apache.org/jira/browse/FLINK-34493
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Assigned] (FLINK-34493) Migrate ReplaceMinusWithAntiJoinRule

2024-03-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-34493:


Assignee: Jacky Lau

> Migrate ReplaceMinusWithAntiJoinRule
> 
>
> Key: FLINK-34493
> URL: https://issues.apache.org/jira/browse/FLINK-34493
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34118) Implement restore tests for Sort node

2024-02-23 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34118.

Fix Version/s: 1.20.0
   Resolution: Implemented

Implemented in 
fe3d9a42995cfee0dfd90e8031768cb130543189..faacf7e28bd9a43723303d0bd4a6ee9adebcb5bb

> Implement restore tests for Sort node
> -
>
> Key: FLINK-34118
> URL: https://issues.apache.org/jira/browse/FLINK-34118
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Created] (FLINK-34507) JSON functions have wrong operand checker

2024-02-23 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34507:


 Summary: JSON functions have wrong operand checker
 Key: FLINK-34507
 URL: https://issues.apache.org/jira/browse/FLINK-34507
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.1
Reporter: Dawid Wysakowicz


I believe that all JSON functions (`JSON_VALUE`, `JSON_QUERY`, ...) have a 
wrong operand checker.

As far as I can tell, the first argument (the JSON) should be a `STRING` 
argument. That is what all other systems do (some additionally accept 
CLOB/BLOB, e.g. Oracle).

Via Calcite we accept the `ANY` type there, which I believe is wrong: 
https://github.com/apache/calcite/blob/c49792f9c72159571f898c5fca1e26cba9870b07/core/src/main/java/org/apache/calcite/sql/fun/SqlJsonValueFunction.java#L61
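As an illustration of the described checker gap (the literals below are invented for demonstration and not taken from an actual Flink test):

```sql
-- Accepted today because Calcite's checker admits ANY for the first operand.
SELECT JSON_VALUE('{"a": 1}', '$.a');

-- Also accepted today; under a STRING-only operand checker this call
-- would instead be rejected at validation time, matching other systems.
SELECT JSON_VALUE(42, '$.a');
```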





[jira] [Closed] (FLINK-33517) Implement restore tests for Value node

2024-02-22 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33517.

Fix Version/s: 1.20.0
   (was: 1.19.0)
   Resolution: Implemented

Implemented in 
263e7bf690ec1a7e371447460917c02766526151..aaaea64d60d90448b07ca525f757172da1222983

> Implement restore tests for Value node
> --
>
> Key: FLINK-33517
> URL: https://issues.apache.org/jira/browse/FLINK-33517
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Assigned] (FLINK-31663) Add ARRAY_EXCEPT supported in SQL & Table API

2024-02-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31663:


Assignee: Hanyu Zheng  (was: Bonnie Varghese)

> Add ARRAY_EXCEPT supported in SQL & Table API
> -
>
> Key: FLINK-31663
> URL: https://issues.apache.org/jira/browse/FLINK-31663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Assigned] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2024-02-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31664:


Assignee: Jacky Lau

> Add ARRAY_INTERSECT supported in SQL & Table API
> 
>
> Key: FLINK-31664
> URL: https://issues.apache.org/jira/browse/FLINK-31664
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-26948) Add SORT_ARRAY supported in SQL & Table API

2024-02-16 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-26948.

Fix Version/s: 1.20.0
 Assignee: Hanyu Zheng  (was: Jörn Kottmann)
   Resolution: Implemented

Implemented in 620e5975985944a02886b82362a2bc1774c733e3

> Add SORT_ARRAY supported in SQL & Table API
> ---
>
> Key: FLINK-26948
> URL: https://issues.apache.org/jira/browse/FLINK-26948
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>
> Returns the array in {{expr}} in sorted order.
> Syntax:
> {code:java}
> sort_array(expr [, ascendingOrder] ) {code}
> Arguments:
>  * {{{}expr{}}}: An ARRAY expression of sortable elements.
>  * {{{}ascendingOrder{}}}: An optional BOOLEAN expression defaulting to 
> {{{}true{}}}.
> Returns:
> The result type matches {{{}expr{}}}.
> Sorts the input array in ascending or descending order according to the 
> natural ordering of the array elements. {{NULL}} elements are placed at the 
> beginning of the returned array in ascending order or at the end of the 
> returned array in descending order.
> Examples:
> {code:java}
> > SELECT sort_array(array('b', 'd', NULL, 'c', 'a'), true);
>  [NULL,a,b,c,d] {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]





[jira] [Resolved] (FLINK-34038) IncrementalGroupAggregateRestoreTest.testRestore fails

2024-02-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz resolved FLINK-34038.
--
Fix Version/s: 1.20.0
   Resolution: Fixed

> IncrementalGroupAggregateRestoreTest.testRestore fails
> --
>
> Key: FLINK-34038
> URL: https://issues.apache.org/jira/browse/FLINK-34038
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: test-stability
> Fix For: 1.20.0
>
>
> {{IncrementalGroupAggregateRestoreTest.testRestore}} fails on {{master}}:
> {code}
> Jan 08 18:53:18 18:53:18.406 [ERROR] Tests run: 3, Failures: 1, Errors: 0, 
> Skipped: 1, Time elapsed: 8.706 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.plan.nodes.exec.stream.IncrementalGroupAggregateRestoreTest
> Jan 08 18:53:18 18:53:18.406 [ERROR] 
> org.apache.flink.table.planner.plan.nodes.exec.stream.IncrementalGroupAggregateRestoreTest.testRestore(TableTestProgram,
>  ExecNodeMetadata)[2] -- Time elapsed: 1.368 s <<< FAILURE!
> Jan 08 18:53:18 java.lang.AssertionError: 
> Jan 08 18:53:18 
> Jan 08 18:53:18 Expecting actual:
> Jan 08 18:53:18   ["+I[1, 5, 2, 3]",
> Jan 08 18:53:18 "+I[2, 2, 1, 1]",
> Jan 08 18:53:18 "-U[1, 5, 2, 3]",
> Jan 08 18:53:18 "+U[1, 3, 2, 2]",
> Jan 08 18:53:18 "-U[1, 3, 2, 2]",
> Jan 08 18:53:18 "+U[1, 9, 3, 4]"]
> Jan 08 18:53:18 to contain exactly in any order:
> Jan 08 18:53:18   ["+I[1, 5, 2, 3]", "+I[2, 2, 1, 1]", "-U[1, 5, 2, 3]", 
> "+U[1, 9, 3, 4]"]
> Jan 08 18:53:18 but the following elements were unexpected:
> Jan 08 18:53:18   ["+U[1, 3, 2, 2]", "-U[1, 3, 2, 2]"]
> Jan 08 18:53:18 
> Jan 08 18:53:18   at 
> org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:292)
> Jan 08 18:53:18   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56110=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=10822





[jira] [Commented] (FLINK-34248) Implement restore tests for ChangelogNormalize node

2024-02-15 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817640#comment-17817640
 ] 

Dawid Wysakowicz commented on FLINK-34248:
--

Implemented in 
e76ccdc1fd8b15a5aac4968fd89643b0b17e1a48..6e93394b4f2c22e5c50858242c17bcbd8fcf45c3

> Implement restore tests for ChangelogNormalize node
> ---
>
> Key: FLINK-34248
> URL: https://issues.apache.org/jira/browse/FLINK-34248
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Closed] (FLINK-34248) Implement restore tests for ChangelogNormalize node

2024-02-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34248.

Fix Version/s: 1.20.0
   Resolution: Implemented

> Implement restore tests for ChangelogNormalize node
> ---
>
> Key: FLINK-34248
> URL: https://issues.apache.org/jira/browse/FLINK-34248
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Updated] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-02-13 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-34000:
-
Fix Version/s: 1.20.0
   (was: 1.19.0)

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-02-13 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34000.

Resolution: Implemented

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Commented] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-02-13 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816966#comment-17816966
 ] 

Dawid Wysakowicz commented on FLINK-34000:
--

Merged an improved version in 
14d5dbc4c53b2e200dc57e3f4c053583f2419b14..5844092408d21023a738077d0922cc75f1e634d7

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-33958) Implement restore tests for IntervalJoin node

2024-02-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33958.

Resolution: Fixed

We merged a fix for the test:
* master: 1fbf92dfc9ee0e111d6ec740fe87fae27ef87d8b
* 1.19: 04d3b1b1423676dc87c366841b1e521beb9953dc

> Implement restore tests for IntervalJoin node
> -
>
> Key: FLINK-33958
> URL: https://issues.apache.org/jira/browse/FLINK-33958
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-24239) Event time temporal join should support values from array, map, row, etc. as join key

2024-02-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-24239.

Resolution: Implemented

Implemented in 01cdc703ee6fa56bdfdf799d016c0e882e9e5d99

> Event time temporal join should support values from array, map, row, etc. as 
> join key
> -
>
> Key: FLINK-24239
> URL: https://issues.apache.org/jira/browse/FLINK-24239
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.12.6, 1.13.3, 1.14.1, 1.15.0
>Reporter: Caizhi Weng
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> This ticket is from the [mailing 
> list|https://lists.apache.org/thread.html/r90cab9c5026e527357d58db70d7e9b5875e57b942738f032bd54bfd3%40%3Cuser-zh.flink.apache.org%3E].
> Currently in event time temporal join when join keys are from an array, map 
> or row, an exception will be thrown saying "Currently the join key in 
> Temporal Table Join can not be empty". This is quite confusing for users as 
> they've already set the join keys.
> Add the following test case to {{TableEnvironmentITCase}} to reproduce this 
> issue.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   tEnv.executeSql(
> """
>   |CREATE TABLE A (
>   |  a MAP,
>   |  ts TIMESTAMP(3),
>   |  WATERMARK FOR ts AS ts
>   |) WITH (
>   |  'connector' = 'values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |CREATE TABLE B (
>   |  id INT,
>   |  ts TIMESTAMP(3),
>   |  WATERMARK FOR ts AS ts
>   |) WITH (
>   |  'connector' = 'values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql("SELECT * FROM A LEFT JOIN B FOR SYSTEM_TIME AS OF A.ts AS 
> b ON A.a['ID'] = id").print()
> }
> {code}
> The exception stack is
> {code:java}
> org.apache.flink.table.api.ValidationException: Currently the join key in 
> Temporal Table Join can not be empty.
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromGeneralTemporalTableRule.onMatch(LogicalCorrelateToJoinFromTemporalTableRule.scala:272)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.immutable.Range.foreach(Range.scala:160)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> 

[jira] [Closed] (FLINK-21949) Support ARRAY_AGG aggregate function

2024-02-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-21949.

Fix Version/s: 1.20.0
   (was: 1.19.0)
   Resolution: Implemented

Implemented in 042a4d2d8a8cec10ea9c287c1ebf7769bd469b22

> Support ARRAY_AGG aggregate function
> 
>
> Key: FLINK-21949
> URL: https://issues.apache.org/jira/browse/FLINK-21949
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Some NoSQL databases, such as MongoDB and Elasticsearch, support nested data 
> types. Aggregating multiple rows into an ARRAY is a common requirement.
> The CollectToArray function is similar to Collect, except that it returns 
> ARRAY instead of MULTISET.
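For illustration, the intended ARRAY_AGG semantics (collect each group's values into an array, preserving duplicates and encounter order, unlike a MULTISET) can be sketched in plain Java. The class and method names below are made up for the example and are not part of the Flink implementation:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ArrayAggSketch {
    // Collects the Integer values of each key into a List, preserving
    // encounter order and duplicates -- the ARRAY counterpart of COLLECT/MULTISET.
    static Map<String, List<Integer>> arrayAgg(List<Map.Entry<String, Integer>> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                LinkedHashMap::new,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> rows = List.of(
                Map.entry("a", 1), Map.entry("b", 2), Map.entry("a", 1), Map.entry("a", 3));
        // Duplicates are kept: a -> [1, 1, 3], b -> [2]
        System.out.println(arrayAgg(rows));
    }
}
```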





[jira] [Created] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34399:


 Summary: Release Testing: Verify FLINK-33644 Make QueryOperations 
SQL serializable
 Key: FLINK-34399
 URL: https://issues.apache.org/jira/browse/FLINK-34399
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz


Test suggestions:
1. Write a few Table API programs.
2. Call Table.getQueryOperation#asSerializableString, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:


Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect()





[jira] [Updated] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-34399:
-
Description: 
Test suggestions:
1. Write a few Table API programs.
2. Call Table.getQueryOperation#asSerializableString, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:

{code}
Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect()
{code}

  was:
Test suggestions:
1. Write a few Table API programs.
2. Call Table.getQueryOperation#asSerializableString, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:


Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect()


> Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable
> -
>
> Key: FLINK-34399
> URL: https://issues.apache.org/jira/browse/FLINK-34399
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> Test suggestions:
> 1. Write a few Table API programs.
> 2. Call Table.getQueryOperation#asSerializableString, manually verify the 
> produced SQL query
> 3. Check the produced SQL query is runnable and produces the same results as 
> the Table API program:
> {code}
> Table table = tEnv.from("a") ...
> String sqlQuery = table.getQueryOperation().asSerializableString();
> //verify the sqlQuery is runnable
> tEnv.sqlQuery(sqlQuery).execute().collect()
> {code}





[jira] [Commented] (FLINK-34302) Release Testing Instructions: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814892#comment-17814892
 ] 

Dawid Wysakowicz commented on FLINK-34302:
--

Test suggestions:
1. Write a few Table API programs.
2. Call {{Table.getQueryOperation#asSerializableString}}, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:

{code}

Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect()
{code}

> Release Testing Instructions: Verify FLINK-33644 Make QueryOperations SQL 
> serializable
> --
>
> Key: FLINK-34302
> URL: https://issues.apache.org/jira/browse/FLINK-34302
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Dawid Wysakowicz
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>






[jira] [Commented] (FLINK-33958) Implement restore tests for IntervalJoin node

2024-02-05 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814300#comment-17814300
 ] 

Dawid Wysakowicz commented on FLINK-33958:
--

[~bvarghese] Did you have time to look into the issue?

> Implement restore tests for IntervalJoin node
> -
>
> Key: FLINK-33958
> URL: https://issues.apache.org/jira/browse/FLINK-33958
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Assigned] (FLINK-24239) Event time temporal join should support values from array, map, row, etc. as join key

2024-02-02 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-24239:


Assignee: Dawid Wysakowicz

> Event time temporal join should support values from array, map, row, etc. as 
> join key
> -
>
> Key: FLINK-24239
> URL: https://issues.apache.org/jira/browse/FLINK-24239
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.12.6, 1.13.3, 1.14.1, 1.15.0
>Reporter: Caizhi Weng
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> This ticket is from the [mailing 
> list|https://lists.apache.org/thread.html/r90cab9c5026e527357d58db70d7e9b5875e57b942738f032bd54bfd3%40%3Cuser-zh.flink.apache.org%3E].
> Currently in event time temporal join when join keys are from an array, map 
> or row, an exception will be thrown saying "Currently the join key in 
> Temporal Table Join can not be empty". This is quite confusing for users as 
> they've already set the join keys.
> Add the following test case to {{TableEnvironmentITCase}} to reproduce this 
> issue.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   tEnv.executeSql(
> """
>   |CREATE TABLE A (
>   |  a MAP,
>   |  ts TIMESTAMP(3),
>   |  WATERMARK FOR ts AS ts
>   |) WITH (
>   |  'connector' = 'values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |CREATE TABLE B (
>   |  id INT,
>   |  ts TIMESTAMP(3),
>   |  WATERMARK FOR ts AS ts
>   |) WITH (
>   |  'connector' = 'values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql("SELECT * FROM A LEFT JOIN B FOR SYSTEM_TIME AS OF A.ts AS 
> b ON A.a['ID'] = id").print()
> }
> {code}
> The exception stack is
> {code:java}
> org.apache.flink.table.api.ValidationException: Currently the join key in 
> Temporal Table Join can not be empty.
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromGeneralTemporalTableRule.onMatch(LogicalCorrelateToJoinFromTemporalTableRule.scala:272)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.immutable.Range.foreach(Range.scala:160)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram.optimize(FlinkGroupProgram.scala:55)
>  

[jira] [Commented] (FLINK-34271) Fix the potential failure test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT

2024-01-31 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812608#comment-17812608
 ] 

Dawid Wysakowicz commented on FLINK-34271:
--

> But we don’t have tests to cover potential changes to the JSON plan, right?

I'll need to double-check whether we need that check. After all, we don't 
necessarily need to maintain exactly the same plan, but we do want to make sure 
the job can be restored.

> I think the current tests in RestoreTestBase could be modified to regenerate 
> a new savepoint metadata as needed every time

That's not an option. We must test with a savepoint from an older version. 
That's the entire idea of backwards compatibility.

> Fix the potential failure test about 
> GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT
> --
>
> Key: FLINK-34271
> URL: https://issues.apache.org/jira/browse/FLINK-34271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> The underlying reason is that a previous PR introduced a test with state TTL 
> as follows in the SQL: 
> {code:java}
> .runSql(
> "INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
> + "b, "
> + "COUNT(*) AS cnt, "
> + "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
> + "MIN(c) AS min_c "
> + "FROM source_t GROUP BY b"){code}
> When the savepoint metadata was generated for the first time, the metadata 
> recorded the time when a certain key was accessed. If the test is rerun after 
> the TTL has expired, the state of this key in the metadata will be cleared, 
> resulting in an incorrect test outcome.
> To rectify this issue, I think the current tests in RestoreTestBase could be 
> modified to regenerate a new savepoint metadata as needed every time. 
> However, this seems to deviate from the original design purpose of 
> RestoreTestBase.
> For my test, I will work around this by removing the data 
> "consumedBeforeRestore", as I am only interested in testing the generation of 
> an expected JSON plan.
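The time dependency described above can be sketched outside Flink. The class name and timestamps below are hypothetical; the point is only that state-TTL visibility depends on wall-clock time elapsed since the access time recorded in the savepoint:

```java
import java.time.Duration;
import java.time.Instant;

public class TtlSketch {
    // State with TTL is visible only while (now - lastAccess) <= ttl, which is
    // why re-running a restore test against an old savepoint flips the result.
    static boolean isExpired(Instant lastAccess, Duration ttl, Instant now) {
        return now.isAfter(lastAccess.plus(ttl));
    }

    public static void main(String[] args) {
        Instant recordedAccess = Instant.parse("2024-01-01T00:00:00Z"); // hypothetical time in the savepoint
        Duration ttl = Duration.ofDays(4); // matches the STATE_TTL('source_t' = '4d') hint
        // Right after the savepoint is taken, the key is still live...
        System.out.println(isExpired(recordedAccess, ttl, Instant.parse("2024-01-02T00:00:00Z")));
        // ...but a re-run weeks later sees it as expired, so the state is dropped.
        System.out.println(isExpired(recordedAccess, ttl, Instant.parse("2024-03-01T00:00:00Z")));
    }
}
```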





[jira] [Closed] (FLINK-34153) Set ALWAYS ChainingStrategy in TemporalSort

2024-01-19 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34153.

Resolution: Implemented

Implemented in 3d088f6e154d1c380b8c2273ede69f76e12df598

> Set ALWAYS ChainingStrategy in TemporalSort
> ---
>
> Key: FLINK-34153
> URL: https://issues.apache.org/jira/browse/FLINK-34153
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Similarly to FLINK-27992, we should use the ALWAYS chaining strategy in the 
> TemporalSort operator.





[jira] [Created] (FLINK-34153) Set ALWAYS ChainingStrategy in TemporalSort

2024-01-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34153:


 Summary: Set ALWAYS ChainingStrategy in TemporalSort
 Key: FLINK-34153
 URL: https://issues.apache.org/jira/browse/FLINK-34153
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Similarly to FLINK-27992, we should use the ALWAYS chaining strategy in the 
TemporalSort operator.





[jira] [Closed] (FLINK-27992) cep StreamExecMatch need check the parallelism and maxParallelism of the two transformation in it

2024-01-18 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-27992.

Resolution: Fixed

> cep StreamExecMatch need check the parallelism and maxParallelism of the two 
> transformation in it
> -
>
> Key: FLINK-27992
> URL: https://issues.apache.org/jira/browse/FLINK-27992
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.16.0
>Reporter: Jacky Lau
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available
>
> The StreamExecMatch node contains two transformations 
> (StreamRecordTimestampInserter -> Match). The upstream edge into 
> StreamExecMatch is a hash edge, so setting different parallelism and 
> maxParallelism values causes problems: the window operator computes key 
> groups using the downstream node's max parallelism, while the CEP operator 
> uses its own max parallelism, and the two may differ.
> For example:
> window --(hash edge)--> StreamRecordTimestampInserter --(forward edge)--> Cep 
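The key-group mismatch can be sketched with Flink's assignment formula. This is a simplification (the real KeyGroupRangeAssignment first applies murmur hashing to the key's hashCode); the class and values below are illustrative only:

```java
public class KeyGroupSketch {
    // Simplified form of Flink's key-group assignment:
    // keyGroup = hash(key) % maxParallelism;
    // operatorIndex = keyGroup * parallelism / maxParallelism.
    static int operatorIndex(int keyHash, int maxParallelism, int parallelism) {
        int keyGroup = Math.floorMod(keyHash, maxParallelism);
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        // If the sender computes key groups with maxParallelism 128 but the CEP
        // operator assumes 256, the same key can land on different subtasks:
        int keyHash = 42;
        System.out.println(operatorIndex(keyHash, 128, 4)); // subtask 1
        System.out.println(operatorIndex(keyHash, 256, 4)); // subtask 0
    }
}
```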





[jira] [Commented] (FLINK-27992) cep StreamExecMatch need check the parallelism and maxParallelism of the two transformation in it

2024-01-18 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808231#comment-17808231
 ] 

Dawid Wysakowicz commented on FLINK-27992:
--

Fixed in 201571b486f405358a31e077247241892d537198

> cep StreamExecMatch need check the parallelism and maxParallelism of the two 
> transformation in it
> -
>
> Key: FLINK-27992
> URL: https://issues.apache.org/jira/browse/FLINK-27992
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.16.0
>Reporter: Jacky Lau
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available
>
> The StreamExecMatch node contains two transformations 
> (StreamRecordTimestampInserter -> Match). The upstream edge into 
> StreamExecMatch is a hash edge, so setting different parallelism and 
> maxParallelism values causes problems: the window operator computes key 
> groups using the downstream node's max parallelism, while the CEP operator 
> uses its own max parallelism, and the two may differ.
> For example:
> window --(hash edge)--> StreamRecordTimestampInserter --(forward edge)--> Cep 





[jira] [Commented] (FLINK-27992) cep StreamExecMatch need check the parallelism and maxParallelism of the two transformation in it

2024-01-18 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808230#comment-17808230
 ] 

Dawid Wysakowicz commented on FLINK-27992:
--

Both operators use the same parallelism; Match uses the parallelism of its 
input. There is a different issue, though: Match uses 
`ChainingStrategy.HEAD`, which puts `StreamRecordInserter` and `Match` into 
separate chains, adding an unwanted `FORWARD` exchange.

> cep StreamExecMatch need check the parallelism and maxParallelism of the two 
> transformation in it
> -
>
> Key: FLINK-27992
> URL: https://issues.apache.org/jira/browse/FLINK-27992
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.16.0
>Reporter: Jacky Lau
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available
>
> The StreamExecMatch node contains two transformations 
> (StreamRecordTimestampInserter -> Match). The upstream edge into 
> StreamExecMatch is a hash edge, so setting different parallelism and 
> maxParallelism values causes problems: the window operator computes key 
> groups using the downstream node's max parallelism, while the CEP operator 
> uses its own max parallelism, and the two may differ.
> For example:
> window --(hash edge)--> StreamRecordTimestampInserter --(forward edge)--> Cep 





[jira] [Assigned] (FLINK-27992) cep StreamExecMatch need check the parallelism and maxParallelism of the two transformation in it

2024-01-17 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-27992:


Assignee: Dawid Wysakowicz

> cep StreamExecMatch need check the parallelism and maxParallelism of the two 
> transformation in it
> -
>
> Key: FLINK-27992
> URL: https://issues.apache.org/jira/browse/FLINK-27992
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP
>Affects Versions: 1.16.0
>Reporter: Jacky Lau
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> The StreamExecMatch node contains two transformations 
> (StreamRecordTimestampInserter -> Match). The upstream edge into 
> StreamExecMatch is a hash edge, so setting different parallelism and 
> maxParallelism values causes problems: the window operator computes key 
> groups using the downstream node's max parallelism, while the CEP operator 
> uses its own max parallelism, and the two may differ.
> For example:
> window --(hash edge)--> StreamRecordTimestampInserter --(forward edge)--> Cep 





[jira] [Closed] (FLINK-33958) Implement restore tests for IntervalJoin node

2024-01-16 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33958.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
3688654152282bbd2ae79546b94a3429fef1bd3f..534df6490e0fc179173efabd882ff10a749d508f

> Implement restore tests for IntervalJoin node
> -
>
> Key: FLINK-33958
> URL: https://issues.apache.org/jira/browse/FLINK-33958
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Assigned] (FLINK-31481) Support enhanced show databases syntax

2024-01-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31481:


Assignee: Jeyhun Karimov  (was: Jeyhun Karimov)

> Support enhanced show databases syntax
> --
>
> Key: FLINK-31481
> URL: https://issues.apache.org/jira/browse/FLINK-31481
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> As discussed in the FLIP, this ticket adds only ShowDatabases support, to avoid bloat.





[jira] [Assigned] (FLINK-31481) Support enhanced show databases syntax

2024-01-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31481:


Assignee: Jeyhun Karimov

> Support enhanced show databases syntax
> --
>
> Key: FLINK-31481
> URL: https://issues.apache.org/jira/browse/FLINK-31481
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in the FLIP, this ticket adds only ShowDatabases support, to avoid bloat.





[jira] [Closed] (FLINK-31481) Support enhanced show databases syntax

2024-01-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-31481.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 6c050e92040802f6866eb54120f5070c34af7a4a

> Support enhanced show databases syntax
> --
>
> Key: FLINK-31481
> URL: https://issues.apache.org/jira/browse/FLINK-31481
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Ran Tao
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> As discussed in the FLIP, this ticket adds only ShowDatabases support, to avoid bloat.





[jira] [Assigned] (FLINK-32815) Add HASHCODE support in Table API

2024-01-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-32815:


Assignee: Hanyu Zheng

> Add HASHCODE support in Table API
> -
>
> Key: FLINK-32815
> URL: https://issues.apache.org/jira/browse/FLINK-32815
> Project: Flink
>  Issue Type: Improvement
>Reporter: Hanyu Zheng
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-major
>
> *This is an implementation of HASHCODE internal function*
> The {{hashcode}} function generates a hash code for a given input value, 
> including support for computing hash values of binary data types. It produces 
> an integer that represents the value passed to the function.
> *Brief change log*
>  * {{HASHCODE}} for Table API 
> *Syntax:*
> {code:java}
> HASHCODE(value){code}
> *Arguments:*
>  * value: the value to be hashed.
> *Returns:* The function returns an integer representing the hash code 
> of the value. If the input argument is NULL, the function returns NULL.
> Because it is an internal function, it is not exposed through SQL.
> *see also:*
> Java: 
> [https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#hashCode--]
> Python: [https://docs.python.org/3/library/functions.html#hash]
> C#: [https://docs.microsoft.com/en-us/dotnet/api/system.object.gethashcode]
> SQL Server: 
> [https://docs.microsoft.com/en-us/sql/t-sql/functions/checksum-transact-sql]
> MySQL: 
> [https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_md5]
> PostgreSQL: [https://www.postgresql.org/docs/current/pgcrypto-hash.html]
> Oracle: 
> [https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ORA-HASH.html]
> Google Cloud BigQuery: 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#farm_fingerprint]
> AWS Redshift: [https://docs.aws.amazon.com/redshift/latest/dg/MD5.html]



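The semantics described in the ticket (NULL in yields NULL out; equal values hash to the same integer) can be sketched in plain Java, independent of Flink's internal implementation. The class and method names below are hypothetical, chosen only for illustration:

```java
// Sketch of the HASHCODE semantics described above, assuming
// Object.hashCode-style hashing; not Flink's actual internal function.
import java.util.Objects;

public class HashCodeSketch {
    // Mirrors HASHCODE(value): NULL input returns NULL,
    // otherwise the value's hash code boxed as an Integer.
    static Integer hashCode(Object value) {
        return value == null ? null : value.hashCode();
    }

    public static void main(String[] args) {
        // NULL input yields NULL, as stated in the description
        System.out.println(hashCode(null) == null);
        // The same value always hashes to the same integer (deterministic,
        // but not unique across different values)
        System.out.println(Objects.equals(hashCode("flink"), hashCode("flink")));
    }
}
```

Note that hash codes are deterministic per value but not unique: distinct values may collide, which is why the function is suited to hashing rather than identity checks.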


[jira] [Closed] (FLINK-32815) Add HASHCODE support in Table API

2024-01-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32815.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
36542c134b1d4482b37850033dc66a209fd42331..4499553ce7d35b9782c54b66245a4ccb0627cbfc

> Add HASHCODE support in Table API
> -
>
> Key: FLINK-32815
> URL: https://issues.apache.org/jira/browse/FLINK-32815
> Project: Flink
>  Issue Type: Improvement
>Reporter: Hanyu Zheng
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-major
> Fix For: 1.19.0
>
>
> *This is an implementation of the HASHCODE internal function*
> The {{hashcode}} function generates a hash code for a given input value, 
> including support for computing hash values of binary data types. It produces 
> a deterministic integer that represents the value passed to the function.
> *Brief change log*
>  * {{HASHCODE}} for Table API 
> *Syntax:*
> {code:java}
> HASHCODE(value){code}
> *Arguments:*
>  * value: the value to be hashed.
> *Returns:* The function returns an integer representing the hash code 
> of the value. If the input argument is NULL, the function returns NULL.
> Because it is an internal function, it is not exposed through SQL.
> *see also:*
> Java: 
> [https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#hashCode--]
> Python: [https://docs.python.org/3/library/functions.html#hash]
> C#: [https://docs.microsoft.com/en-us/dotnet/api/system.object.gethashcode]
> SQL Server: 
> [https://docs.microsoft.com/en-us/sql/t-sql/functions/checksum-transact-sql]
> MySQL: 
> [https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_md5]
> PostgreSQL: [https://www.postgresql.org/docs/current/pgcrypto-hash.html]
> Oracle: 
> [https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ORA-HASH.html]
> Google Cloud BigQuery: 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#farm_fingerprint]
> AWS Redshift: [https://docs.aws.amazon.com/redshift/latest/dg/MD5.html]





[jira] [Closed] (FLINK-33979) Implement restore tests for TableSink node

2024-01-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33979.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
9a9b9ce81ca05398f8891c918c74294402462f5c..413aed084974efe708833b3e1cfeb7a3f5ce544c

> Implement restore tests for TableSink node
> --
>
> Key: FLINK-33979
> URL: https://issues.apache.org/jira/browse/FLINK-33979
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-32256) Add ARRAY_MIN support in SQL & Table API

2024-01-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32256.

Resolution: Implemented

Implemented in 2c70bd346d37f96f01007c89e3eb66e919c0c0a8

> Add ARRAY_MIN support in SQL & Table API
> 
>
> Key: FLINK-32256
> URL: https://issues.apache.org/jira/browse/FLINK-32256
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Find the minimum among all elements in the array for which ordering is 
> supported.
> Syntax:
> array_min(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> The result matches the type of the elements. NULL elements are skipped. If 
> the array is empty or contains only NULL elements, NULL is returned.
> Examples:
> {code:sql}
> SELECT array_min(array(1, 20, NULL, 3));
> -- 1
> {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



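The ARRAY_MIN semantics above (skip NULL elements; an empty or all-NULL array yields NULL) can be sketched in plain Java. This is an illustrative sketch, not Flink's implementation; the class and method names are hypothetical:

```java
// Sketch of array_min semantics: NULL elements are skipped,
// and an empty or all-NULL array returns null.
import java.util.Arrays;
import java.util.Objects;

public class ArrayMinSketch {
    static Integer arrayMin(Integer[] array) {
        return Arrays.stream(array)
                .filter(Objects::nonNull)   // skip NULL elements
                .min(Integer::compareTo)    // minimum of the remaining values
                .orElse(null);              // empty / all-NULL -> NULL
    }

    public static void main(String[] args) {
        // Matches the SQL example: array_min(array(1, 20, NULL, 3)) -- 1
        System.out.println(arrayMin(new Integer[]{1, 20, null, 3})); // prints 1
        System.out.println(arrayMin(new Integer[]{null, null}));     // prints null
    }
}
```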


[jira] [Closed] (FLINK-34005) Implement restore tests for MiniBatchAssigner node

2024-01-11 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34005.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 881062f352f8bf8c21ab7cbea95e111fd82fdf20

> Implement restore tests for MiniBatchAssigner node
> --
>
> Key: FLINK-34005
> URL: https://issues.apache.org/jira/browse/FLINK-34005
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33518) Implement restore tests for WatermarkAssigner node

2024-01-11 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33518.

Resolution: Implemented

Implemented in 
23629a80c574a9f998b41e258b8e656274714c9d..c233ed2599188ba63e361b1b4525d9f322965f65

> Implement restore tests for WatermarkAssigner node
> --
>
> Key: FLINK-33518
> URL: https://issues.apache.org/jira/browse/FLINK-33518
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33969) Implement restore tests for TableSourceScan node

2024-01-11 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33969.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
d6209a18bbcfa7d2a4027a55757c0426997651cb..e98ded88876e4b95922123481c38b215ab15b3e3

> Implement restore tests for TableSourceScan node
> 
>
> Key: FLINK-33969
> URL: https://issues.apache.org/jira/browse/FLINK-33969
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33896) Implement restore tests for Correlate node

2024-01-10 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33896.

Resolution: Fixed

Implemented in 
c89933e99d5087f81389560663984012733d3bf8..263f3283724a5081e41f679659fa6a5819350739

> Implement restore tests for Correlate node
> --
>
> Key: FLINK-33896
> URL: https://issues.apache.org/jira/browse/FLINK-33896
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Comment Edited] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-01-09 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804639#comment-17804639
 ] 

Dawid Wysakowicz edited comment on FLINK-34000 at 1/9/24 8:53 AM:
--

Reverted in 
ba49e50e14c6d78c11ee87afbb851da471d3db68..e27a4cbc74beba7dff8a408dcff38d816ff70457
 because of FLINK-34038


was (Author: dawidwys):
Reverted in 
ba49e50e14c6d78c11ee87afbb851da471d3db68..e27a4cbc74beba7dff8a408dcff38d816ff70457

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Commented] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-01-09 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804639#comment-17804639
 ] 

Dawid Wysakowicz commented on FLINK-34000:
--

Reverted in 
ba49e50e14c6d78c11ee87afbb851da471d3db68..e27a4cbc74beba7dff8a408dcff38d816ff70457

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Reopened] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-01-09 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reopened FLINK-34000:
--

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Commented] (FLINK-34038) IncrementalGroupAggregateRestoreTest.testRestore fails

2024-01-09 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804637#comment-17804637
 ] 

Dawid Wysakowicz commented on FLINK-34038:
--

I'll revert the offending commits. [~bvarghese] Could you take a look at the 
failure? 

> IncrementalGroupAggregateRestoreTest.testRestore fails
> --
>
> Key: FLINK-34038
> URL: https://issues.apache.org/jira/browse/FLINK-34038
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> {{IncrementalGroupAggregateRestoreTest.testRestore}} fails on {{master}}:
> {code}
> Jan 08 18:53:18 18:53:18.406 [ERROR] Tests run: 3, Failures: 1, Errors: 0, 
> Skipped: 1, Time elapsed: 8.706 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.plan.nodes.exec.stream.IncrementalGroupAggregateRestoreTest
> Jan 08 18:53:18 18:53:18.406 [ERROR] 
> org.apache.flink.table.planner.plan.nodes.exec.stream.IncrementalGroupAggregateRestoreTest.testRestore(TableTestProgram,
>  ExecNodeMetadata)[2] -- Time elapsed: 1.368 s <<< FAILURE!
> Jan 08 18:53:18 java.lang.AssertionError: 
> Jan 08 18:53:18 
> Jan 08 18:53:18 Expecting actual:
> Jan 08 18:53:18   ["+I[1, 5, 2, 3]",
> Jan 08 18:53:18 "+I[2, 2, 1, 1]",
> Jan 08 18:53:18 "-U[1, 5, 2, 3]",
> Jan 08 18:53:18 "+U[1, 3, 2, 2]",
> Jan 08 18:53:18 "-U[1, 3, 2, 2]",
> Jan 08 18:53:18 "+U[1, 9, 3, 4]"]
> Jan 08 18:53:18 to contain exactly in any order:
> Jan 08 18:53:18   ["+I[1, 5, 2, 3]", "+I[2, 2, 1, 1]", "-U[1, 5, 2, 3]", 
> "+U[1, 9, 3, 4]"]
> Jan 08 18:53:18 but the following elements were unexpected:
> Jan 08 18:53:18   ["+U[1, 3, 2, 2]", "-U[1, 3, 2, 2]"]
> Jan 08 18:53:18 
> Jan 08 18:53:18   at 
> org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:292)
> Jan 08 18:53:18   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56110=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=10822





[jira] [Closed] (FLINK-34000) Implement restore tests for IncrementalGroupAggregate node

2024-01-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34000.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
df71d07188e745553b8174297ec7989f05cebf7a..0df5ab5a3318d21e8be3ab9237900664e3741013

> Implement restore tests for IncrementalGroupAggregate node
> --
>
> Key: FLINK-34000
> URL: https://issues.apache.org/jira/browse/FLINK-34000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33860) Implement restore tests for WindowTableFunction node

2023-12-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33860.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
1a8b8d512c213ba330f28eca663bc77e2369b61b..cabb28d25c4c58af3ee23fc4a63f9564aefd6146

> Implement restore tests for WindowTableFunction node
> 
>
> Key: FLINK-33860
> URL: https://issues.apache.org/jira/browse/FLINK-33860
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33644) FLIP-393: Make QueryOperations SQL serializable

2023-12-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33644.

Resolution: Implemented

> FLIP-393: Make QueryOperations SQL serializable
> ---
>
> Key: FLINK-33644
> URL: https://issues.apache.org/jira/browse/FLINK-33644
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> https://cwiki.apache.org/confluence/x/4guZE





[jira] [Closed] (FLINK-33823) Serialize PlannerQueryOperation into SQL

2023-12-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33823.

Resolution: Implemented

Implemented in 5919251d7a94264a6a72c31de0716b3f72d65437

> Serialize PlannerQueryOperation into SQL
> 
>
> Key: FLINK-33823
> URL: https://issues.apache.org/jira/browse/FLINK-33823
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33861) Implement restore tests for WindowRank node

2023-12-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33861.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in aa5766e257a8b40e15d08eafa1e005837694772b

> Implement restore tests for WindowRank node
> ---
>
> Key: FLINK-33861
> URL: https://issues.apache.org/jira/browse/FLINK-33861
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33818) Implement restore tests for WindowDeduplicate node

2023-12-18 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33818.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
5799d8d06220608b52f5748549882966fe5b1ae3..011f777036540d0f027b04306714bf9e64003a97

> Implement restore tests for WindowDeduplicate node
> --
>
> Key: FLINK-33818
> URL: https://issues.apache.org/jira/browse/FLINK-33818
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33767) Implement restore tests for TemporalJoin node

2023-12-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33767.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
46d817d8d297b50fe91b5fb9471bda791a6f4319..20a328d80a1dbc50974cf3de9f4b6178246f6dee

> Implement restore tests for TemporalJoin node
> -
>
> Key: FLINK-33767
> URL: https://issues.apache.org/jira/browse/FLINK-33767
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Reopened] (FLINK-33644) FLIP-393: Make QueryOperations SQL serializable

2023-12-14 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reopened FLINK-33644:
--

> FLIP-393: Make QueryOperations SQL serializable
> ---
>
> Key: FLINK-33644
> URL: https://issues.apache.org/jira/browse/FLINK-33644
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> https://cwiki.apache.org/confluence/x/4guZE





[jira] [Created] (FLINK-33823) Serialize PlannerQueryOperation into SQL

2023-12-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33823:


 Summary: Serialize PlannerQueryOperation into SQL
 Key: FLINK-33823
 URL: https://issues.apache.org/jira/browse/FLINK-33823
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0








[jira] [Comment Edited] (FLINK-33754) Serialize QueryOperations into SQL

2023-12-14 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796685#comment-17796685
 ] 

Dawid Wysakowicz edited comment on FLINK-33754 at 12/14/23 11:25 AM:
-

[~libenchao] I overlooked this operation. Sorry, I will add support for that 
operation as well.


was (Author: dawidwys):
I overlooked this operation. Sorry, I will add support for that operation as 
well.

> Serialize QueryOperations into SQL
> --
>
> Key: FLINK-33754
> URL: https://issues.apache.org/jira/browse/FLINK-33754
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Commented] (FLINK-33754) Serialize QueryOperations into SQL

2023-12-14 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796685#comment-17796685
 ] 

Dawid Wysakowicz commented on FLINK-33754:
--

I overlooked this operation. Sorry, I will add support for that operation as 
well.

> Serialize QueryOperations into SQL
> --
>
> Key: FLINK-33754
> URL: https://issues.apache.org/jira/browse/FLINK-33754
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33808) Implement restore tests for WindowJoin node

2023-12-14 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33808.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
b691a2ee33e8d94b291f6632024bd801a17841a7..01b3db6f8b229ad9683b5c7c2b528f183e25aa3b

> Implement restore tests for WindowJoin node
> ---
>
> Key: FLINK-33808
> URL: https://issues.apache.org/jira/browse/FLINK-33808
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33644) FLIP-393: Make QueryOperations SQL serializable

2023-12-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33644.

Resolution: Implemented

> FLIP-393: Make QueryOperations SQL serializable
> ---
>
> Key: FLINK-33644
> URL: https://issues.apache.org/jira/browse/FLINK-33644
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> https://cwiki.apache.org/confluence/x/4guZE





[jira] [Closed] (FLINK-33754) Serialize QueryOperations into SQL

2023-12-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33754.

Resolution: Implemented

Implemented in 3532f59cb9484a67e1b441e2875a26eb3691221f

> Serialize QueryOperations into SQL
> --
>
> Key: FLINK-33754
> URL: https://issues.apache.org/jira/browse/FLINK-33754
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33441) Implement restore tests for ExecUnion node

2023-12-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33441.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
f362dcc9d4e14cfa30a27881158ec9431dd9e274..f2460363303e49621589b1cb2b45347c8ee5dd4f

> Implement restore tests for ExecUnion node
> --
>
> Key: FLINK-33441
> URL: https://issues.apache.org/jira/browse/FLINK-33441
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33470) Implement restore tests for Join node

2023-12-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33470.

Fix Version/s: 1.19.0
   Resolution: Implemented

> Implement restore tests for Join node
> -
>
> Key: FLINK-33470
> URL: https://issues.apache.org/jira/browse/FLINK-33470
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33647) Implement restore tests for LookupJoin node

2023-12-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33647.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
7e4a2f3fe6c558a08ec68dcb8e21ba43e85f3cf1..c49ab9ae42940c58579326683735ba79512f604e

> Implement restore tests for LookupJoin node
> ---
>
> Key: FLINK-33647
> URL: https://issues.apache.org/jira/browse/FLINK-33647
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33757) Implement restore tests for Rank node

2023-12-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33757.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in 
314b418efea8f35d39b05abef5361289b054b6a7..be5cf3c9d679ff141a1041774070c66b46b866a7

> Implement restore tests for Rank node
> -
>
> Key: FLINK-33757
> URL: https://issues.apache.org/jira/browse/FLINK-33757
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33777) ParquetTimestampITCase>FsStreamingSinkITCaseBase failing in CI

2023-12-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33777.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in 0e515dce78832dbbbf5fce9c8cdd113bbb62cdf0

> ParquetTimestampITCase>FsStreamingSinkITCaseBase failing in CI
> --
>
> Key: FLINK-33777
> URL: https://issues.apache.org/jira/browse/FLINK-33777
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> From this CI run: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55334=logs=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819=2dd510a3-5041-5201-6dc3-54d310f68906]
> {code:java}
> Dec 07 19:57:30 19:57:30.026 [ERROR] Errors: 
> Dec 07 19:57:30 19:57:30.026 [ERROR] 
> ParquetTimestampITCase>FsStreamingSinkITCaseBase.testNonPart:84->FsStreamingSinkITCaseBase.testPartitionCustomFormatDate:151->FsStreamingSinkITCaseBase.test:186
>  » Validation 
> Dec 07 19:57:30 19:57:30.026 [ERROR] 
> ParquetTimestampITCase>FsStreamingSinkITCaseBase.testPart:89->FsStreamingSinkITCaseBase.testPartitionCustomFormatDate:151->FsStreamingSinkITCaseBase.test:186
>  » Validation 
> Dec 07 19:57:30 19:57:30.026 [ERROR] 
> ParquetTimestampITCase>FsStreamingSinkITCaseBase.testPartitionWithBasicDate:126->FsStreamingSinkITCaseBase.test:186
>  » Validation  {code}
> The errors each appear somewhat similar:
> {code:java}
> Dec 07 19:54:43 19:54:43.934 [ERROR] 
> org.apache.flink.formats.parquet.ParquetTimestampITCase.testPartitionWithBasicDate
>  Time elapsed: 1.822 s <<< ERROR! 
> Dec 07 19:54:43 org.apache.flink.table.api.ValidationException: Unable to 
> find a field named 'f0' in the physical data type derived from the given type 
> information for schema declaration. Make sure that the type information is 
> not a generic raw type. Currently available fields are: [a, b, c, d, e] 
> Dec 07 19:54:43 at 
> org.apache.flink.table.catalog.SchemaTranslator.patchDataTypeFromColumn(SchemaTranslator.java:350)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.catalog.SchemaTranslator.patchDataTypeFromDeclaredSchema(SchemaTranslator.java:337)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.catalog.SchemaTranslator.createConsumingResult(SchemaTranslator.java:235)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.catalog.SchemaTranslator.createConsumingResult(SchemaTranslator.java:180)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl.fromStreamInternal(AbstractStreamTableEnvironmentImpl.java:141)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl.createTemporaryView(StreamTableEnvironmentImpl.scala:121)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:186)
>  
> Dec 07 19:54:43 at 
> org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPartitionWithBasicDate(FsStreamingSinkITCaseBase.scala:126)
>   {code}





[jira] [Closed] (FLINK-33782) GroupAggregateRestoreTest fails

2023-12-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33782.

Resolution: Fixed

Fixed in ca72f6302bd3d760d2b47cd8b1b8f2e48705117c

> GroupAggregateRestoreTest fails
> ---
>
> Key: FLINK-33782
> URL: https://issues.apache.org/jira/browse/FLINK-33782
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55321&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=10778]
> {code:java}
> Dec 07 13:49:11 13:49:11.037 [ERROR] Tests run: 9, Failures: 0, Errors: 8, Skipped: 1, Time elapsed: 4.213 s <<< FAILURE! - in org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest
> Dec 07 13:49:11 13:49:11.037 [ERROR] org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest.testRestore(TableTestProgram, ExecNodeMetadata)[1]  Time elapsed: 0.17 s  <<< ERROR!
> Dec 07 13:49:11 org.apache.flink.table.api.TableException: Cannot load Plan from file '/__w/1/s/flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-group-aggregate_1/group-aggregate-simple/plan/group-aggregate-simple.json'.
> Dec 07 13:49:11   at org.apache.flink.table.api.internal.TableEnvironmentImpl.loadPlan(TableEnvironmentImpl.java:760)
> Dec 07 13:49:11   at org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:279)
> Dec 07 13:49:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Dec 07 13:49:11   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 07 13:49:11   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 07 13:49:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...] {code}





[jira] [Updated] (FLINK-33782) GroupAggregateRestoreTest fails

2023-12-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33782:
-
Fix Version/s: 1.19.0

> GroupAggregateRestoreTest fails
> -------------------------------
>
> Key: FLINK-33782
> URL: https://issues.apache.org/jira/browse/FLINK-33782
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55321&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=10778]
> {code:java}
> Dec 07 13:49:11 13:49:11.037 [ERROR] Tests run: 9, Failures: 0, Errors: 8, Skipped: 1, Time elapsed: 4.213 s <<< FAILURE! - in org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest
> Dec 07 13:49:11 13:49:11.037 [ERROR] org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest.testRestore(TableTestProgram, ExecNodeMetadata)[1]  Time elapsed: 0.17 s  <<< ERROR!
> Dec 07 13:49:11 org.apache.flink.table.api.TableException: Cannot load Plan from file '/__w/1/s/flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-group-aggregate_1/group-aggregate-simple/plan/group-aggregate-simple.json'.
> Dec 07 13:49:11   at org.apache.flink.table.api.internal.TableEnvironmentImpl.loadPlan(TableEnvironmentImpl.java:760)
> Dec 07 13:49:11   at org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:279)
> Dec 07 13:49:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Dec 07 13:49:11   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 07 13:49:11   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 07 13:49:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...] {code}





[jira] [Assigned] (FLINK-33782) GroupAggregateRestoreTest fails

2023-12-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33782:


Assignee: Bonnie Varghese

> GroupAggregateRestoreTest fails
> -------------------------------
>
> Key: FLINK-33782
> URL: https://issues.apache.org/jira/browse/FLINK-33782
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55321&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=10778]
> {code:java}
> Dec 07 13:49:11 13:49:11.037 [ERROR] Tests run: 9, Failures: 0, Errors: 8, Skipped: 1, Time elapsed: 4.213 s <<< FAILURE! - in org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest
> Dec 07 13:49:11 13:49:11.037 [ERROR] org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest.testRestore(TableTestProgram, ExecNodeMetadata)[1]  Time elapsed: 0.17 s  <<< ERROR!
> Dec 07 13:49:11 org.apache.flink.table.api.TableException: Cannot load Plan from file '/__w/1/s/flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-group-aggregate_1/group-aggregate-simple/plan/group-aggregate-simple.json'.
> Dec 07 13:49:11   at org.apache.flink.table.api.internal.TableEnvironmentImpl.loadPlan(TableEnvironmentImpl.java:760)
> Dec 07 13:49:11   at org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:279)
> Dec 07 13:49:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Dec 07 13:49:11   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 07 13:49:11   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 07 13:49:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...] {code}





[jira] [Updated] (FLINK-33667) Implement restore tests for MatchRecognize node

2023-12-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33667:
-
Fix Version/s: 1.19.0

> Implement restore tests for MatchRecognize node
> -----------------------------------------------
>
> Key: FLINK-33667
> URL: https://issues.apache.org/jira/browse/FLINK-33667
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Updated] (FLINK-33758) Implement restore tests for TemporalSort node

2023-12-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33758:
-
Fix Version/s: 1.19.0

> Implement restore tests for TemporalSort node
> ---------------------------------------------
>
> Key: FLINK-33758
> URL: https://issues.apache.org/jira/browse/FLINK-33758
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Updated] (FLINK-33480) Implement restore tests for GroupAggregate node

2023-12-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33480:
-
Fix Version/s: 1.19.0

> Implement restore tests for GroupAggregate node
> -----------------------------------------------
>
> Key: FLINK-33480
> URL: https://issues.apache.org/jira/browse/FLINK-33480
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33480) Implement restore tests for GroupAggregate node

2023-12-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33480.

Resolution: Implemented

Implemented in 193b1c68976cdfbd66147278f23d7d427d9b5562..fac3ac786674f9b6ce5716902e74b1533ccb1c0a

> Implement restore tests for GroupAggregate node
> -----------------------------------------------
>
> Key: FLINK-33480
> URL: https://issues.apache.org/jira/browse/FLINK-33480
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (FLINK-33488) Implement restore tests for Deduplicate node

2023-12-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33488:
-
Fix Version/s: 1.19.0

> Implement restore tests for Deduplicate node
> --------------------------------------------
>
> Key: FLINK-33488
> URL: https://issues.apache.org/jira/browse/FLINK-33488
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33758) Implement restore tests for TemporalSort node

2023-12-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33758.

Resolution: Implemented

Implemented in f751a00fd6f0e70187d2a9ae2ccd6a728d9a2c64

> Implement restore tests for TemporalSort node
> ---------------------------------------------
>
> Key: FLINK-33758
> URL: https://issues.apache.org/jira/browse/FLINK-33758
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Closed] (FLINK-33599) Run restore tests with RocksDB state backend

2023-12-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33599.

Resolution: Implemented

Implemented in 43fec308b3298ed2aad639b94140c9a2173c10cd

> Run restore tests with RocksDB state backend
> --------------------------------------------
>
> Key: FLINK-33599
> URL: https://issues.apache.org/jira/browse/FLINK-33599
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-33667) Implement restore tests for MatchRecognize node

2023-12-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33667.

Resolution: Implemented

Implemented in 60cc00e5e6abc0b7309a48a37e171dae9fa98183

> Implement restore tests for MatchRecognize node
> -----------------------------------------------
>
> Key: FLINK-33667
> URL: https://issues.apache.org/jira/browse/FLINK-33667
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>





