[jira] [Assigned] (FLINK-11556) Translate "Contribute Documentation" page into Chinese.

2019-02-08 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11556:
-

Assignee: xuqianjin

> Translate "Contribute Documentation" page into Chinese.
> ---
>
> Key: FLINK-11556
> URL: https://issues.apache.org/jira/browse/FLINK-11556
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: xuqianjin
>Priority: Major
>
> Translate "Contribute Documentation" page into Chinese.
> The markdown file is located in: flink-web/contribute-documentation.zh.md
> The url link is: https://flink.apache.org/zh/contribute-documentation.html
> Please adjust the links in the page to Chinese pages when translating. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11555) Translate "Contributing Code" page into Chinese

2019-02-08 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11555:
-

Assignee: xuqianjin

> Translate "Contributing Code" page into Chinese
> ---
>
> Key: FLINK-11555
> URL: https://issues.apache.org/jira/browse/FLINK-11555
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: xuqianjin
>Priority: Major
>
> Translate "Contributing Code" page into Chinese.
> The markdown file is located in: flink-web/contribute-code.zh.md
> The url link is: https://flink.apache.org/zh/contribute-code.html
> Please adjust the links in the page to Chinese pages when translating. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11065) Migrate flink-table runtime classes

2019-01-29 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755004#comment-16755004
 ] 

xuqianjin commented on FLINK-11065:
---

I see. Thanks for your point, [~twalthr].

best

qianjin

> Migrate flink-table runtime classes
> ---
>
> Key: FLINK-11065
> URL: https://issues.apache.org/jira/browse/FLINK-11065
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Major
>
> This issue covers the third step of the migration plan mentioned in 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free].
> All runtime classes have few dependencies on other classes.
> This issue tracks the efforts of porting these runtime classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11063) Make flink-table Scala-free

2019-01-29 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11063:
-

Assignee: (was: xuqianjin)

> Make flink-table Scala-free
> ---
>
> Key: FLINK-11063
> URL: https://issues.apache.org/jira/browse/FLINK-11063
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: Timo Walther
>Priority: Major
>
> Currently, the Table & SQL API is implemented in Scala. This decision was 
> made a long time ago when the initial code base was created as part of a 
> master's thesis. The community kept Scala because of the nice language 
> features that enable a fluent Table API like {{table.select('field.trim())}} 
> and because Scala allows for quick prototyping (e.g. multi-line strings for 
> code generation). The committers decided against splitting the code base into 
> two programming languages.
> However, nowadays the {{flink-table}} module is becoming an increasingly 
> important part of the Flink ecosystem. Connectors, formats, and the SQL 
> client are implemented in Java but need to interoperate with 
> {{flink-table}}, which makes these modules dependent on Scala. As mentioned 
> in an earlier mail thread, using Scala for API classes also exposes member 
> variables and methods in Java that should not be exposed to users. Java is 
> still the most important API language, and right now we treat it as a 
> second-class citizen.
> In order not to introduce more technical debt, the community aims to make the 
> {{flink-table}} module Scala-free as a long-term goal. This will be a 
> continuous effort that cannot be finished within one release. We aim to avoid 
> API-breaking changes.
> A full description can be found in the corresponding 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free].
> FLIP-28 also contains a rough roadmap and serves as a migration guideline.
> This Jira issue is an umbrella issue for tracking the efforts and possible 
> migration blockers.
> *+Update+*: Due to the big code contribution of Alibaba to Flink SQL, we will 
> only port API classes for now. This is mostly tracked by FLINK-11448.
> FLIP-28 is legacy and has been integrated into 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11063) Make flink-table Scala-free

2019-01-29 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11063:
-

Assignee: xuqianjin

> Make flink-table Scala-free
> ---
>
> Key: FLINK-11063
> URL: https://issues.apache.org/jira/browse/FLINK-11063
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: Timo Walther
>Assignee: xuqianjin
>Priority: Major
>
> Currently, the Table & SQL API is implemented in Scala. This decision was 
> made a long time ago when the initial code base was created as part of a 
> master's thesis. The community kept Scala because of the nice language 
> features that enable a fluent Table API like {{table.select('field.trim())}} 
> and because Scala allows for quick prototyping (e.g. multi-line strings for 
> code generation). The committers decided against splitting the code base into 
> two programming languages.
> However, nowadays the {{flink-table}} module is becoming an increasingly 
> important part of the Flink ecosystem. Connectors, formats, and the SQL 
> client are implemented in Java but need to interoperate with 
> {{flink-table}}, which makes these modules dependent on Scala. As mentioned 
> in an earlier mail thread, using Scala for API classes also exposes member 
> variables and methods in Java that should not be exposed to users. Java is 
> still the most important API language, and right now we treat it as a 
> second-class citizen.
> In order not to introduce more technical debt, the community aims to make the 
> {{flink-table}} module Scala-free as a long-term goal. This will be a 
> continuous effort that cannot be finished within one release. We aim to avoid 
> API-breaking changes.
> A full description can be found in the corresponding 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free].
> FLIP-28 also contains a rough roadmap and serves as a migration guideline.
> This Jira issue is an umbrella issue for tracking the efforts and possible 
> migration blockers.
> *+Update+*: Due to the big code contribution of Alibaba to Flink SQL, we will 
> only port API classes for now. This is mostly tracked by FLINK-11448.
> FLIP-28 is legacy and has been integrated into 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11064) Setup a new flink-table module structure

2019-01-28 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754080#comment-16754080
 ] 

xuqianjin commented on FLINK-11064:
---

hi [~twalthr] That's good. I want to be involved.

best 

qianjin

> Setup a new flink-table module structure
> 
>
> Key: FLINK-11064
> URL: https://issues.apache.org/jira/browse/FLINK-11064
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This issue covers the first step of the migration plan mentioned in 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free].
> Move all files to their corresponding modules as they are. No migration 
> happens at this stage. Modules might contain both Scala and Java classes. 
> Classes that should be placed in `flink-table-common` but are in Scala so far 
> remain in `flink-table-api-planner` for now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11434) Add the JSON_LENGTH function

2019-01-26 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin closed FLINK-11434.
-
Resolution: Fixed

> Add the JSON_LENGTH function
> 
>
> Key: FLINK-11434
> URL: https://issues.apache.org/jira/browse/FLINK-11434
> Project: Flink
>  Issue Type: Improvement
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> {{JSON_LENGTH(*json_doc*[, *path*\])}}
> Returns the length of a JSON document, or, if a _path_ argument is given, the 
> length of the value within the document identified by the path. Returns 
> {{NULL}} if any argument is {{NULL}} or the _path_ argument does not identify 
> a value in the document. An error occurs if the _json_doc_ argument is not a 
> valid JSON document or the _path_ argument is not a valid path expression or 
> contains a {{*}} or {{**}} wildcard.
> The length of a document is determined as follows:
>  * The length of a scalar is 1.
>  * The length of an array is the number of array elements.
>  * The length of an object is the number of object members.
>  * The length does not count the length of nested arrays or objects.
> SELECT JSON_LENGTH('[1, 2, \{"a": 3}]');
> +----------------------------------+
> | JSON_LENGTH('[1, 2, \{"a": 3}]') |
> +----------------------------------+
> |                                3 |
> +----------------------------------+
> SELECT JSON_LENGTH('\{"a": 1, "b": {"c": 30}}');
> +------------------------------------------+
> | JSON_LENGTH('\{"a": 1, "b": {"c": 30}}') |
> +------------------------------------------+
> |                                        2 |
> +------------------------------------------+
> SELECT JSON_LENGTH('\{"a": 1, "b": {"c": 30}}', '$.b');
> +-------------------------------------------------+
> | JSON_LENGTH('\{"a": 1, "b": {"c": 30}}', '$.b') |
> +-------------------------------------------------+
> |                                               1 |
> +-------------------------------------------------+



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11434) Add the JSON_LENGTH function

2019-01-26 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11434:
-

 Summary: Add the JSON_LENGTH function
 Key: FLINK-11434
 URL: https://issues.apache.org/jira/browse/FLINK-11434
 Project: Flink
  Issue Type: Improvement
Reporter: xuqianjin
Assignee: xuqianjin


{{JSON_LENGTH(*json_doc*[, *path*\])}}

Returns the length of a JSON document, or, if a _path_ argument is given, the 
length of the value within the document identified by the path. Returns 
{{NULL}} if any argument is {{NULL}} or the _path_ argument does not identify a 
value in the document. An error occurs if the _json_doc_ argument is not a 
valid JSON document or the _path_ argument is not a valid path expression or 
contains a {{*}} or {{**}} wildcard.

The length of a document is determined as follows:
 * The length of a scalar is 1.

 * The length of an array is the number of array elements.

 * The length of an object is the number of object members.

 * The length does not count the length of nested arrays or objects.

SELECT JSON_LENGTH('[1, 2, \{"a": 3}]');
+----------------------------------+
| JSON_LENGTH('[1, 2, \{"a": 3}]') |
+----------------------------------+
|                                3 |
+----------------------------------+
SELECT JSON_LENGTH('\{"a": 1, "b": {"c": 30}}');
+------------------------------------------+
| JSON_LENGTH('\{"a": 1, "b": {"c": 30}}') |
+------------------------------------------+
|                                        2 |
+------------------------------------------+
SELECT JSON_LENGTH('\{"a": 1, "b": {"c": 30}}', '$.b');
+-------------------------------------------------+
| JSON_LENGTH('\{"a": 1, "b": {"c": 30}}', '$.b') |
+-------------------------------------------------+
|                                               1 |
+-------------------------------------------------+
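
For illustration only, the length rules above can be sketched in a few lines of 
Scala using Jackson's tree model. This is not part of any patch; the names 
(JsonLengthSketch, jsonLength) are made up and the optional path argument is 
omitted.

import com.fasterxml.jackson.databind.ObjectMapper

object JsonLengthSketch {
  private val mapper = new ObjectMapper()

  // Length rules from the description: a scalar counts as 1, an array counts
  // its elements, an object counts its members; nested containers count as 1.
  def jsonLength(jsonDoc: String): Integer = {
    if (jsonDoc == null) return null        // NULL argument -> NULL result
    val node = mapper.readTree(jsonDoc)     // throws on invalid JSON
    if (node.isContainerNode) Integer.valueOf(node.size()) else Integer.valueOf(1)
  }

  def main(args: Array[String]): Unit = {
    println(jsonLength("""[1, 2, {"a": 3}]"""))           // 3
    println(jsonLength("""{"a": 1, "b": {"c": 30}}"""))   // 2
  }
}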



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10994) The bug of timestampadd handles time

2019-01-14 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-10994:
-

Assignee: xuqianjin

> The bug of timestampadd handles time
> 
>
> Key: FLINK-10994
> URL: https://issues.apache.org/jira/browse/FLINK-10994
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.6.2, 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>  Labels: pull-request-available
>
> The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
> at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
>  at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
>  at 
> org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)
> I think it should meet the following conditions:
> ||Expression||Expected result||
> |timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
> |timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
> |timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
> |timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
> |timestampadd(HOUR, 1, time '23:59:59')|00:59:59|
> This problem seems to be a bug in Calcite. I have submitted an issue to 
> Calcite; the link is below.
>  CALCITE-2699
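
Not the actual Calcite fix, just a Scala sketch of the expected wrap-around 
semantics from the table above (TIME arithmetic is modular over 24 hours); the 
helper names are made up, and java.time already behaves this way.

import java.time.LocalTime

object TimeArithmeticSketch {
  // TIME plus/minus an interval should wrap around midnight.
  def addMinutes(t: LocalTime, minutes: Long): LocalTime = t.plusMinutes(minutes)
  def addSeconds(t: LocalTime, seconds: Long): LocalTime = t.plusSeconds(seconds)

  def main(args: Array[String]): Unit = {
    println(addMinutes(LocalTime.parse("00:00:00"), -1)) // 23:59
    println(addMinutes(LocalTime.parse("23:59:59"), 1))  // 00:00:59
    println(addSeconds(LocalTime.parse("23:59:59"), 1))  // 00:00
  }
}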



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11296) Support truncate in TableAPI & SQL

2019-01-12 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11296:
--
Summary: Support truncate in TableAPI & SQL  (was: Support truncate in 
TableAPI)

> Support truncate in TableAPI & SQL
> --
>
> Key: FLINK-11296
> URL: https://issues.apache.org/jira/browse/FLINK-11296
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add {{truncate}} support in the Table API, as follows:
> ||Expression||Expected result||
> |truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
> |truncate(42, -1)|40|
> |truncate(42.324)|42.0|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11296) Support truncate in TableAPI

2019-01-11 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11296:
--
Description: 
Add {{truncate}} support in TableAPI, Add support as follows:
||expression||Expect the result||
|truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
|truncate(42, -1)|40|
|truncate(42.324)|42.0|

  was:
Add {{truncate}} support in TableAPI, Add support as follows:
||expression||Expect the result||
|truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
|truncate(42, -1)|40|
|truncate(42.324)|42|


> Support truncate in TableAPI
> 
>
> Key: FLINK-11296
> URL: https://issues.apache.org/jira/browse/FLINK-11296
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add {{truncate}} support in the Table API, as follows:
> ||Expression||Expected result||
> |truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
> |truncate(42, -1)|40|
> |truncate(42.324)|42.0|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11296) Support truncate in TableAPI

2019-01-11 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740152#comment-16740152
 ] 

xuqianjin commented on FLINK-11296:
---

hi [~jark]  There are some differences between the two functions:

TRUNCATE(X,D)

Returns the number X, truncated to D decimal places. If D is 0, the result has 
no decimal point or fractional part. D can be negative, which causes D digits 
left of the decimal point of the value X to become zero.

ROUND(X,D)

X is the number being processed and D is how many decimal places to keep. The 
ROUND function rounds the value, carrying over where necessary.

Best,

qianjin
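
A small Scala sketch of the difference described above, for illustration only 
(it uses plain java.math.BigDecimal, not any Flink or Calcite internals):

import java.math.{BigDecimal, RoundingMode}

object TruncateVsRoundSketch {
  // TRUNCATE(X, D): drop digits beyond D decimal places, no carrying.
  def truncate(x: BigDecimal, d: Int): BigDecimal = x.setScale(d, RoundingMode.DOWN)

  // ROUND(X, D): keep D decimal places, rounding half up (with carrying).
  def round(x: BigDecimal, d: Int): BigDecimal = x.setScale(d, RoundingMode.HALF_UP)

  def main(args: Array[String]): Unit = {
    println(truncate(new BigDecimal("42.345"), 2)) // 42.34
    println(round(new BigDecimal("42.345"), 2))    // 42.35
    println(truncate(new BigDecimal("42"), -1))    // 4E+1, i.e. 40
  }
}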

> Support truncate in TableAPI
> 
>
> Key: FLINK-11296
> URL: https://issues.apache.org/jira/browse/FLINK-11296
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add {{truncate}} support in the Table API, as follows:
> ||Expression||Expected result||
> |truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
> |truncate(42, -1)|40|
> |truncate(42.324)|42|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11296) Support truncate in TableAPI

2019-01-10 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11296:
--
Component/s: Table API & SQL

> Support truncate in TableAPI
> 
>
> Key: FLINK-11296
> URL: https://issues.apache.org/jira/browse/FLINK-11296
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> Add {{truncate}} support in the Table API, as follows:
> ||Expression||Expected result||
> |truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
> |truncate(42, -1)|40|
> |truncate(42.324)|42|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11296) Support truncate in TableAPI

2019-01-09 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11296:
-

 Summary: Support truncate in TableAPI
 Key: FLINK-11296
 URL: https://issues.apache.org/jira/browse/FLINK-11296
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.7.1
Reporter: xuqianjin
Assignee: xuqianjin


Add {{truncate}} support in the Table API, as follows:
||Expression||Expected result||
|truncate(cast(42.345 as decimal(2, 3)), 2)|42.34|
|truncate(42, -1)|40|
|truncate(42.324)|42|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10134) UTF-16 support for TextInputFormat

2019-01-09 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-10134:
-

Assignee: xuqianjin

> UTF-16 support for TextInputFormat
> --
>
> Key: FLINK-10134
> URL: https://issues.apache.org/jira/browse/FLINK-10134
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.4.2
>Reporter: David Dreyfus
>Assignee: xuqianjin
>Priority: Critical
>  Labels: pull-request-available
>
> It does not appear that Flink supports a charset encoding of "UTF-16". In 
> particular, it doesn't appear that Flink consumes the Byte Order Mark (BOM) 
> to establish whether a UTF-16 file is UTF-16LE or UTF-16BE.
>  
> TextInputFormat.setCharset("UTF-16") calls DelimitedInputFormat.setCharset(), 
> which sets TextInputFormat.charsetName and then modifies the previously set 
> delimiterString to construct the proper byte string encoding of the 
> delimiter. This same charsetName is also used in TextInputFormat.readRecord() 
> to interpret the bytes read from the file.
>  
> There are two problems that this implementation would seem to have when using 
> UTF-16.
>  # delimiterString.getBytes(getCharset()) in DelimitedInputFormat.java will 
> return a Big Endian byte sequence including the Byte Order Mark (BOM). The 
> actual text file will not contain a BOM at each line ending, so the delimiter 
> will never be read. Moreover, if the actual byte encoding of the file is 
> Little Endian, the bytes will be interpreted incorrectly.
>  # TextInputFormat.readRecord() will not see a BOM each time it decodes a 
> byte sequence with the String(bytes, offset, numBytes, charset) call. 
> Therefore, it will assume Big Endian, which may not always be correct. [1] 
> [https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/io/TextInputFormat.java#L95]
>  
> While there are likely many solutions, I would think that all of them would 
> have to start by reading the BOM from the file when a split is opened, then 
> using that BOM to choose a BOM-specific encoding when the caller doesn't 
> specify one, and to override the caller's specification if the BOM conflicts 
> with it. That is, if the BOM indicates little-endian and the caller indicates 
> UTF-16BE, Flink should rewrite the charsetName as UTF-16LE.
>  I hope this makes sense and that I haven't been testing incorrectly or 
> misreading the code.
>  
> I've verified the problem on version 1.4.2. I believe the problem exists on 
> all versions. 
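
For illustration, a minimal Scala sketch (an assumption, not Flink code) of the 
BOM probing described above: read the first two bytes of a split that starts at 
offset 0 and derive the effective charset. A real implementation would also 
have to skip or push back the consumed bytes.

import java.io.InputStream
import java.nio.charset.{Charset, StandardCharsets}

object Utf16BomSketch {
  // Returns the endianness-specific charset if the stream starts with a
  // UTF-16 BOM, otherwise the caller-specified charset.
  def resolveCharset(in: InputStream, requested: Charset): Charset = {
    val b0 = in.read()
    val b1 = in.read()
    if (b0 == 0xFE && b1 == 0xFF) StandardCharsets.UTF_16BE
    else if (b0 == 0xFF && b1 == 0xFE) StandardCharsets.UTF_16LE
    else requested
  }
}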



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11279) Invalid week interval parsing in ExpressionParser

2019-01-08 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737218#comment-16737218
 ] 

xuqianjin commented on FLINK-11279:
---

hi [~twalthr] Thank you very much

Best,

qianjin

> Invalid week interval parsing in ExpressionParser
> -
>
> Key: FLINK-11279
> URL: https://issues.apache.org/jira/browse/FLINK-11279
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.7.0, 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.2, 1.8.0
>
> Attachments: 20190108123404.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Execute the following code:
>     testAllApis("2016-03-31".toDate - 1.week,
>       "'2016-03-31'.toDate - 1.week",
>       "timestampadd(WEEK, -1, date '2016-03-31')",
>       "2016-03-24")
> Please see the screenshot for the error report.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11279) The bug of Error parsing ExpressionParser

2019-01-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11279:
--
Attachment: (was: 微信截图_20190108123404.png)

> The bug of Error parsing ExpressionParser
> -
>
> Key: FLINK-11279
> URL: https://issues.apache.org/jira/browse/FLINK-11279
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Affects Versions: 1.6.3, 1.7.0, 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
> Attachments: 20190108123404.png
>
>
> Execute the following code:
>     testAllApis("2016-03-31".toDate - 1.week,
>       "'2016-03-31'.toDate - 1.week",
>       "timestampadd(WEEK, -1, date '2016-03-31')",
>       "2016-03-24")
> Please see the screenshot for the error report.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11279) The bug of Error parsing ExpressionParser

2019-01-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11279:
--
Attachment: 20190108123404.png

> The bug of Error parsing ExpressionParser
> -
>
> Key: FLINK-11279
> URL: https://issues.apache.org/jira/browse/FLINK-11279
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Affects Versions: 1.6.3, 1.7.0, 1.7.1
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
> Attachments: 20190108123404.png
>
>
> Execute the following code:
>     testAllApis("2016-03-31".toDate - 1.week,
>       "'2016-03-31'.toDate - 1.week",
>       "timestampadd(WEEK, -1, date '2016-03-31')",
>       "2016-03-24")
> Please see the screenshot for the error report.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11279) The bug of Error parsing ExpressionParser

2019-01-07 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11279:
-

 Summary: The bug of Error parsing ExpressionParser
 Key: FLINK-11279
 URL: https://issues.apache.org/jira/browse/FLINK-11279
 Project: Flink
  Issue Type: Task
  Components: Table API & SQL
Affects Versions: 1.7.1, 1.7.0, 1.6.3
Reporter: xuqianjin
Assignee: xuqianjin
 Attachments: 微信截图_20190108123404.png

Execute the following code:
    testAllApis("2016-03-31".toDate - 1.week,
      "'2016-03-31'.toDate - 1.week",
      "timestampadd(WEEK, -1, date '2016-03-31')",
      "2016-03-24")
Please see the screenshot for the error report.
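
The expected result can be checked with plain java.time date arithmetic; this 
Scala sketch is only an illustration and is unrelated to the parser fix itself:

import java.time.LocalDate

object WeekIntervalSketch {
  def main(args: Array[String]): Unit = {
    // "2016-03-31".toDate - 1.week should behave like timestampadd(WEEK, -1, ...)
    println(LocalDate.parse("2016-03-31").minusWeeks(1)) // 2016-03-24
  }
}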



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11248) Support Row/CRow state schema evolution

2019-01-02 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731895#comment-16731895
 ] 

xuqianjin commented on FLINK-11248:
---

hi [~kisimple] Can you add some description to this jira?

Thank you

qianjin

> Support Row/CRow state schema evolution
> ---
>
> Key: FLINK-11248
> URL: https://issues.apache.org/jira/browse/FLINK-11248
> Project: Flink
>  Issue Type: Sub-task
>  Components: Type Serialization System
>Reporter: boshu Zheng
>Assignee: boshu Zheng
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-26 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16729344#comment-16729344
 ] 

xuqianjin commented on FLINK-11219:
---

hi [~Zentol] I tried removing the exclusions and relocating Jackson in Calcite, 
and with that the code builds locally. Compilation on CI is still problematic.

> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-26 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16729053#comment-16729053
 ] 

xuqianjin edited comment on FLINK-11219 at 12/26/18 2:36 PM:
-

hi [~Zentol] However, the Jackson version required by Calcite 1.18.0 is 2.9.6, 
while the Jackson version in Flink is 2.7.3 at present. I have tried not to 
upgrade the Jackson version in Flink, but changing the Calcite version to 
1.18.0 then throws an exception that Jackson could not be found.

thanks

qianjin


was (Author: x1q1j1):
[~Zentol] However, the Jackson version required by calcite1.18.0 is 3.9.6, 
while the Jackson version in flink is 2.7.3 at present.

> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-26 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16729053#comment-16729053
 ] 

xuqianjin commented on FLINK-11219:
---

[~Zentol] However, the Jackson version required by Calcite 1.18.0 is 2.9.6, 
while the Jackson version in Flink is 2.7.3 at present.

> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-26 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11219:
-

Assignee: xuqianjin

> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-25 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728882#comment-16728882
 ] 

xuqianjin commented on FLINK-11219:
---

I have opened a PR in flink-shaded:
https://github.com/apache/flink-shaded/pull/54

> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-25 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11219:
--
Component/s: Build System

> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
> because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
> 2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
> because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-25 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11219:
--
Description: 
1. Upgrade Jackson version to 2.9.6
 Because the Jackson version supported by calcite1.18.0 has been upgraded to 
Jackson 2.9.6.
 2. Need to upgrade flink jackson in the shaded version
 Because many flink dependency from the flink-shaded.

  was:
1. Upgrade Jackson version to 2.9.6
 Because the Jackson version supported by calcite1.18.0 has been upgraded to 
Jackson 2.9.6.
 2. Need to upgrade flink Jackson in the shaded version
 Because many flink dependency from the flink-shaded.


> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-25 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11219:
--
Description: 
1. Upgrade Jackson version to 2.9.6
 Because the Jackson version supported by calcite1.18.0 has been upgraded to 
Jackson 2.9.6.
 2. Need to upgrade flink Jackson in the shaded version
 Because many flink dependency from the flink-shaded.

  was:
1. Upgrade Jackson version to 2.9.6
Because the Jackson version supported by calcite1.18.0 has been upgraded to 
Jackson 2.9.6.
2. Need to upgrade flink-Jackson in the shaded version
Because many flink dependency from the flink-shaded.


> Upgrade Jackson dependency to 2.9.6
> ---
>
> Key: FLINK-11219
> URL: https://issues.apache.org/jira/browse/FLINK-11219
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: xuqianjin
>Priority: Major
>
> 1. Upgrade the Jackson version to 2.9.6,
>  because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
> Jackson 2.9.6.
>  2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
>  because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11219) Upgrade Jackson dependency to 2.9.6

2018-12-25 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11219:
-

 Summary: Upgrade Jackson dependency to 2.9.6
 Key: FLINK-11219
 URL: https://issues.apache.org/jira/browse/FLINK-11219
 Project: Flink
  Issue Type: Task
Reporter: xuqianjin


1. Upgrade the Jackson version to 2.9.6,
because the Jackson version supported by Calcite 1.18.0 has been upgraded to 
Jackson 2.9.6.
2. Upgrade Flink's Jackson in the shaded (flink-shaded) version,
because many Flink dependencies come from flink-shaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10076) Upgrade Calcite dependency to 1.18

2018-12-21 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16727202#comment-16727202
 ] 

xuqianjin commented on FLINK-10076:
---

hi [~fhueske] Well, thank you very much. Let me verify the function I modified 
against Calcite 1.18.0.

> Upgrade Calcite dependency to 1.18
> --
>
> Key: FLINK-10076
> URL: https://issues.apache.org/jira/browse/FLINK-10076
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11120) The bug of timestampadd handles time

2018-12-21 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11120:
--
Description: 
The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
at 
org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)

I think it should meet the following conditions:
||Expression||Expected result||
|timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
|timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
|timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
|timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
|timestampadd(HOUR, 1, time '23:59:59')|00:59:59|

This problem seems to be a bug in Calcite. I have submitted an issue to 
Calcite; the link is below.
CALCITE-2699

> The bug of timestampadd  handles time
> -
>
> Key: FLINK-11120
> URL: https://issues.apache.org/jira/browse/FLINK-11120
> Project: Flink
>  Issue Type: Sub-task
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
> at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
> at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
> at 
> org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
> at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
> at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
> at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)
> I think it should meet the following conditions:
> ||Expression||Expected result||
> |timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
> |timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
> |timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
> |timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
> |timestampadd(HOUR, 1, time '23:59:59')|00:59:59|
> This problem seems to be a bug in Calcite. I have submitted an issue to 
> Calcite; the link is below.
> CALCITE-2699



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11120) The bug of timestampadd handles time

2018-12-13 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11120:
-

Assignee: xuqianjin

> The bug of timestampadd  handles time
> -
>
> Key: FLINK-11120
> URL: https://issues.apache.org/jira/browse/FLINK-11120
> Project: Flink
>  Issue Type: Sub-task
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11099) Migrate flink-table runtime CRow Types classes

2018-12-11 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11099:
-

Assignee: xuqianjin

> Migrate flink-table runtime  CRow  Types classes
> 
>
> Key: FLINK-11099
> URL: https://issues.apache.org/jira/browse/FLINK-11099
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11097) Migrate flink-table runtime InputFormat classes

2018-12-11 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reassigned FLINK-11097:
-

Assignee: xuqianjin

> Migrate flink-table runtime InputFormat classes 
> 
>
> Key: FLINK-11097
> URL: https://issues.apache.org/jira/browse/FLINK-11097
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>
> As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
> in the flink-table package org.apache.flink.table.runtime.io to Java in the 
> flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-11120) The bug of timestampadd handles time

2018-12-10 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin reopened FLINK-11120:
---

> The bug of timestampadd  handles time
> -
>
> Key: FLINK-11120
> URL: https://issues.apache.org/jira/browse/FLINK-11120
> Project: Flink
>  Issue Type: Sub-task
>Reporter: xuqianjin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11120) The bug of timestampadd handles time

2018-12-10 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin closed FLINK-11120.
-
Resolution: Duplicate

> The bug of timestampadd  handles time
> -
>
> Key: FLINK-11120
> URL: https://issues.apache.org/jira/browse/FLINK-11120
> Project: Flink
>  Issue Type: Sub-task
>Reporter: xuqianjin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11120) The bug of timestampadd handles time

2018-12-10 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11120:
-

 Summary: The bug of timestampadd  handles time
 Key: FLINK-11120
 URL: https://issues.apache.org/jira/browse/FLINK-11120
 Project: Flink
  Issue Type: Sub-task
Reporter: xuqianjin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11065) Migrate flink-table runtime classes

2018-12-07 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16713070#comment-16713070
 ] 

xuqianjin commented on FLINK-11065:
---

hi [~twalthr] [~hequn8128] I added two subtasks, and I would like to try to 
implement them.

thanks

qianjin

> Migrate flink-table runtime classes
> ---
>
> Key: FLINK-11065
> URL: https://issues.apache.org/jira/browse/FLINK-11065
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Major
>
> This issue covers the third step of the migration plan mentioned in 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free].
> All runtime classes have few dependencies on other classes.
> This issue tracks the efforts of porting these runtime classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11099) Migrate flink-table runtime CRow Types classes

2018-12-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11099:
--
Priority: Major  (was: Minor)

> Migrate flink-table runtime  CRow  Types classes
> 
>
> Key: FLINK-11099
> URL: https://issues.apache.org/jira/browse/FLINK-11099
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: xuqianjin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11097) Migrate flink-table runtime InputFormat classes

2018-12-07 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712719#comment-16712719
 ] 

xuqianjin commented on FLINK-11097:
---

hi [~Tison]  Thanks for your help. I have submitted the email

> Migrate flink-table runtime InputFormat classes 
> 
>
> Key: FLINK-11097
> URL: https://issues.apache.org/jira/browse/FLINK-11097
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: xuqianjin
>Priority: Major
>
> As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
> in the flink-table package org.apache.flink.table.runtime.io to Java in the 
> flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11098) Migrate flink-table runtime Row Types classes

2018-12-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin closed FLINK-11098.
-
Resolution: Duplicate

> Migrate flink-table runtime Row Types classes
> -
>
> Key: FLINK-11098
> URL: https://issues.apache.org/jira/browse/FLINK-11098
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: xuqianjin
>Priority: Major
>
> As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
> in the flink-table package org.apache.flink.table.runtime.types to Java in 
> the flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11099) Migrate flink-table runtime CRow Types classes

2018-12-07 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712681#comment-16712681
 ] 

xuqianjin commented on FLINK-11099:
---

I will try to finish it

> Migrate flink-table runtime  CRow  Types classes
> 
>
> Key: FLINK-11099
> URL: https://issues.apache.org/jira/browse/FLINK-11099
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: xuqianjin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11099) Migrate flink-table runtime CRow Types classes

2018-12-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11099:
--
Summary: Migrate flink-table runtime  CRow  Types classes  (was: Migrate 
flink-table runtime Row Types classes)

> Migrate flink-table runtime  CRow  Types classes
> 
>
> Key: FLINK-11099
> URL: https://issues.apache.org/jira/browse/FLINK-11099
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: xuqianjin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11098) Migrate flink-table runtime Row Types classes

2018-12-07 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11098:
-

 Summary: Migrate flink-table runtime Row Types classes
 Key: FLINK-11098
 URL: https://issues.apache.org/jira/browse/FLINK-11098
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: xuqianjin


As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
in the flink-table package org.apache.flink.table.runtime.types to Java in 
the flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11099) Migrate flink-table runtime Row Types classes

2018-12-07 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11099:
-

 Summary: Migrate flink-table runtime Row Types classes
 Key: FLINK-11099
 URL: https://issues.apache.org/jira/browse/FLINK-11099
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: xuqianjin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11097) Migrate flink-table InputFormat classes

2018-12-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11097:
--
Issue Type: New Feature  (was: Task)
   Summary: Migrate flink-table InputFormat classes   (was: Migrate 
flink-table runtime io classes)

> Migrate flink-table InputFormat classes 
> 
>
> Key: FLINK-11097
> URL: https://issues.apache.org/jira/browse/FLINK-11097
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: xuqianjin
>Priority: Major
>
> As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
> in the flink-table package org.apache.flink.table.runtime.io to Java in the 
> flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11097) Migrate flink-table runtime io classes

2018-12-07 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712615#comment-16712615
 ] 

xuqianjin commented on FLINK-11097:
---

I will try to finish it

> Migrate flink-table runtime io classes
> --
>
> Key: FLINK-11097
> URL: https://issues.apache.org/jira/browse/FLINK-11097
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: xuqianjin
>Priority: Major
>
> As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
> in the flink-table package org.apache.flink.table.runtime.io to Java in the 
> flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11097) Migrate flink-table runtime io classes

2018-12-07 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-11097:
--
Issue Type: New Feature  (was: Bug)

> Migrate flink-table runtime io classes
> --
>
> Key: FLINK-11097
> URL: https://issues.apache.org/jira/browse/FLINK-11097
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: xuqianjin
>Priority: Major
>
> As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
> in the flink-table package org.apache.flink.table.runtime.io to Java in the 
> flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11097) Migrate flink-table runtime io classes

2018-12-07 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11097:
-

 Summary: Migrate flink-table runtime io classes
 Key: FLINK-11097
 URL: https://issues.apache.org/jira/browse/FLINK-11097
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.7.0
Reporter: xuqianjin


As discussed in FLINK-11065, this is a subtask that migrates the Scala files 
in the flink-table package org.apache.flink.table.runtime.io to Java in the 
flink-table-runtime module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712349#comment-16712349
 ] 

xuqianjin commented on FLINK-9740:
--

hi [~twalthr] I have resubmitted this design

thanks

qianjin

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: Discuss [FLINK-9740] Support group windows over 
> intervals of months.pdf
>
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}
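
As an illustration of why month-based windows need calendar arithmetic instead 
of a fixed millisecond size, here is a small Scala sketch (an assumption, not 
the proposed implementation) that computes the 1.month tumbling window a given 
event-time date falls into:

import java.time.{LocalDate, YearMonth}

object MonthWindowSketch {
  // A 1.month tumbling window spans a whole calendar month, so its length
  // varies between 28 and 31 days and leap years are handled by the calendar.
  def windowStart(date: LocalDate): LocalDate = date.withDayOfMonth(1)
  def windowEnd(date: LocalDate): LocalDate = YearMonth.from(date).plusMonths(1).atDay(1)

  def main(args: Array[String]): Unit = {
    val d = LocalDate.parse("2016-02-15")
    println(windowStart(d)) // 2016-02-01
    println(windowEnd(d))   // 2016-03-01 (February 2016 has 29 days)
  }
}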



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-9740:
-
Attachment: Discuss [FLINK-9740] Support group windows over intervals of 
months.pdf

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: Discuss [FLINK-9740] Support group windows over 
> intervals of months.pdf
>
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712219#comment-16712219
 ] 

xuqianjin commented on FLINK-9740:
--

hi [~twalthr] I'm sorry; when I looked at it again last night I found some 
details that still need to be considered, so I deleted the uploaded 
attachment. I plan to resubmit a design today.

thanks

qianjin

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-9740:
-
Attachment: (was: About [FLINK-9740] Support group windows over 
intervals of months.pdf)

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-9740:
-
Comment: was deleted

(was: hi [~twalthr] The attachment has been submitted)

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711442#comment-16711442
 ] 

xuqianjin commented on FLINK-9740:
--

hi [~twalthr] The attachment has been submitted

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: About [FLINK-9740] Support group windows over intervals 
> of months.pdf
>
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711440#comment-16711440
 ] 

xuqianjin commented on FLINK-9740:
--

@

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: About [FLINK-9740] Support group windows over intervals 
> of months.pdf
>
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-9740:
-
Comment: was deleted

(was: @)

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: About [FLINK-9740] Support group windows over intervals 
> of months.pdf
>
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9740) Support group windows over intervals of months

2018-12-06 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-9740:
-
Attachment: About [FLINK-9740] Support group windows over intervals of 
months.pdf

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: About [FLINK-9740] Support group windows over intervals 
> of months.pdf
>
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9740) Support group windows over intervals of months

2018-11-30 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705634#comment-16705634
 ] 

xuqianjin commented on FLINK-9740:
--

hi [~liurenjie1024] [~twalthr] I want to try to solve it

> Support group windows over intervals of months 
> ---
>
> Key: FLINK-9740
> URL: https://issues.apache.org/jira/browse/FLINK-9740
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Timo Walther
>Assignee: Renjie Liu
>Priority: Major
>
> Currently, time-based group windows can be defined using intervals of 
> milliseconds such as {{.window(Tumble over 10.minutes on 'rowtime as 'w)}}. 
> For some use cases it might be useful to define windows of months (esp. in 
> event-time) that work even with leap years and other special time cases.
> The following should be supported in Table API & SQL:
> {{.window(Tumble over 1.month on 'rowtime as 'w)}}
> {{.window(Tumble over 1.quarter on 'rowtime as 'w)}}
> {{.window(Tumble over 1.year on 'rowtime as 'w)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11017) Time interval for window aggregations in SQL is wrongly translated if specified with YEAR_MONTH resolution

2018-11-29 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704307#comment-16704307
 ] 

xuqianjin commented on FLINK-11017:
---

[~dawidwys] I want to try to solve it.

> Time interval for window aggregations in SQL is wrongly translated if 
> specified with YEAR_MONTH resolution
> --
>
> Key: FLINK-11017
> URL: https://issues.apache.org/jira/browse/FLINK-11017
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.6.3, 1.8.0, 1.7.1
>
>
> If a time interval was specified with {{YEAR TO MONTH}} resolution, e.g.:
> {code}
> SELECT * 
> FROM Mytable
> GROUP BY 
> TUMBLE(rowtime, INTERVAL '1-2' YEAR TO MONTH)
> {code}
> it will be wrongly translated to a 14 millisecond window. We should allow for 
> only DAY TO SECOND resolution.
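
For comparison, a minimal sketch (hypothetical table and column names; {{tEnv}} is assumed to be a StreamTableEnvironment) of the same kind of query with an interval in the resolution that remains allowed. A {{DAY TO SECOND}} interval has a fixed length in milliseconds and is translated correctly, whereas {{INTERVAL '1-2' YEAR TO MONTH}} means 1 year and 2 months and has no fixed millisecond length, which is why it currently gets folded into a bogus 14 ms window:

{code}
// Sketch only: MyTable and its rowtime attribute are assumed to be registered.
val result = tEnv.sqlQuery(
  """
    |SELECT userId, COUNT(*) AS cnt
    |FROM MyTable
    |GROUP BY userId, TUMBLE(rowtime, INTERVAL '1' DAY)
  """.stripMargin)
{code}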



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-10999) Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast to java.lang.Long

2018-11-29 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16702983#comment-16702983
 ] 

xuqianjin edited comment on FLINK-10999 at 11/29/18 10:25 AM:
--

hi [~fhueske] Thank you very much. A table can indeed support both rowtime and 
proctime, but when I convert and execute his code step by step (proctime is 
added to the first table, and rowtime is added to the second table after 
converting back to a stream), I find that the execution plan is missing one 
layer for the rowtime operator conversion.


was (Author: x1q1j1):
hi [~fhueske] Thank you very much. A table can indeed support both rowtime and 
proctime, but when I convert and execute his code step by step (first proctime 
and then rowtime), I find that the execution plan is missing one layer for the 
rowtime operator conversion.

> Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast 
> to java.lang.Long
> ---
>
> Key: FLINK-10999
> URL: https://issues.apache.org/jira/browse/FLINK-10999
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.5, 1.6.2
>Reporter: ideal-hp
>Priority: Major
>
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>  at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:27)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:95)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:46)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
>  at DataStreamCalcRule$15.processElement(Unknown Source)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>  at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10999) Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast to java.lang.Long

2018-11-29 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16702983#comment-16702983
 ] 

xuqianjin commented on FLINK-10999:
---

hi [~fhueske] Thank you very much. A table can indeed support both rowtime and 
proctime, but when I convert and execute his code step by step (first proctime 
and then rowtime), I find that the execution plan is missing one layer for the 
rowtime operator conversion.

> Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast 
> to java.lang.Long
> ---
>
> Key: FLINK-10999
> URL: https://issues.apache.org/jira/browse/FLINK-10999
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.5, 1.6.2
>Reporter: ideal-hp
>Priority: Major
>
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>  at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:27)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:95)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:46)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
>  at DataStreamCalcRule$15.processElement(Unknown Source)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>  at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10999) Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast to java.lang.Long

2018-11-29 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16702877#comment-16702877
 ] 

xuqianjin commented on FLINK-10999:
---

[~harbby] You can see in the execution plan that you can't add rowtime and 
proctime one after the other. I'm not sure if this needs to be changed.

> Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast 
> to java.lang.Long
> ---
>
> Key: FLINK-10999
> URL: https://issues.apache.org/jira/browse/FLINK-10999
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.5, 1.6.2
>Reporter: ideal-hp
>Priority: Major
>
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>  at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:27)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:95)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:46)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
>  at DataStreamCalcRule$15.processElement(Unknown Source)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>  at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10999) Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast to java.lang.Long

2018-11-28 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16702784#comment-16702784
 ] 

xuqianjin commented on FLINK-10999:
---

hi [~harbby] After tracking this down, I found two problems:
1. Proctime and rowtime can't be used together in a table; you can only use one 
of them.
2. The type returned by rowtime and proctime is TIMESTAMP, and you need to 
declare this type.

I posted a detailed update at [https://github.com/harbby/sylph/issues/23]
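
For reference, a minimal Scala sketch of declaring both time attributes when converting a DataStream to a Table, which is the pattern discussed in this thread. The field names and data are made up, and the snippet assumes the pre-1.9 Scala Table API used in this discussion:

{code}
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
val tEnv = TableEnvironment.getTableEnvironment(env)

// Toy input with ascending event timestamps taken from the second field.
val input = env
  .fromElements(("a", 1000L), ("b", 2000L))
  .assignAscendingTimestamps(_._2)

// 'rowtime.rowtime appends an event-time attribute backed by the record timestamps,
// 'proctime.proctime appends a processing-time attribute; both are exposed as TIMESTAMP.
val table = tEnv.fromDataStream(input, 'name, 'ts, 'rowtime.rowtime, 'proctime.proctime)
{code}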

> Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast 
> to java.lang.Long
> ---
>
> Key: FLINK-10999
> URL: https://issues.apache.org/jira/browse/FLINK-10999
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.5, 1.6.2
>Reporter: ideal-hp
>Priority: Major
>
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>  at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:27)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:95)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:46)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
>  at DataStreamCalcRule$15.processElement(Unknown Source)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>  at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10999) Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast to java.lang.Long

2018-11-24 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16697743#comment-16697743
 ] 

xuqianjin commented on FLINK-10999:
---

[~harbby] I understand your description. Thank you very much. I will try to 
track down this problem.

> Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast 
> to java.lang.Long
> ---
>
> Key: FLINK-10999
> URL: https://issues.apache.org/jira/browse/FLINK-10999
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.5, 1.6.2
>Reporter: ideal-hp
>Priority: Major
>
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>  at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:27)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:95)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:46)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
>  at DataStreamCalcRule$15.processElement(Unknown Source)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>  at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10999) Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast to java.lang.Long

2018-11-24 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16697701#comment-16697701
 ] 

xuqianjin commented on FLINK-10999:
---

[~harbby] It is not easy to understand your question from your description and 
the log.

> Adding time multiple times causes Runtime : java.sql.Timestamp cannot be cast 
> to java.lang.Long
> ---
>
> Key: FLINK-10999
> URL: https://issues.apache.org/jira/browse/FLINK-10999
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.5, 1.6.2
>Reporter: ideal-hp
>Priority: Major
>
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>  at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:27)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:95)
>  at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:46)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>  at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>  at 
> org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
>  at DataStreamCalcRule$15.processElement(Unknown Source)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
>  at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>  at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10994) The bug of timestampadd handles time

2018-11-23 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10994:
--
Description: 
The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
 at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
 at 
org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)

I think it should produce the following results:
||expression||Expected result||
|timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
|timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
|timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
|timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
|timestampadd(HOUR, 1, time '23:59:59')|00:59:59|

This problem seems to be a bug in Calcite. I have submitted an issue to 
Calcite. The following is the link.
 CALCITE-2699

  was:
The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
 at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
 at 
org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)

Compared with the {{MySQL}} database, I think it should produce the following 
results:
||expression||Expected result||
|timestampadd(MINUTE, -1, time '00:00:00')|-00:01:00|
|timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
|timestampadd(MINUTE, 1, time '23:59:59')| 24:00:59|
|timestampadd(SECOND, 1, time '23:59:59')|24:00:00|
|timestampadd(HOUR, 1, time '23:59:59')|24:59:59|

This problem seems to be a bug in Calcite. I have submitted an issue to 
Calcite. The following is the link.
 CALCITE-2699


> The bug of timestampadd handles time
> 
>
> Key: FLINK-10994
> URL: https://issues.apache.org/jira/browse/FLINK-10994
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2, 1.7.1
>Reporter: xuqianjin
>Priority: Major
>
> The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
> at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
>  at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
>  at 
> org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)
> I think it should produce the following results:
> ||expression||Expected result||
> |timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
> |timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
> |timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
> |timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
> |timestampadd(HOUR, 1, time '23:59:59')|00:59:59|
> This problem seems to be a bug in Calcite. I have submitted an issue to 
> Calcite. The following is the link.
>  CALCITE-2699



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10994) The bug of timestampadd handles time

2018-11-23 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10994:
--
Description: 
The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
 at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
 at 
org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)

Compared with the {{MySQL}} database, I think it should produce the following 
results:
||expression||Expected result||
|timestampadd(MINUTE, -1, time '00:00:00')|-00:01:00|
|timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
|timestampadd(MINUTE, 1, time '23:59:59')| 24:00:59|
|timestampadd(SECOND, 1, time '23:59:59')|24:00:00|
|timestampadd(HOUR, 1, time '23:59:59')|24:59:59|

This problem seems to be a bug in Calcite. I have submitted an issue to 
Calcite. The following is the link.
 CALCITE-2699

  was:
The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
 at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
 at 
org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)

Compared with the {{MySQL}} database, I think it should produce the following 
results:
||expression||Expected result||
|timestampadd(MINUTE, -1, time '00:00:00')|NULL|
|timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
|timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
|timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
|timestampadd(HOUR, 1, time '23:59:59')|00:59:59|


This problem seems to be a bug in Calcite. I have submitted an issue to 
Calcite. The following is the link.
CALCITE-2699


> The bug of timestampadd handles time
> 
>
> Key: FLINK-10994
> URL: https://issues.apache.org/jira/browse/FLINK-10994
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2, 1.7.1
>Reporter: xuqianjin
>Priority: Major
>
> The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
> at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
>  at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
>  at 
> org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)
> Compared with the {{MySQL}} database, I think it should produce the following 
> results:
> ||expression||Expected result||
> |timestampadd(MINUTE, -1, time '00:00:00')|-00:01:00|
> |timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
> |timestampadd(MINUTE, 1, time '23:59:59')| 24:00:59|
> |timestampadd(SECOND, 1, time '23:59:59')|24:00:00|
> |timestampadd(HOUR, 1, time '23:59:59')|24:59:59|
> This problem seems to be a bug in Calcite. I have submitted an issue to 
> Calcite. The following is the link.
>  CALCITE-2699



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10994) The bug of timestampadd handles time

2018-11-22 Thread xuqianjin (JIRA)
xuqianjin created FLINK-10994:
-

 Summary: The bug of timestampadd handles time
 Key: FLINK-10994
 URL: https://issues.apache.org/jira/browse/FLINK-10994
 Project: Flink
  Issue Type: Bug
  Components: Table API  SQL
Affects Versions: 1.6.2, 1.7.1
Reporter: xuqianjin


The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
 at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
 at 
org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
 at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)

Compared with the {{MySQL}} database, I think it should produce the following 
results:
||expression||Expected result||
|timestampadd(MINUTE, -1, time '00:00:00')|NULL|
|timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
|timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
|timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
|timestampadd(HOUR, 1, time '23:59:59')|00:59:59|


This problem seems to be a bug in Calcite. I have submitted an issue to 
Calcite. The following is the link.
CALCITE-2699
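
For completeness, a minimal reproduction sketch based on the expression above; {{tEnv}} is an assumed StreamTableEnvironment. The constant expression is folded during optimization by {{ExpressionReducer}} (see the stack trace), so the exception surfaces when the query is translated to a DataStream, not when {{sqlQuery}} is called:

{code}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

// Per the expected-results table above, this should evaluate to 01:01:00 rather
// than fail with a ClassCastException while ExpressionReducer folds the constant.
val result = tEnv.sqlQuery("SELECT TIMESTAMPADD(MINUTE, 1, TIME '01:00:00')")
result.toAppendStream[Row].print()   // the exception surfaces here, at translation time
// (env.execute() would then run the job if translation succeeded)
{code}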



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9477) Support SQL 2016 JSON functions in Flink SQL

2018-11-22 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16695732#comment-16695732
 ] 

xuqianjin commented on FLINK-9477:
--

[~suez1224] This is fixed in Calcite 1.18.0. You need to upgrade the Calcite 
dependency to 1.18.x.

> Support SQL 2016 JSON functions in Flink SQL
> 
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-21 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin closed FLINK-10926.
-
   Resolution: Fixed
Fix Version/s: 1.7.0
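
For reference, the invocation from this ticket that is expected to succeed once the fix is in (per the resolution above, from 1.7.0 on); {{tEnv}} is an assumed StreamTableEnvironment, and the expected value is plain arithmetic: 6 days and 3 hours equal 8820 minutes:

{code}
// Standard-SQL form with TIMESTAMP literals; expected result: 8820 (minutes).
val diff = tEnv.sqlQuery(
  "SELECT TIMESTAMPDIFF(MINUTE, TIMESTAMP '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
{code}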

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2
>Reporter: xuqianjin
>Priority: Minor
> Fix For: 1.7.0
>
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly
>  
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
> '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 95: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692592#comment-16692592
 ] 

xuqianjin commented on FLINK-10926:
---

 [~xueyu7452] 

Well, I will check again when version 1.7 comes out.

Thank you very much

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly
>  
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
> '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 95: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692571#comment-16692571
 ] 

xuqianjin commented on FLINK-10926:
---

hi [~xueyu7452] It was found in version 1.6.2, but I'm sure it's the same in 
other versions.

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly
>  
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
> '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 95: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Affects Version/s: 1.6.2

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.6.2
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly
>  
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
> '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 95: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691760#comment-16691760
 ] 

xuqianjin commented on FLINK-10926:
---

[~xueyu] 

val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
'2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")

I still get an error with this call.

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly
>  
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
> '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 95: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Description: 
Use the following SQL statement:

val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
09:00:00','2012-08-30 12:00:00')")

The following errors occurred:

Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 72: No match found for function signature 
TIMESTAMPDIFF(, , )
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Expect to be able to return the time difference correctly

 

val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
'2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")

Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 95: No match found for function signature 
TIMESTAMPDIFF(, , )
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

  was:
Use the following SQL statement:

val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
09:00:00','2012-08-30 12:00:00')")

The following errors occurred:

Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 72: No match found for function signature 
TIMESTAMPDIFF(, , )
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Expect to be able to return the time difference correctly


> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly
>  
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF (MINUTE, TIMESTAMP 
> '2012-08-24 09:00:00', TIMESTAMP '2012-08-30 12:00:00')")
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 95: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Attachment: image-2018-11-19-22-23-09-554.png

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png, 
> image-2018-11-19-22-23-09-554.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Description: 
Use the following SQL statement:

val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
09:00:00','2012-08-30 12:00:00')")

The following errors occurred:

Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 72: No match found for function signature 
TIMESTAMPDIFF(, , )
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Expect to be able to return the time difference correctly

  was:
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

The following errors occurred:

Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 72: No match found for function signature 
TIMESTAMPDIFF(, , )
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Expect to be able to return the time difference correctly


> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png
>
>
> Use the following SQL statement:
> val result3 = tEnv.sqlQuery("select TIMESTAMPDIFF(MINUTE,'2012-08-24 
> 09:00:00','2012-08-30 12:00:00')")
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Description: 
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

The following errors occurred:

Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 72: No match found for function signature 
TIMESTAMPDIFF(, , )
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

Expect to be able to return the time difference correctly

  was:
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

Expect to be able to return the time difference correctly

!image-2018-11-19-18-33-47-389.png!


> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png
>
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> The following errors occurred:
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 72: No match found for function signature 
> TIMESTAMPDIFF(, , )
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Attachment: (was: image-2018-11-19-18-30-59-158.png)

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-33-47-389.png
>
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly
> !image-2018-11-19-18-33-47-389.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Description: 
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

Expect to be able to return the time difference correctly

!image-2018-11-19-18-33-47-389.png!

  was:
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

Expect to be able to return the time difference correctly

!image-2018-11-19-18-30-59-158.png!


> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-30-59-158.png, 
> image-2018-11-19-18-33-47-389.png
>
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly
> !image-2018-11-19-18-33-47-389.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Attachment: image-2018-11-19-18-33-47-389.png

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-30-59-158.png, 
> image-2018-11-19-18-33-47-389.png
>
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly
> !image-2018-11-19-18-30-59-158.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Fix the problem for function TIMESTAMPDIFF in Table

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
 Attachment: image-2018-11-19-18-30-59-158.png
Description: 
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

Expect to be able to return the time difference correctly

!image-2018-11-19-18-30-59-158.png!

  was:
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

Expect to be able to return the time difference correctly

 Issue Type: Bug  (was: Task)
Summary: Fix the problem for function TIMESTAMPDIFF in Table  (was: Add 
TIMESTAMPDIFF math function supported in Table API and SQL)

> Fix the problem for function TIMESTAMPDIFF in Table
> ---
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
> Attachments: image-2018-11-19-18-30-59-158.png
>
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly
> !image-2018-11-19-18-30-59-158.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Add TIMESTAMPDIFF math function supported in Table API and SQL

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Description: 
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 12:00:00')}}

Expect to be able to return the time difference correctly

  was:
Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, 'the 2012-08-24 09:00:00', '2012-08-30 
12:00:00')}}

Expect to be able to return the time difference correctly


> Add TIMESTAMPDIFF math function supported in Table API and SQL
> --
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Task
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-10926) Add TIMESTAMPDIFF math function supported in Table API and SQL

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Comment: was deleted

(was: I think this function is necessary. I would like to try to implement it myself.)

> Add TIMESTAMPDIFF math function supported in Table API and SQL
> --
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Task
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, '2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10926) Add TIMESTAMPDIFF math function supported in Table API and SQL

2018-11-19 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691503#comment-16691503
 ] 

xuqianjin commented on FLINK-10926:
---

I think this function is necessary. I would like to try to implement it myself.

> Add TIMESTAMPDIFF math function supported in Table API and SQL
> --
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Task
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, 'the 2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10926) Add TIMESTAMPDIFF math function supported in Table API and SQL

2018-11-19 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10926:
--
Summary: Add TIMESTAMPDIFF math function supported in Table API and SQL  
(was: Add TIMESTAMPDIFFmath function supported in Table API and SQL)

> Add TIMESTAMPDIFF math function supported in Table API and SQL
> --
>
> Key: FLINK-10926
> URL: https://issues.apache.org/jira/browse/FLINK-10926
> Project: Flink
>  Issue Type: Task
>  Components: Table API  SQL
>Reporter: xuqianjin
>Priority: Minor
>
> Use the following SQL statement:
> {{select TIMESTAMPDIFF (MINUTE, 'the 2012-08-24 09:00:00', '2012-08-30 
> 12:00:00')}}
> Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10926) Add TIMESTAMPDIFFmath function supported in Table API and SQL

2018-11-19 Thread xuqianjin (JIRA)
xuqianjin created FLINK-10926:
-

 Summary: Add TIMESTAMPDIFFmath function supported in Table API and 
SQL
 Key: FLINK-10926
 URL: https://issues.apache.org/jira/browse/FLINK-10926
 Project: Flink
  Issue Type: Task
  Components: Table API  SQL
Reporter: xuqianjin


Use the following SQL statement:

{{select TIMESTAMPDIFF (MINUTE, 'the 2012-08-24 09:00:00', '2012-08-30 
12:00:00')}}

Expect to be able to return the time difference correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10009) Fix the casting problem for function TIMESTAMPADD in Table

2018-11-19 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691450#comment-16691450
 ] 

xuqianjin commented on FLINK-10009:
---

[~RuidongLi] [~xccui] This issue has not been fixed yet; I would like to try to fix it.

> Fix the casting problem for function TIMESTAMPADD in Table
> --
>
> Key: FLINK-10009
> URL: https://issues.apache.org/jira/browse/FLINK-10009
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Reporter: Xingcan Cui
>Assignee: Ruidong Li
>Priority: Major
>
> There seems to be a bug in {{TIMESTAMPADD}} function. For example, 
> {{TIMESTAMPADD(MINUTE, 1, DATE '2016-06-15')}} throws a 
> {{ClassCastException}} ( java.lang.Integer cannot be cast to java.lang.Long). 
> Actually, it tries to cast an integer date to a long timestamp in 
> RexBuilder.java:1524 - {{return TimestampString.fromMillisSinceEpoch((Long) 
> o)}}.
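
A hedged sketch follows; it is not taken from the report. It places the failing
expression next to a CAST-based variant that feeds TIMESTAMPADD a TIMESTAMP
instead of a DATE. The helper name and the CAST idea are assumptions and have
not been verified as a workaround on the affected versions.

{code:scala}
import org.apache.flink.table.api.{Table, TableEnvironment}

// Hypothetical helper object (names invented): builds both queries against
// whatever TableEnvironment the caller provides.
object TimestampAddQueries {
  def build(tEnv: TableEnvironment): (Table, Table) = {
    // Shape reported to throw ClassCastException (Integer cannot be cast to Long).
    val failing = tEnv.sqlQuery("SELECT TIMESTAMPADD(MINUTE, 1, DATE '2016-06-15')")
    // Illustrative variant only: hand TIMESTAMPADD a TIMESTAMP instead of a DATE.
    val castVariant = tEnv.sqlQuery(
      "SELECT TIMESTAMPADD(MINUTE, 1, CAST(DATE '2016-06-15' AS TIMESTAMP))")
    (failing, castVariant)
  }
}
{code}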



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10729) Create a Hive connector for Hive data access in Flink

2018-11-18 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690840#comment-16690840
 ] 

xuqianjin commented on FLINK-10729:
---

This feature is great. I hope to see the task breakdown and design, and I would also like to join 
the effort.

> Create a Hive connector for Hive data access in Flink
> -
>
> Key: FLINK-10729
> URL: https://issues.apache.org/jira/browse/FLINK-10729
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API  SQL
>Affects Versions: 1.6.2
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
>Priority: Major
>
> As part of Flink-Hive integration effort, it's important for Flink to access 
> (read/write) Hive data, which is the responsibility of Hive connector. While 
> there is a HCatalog data connector in the code base, it's not complete (i.e. 
> missing all connector related classes such as validators, etc.). Further, 
> HCatalog interface has many limitations such as accessing a subset of Hive 
> data, supporting a subset of Hive data types, etc. In addition, it's not 
> actively maintained. In fact, it's now only a sub-project in Hive.
> Therefore, here we propose a complete connector set for Hive tables, not via 
> HCatalog, but via direct Hive interface. HCatalog connector will be 
> deprecated.
> Please note that connector on Hive metadata is already covered in other 
> JIRAs, as {{HiveExternalCatalog}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10134) UTF-16 support for TextInputFormat

2018-10-18 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654945#comment-16654945
 ] 

xuqianjin commented on FLINK-10134:
---

@[~till.rohrmann] I will take the time to submit the PR.

> UTF-16 support for TextInputFormat
> --
>
> Key: FLINK-10134
> URL: https://issues.apache.org/jira/browse/FLINK-10134
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.4.2
>Reporter: David Dreyfus
>Priority: Critical
>  Labels: pull-request-available
>
> It does not appear that Flink supports a charset encoding of "UTF-16". It 
> particular, it doesn't appear that Flink consumes the Byte Order Mark (BOM) 
> to establish whether a UTF-16 file is UTF-16LE or UTF-16BE.
>  
> TextInputFormat.setCharset("UTF-16") calls DelimitedInputFormat.setCharset(), 
> which sets TextInputFormat.charsetName and then modifies the previously set 
> delimiterString to construct the proper byte string encoding of the 
> delimiter. This same charsetName is also used in TextInputFormat.readRecord() 
> to interpret the bytes read from the file.
>  
> There are two problems that this implementation would seem to have when using 
> UTF-16.
>  # delimiterString.getBytes(getCharset()) in DelimitedInputFormat.java will 
> return a Big Endian byte sequence including the Byte Order Mark (BOM). The 
> actual text file will not contain a BOM at each line ending, so the delimiter 
> will never be read. Moreover, if the actual byte encoding of the file is 
> Little Endian, the bytes will be interpreted incorrectly.
>  # TextInputFormat.readRecord() will not see a BOM each time it decodes a 
> byte sequence with the String(bytes, offset, numBytes, charset) call. 
> Therefore, it will assume Big Endian, which may not always be correct. [1] 
> [https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/io/TextInputFormat.java#L95]
>  
> While there are likely many solutions, I would think that all of them would 
> have to start by reading the BOM from the file when a Split is opened and 
> then using that BOM to modify the specified encoding to a BOM specific one 
> when the caller doesn't specify one, and to overwrite the caller's 
> specification if the BOM is in conflict with the caller's specification. That 
> is, if the BOM indicates Little Endian and the caller indicates UTF-16BE, 
> Flink should rewrite the charsetName as UTF-16LE.
>  I hope this makes sense and that I haven't been testing incorrectly or 
> misreading the code.
>  
> I've verified the problem on version 1.4.2. I believe the problem exists on 
> all versions. 
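
Purely as an illustration of the BOM handling argued for above, here is a
hypothetical helper; it is not Flink code and the name and shape are invented.
It inspects the first two bytes of a split and lets a conflicting BOM override
the caller's requested UTF-16 variant.

{code:scala}
object Utf16CharsetResolver {
  // Sketch of the proposal above: a BOM read from the start of a split decides
  // (or overrides) which UTF-16 variant is used to decode that split.
  def resolveUtf16Charset(requested: String, firstBytes: Array[Byte]): String = {
    val littleEndianBom = firstBytes.length >= 2 &&
      (firstBytes(0) & 0xFF) == 0xFF && (firstBytes(1) & 0xFF) == 0xFE
    val bigEndianBom = firstBytes.length >= 2 &&
      (firstBytes(0) & 0xFF) == 0xFE && (firstBytes(1) & 0xFF) == 0xFF

    requested.toUpperCase match {
      case "UTF-16" if littleEndianBom   => "UTF-16LE" // BOM disambiguates
      case "UTF-16" if bigEndianBom      => "UTF-16BE"
      case "UTF-16BE" if littleEndianBom => "UTF-16LE" // BOM wins over the caller
      case "UTF-16LE" if bigEndianBom    => "UTF-16BE"
      case _                             => requested  // no BOM or no conflict
    }
  }
}
{code}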



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-10425) taskmaster.host is not respected

2018-09-28 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-10425:
--
Comment: was deleted

(was: Hi [~yanghua], has this bug been fixed yet? I would like to reproduce and fix it.)

> taskmaster.host is not respected
> 
>
> Key: FLINK-10425
> URL: https://issues.apache.org/jira/browse/FLINK-10425
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.6.1
>Reporter: Andrew Kowpak
>Assignee: vinoyang
>Priority: Major
>
> The documentation states that taskmanager.host can be set to override the 
> discovered hostname, however, setting this value has no effect.
> Looking at the code, the value never seems to be used.  Instead, the 
> deprecated taskmanager.hostname is still used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10425) taskmaster.host is not respected

2018-09-28 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631559#comment-16631559
 ] 

xuqianjin commented on FLINK-10425:
---

Hi [~yanghua], has this bug been fixed yet? I would like to reproduce and fix it.

> taskmaster.host is not respected
> 
>
> Key: FLINK-10425
> URL: https://issues.apache.org/jira/browse/FLINK-10425
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.6.1
>Reporter: Andrew Kowpak
>Assignee: vinoyang
>Priority: Major
>
> The documentation states that taskmanager.host can be set to override the 
> discovered hostname, however, setting this value has no effect.
> Looking at the code, the value never seems to be used.  Instead, the 
> deprecated taskmanager.hostname is still used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10361) Elasticsearch (v6.3.1) sink end-to-end test instable

2018-09-28 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631498#comment-16631498
 ] 

xuqianjin commented on FLINK-10361:
---

[~till.rohrmann] Has this bug been fixed in version 1.7?

> Elasticsearch (v6.3.1) sink end-to-end test instable
> 
>
> Key: FLINK-10361
> URL: https://issues.apache.org/jira/browse/FLINK-10361
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.7.0
>
> Attachments: flink-elasticsearch-logs.tgz
>
>
> The Elasticsearch (v6.3.1) sink end-to-end test is instable. Running it on an 
> Amazon instance it failed with the following exception in the logs:
> {code}
> 2018-09-17 20:46:04,856 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
> (cb23fdd9df0d4e09270b2ae9970efbac) switched from RUNNING to FAILED.
> java.io.IOException: Connection refused
>   at 
> org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:728)
>   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
>   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
>   at 
> org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
>   at 
> org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:275)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.createClient(Elasticsearch6ApiCallBridge.java:81)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.createClient(Elasticsearch6ApiCallBridge.java:47)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:296)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:424)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:290)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:171)
>   at 
> org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:145)
>   at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
>   at 
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192)
>   at 
> org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
>   ... 1 more
> {code}
> I assume that we should harden the test against connection problems a little 
> bit better.
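
One possible hardening along the lines suggested above is a simple retry around
the initial ping. The sketch below is a generic, hypothetical helper; the
attempt count, delay, and the commented usage line are assumptions, not taken
from the test code.

{code:scala}
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object Retry {
  // Retry a transiently failing action (such as the initial cluster ping)
  // a fixed number of times, pausing between attempts, before giving up.
  @tailrec
  def retry[T](attempts: Int, delayMillis: Long)(action: => T): T =
    Try(action) match {
      case Success(value) => value
      case Failure(_) if attempts > 1 =>
        Thread.sleep(delayMillis)
        retry(attempts - 1, delayMillis)(action)
      case Failure(error) => throw error
    }
}

// Possible usage around the step that fails in the trace above:
//   val pinged = Retry.retry(attempts = 5, delayMillis = 2000L) { client.ping() }
{code}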



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8851) SQL Client fails if same file is used as default and env configuration

2018-09-27 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630146#comment-16630146
 ] 

xuqianjin commented on FLINK-8851:
--

@[~twalthr] Sorry, I have verified that this bug does not exist.

> SQL Client fails if same file is used as default and env configuration
> --
>
> Key: FLINK-8851
> URL: https://issues.apache.org/jira/browse/FLINK-8851
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.5.5
>
>
> Specifying the same file as default and environment configuration yields the 
> following exception
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:156)
> Caused by: java.lang.UnsupportedOperationException
>     at java.util.AbstractMap.put(AbstractMap.java:209)
>     at java.util.AbstractMap.putAll(AbstractMap.java:281)
>     at 
> org.apache.flink.table.client.config.Environment.merge(Environment.java:107)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.createEnvironment(LocalExecutor.java:461)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.listTables(LocalExecutor.java:203)
>     at 
> org.apache.flink.table.client.cli.CliClient.callShowTables(CliClient.java:270)
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:198)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:97)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:146){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-8851) SQL Client fails if same file is used as default and env configuration

2018-09-26 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin updated FLINK-8851:
-
Comment: was deleted

(was: I verified on version 1.6 that using the same configuration file does trigger this bug; I 
will try to fix it.)

> SQL Client fails if same file is used as default and env configuration
> --
>
> Key: FLINK-8851
> URL: https://issues.apache.org/jira/browse/FLINK-8851
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.5.5
>
>
> Specifying the same file as default and environment configuration yields the 
> following exception
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:156)
> Caused by: java.lang.UnsupportedOperationException
>     at java.util.AbstractMap.put(AbstractMap.java:209)
>     at java.util.AbstractMap.putAll(AbstractMap.java:281)
>     at 
> org.apache.flink.table.client.config.Environment.merge(Environment.java:107)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.createEnvironment(LocalExecutor.java:461)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.listTables(LocalExecutor.java:203)
>     at 
> org.apache.flink.table.client.cli.CliClient.callShowTables(CliClient.java:270)
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:198)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:97)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:146){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-8851) SQL Client fails if same file is used as default and env configuration

2018-09-26 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628299#comment-16628299
 ] 

xuqianjin edited comment on FLINK-8851 at 9/26/18 6:36 AM:
---

I verified on version 1.6 that using the same configuration file does trigger this bug; I will 
try to fix it.


was (Author: x1q1j1):
I verified that using the same configuration file does trigger this bug; I will try to fix it.

> SQL Client fails if same file is used as default and env configuration
> --
>
> Key: FLINK-8851
> URL: https://issues.apache.org/jira/browse/FLINK-8851
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.5.5
>
>
> Specifying the same file as default and environment configuration yields the 
> following exception
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:156)
> Caused by: java.lang.UnsupportedOperationException
>     at java.util.AbstractMap.put(AbstractMap.java:209)
>     at java.util.AbstractMap.putAll(AbstractMap.java:281)
>     at 
> org.apache.flink.table.client.config.Environment.merge(Environment.java:107)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.createEnvironment(LocalExecutor.java:461)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.listTables(LocalExecutor.java:203)
>     at 
> org.apache.flink.table.client.cli.CliClient.callShowTables(CliClient.java:270)
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:198)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:97)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:146){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8851) SQL Client fails if same file is used as default and env configuration

2018-09-26 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628299#comment-16628299
 ] 

xuqianjin commented on FLINK-8851:
--

I verified that using the same configuration file does trigger this bug; I will try to fix it.

> SQL Client fails if same file is used as default and env configuration
> --
>
> Key: FLINK-8851
> URL: https://issues.apache.org/jira/browse/FLINK-8851
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.5.5
>
>
> Specifying the same file as default and environment configuration yields the 
> following exception
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:156)
> Caused by: java.lang.UnsupportedOperationException
>     at java.util.AbstractMap.put(AbstractMap.java:209)
>     at java.util.AbstractMap.putAll(AbstractMap.java:281)
>     at 
> org.apache.flink.table.client.config.Environment.merge(Environment.java:107)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.createEnvironment(LocalExecutor.java:461)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.listTables(LocalExecutor.java:203)
>     at 
> org.apache.flink.table.client.cli.CliClient.callShowTables(CliClient.java:270)
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:198)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:97)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:146){code}
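
As a side note, the top two frames of the quoted trace (AbstractMap.putAll
delegating to AbstractMap.put) can be reproduced in isolation. The sketch below
is only an illustration of that failure mode, not the actual Environment.merge
code, and the property key/value are arbitrary placeholders.

{code:scala}
import java.util.Collections

object MergeIntoUnmodifiableMap {
  def main(args: Array[String]): Unit = {
    // Collections.emptyMap() inherits AbstractMap.put without overriding it,
    // so putAll ends in the same UnsupportedOperationException as the trace.
    val target: java.util.Map[String, String] = Collections.emptyMap[String, String]()
    try {
      target.putAll(Collections.singletonMap("execution.type", "streaming"))
    } catch {
      case e: UnsupportedOperationException =>
        println(s"merge-style putAll failed: $e")
    }
  }
}
{code}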



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-8851) SQL Client fails if same file is used as default and env configuration

2018-09-20 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623073#comment-16623073
 ] 

xuqianjin edited comment on FLINK-8851 at 9/21/18 5:09 AM:
---

[~fhueske] [~twalthr] Hello, I would like to try to reproduce and fix this bug.


was (Author: x1q1j1):
[~fhueske] [~twalthr] Hello, I would like to try to reproduce and fix this bug.

> SQL Client fails if same file is used as default and env configuration
> --
>
> Key: FLINK-8851
> URL: https://issues.apache.org/jira/browse/FLINK-8851
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.5.5
>
>
> Specifying the same file as default and environment configuration yields the 
> following exception
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:156)
> Caused by: java.lang.UnsupportedOperationException
>     at java.util.AbstractMap.put(AbstractMap.java:209)
>     at java.util.AbstractMap.putAll(AbstractMap.java:281)
>     at 
> org.apache.flink.table.client.config.Environment.merge(Environment.java:107)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.createEnvironment(LocalExecutor.java:461)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.listTables(LocalExecutor.java:203)
>     at 
> org.apache.flink.table.client.cli.CliClient.callShowTables(CliClient.java:270)
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:198)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:97)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:146){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8851) SQL Client fails if same file is used as default and env configuration

2018-09-20 Thread xuqianjin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623073#comment-16623073
 ] 

xuqianjin commented on FLINK-8851:
--

[~fhueske] [~twalthr] Hello, I would like to try to reproduce and fix this bug.

> SQL Client fails if same file is used as default and env configuration
> --
>
> Key: FLINK-8851
> URL: https://issues.apache.org/jira/browse/FLINK-8851
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.5.5
>
>
> Specifying the same file as default and environment configuration yields the 
> following exception
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:156)
> Caused by: java.lang.UnsupportedOperationException
>     at java.util.AbstractMap.put(AbstractMap.java:209)
>     at java.util.AbstractMap.putAll(AbstractMap.java:281)
>     at 
> org.apache.flink.table.client.config.Environment.merge(Environment.java:107)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.createEnvironment(LocalExecutor.java:461)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.listTables(LocalExecutor.java:203)
>     at 
> org.apache.flink.table.client.cli.CliClient.callShowTables(CliClient.java:270)
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:198)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:97)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:146){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

