[jira] [Commented] (FLINK-22071) LookupTableSource.LookupRuntimeProvider customizes parallelism

2021-04-22 Thread Wong Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327233#comment-17327233
 ] 

Wong Mulan commented on FLINK-22071:


Could we add a ParallelTableFunctionProvider interface?
{code:java}
interface ParallelTableFunctionProvider
        extends TableFunctionProvider, ParallelismProvider {
}
{code}

and set this parallelism in the CommonLookupJoin class.
For example:
{code:java}
val finalParallelism = if (parallelism > 0) { 
  parallelism 
} else { 
  inputTransformation.getParallelism 
} 
ExecNode.createOneInputTransformation( 
  inputTransformation, 
  getRelDetailedDescription, 
  operatorFactory, 
  InternalTypeInfo.of(resultRowType), 
  finalParallelism)
{code}
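To make the proposal concrete, here is a minimal, self-contained sketch of how such an interface could fit together. The interfaces below are simplified stand-ins, not Flink's real TableFunctionProvider and ParallelismProvider, and the names and fallback logic are only illustrative of the idea above:

```java
import java.util.Optional;

// Simplified stand-ins for Flink's interfaces; illustrative only.
interface ParallelismProvider {
    // Empty means "no custom parallelism; fall back to the input's parallelism".
    default Optional<Integer> getParallelism() {
        return Optional.empty();
    }
}

interface TableFunctionProvider {
    // In Flink this exposes the lookup TableFunction; elided here.
}

// The proposed combining interface: a lookup runtime provider that can
// also declare its own parallelism.
interface ParallelTableFunctionProvider
        extends TableFunctionProvider, ParallelismProvider {
}

public class ParallelLookupSketch {
    public static void main(String[] args) {
        ParallelTableFunctionProvider provider = new ParallelTableFunctionProvider() {
            @Override
            public Optional<Integer> getParallelism() {
                return Optional.of(4);
            }
        };
        // Mirrors the fallback logic in the Scala snippet above: use the
        // provider's value if set, else the input transformation's parallelism.
        int inputParallelism = 8;
        int finalParallelism = provider.getParallelism().orElse(inputParallelism);
        System.out.println(finalParallelism); // prints 4
    }
}
```

The Optional-based fallback mirrors how sink-side ParallelismProvider implementations already behave: an empty result leaves the planner free to reuse the upstream parallelism.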

> LookupTableSource.LookupRuntimeProvider customizes parallelism
> --
>
> Key: FLINK-22071
> URL: https://issues.apache.org/jira/browse/FLINK-22071
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Wong Mulan
>Priority: Major
>
> Currently, a sink table can customize its parallelism, but a lookup table
> cannot. Could we add this capability?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.

2021-04-20 Thread Wong Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325759#comment-17325759
 ] 

Wong Mulan edited comment on FLINK-22372 at 4/20/21, 12:55 PM:
---

Please assign this issue to me; I want to fix it. Thanks. cc [~twalthr]


was (Author: ana4):
Please assign this issue to me; I want to fix it. Thanks.

> Rename LogicalTypeCasts class variables in the castTo method.
> -
>
> Key: FLINK-22372
> URL: https://issues.apache.org/jira/browse/FLINK-22372
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Wong Mulan
>Priority: Minor
>  Labels: pull-request-available
>
> The castTo parameter has a misleading name. It should be 'targetType'.
> {code:java}
> // Old
> private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
> return new CastingRuleBuilder(sourceType);
> }
> // New
> private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
> return new CastingRuleBuilder(targetType);
> }
> {code}



--


[jira] [Updated] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.

2021-04-20 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan updated FLINK-22372:
---
Description: 
The castTo parameter has a misleading name. It should be 'targetType'.
{code:java}
// Old
private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
return new CastingRuleBuilder(sourceType);
}
// New
private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
return new CastingRuleBuilder(targetType);
}
{code}

  was:
The castTo parameter has a misleading name. It should be 'targetType'.
{code:java}
// Old
private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
return new CastingRuleBuilder(sourceType);
}
// New
private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
return new CastingRuleBuilder(targetType);
}
{code}



> Rename LogicalTypeCasts class variables in the castTo method.
> -
>
> Key: FLINK-22372
> URL: https://issues.apache.org/jira/browse/FLINK-22372
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Wong Mulan
>Priority: Minor
>
> The castTo parameter has a misleading name. It should be 'targetType'.
> {code:java}
> // Old
> private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
> return new CastingRuleBuilder(sourceType);
> }
> // New
> private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
> return new CastingRuleBuilder(targetType);
> }
> {code}



--


[jira] [Updated] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.

2021-04-20 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan updated FLINK-22372:
---
Description: 
The castTo parameter has a misleading name. It should be 'targetType'.
{code:java}
// Old
private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
return new CastingRuleBuilder(sourceType);
}
// New
private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
return new CastingRuleBuilder(targetType);
}
{code}


  was:

{code:java}
// Old
private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
return new CastingRuleBuilder(sourceType);
}
// New
private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
return new CastingRuleBuilder(targetType);
}
{code}



> Rename LogicalTypeCasts class variables in the castTo method.
> -
>
> Key: FLINK-22372
> URL: https://issues.apache.org/jira/browse/FLINK-22372
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Wong Mulan
>Priority: Minor
>
> The castTo parameter has a misleading name. It should be 'targetType'.
> {code:java}
> // Old
> private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
> return new CastingRuleBuilder(sourceType);
> }
> // New
> private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
> return new CastingRuleBuilder(targetType);
> }
> {code}



--


[jira] [Updated] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.

2021-04-20 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan updated FLINK-22372:
---
Component/s: (was: Table SQL / Client)
 Table SQL / API

> Rename LogicalTypeCasts class variables in the castTo method.
> -
>
> Key: FLINK-22372
> URL: https://issues.apache.org/jira/browse/FLINK-22372
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Wong Mulan
>Priority: Minor
>
> {code:java}
> // Old
> private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
> return new CastingRuleBuilder(sourceType);
> }
> // New
> private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
> return new CastingRuleBuilder(targetType);
> }
> {code}



--


[jira] [Commented] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.

2021-04-20 Thread Wong Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325759#comment-17325759
 ] 

Wong Mulan commented on FLINK-22372:


Please assign this issue to me; I want to fix it. Thanks.

> Rename LogicalTypeCasts class variables in the castTo method.
> -
>
> Key: FLINK-22372
> URL: https://issues.apache.org/jira/browse/FLINK-22372
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Wong Mulan
>Priority: Minor
>
> {code:java}
> // Old
> private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
> return new CastingRuleBuilder(sourceType);
> }
> // New
> private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
> return new CastingRuleBuilder(targetType);
> }
> {code}



--


[jira] [Created] (FLINK-22372) Rename LogicalTypeCasts class variables in the castTo method.

2021-04-20 Thread Wong Mulan (Jira)
Wong Mulan created FLINK-22372:
--

 Summary: Rename LogicalTypeCasts class variables in the castTo 
method.
 Key: FLINK-22372
 URL: https://issues.apache.org/jira/browse/FLINK-22372
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client
Affects Versions: 1.12.3, 1.13.1
Reporter: Wong Mulan



{code:java}
// Old
private static CastingRuleBuilder castTo(LogicalTypeRoot sourceType) {
return new CastingRuleBuilder(sourceType);
}
// New
private static CastingRuleBuilder castTo(LogicalTypeRoot targetType) {
return new CastingRuleBuilder(targetType);
}
{code}




--


[jira] [Created] (FLINK-22071) LookupTableSource.LookupRuntimeProvider customizes parallelism

2021-03-31 Thread Wong Mulan (Jira)
Wong Mulan created FLINK-22071:
--

 Summary: LookupTableSource.LookupRuntimeProvider customizes 
parallelism
 Key: FLINK-22071
 URL: https://issues.apache.org/jira/browse/FLINK-22071
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.13.0
Reporter: Wong Mulan


Currently, a sink table can customize its parallelism, but a lookup table
cannot. Could we add this capability?



--


[jira] [Commented] (FLINK-20696) Yarn Session Blob Directory is not deleted.

2021-03-26 Thread Wong Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309392#comment-17309392
 ] 

Wong Mulan commented on FLINK-20696:


The JobManager is always running; it is never killed.

> Yarn Session Blob Directory is not deleted.
> ---
>
> Key: FLINK-20696
> URL: https://issues.apache.org/jira/browse/FLINK-20696
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.0, 1.13.0
>Reporter: Wong Mulan
>Priority: Major
> Fix For: 1.13.0
>
> Attachments: image-2020-12-21-16-47-37-278.png
>
>
> The job has finished, but its blob directory is not deleted.
> There is a small probability that this problem occurs when I submit many
> jobs.
>   !image-2020-12-21-16-47-37-278.png!



--


[jira] [Commented] (FLINK-20970) DECIMAL(10, 0) can not be GROUP BY key.

2021-02-04 Thread Wong Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17279297#comment-17279297
 ] 

Wong Mulan commented on FLINK-20970:


The problem does not occur in versions 1.11.3 and 1.12.1.

> DECIMAL(10, 0) can not be GROUP BY key.
> ---
>
> Key: FLINK-20970
> URL: https://issues.apache.org/jira/browse/FLINK-20970
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.1
>Reporter: Wong Mulan
>Priority: Major
> Attachments: image-2021-01-14-17-06-28-648.png
>
>
> If a value whose type is not DECIMAL(38, 18) is used as the GROUP BY key, the
> result will be -1.
> So, can only DECIMAL(38, 18) be used as a GROUP BY key?
> Whatever the value is, -1 is returned.
>  !image-2021-01-14-17-06-28-648.png|thumbnail! 



--


[jira] [Updated] (FLINK-20970) DECIMAL(10, 0) can not be GROUP BY key.

2021-01-14 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan updated FLINK-20970:
---
Description: 
If a value whose type is not DECIMAL(38, 18) is used as the GROUP BY key, the
result will be -1.
So, can only DECIMAL(38, 18) be used as a GROUP BY key?
Whatever the value is, -1 is returned.
 !image-2021-01-14-17-06-28-648.png|thumbnail! 


  was:
If a value whose type is not DECIMAL(38, 18) is used as the GROUP BY key, the
result will be -1.
So, can only DECIMAL(38, 18) be used as a GROUP BY key?
 !image-2021-01-14-17-06-28-648.png|thumbnail! 



> DECIMAL(10, 0) can not be GROUP BY key.
> ---
>
> Key: FLINK-20970
> URL: https://issues.apache.org/jira/browse/FLINK-20970
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.1
>Reporter: Wong Mulan
>Priority: Major
> Attachments: image-2021-01-14-17-06-28-648.png
>
>
> If a value whose type is not DECIMAL(38, 18) is used as the GROUP BY key, the
> result will be -1.
> So, can only DECIMAL(38, 18) be used as a GROUP BY key?
> Whatever the value is, -1 is returned.
>  !image-2021-01-14-17-06-28-648.png|thumbnail! 



--


[jira] [Created] (FLINK-20970) DECIMAL(10, 0) can not be GROUP BY key.

2021-01-14 Thread Wong Mulan (Jira)
Wong Mulan created FLINK-20970:
--

 Summary: DECIMAL(10, 0) can not be GROUP BY key.
 Key: FLINK-20970
 URL: https://issues.apache.org/jira/browse/FLINK-20970
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.10.1
Reporter: Wong Mulan
 Attachments: image-2021-01-14-17-06-28-648.png

If a value whose type is not DECIMAL(38, 18) is used as the GROUP BY key, the
result will be -1.
So, can only DECIMAL(38, 18) be used as a GROUP BY key?
 !image-2021-01-14-17-06-28-648.png|thumbnail! 




--


[jira] [Created] (FLINK-20965) BigDecimalTypeInfo can not be converted.

2021-01-13 Thread Wong Mulan (Jira)
Wong Mulan created FLINK-20965:
--

 Summary: BigDecimalTypeInfo can not be converted.
 Key: FLINK-20965
 URL: https://issues.apache.org/jira/browse/FLINK-20965
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.10.1
Reporter: Wong Mulan
 Attachments: image-2021-01-14-10-56-07-949.png, 
image-2021-01-14-10-59-03-656.png

LegacyTypeInfoDataTypeConverter#toDataType cannot correctly convert
BigDecimalTypeInfo.

Types.BIG_DEC does not cover BigDecimalTypeInfo.

!image-2021-01-14-10-56-07-949.png!

!image-2021-01-14-10-59-03-656.png!
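One plausible way to picture the failure, using simplified stand-in classes rather than Flink's actual type-info hierarchy (the equality rule below is an assumption for illustration): if the converter's mapping is keyed on Types.BIG_DEC and compared via equals(), a subclass instance such as BigDecimalTypeInfo never matches, so no DataType is found.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins; Flink's real BasicTypeInfo/BigDecimalTypeInfo differ.
class BasicTypeInfo {
    final Class<?> clazz;
    BasicTypeInfo(Class<?> clazz) { this.clazz = clazz; }

    @Override
    public boolean equals(Object o) {
        // Strict same-class equality: a subclass instance never matches.
        return o != null && getClass() == o.getClass()
                && clazz == ((BasicTypeInfo) o).clazz;
    }

    @Override
    public int hashCode() { return clazz.hashCode(); }
}

// Carries precision/scale, like Flink's BigDecimalTypeInfo.
class BigDecimalTypeInfo extends BasicTypeInfo {
    final int precision, scale;
    BigDecimalTypeInfo(int precision, int scale) {
        super(java.math.BigDecimal.class);
        this.precision = precision;
        this.scale = scale;
    }
}

public class TypeInfoLookupSketch {
    public static void main(String[] args) {
        BasicTypeInfo BIG_DEC = new BasicTypeInfo(java.math.BigDecimal.class);
        Map<BasicTypeInfo, String> conversions = new HashMap<>();
        conversions.put(BIG_DEC, "DECIMAL");

        // The subclass instance does not match the BIG_DEC entry,
        // so the converter finds no mapping for it.
        System.out.println(conversions.get(new BigDecimalTypeInfo(38, 18))); // prints null
    }
}
```

A fix along these lines would need the converter to recognize the subclass explicitly (or match on the runtime class), rather than relying on the BIG_DEC constant alone.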



--


[jira] [Commented] (FLINK-20875) Could patch CVE-2020-17518 to version 1.10

2021-01-07 Thread Wong Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17260928#comment-17260928
 ] 

Wong Mulan commented on FLINK-20875:


Thanks. It's important to create a dedicated 1.10.3 release.

> Could patch CVE-2020-17518 to version 1.10
> --
>
> Key: FLINK-20875
> URL: https://issues.apache.org/jira/browse/FLINK-20875
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.2
>Reporter: Wong Mulan
>Priority: Major
>
> Many production Flink jobs are running on version 1.10.



--


[jira] [Created] (FLINK-20875) Could patch CVE-2020-17518 to version 1.10

2021-01-06 Thread Wong Mulan (Jira)
Wong Mulan created FLINK-20875:
--

 Summary: Could patch CVE-2020-17518 to version 1.10
 Key: FLINK-20875
 URL: https://issues.apache.org/jira/browse/FLINK-20875
 Project: Flink
  Issue Type: Bug
Reporter: Wong Mulan


Many production Flink jobs are running on version 1.10.



--


[jira] [Reopened] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-12-30 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan reopened FLINK-19962:


> fix doc: more examples for expanding arrays into a relation
> ---
>
> Key: FLINK-19962
> URL: https://issues.apache.org/jira/browse/FLINK-19962
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Wong Mulan
>Assignee: Wong Mulan
>Priority: Minor
>  Labels: pull-request-available
>
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
> The 'Expanding arrays into a relation' section is too brief; the usage is
> hard to understand from this section alone.
> More examples should be added, for instance:
> {quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
> {quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
> {quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}
> Alternatively, add a description of the table fields; for example, that the
> 'set' field of table t2 has an array or map type.



--


[jira] [Resolved] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-12-30 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan resolved FLINK-19962.

Resolution: Fixed

> fix doc: more examples for expanding arrays into a relation
> ---
>
> Key: FLINK-19962
> URL: https://issues.apache.org/jira/browse/FLINK-19962
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Wong Mulan
>Assignee: Wong Mulan
>Priority: Minor
>  Labels: pull-request-available
>
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
> The 'Expanding arrays into a relation' section is too brief; the usage is
> hard to understand from this section alone.
> More examples should be added, for instance:
> {quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
> {quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
> {quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}
> Alternatively, add a description of the table fields; for example, that the
> 'set' field of table t2 has an array or map type.



--


[jira] [Commented] (FLINK-20170) json deserialize decimal loses precision

2020-11-18 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17234545#comment-17234545
 ] 

Mulan commented on FLINK-20170:
---

I agree with [~jark]

> json deserialize decimal loses precision
> 
>
> Key: FLINK-20170
> URL: https://issues.apache.org/jira/browse/FLINK-20170
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Mulan
>Priority: Major
> Attachments: image-2020-11-18-16-51-35-317.png
>
>
> {code:java}
> CREATE TABLE ods (
> id BIGINT,
> factor DECIMAL(38, 18)
> ) WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = '_foo',
> 'connector.topic?' = '_foo',
> 'connector.properties.bootstrap.servers' = 'localhost:9092',
> 'connector.properties.group.id' = 'g',
> 'format.type' = 'json',
> 'update-mode' = 'append'
> );
> {code}
> The following is the input data.
> {code:json}
> {"id": 1, "factor": 799.929496989092949698}
> {code}
> The following is the output data; precision has been lost.
> {code:json}
> 1, 799.92949698909300
> {code}
> The following code calls the readTree() method, which causes the value to
> lose precision.
> {code:java}
>   public Row deserialize(byte[] message) throws IOException {
>   try {
>   final JsonNode root = objectMapper.readTree(message);
>   return (Row) runtimeConverter.convert(objectMapper, 
> root);
>   } catch (Throwable t) {
>   throw new IOException("Failed to deserialize JSON 
> object.", t);
>   }
>   }
> {code}



--


[jira] [Commented] (FLINK-20170) json deserialize decimal loses precision

2020-11-18 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17234378#comment-17234378
 ] 

Mulan commented on FLINK-20170:
---

I tried this out in version 1.11 using the new 'connector' = 'kafka' option,
and the same problem occurred.

This is a Jackson problem. What should I do?

 

!image-2020-11-18-16-51-35-317.png!

> json deserialize decimal loses precision
> 
>
> Key: FLINK-20170
> URL: https://issues.apache.org/jira/browse/FLINK-20170
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Mulan
>Priority: Major
> Attachments: image-2020-11-18-16-51-35-317.png
>
>
> {code:java}
> CREATE TABLE ods (
> id BIGINT,
> factor DECIMAL(38, 18)
> ) WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = '_foo',
> 'connector.topic?' = '_foo',
> 'connector.properties.bootstrap.servers' = 'localhost:9092',
> 'connector.properties.group.id' = 'g',
> 'format.type' = 'json',
> 'update-mode' = 'append'
> );
> {code}
> The following is the input data.
> {code:json}
> {"id": 1, "factor": 799.929496989092949698}
> {code}
> The following is the output data; precision has been lost.
> {code:json}
> 1, 799.92949698909300
> {code}
> The following code calls the readTree() method, which causes the value to
> lose precision.
> {code:java}
>   public Row deserialize(byte[] message) throws IOException {
>   try {
>   final JsonNode root = objectMapper.readTree(message);
>   return (Row) runtimeConverter.convert(objectMapper, 
> root);
>   } catch (Throwable t) {
>   throw new IOException("Failed to deserialize JSON 
> object.", t);
>   }
>   }
> {code}



--


[jira] [Updated] (FLINK-20170) json deserialize decimal loses precision

2020-11-18 Thread Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mulan updated FLINK-20170:
--
Attachment: image-2020-11-18-16-51-35-317.png

> json deserialize decimal loses precision
> 
>
> Key: FLINK-20170
> URL: https://issues.apache.org/jira/browse/FLINK-20170
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Mulan
>Priority: Major
> Attachments: image-2020-11-18-16-51-35-317.png
>
>
> {code:java}
> CREATE TABLE ods (
> id BIGINT,
> factor DECIMAL(38, 18)
> ) WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = '_foo',
> 'connector.topic?' = '_foo',
> 'connector.properties.bootstrap.servers' = 'localhost:9092',
> 'connector.properties.group.id' = 'g',
> 'format.type' = 'json',
> 'update-mode' = 'append'
> );
> {code}
> The following is the input data.
> {code:json}
> {"id": 1, "factor": 799.929496989092949698}
> {code}
> The following is the output data; precision has been lost.
> {code:json}
> 1, 799.92949698909300
> {code}
> The following code calls the readTree() method, which causes the value to
> lose precision.
> {code:java}
>   public Row deserialize(byte[] message) throws IOException {
>   try {
>   final JsonNode root = objectMapper.readTree(message);
>   return (Row) runtimeConverter.convert(objectMapper, 
> root);
>   } catch (Throwable t) {
>   throw new IOException("Failed to deserialize JSON 
> object.", t);
>   }
>   }
> {code}



--


[jira] [Commented] (FLINK-20170) json deserialize loses precision

2020-11-16 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17232709#comment-17232709
 ] 

Mulan commented on FLINK-20170:
---

Could we add the following code to fix this?
{code:java}
objectMapper.enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS);
{code}
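The precision loss itself can be reproduced with the standard library alone, without Jackson: a JSON floating-point literal that first passes through a 64-bit double cannot keep 18 fractional digits, which is what USE_BIG_DECIMAL_FOR_FLOATS avoids by parsing the literal directly into a BigDecimal. A minimal, stdlib-only sketch using the literal from this issue:

```java
import java.math.BigDecimal;

public class PrecisionLossDemo {
    public static void main(String[] args) {
        String literal = "799.929496989092949698"; // the value from the issue

        // Default path: the literal goes through a 64-bit double first.
        BigDecimal viaDouble = BigDecimal.valueOf(Double.parseDouble(literal));

        // What USE_BIG_DECIMAL_FOR_FLOATS effectively preserves: the literal
        // parsed directly as an arbitrary-precision decimal.
        BigDecimal direct = new BigDecimal(literal);

        System.out.println(viaDouble); // only ~15-17 significant digits survive
        System.out.println(direct);    // all digits survive
        System.out.println(viaDouble.compareTo(direct) != 0); // prints true
    }
}
```

This also shows why the output in the issue ends in trailing zeros: the double's shorter decimal expansion is rescaled back to 18 fractional digits by the DECIMAL(38, 18) column type.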

> json deserialize loses precision
> 
>
> Key: FLINK-20170
> URL: https://issues.apache.org/jira/browse/FLINK-20170
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Mulan
>Priority: Major
>
> {code:java}
> CREATE TABLE ods (
> id BIGINT,
> factor DECIMAL(38, 18)
> ) WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = '_foo',
> 'connector.topic?' = '_foo',
> 'connector.properties.bootstrap.servers' = 'localhost:9092',
> 'connector.properties.group.id' = 'g',
> 'format.type' = 'json',
> 'update-mode' = 'append'
> );
> {code}
> The following is the input data.
> {code:json}
> {"id": 1, "factor": 799.929496989092949698}
> {code}
> The following is the output data; precision has been lost.
> {code:json}
> 1, 799.92949698909300
> {code}
> The following code calls the readTree() method, which causes the value to
> lose precision.
> {code:java}
>   public Row deserialize(byte[] message) throws IOException {
>   try {
>   final JsonNode root = objectMapper.readTree(message);
>   return (Row) runtimeConverter.convert(objectMapper, 
> root);
>   } catch (Throwable t) {
>   throw new IOException("Failed to deserialize JSON 
> object.", t);
>   }
>   }
> {code}



--


[jira] [Updated] (FLINK-20170) json deserialize loses precision

2020-11-16 Thread Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mulan updated FLINK-20170:
--
Description: 
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data; precision has been lost.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which causes the value to
lose precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}

Could we add the following code to fix this?



  was:
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data; precision has been lost.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which causes the value to
lose precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}




> json deserialize loses precision
> 
>
> Key: FLINK-20170
> URL: https://issues.apache.org/jira/browse/FLINK-20170
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Mulan
>Priority: Major
>
> {code:java}
> CREATE TABLE ods (
> id BIGINT,
> factor DECIMAL(38, 18)
> ) WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = '_foo',
> 'connector.topic?' = '_foo',
> 'connector.properties.bootstrap.servers' = 'localhost:9092',
> 'connector.properties.group.id' = 'g',
> 'format.type' = 'json',
> 'update-mode' = 'append'
> );
> {code}
> The following is the input data.
> {code:json}
> {"id": 1, "factor": 799.929496989092949698}
> {code}
> The following is the output data; precision has been lost.
> {code:json}
> 1, 799.92949698909300
> {code}
> The following code calls the readTree() method, which causes the value to
> lose precision.
> {code:java}
>   public Row deserialize(byte[] message) throws IOException {
>   try {
>   final JsonNode root = objectMapper.readTree(message);
>   return (Row) runtimeConverter.convert(objectMapper, 
> root);
>   } catch (Throwable t) {
>   throw new IOException("Failed to deserialize JSON 
> object.", t);
>   }
>   }
> {code}
> Could we add the following code to fix this?



--


[jira] [Updated] (FLINK-20170) json deserialize loses precision

2020-11-16 Thread Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mulan updated FLINK-20170:
--
Description: 
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data; precision has been lost.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which causes the value to
lose precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}



  was:
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data; precision has been lost.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which causes the value to
lose precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}

Could we add the following code to fix this?




> json deserialize loses precision
> 
>
> Key: FLINK-20170
> URL: https://issues.apache.org/jira/browse/FLINK-20170
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Mulan
>Priority: Major
>
> {code:java}
> CREATE TABLE ods (
> id BIGINT,
> factor DECIMAL(38, 18)
> ) WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = '_foo',
> 'connector.topic?' = '_foo',
> 'connector.properties.bootstrap.servers' = 'localhost:9092',
> 'connector.properties.group.id' = 'g',
> 'format.type' = 'json',
> 'update-mode' = 'append'
> );
> {code}
> The following is the input data.
> {code:json}
> {"id": 1, "factor": 799.929496989092949698}
> {code}
> The following is the output data; precision has been lost.
> {code:json}
> 1, 799.92949698909300
> {code}
> The following code calls the readTree() method, which causes the value to
> lose precision.
> {code:java}
>   public Row deserialize(byte[] message) throws IOException {
>   try {
>   final JsonNode root = objectMapper.readTree(message);
>   return (Row) runtimeConverter.convert(objectMapper, 
> root);
>   } catch (Throwable t) {
>   throw new IOException("Failed to deserialize JSON 
> object.", t);
>   }
>   }
> {code}



--


[jira] [Updated] (FLINK-20170) json deserialize loses precision

2020-11-16 Thread Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mulan updated FLINK-20170:
--
Description: 
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data, which has lost precision.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which makes the value lose
precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}



  was:
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data, which has lost precision.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which makes the value lose precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}






[jira] [Updated] (FLINK-20170) json deserialize loses precision

2020-11-16 Thread Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mulan updated FLINK-20170:
--
Description: 
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data, which has lost precision.
{code:json}
1, 799.92949698909300
{code}

The following code calls the readTree() method, which makes the value lose precision.

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}



  was:
{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data, which has lost precision.
{code:json}
1, 799.92949698909300
{code}

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}






[jira] [Created] (FLINK-20170) json deserialize loses precision

2020-11-16 Thread Mulan (Jira)
Mulan created FLINK-20170:
-

 Summary: json deserialize loses precision
 Key: FLINK-20170
 URL: https://issues.apache.org/jira/browse/FLINK-20170
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
SQL / API
Reporter: Mulan


{code:java}
CREATE TABLE ods (
id BIGINT,
factor DECIMAL(38, 18)
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'connector.topic' = '_foo',
'connector.topic?' = '_foo',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'g',
'format.type' = 'json',
'update-mode' = 'append'


);
{code}

The following is the input data.
{code:json}
{"id": 1, "factor": 799.929496989092949698}
{code}

The following is the output data, which has lost precision.
{code:json}
1, 799.92949698909300
{code}

{code:java}
public Row deserialize(byte[] message) throws IOException {
try {
final JsonNode root = objectMapper.readTree(message);
return (Row) runtimeConverter.convert(objectMapper, 
root);
} catch (Throwable t) {
throw new IOException("Failed to deserialize JSON 
object.", t);
}
}
{code}







[jira] [Comment Edited] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-11-04 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225921#comment-17225921
 ] 

Mulan edited comment on FLINK-19962 at 11/4/20, 12:04 PM:
--

Please assign this fix to me, thanks. cc [~jark]


was (Author: ana4):
Please assign this fix to me, thanks.

> fix doc: more examples for expanding arrays into a relation
> ---
>
> Key: FLINK-19962
> URL: https://issues.apache.org/jira/browse/FLINK-19962
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Mulan
>Priority: Minor
>
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
> The 'Expanding arrays into a relation' section is too brief; the usage is hard
> to understand from that section alone.
> More examples should be added, for example:
> {quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
> {quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
> {quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}
> Or add some table field descriptions, e.g. that field 'set' of table
> t2 is an ARRAY or MAP type.





[jira] [Commented] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-11-04 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225921#comment-17225921
 ] 

Mulan commented on FLINK-19962:
---

Please assign this fix to me, thanks.



[jira] [Created] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-11-04 Thread Mulan (Jira)
Mulan created FLINK-19962:
-

 Summary: fix doc: more examples for expanding arrays into a 
relation
 Key: FLINK-19962
 URL: https://issues.apache.org/jira/browse/FLINK-19962
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Mulan


https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html

The 'Expanding arrays into a relation' section is too brief; the usage is hard
to understand from that section alone.

More examples should be added, for example:

{quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
{quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
{quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}

Or add some table field descriptions, e.g. that field 'set' of table t2
is an ARRAY or MAP type.
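A worked example along these lines could be added to the docs (the table name, columns, and sample rows below are hypothetical, invented only for illustration):

{code:sql}
-- Each order row carries an ARRAY column of tags.
CREATE TABLE orders (
    id   BIGINT,
    tags ARRAY<STRING>
) WITH (...);

-- UNNEST expands the array of each row into a relation; the CROSS JOIN
-- produces one output row per array element.
SELECT o.id, t.tag
FROM orders AS o
CROSS JOIN UNNEST(o.tags) AS t (tag);

-- Input rows:  (1, ['a', 'b']), (2, ['c'])
-- Output rows: (1, 'a'), (1, 'b'), (2, 'c')
{code}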






[jira] [Created] (FLINK-18889) New Async Table Function type inference fails

2020-08-11 Thread Mulan (Jira)
Mulan created FLINK-18889:
-

 Summary: New Async Table Function type inference fails
 Key: FLINK-18889
 URL: https://issues.apache.org/jira/browse/FLINK-18889
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.1
Reporter: Mulan


{code:java}
@FunctionHint(
    input = @DataTypeHint("STRING"),
    output = @DataTypeHint("ROW<ct STRING>")
)
public class RedisAsyncTableFunction extends AsyncTableFunction<Row> {

    private RedisClient redisClient;
    private StatefulRedisConnection<String, String> connection;
    private RedisKeyAsyncCommands<String, String> async;
    private static final String PREFIX = "redis://";
    private static final String DEFAULT_DB = "0";
    private static final String DEFAULT_URL = "localhost:6379";
    private static final String DEFAULT_PASSWORD = "";

    @Override
    public void open(FunctionContext context) throws Exception {
        final String url = DEFAULT_URL;
        final String password = DEFAULT_PASSWORD;
        final String database = DEFAULT_DB;
        StringBuilder redisUri = new StringBuilder();
        redisUri.append(PREFIX).append(password).append(url).append("/").append(database);

        redisClient = RedisClient.create(redisUri.toString());
        connection = redisClient.connect();
        async = connection.async();
    }

    public void eval(CompletableFuture<Collection<Row>> outputFuture, String key) {
        RedisFuture<Map<String, String>> redisFuture =
                ((RedisHashAsyncCommands<String, String>) async).hgetall(key);
        redisFuture.thenAccept(new Consumer<Map<String, String>>() {
            @Override
            public void accept(Map<String, String> values) {
                int len = 1;
                Row row = new Row(len);
                row.setField(0, values.get("ct"));
                outputFuture.complete(Collections.singletonList(row));
            }
        });
    }

    @Override
    public void close() throws Exception {
        if (connection != null) {
            connection.close();
        }
        if (redisClient != null) {
            redisClient.shutdown();
        }
    }
}
{code}
{code:java}
tEnv.createTemporarySystemFunction("lookup_redis", RedisAsyncTableFunction.class);
{code}
{code:java}
Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL 
validation failed. From line 3, column 31 to line 3, column 48: No match found 
for function signature lookup_redis()
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:185)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:525)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:202)
at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:684)
at hua.mulan.slink.SqlSubmit.callInsertInto(SqlSubmit.java:100)
at hua.mulan.slink.SqlSubmit.callCommand(SqlSubmit.java:75)
at hua.mulan.slink.SqlSubmit.run(SqlSubmit.java:57)
at hua.mulan.slink.SqlSubmit.main(SqlSubmit.java:38)
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 3, 
column 31 to line 3, column 48: No match found for function signature 
lookup_redis()
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
 

[jira] [Commented] (FLINK-18520) New Table Function type inference fails

2020-08-10 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175204#comment-17175204
 ] 

Mulan commented on FLINK-18520:
---

Has AsyncTableFunction been fixed as well?

> New Table Function type inference fails
> ---
>
> Key: FLINK-18520
> URL: https://issues.apache.org/jira/browse/FLINK-18520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Benchao Li
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.1
>
>
> For a simple UDTF like 
> {code:java}
> public class Split extends TableFunction<String> {
>   public Split(){}
>   public void eval(String str, String ch) {
>   if (str == null || str.isEmpty()) {
>   return;
>   } else {
>   String[] ss = str.split(ch);
>   for (String s : ss) {
>   collect(s);
>   }
>   }
>   }
>   }
> {code}
> register it using new function type inference 
> {{tableEnv.createFunction("my_split", Split.class);}} and using it in a 
> simple query will fail with following exception:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 93 to line 1, column 115: No match 
> found for function signature my_split(, )
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:716)
>   at com.bytedance.demo.SqlTest.main(SqlTest.java:64)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 93 to line 1, column 115: No match found for function signature 
> my_split(, )
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1800)
>   at 
> org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:57)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1110)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1084)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3256)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3238)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateJoin(SqlValidatorImpl.java:3303)
>   at 
>