[
https://issues.apache.org/jira/browse/FLINK-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060863#comment-17060863
]
jackray wang commented on FLINK-16627:
--------------------------------------
I suggest adding a parameter when defining a sink table, e.g. “format.removenull = true”. When it is true, all keys whose values are null are removed from the serialized JSON; the default should be false (do nothing), so existing scripts stay compatible and users only add the parameter when they need it.
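As a rough sketch of what I mean (the option key 'format.removenull' is only a suggested name, it does not exist today), the sink DDL could look like:
{code:java}
//sql -- 'format.removenull' is the proposed option, everything else stays as before
CREATE TABLE sink_kafka (
  subtype STRING,
  svt STRING
) WITH (
  ……,                           -- existing Kafka connector / JSON format properties
  'format.removenull' = 'true'  -- proposed: drop keys whose value is null
)
{code}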
> when inserting into Kafka, how can I remove keys with null values from the JSON
> --------------------------------------------------------------------------
>
> Key: FLINK-16627
> URL: https://issues.apache.org/jira/browse/FLINK-16627
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Client
> Affects Versions: 1.10.0
> Reporter: jackray wang
> Priority: Major
>
> {code:java}
> //sql
> CREATE TABLE sink_kafka ( subtype STRING , svt STRING ) WITH (……)
> {code}
>
> {code:java}
> //sql
> CREATE TABLE source_kafka ( subtype STRING , svt STRING ) WITH (……)
> {code}
>
> {code:java}
> //scala udf: maps a null input to an empty string, otherwise passes the value through
> import org.apache.flink.table.functions.ScalarFunction
>
> class ScalaUpper extends ScalarFunction {
>   def eval(str: String): String = {
>     if (str == null) "" else str
>   }
> }
>
> btenv.registerFunction("scala_upper", new ScalaUpper())
> {code}
>
> {code:java}
> //sql
> insert into sink_kafka select subtype, scala_upper(svt) from source_kafka
> {code}
>
>
> ----
> Sometimes svt is null, and the JSON written to Kafka looks like
> {"subtype":"qin","svt":null}
> If the amount of data were small this would be acceptable, but we process 10 TB of
> data every day, and the JSON can contain many nulls, which hurts efficiency. If a
> parameter to drop null keys could be added when defining a sink table, performance
> would improve greatly. A sketch of the behavior I am asking for is below.
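> For illustration only (this is not something the json format does today, just a
> sketch of the behavior I am asking for, written against Jackson, which I believe
> flink-json uses internally): dropping null-valued keys before writing to Kafka
> could look like this:
> {code:java}
> //scala -- illustration: strip null-valued keys from a JSON object string
> import com.fasterxml.jackson.databind.ObjectMapper
> import com.fasterxml.jackson.databind.node.ObjectNode
> import scala.collection.JavaConverters._
>
> object RemoveNullKeys {
>   private val mapper = new ObjectMapper()
>
>   def apply(json: String): String = mapper.readTree(json) match {
>     case obj: ObjectNode =>
>       // collect the field names first, then remove, so we do not mutate while iterating
>       val nullKeys = obj.fieldNames().asScala.filter(name => obj.get(name).isNull).toList
>       nullKeys.foreach(k => obj.remove(k))
>       mapper.writeValueAsString(obj)
>     case other => json // not a JSON object, leave it untouched
>   }
> }
>
> // RemoveNullKeys("""{"subtype":"qin","svt":null}""")  returns  {"subtype":"qin"}
> {code}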
>
>
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)