[ https://issues.apache.org/jira/browse/FLINK-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060651#comment-17060651 ]

Kurt Young commented on FLINK-16627:
------------------------------------

BTW, such issues should be reported to the user mailing list; I will close this
one for now.

> When inserting into Kafka, how can I remove keys with null values from the JSON?
> ---------------------------------------------------------------------------
>
>                 Key: FLINK-16627
>                 URL: https://issues.apache.org/jira/browse/FLINK-16627
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Client
>    Affects Versions: 1.10.0
>            Reporter: jackray wang
>            Priority: Major
>
> {code:java}
> //sql
> CREATE TABLE sink_kafka ( subtype STRING , svt STRING ) WITH (……)
> {code}
>  
> {code:java}
> //sql
> CREATE TABLE source_kafka ( subtype STRING , svt STRING ) WITH (……)
> {code}
>  
> {code:scala}
> // Scala UDF
> class ScalaUpper extends ScalarFunction {
>   def eval(str: String): String = {
>     if (str == null) "" else str
>   }
> }
> btenv.registerFunction("scala_upper", new ScalaUpper())
> {code}
>  
> {code:java}
> //sql
> insert into sink_kafka select subtype, scala_upper(svt)  from source_kafka
> {code}
>  
>  
> ----
> Sometimes svt's value is null, and the row is inserted into Kafka as JSON like
> \{"subtype":"qin","svt":null}.
> If the amount of data is small this is acceptable, but we process 10 TB of data
> every day, and the JSON may contain many null keys, which hurts
> efficiency. If a parameter could be added to drop null-valued keys when defining a
> sink table, performance would improve greatly.
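As a workaround until such a sink-table option exists, null-valued fields can be stripped before the row is serialized to JSON. Below is a minimal sketch in plain Java; the `render` helper, the field names, and the hand-rolled serialization (no string escaping) are illustrative assumptions, not Flink API. A real job would typically use a JSON library such as Jackson, which can omit null fields via its serialization-inclusion setting.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NullFreeJson {
    // Build a JSON object string, skipping entries whose value is null.
    // Hypothetical helper for illustration only; no escaping is performed.
    static String render(Map<String, String> fields) {
        return fields.entrySet().stream()
                .filter(e -> e.getValue() != null)
                .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
                .collect(Collectors.joining(",", "{", "}"));
    }

    public static void main(String[] args) {
        // Mirrors the example row from the issue: svt is null.
        Map<String, String> row = new LinkedHashMap<>();
        row.put("subtype", "qin");
        row.put("svt", null);
        System.out.println(render(row)); // prints {"subtype":"qin"}
    }
}
```

Dropping the null entries before serialization shrinks each record, which is where the savings at 10 TB/day would come from.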



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
