[ 
https://issues.apache.org/jira/browse/FLINK-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15158:
-----------------------------------
    Labels: auto-deprioritized-major stale-minor  (was: auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Why convert integer to BigDecimal for format-json when Kafka is used
> --------------------------------------------------------------------
>
>                 Key: FLINK-15158
>                 URL: https://issues.apache.org/jira/browse/FLINK-15158
>             Project: Flink
>          Issue Type: Improvement
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>            Reporter: hehuiyuan
>            Priority: Minor
>              Labels: auto-deprioritized-major, stale-minor
>         Attachments: image-2019-12-16-10-47-23-565.png, 
> image-2019-12-16-10-47-43-437.png
>
>
> For example, I have a table `table1`:
> root
>  |-- name: STRING
>  |-- age: INT
>  |-- sex: STRING
>  
> Then I want to execute the SQL `insert into kafka select * from table1`.
> The table sink's schema is the following JSON schema:
> {
>   "type": "object",
>   "properties": {
>     "name": { "type": "string" },
>     "age": { "type": "integer" },
>     "sex": { "type": "string" }
>   }
> }
>  
> Code:
> ```
> // Build the JSON schema as a valid JSON string.
> String jsonSchema =
>     "{ \"type\": \"object\", \"properties\": {"
>         + " \"name\": { \"type\": \"string\" },"
>         + " \"age\":  { \"type\": \"integer\" },"
>         + " \"sex\":  { \"type\": \"string\" } } }";
> 
> // Derive the produced row type from the JSON schema.
> JsonRowDeserializationSchema deserializationSchema =
>     new JsonRowDeserializationSchema(jsonSchema);
> TypeInformation<Row> fieldTypes = deserializationSchema.getProducedType();
> String[] fieldNames = ((RowTypeInfo) fieldTypes).getFieldNames();
> TypeInformation<?>[] typeInformations = ((RowTypeInfo) fieldTypes).getFieldTypes();
> 
> // Register the derived fields on the table descriptor.
> Schema schema = configSchema(fieldNames, typeInformations);
> descriptor.withFormat(new Json().jsonSchema(jsonSchema)).withSchema(schema);
> 
> public Schema configSchema(String[] fields, TypeInformation<?>[] typeInformations) {
>     Schema schema = new Schema();
>     for (int i = 0; i < fields.length; i++) {
>         schema = schema.field(fields[i], typeInformations[i]);
>     }
>     return schema;
> }
> ```
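> 
> For reference, the produced type above comes from translating the JSON schema into Flink types. A minimal sketch of checking that translation directly, assuming the JsonRowSchemaConverter utility in flink-json (the class behind the derived row type; the class name SchemaTypeCheck is just for illustration):
> ```
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.formats.json.JsonRowSchemaConverter;
> import org.apache.flink.types.Row;
> 
> public class SchemaTypeCheck {
>     public static void main(String[] args) {
>         String jsonSchema =
>             "{ \"type\": \"object\", \"properties\": {"
>                 + " \"name\": { \"type\": \"string\" },"
>                 + " \"age\":  { \"type\": \"integer\" },"
>                 + " \"sex\":  { \"type\": \"string\" } } }";
> 
>         // JSON schema's "integer" is an unbounded number type, so the
>         // converter maps it to BIG_DEC (BigDecimal) rather than INT.
>         TypeInformation<Row> rowType = JsonRowSchemaConverter.convert(jsonSchema);
>         System.out.println(rowType); // expected: Row(name: String, age: BigDecimal, sex: String)
>     }
> }
> ```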
>  
> Exception:
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Field types of query result and registered TableSink [sink_example2] do not 
> match.
> *Query result schema: [name: String, age: Integer, sex: String]*
> *TableSink schema:    [name: String, age: BigDecimal, sex: String]*
>  at org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:65)
>  at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:156)
>  at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:155)
>  at scala.Option.map(Option.scala:146)
>  
>  
>   !image-2019-12-16-10-47-43-437.png|width=468,height=246!
> I know that the JSON schema type "integer" is converted to BigDecimal in 
> Flink SQL's type system. But for the scenario above, does this have to be 
> forced to be a decimal?
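> 
> If the goal is simply to keep INT end to end, one possible workaround sketch, assuming this Flink version's descriptor API (where Json#deriveSchema() derives the format's field types from the table schema instead of from an explicit JSON schema string):
> ```
> import org.apache.flink.api.common.typeinfo.Types;
> import org.apache.flink.table.descriptors.Json;
> import org.apache.flink.table.descriptors.Schema;
> 
> // Declare the sink fields explicitly so `age` stays INT, and let the JSON
> // format derive its field types from this table schema rather than from a
> // separate JSON schema string (which would map "integer" to BigDecimal).
> Schema schema = new Schema()
>     .field("name", Types.STRING)
>     .field("age", Types.INT)
>     .field("sex", Types.STRING);
> 
> descriptor
>     .withFormat(new Json().deriveSchema())
>     .withSchema(schema);
> ```
> Alternatively, the query itself could cast to match the derived sink type, e.g. `insert into kafka select name, cast(age as decimal), sex from table1`.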



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
