[
https://issues.apache.org/jira/browse/FLINK-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Konstantin Knauf updated FLINK-15158:
-------------------------------------
Issue Type: Improvement (was: Wish)
As part of https://issues.apache.org/jira/browse/FLINK-22029 the "Wish" issue
type will be dropped. I changed this one to "Improvement" instead.
> Why is integer converted to BigDecimal for format-json when Kafka is used
> --------------------------------------------------------------------------
>
> Key: FLINK-15158
> URL: https://issues.apache.org/jira/browse/FLINK-15158
> Project: Flink
> Issue Type: Improvement
> Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
> Reporter: hehuiyuan
> Priority: Major
> Attachments: image-2019-12-16-10-47-23-565.png,
> image-2019-12-16-10-47-43-437.png
>
>
> For example, I have a table `table1`:
> root
>  |-- name: STRING
>  |-- age: INT
>  |-- sex: STRING
>
> Then I want to execute the SQL `insert into kafka select * from table1`:
> The table sink's schema is a JSON schema:
> {
>   type: 'object',
>   properties: {
>     name: { type: 'string' },
>     age: { type: 'integer' },
>     sex: { type: 'string' }
>   }
> }
>
> Code:
> ```
> String jsonSchema =
>     "{ type: 'object',"
>     + "  properties: {"
>     + "    name: { type: 'string' },"
>     + "    age: { type: 'integer' },"
>     + "    sex: { type: 'string' }"
>     + "  }"
>     + "}";
> JsonRowDeserializationSchema deserializationSchema =
>     new JsonRowDeserializationSchema(jsonSchema);
> // Derive field names and types from the type the format produces.
> TypeInformation<Row> fieldTypes = deserializationSchema.getProducedType();
> String[] fieldNames = ((RowTypeInfo) fieldTypes).getFieldNames();
> TypeInformation<?>[] typeInformations = ((RowTypeInfo) fieldTypes).getFieldTypes();
> Schema schema = configSchema(fieldNames, typeInformations);
> // 'descriptor' is the connector descriptor created elsewhere.
> descriptor.withFormat(new Json().jsonSchema(jsonSchema)).withSchema(schema);
>
> public Schema configSchema(String[] fields, TypeInformation<?>[] typeInformations) {
>     Schema schema = new Schema();
>     for (int i = 0; i < fields.length; i++) {
>         schema = schema.field(fields[i], typeInformations[i]);
>     }
>     return schema;
> }
> ```
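>
> The BigDecimal on the sink side comes from the JSON-schema-to-type conversion
> in the format. A minimal sketch of just that mapping, assuming the
> `flink-json` module's `JsonRowSchemaConverter` is the conversion the
> deprecated `JsonRowDeserializationSchema(String)` constructor relies on:
> ```
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.formats.json.JsonRowSchemaConverter;
> import org.apache.flink.types.Row;
>
> public class IntegerMappingDemo {
>     public static void main(String[] args) {
>         // JSON schema 'integer' is arbitrary-precision, so the converter
>         // maps it to a BigDecimal-backed field rather than INT.
>         String ageOnly =
>             "{ \"type\": \"object\", \"properties\": { \"age\": { \"type\": \"integer\" } } }";
>         TypeInformation<Row> produced = JsonRowSchemaConverter.convert(ageOnly);
>         System.out.println(produced); // expected: Row(age: BigDecimal)
>     }
> }
> ```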
>
> Exception in thread "main" org.apache.flink.table.api.ValidationException:
> Field types of query result and registered TableSink [sink_example2] do not match.
> *Query result schema: [name: String, age: Integer, sex: String]*
> *TableSink schema: [name: String, age: BigDecimal, sex: String]*
> at org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:65)
> at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:156)
> at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:155)
> at scala.Option.map(Option.scala:146)
>
>
> !image-2019-12-16-10-47-43-437.png|width=468,height=246!
> I know that the JSON schema type `integer` is converted to BigDecimal in
> Flink SQL's type mapping. But for the scenario above, does it have to be
> forced to DECIMAL?
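>
> If the goal is only to make the two sides agree, one possible workaround (a
> sketch, not necessarily the intended fix; `tableEnv`, the registered `kafka`
> sink, and `table1` are assumed from the snippet above) is to cast the INT
> column in the query so it matches the BigDecimal-typed sink field:
> ```
> // CAST(age AS DECIMAL) turns the Integer column into BigDecimal, so the
> // query result schema matches the TableSink schema derived from the JSON
> // schema. sqlUpdate() is the 1.9-era API on StreamTableEnvironment.
> tableEnv.sqlUpdate(
>     "INSERT INTO kafka SELECT name, CAST(age AS DECIMAL) AS age, sex FROM table1");
> ```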
>
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)