gengliangwang edited a comment on issue #26524: [SPARK-29898][SQL] Support Avro Custom Logical Types
URL: https://github.com/apache/spark/pull/26524#issuecomment-557707264
 
 
   Hi @pradomota,
   
   To write Avro files with a schema different from the default mapping, you can use the option "avroSchema":

       df.write.format("avro").option("avroSchema", avroSchemaAsJSONStringFormat)...

   See https://spark.apache.org/docs/latest/sql-data-sources-avro.html#supported-types-for-spark-sql---avro-conversion for more details.
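   
   As a minimal sketch in Scala (the schema string, column names, and output path are hypothetical, and the external spark-avro module must be on the classpath):
   
       import org.apache.spark.sql.SparkSession
   
       val spark = SparkSession.builder().appName("avro-schema-example").getOrCreate()
       import spark.implicits._
   
       // Hypothetical custom Avro schema: "id" as long, "name" as a nullable string.
       val customAvroSchema =
         """
           |{
           |  "type": "record",
           |  "name": "Example",
           |  "namespace": "com.example",
           |  "fields": [
           |    {"name": "id", "type": "long"},
           |    {"name": "name", "type": ["null", "string"], "default": null}
           |  ]
           |}
         """.stripMargin
   
       val df = Seq((1L, "a"), (2L, "b")).toDF("id", "name")
   
       // Pass the custom schema through the "avroSchema" option when writing.
       df.write
         .format("avro")
         .option("avroSchema", customAvroSchema)
         .save("/tmp/avro_schema_example")  // hypothetical output path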
   The function `to_avro` also supports customizing the output schema via its last parameter, "jsonFormatSchema", as sketched below.
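   
   A minimal sketch, assuming Spark 3.x where `to_avro` is provided by org.apache.spark.sql.avro.functions (reusing `df` and the implicits from the sketch above; the schema string is hypothetical):
   
       import org.apache.spark.sql.avro.functions.to_avro
   
       // Hypothetical Avro schema describing a single long value.
       val idAvroSchema = """{"type": "long"}"""
   
       // Encode the "id" column as Avro binary using the custom output schema.
       val encoded = df.select(to_avro($"id", idAvroSchema).as("value"))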
   
   To read Avro files with a customized Avro schema, you can also use the option "avroSchema". To specify a customized DataFrame schema instead, you can use the general data source method spark.read.schema(...); see the sketch below.
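   
   A minimal sketch of both read paths (reusing the hypothetical path, schema string, and SparkSession from the write sketch above):
   
       import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
   
       // Read using a customized Avro schema via the "avroSchema" option.
       val readWithAvroSchema = spark.read
         .format("avro")
         .option("avroSchema", customAvroSchema)
         .load("/tmp/avro_schema_example")
   
       // Or specify the resulting DataFrame schema directly with the generic schema() method.
       val readWithSqlSchema = spark.read
         .schema(StructType(Seq(
           StructField("id", LongType),
           StructField("name", StringType)
         )))
         .format("avro")
         .load("/tmp/avro_schema_example")
   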
   If a mapping from an Avro logical type to the DataFrame schema is missing (https://spark.apache.org/docs/latest/sql-data-sources-avro.html#supported-types-for-avro---spark-sql-conversion), please update it in `SchemaConverters`.
   
   I think the existing code already covers the problem this PR tries to 
resolve. Hope this helps.
