Jochen Niebuhr commented on SPARK-21651:

Specifying the schema myself would mean changing it every time a new field
appears.
With the current implementation you can write a DataFrame with one schema to
JSON, and Spark will infer a different schema on read, or fail to read the data
at all if you're using maps.
We could add a flag that activates this feature, but I think it would be
helpful for some people.

> Detect MapType in Json InferSchema
> ----------------------------------
>                 Key: SPARK-21651
>                 URL: https://issues.apache.org/jira/browse/SPARK-21651
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.0, 2.1.1, 2.2.0
>            Reporter: Jochen Niebuhr
>            Priority: Minor
> When loading JSON files that include a map with highly variable keys, the 
> current schema inference logic can create a very large schema. This leads 
> to long load times and possibly out-of-memory errors. 
> I've already submitted a pull request to the MongoDB Spark connector, which 
> had the same problem. Should I port this logic over to the JSON schema 
> inference class?
> The MongoDB Spark pull request mentioned is: 
> https://github.com/mongodb/mongo-spark/pull/24

This message was sent by Atlassian JIRA
