Spark SQL has great support for reading text files that contain JSON data. However, in many cases the JSON data is just one column amongst others. This is particularly true when reading from sources such as Kafka. This PR <https://github.com/apache/spark/pull/15274> adds a new function, from_json, that converts a string column into a nested StructType with a user-specified schema, using the same internal logic as the JSON data source.
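For context, usage would look roughly like this (a sketch against the API proposed in the PR; the DataFrame, column names, and schema here are invented for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._

val spark = SparkSession.builder.master("local[*]").getOrCreate()
import spark.implicits._

// A string column holding JSON alongside other columns,
// as when reading key/value pairs from Kafka.
val df = Seq(
  ("key1", """{"device": "sensor-1", "temp": 21.5}"""),
  ("key2", """{"device": "sensor-2", "temp": 19.0}""")
).toDF("key", "value")

// User-specified schema for the nested struct.
val schema = new StructType()
  .add("device", StringType)
  .add("temp", DoubleType)

// from_json parses the string column into a StructType column,
// which can then be addressed with ordinary struct field access.
val parsed = df.select($"key", from_json($"value", schema) as "json")
parsed.select($"key", $"json.device", $"json.temp").show()
```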
Would love to hear any comments / suggestions. Michael