I would really love something like this! It would be great if it didn't
throw away corrupt records the way the json Data Source does.

On Wed, Sep 28, 2016 at 11:02 AM, Nathan Lande <nathanla...@gmail.com>
wrote:

> We are currently pulling out the JSON columns, passing them through
> read.json, and then joining them back onto the initial DF, so something
> like from_json would be a nice quality-of-life improvement for us.
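>
> For reference, the workaround looks roughly like the sketch below
> (Spark 2.0-era API; the names are invented, and it assumes each JSON
> payload repeats a `key` field from the initial DF so the parsed rows
> can be joined back):
>
>   import org.apache.spark.sql.SparkSession
>
>   val spark = SparkSession.builder().master("local[*]").getOrCreate()
>   import spark.implicits._
>
>   // Hypothetical input: a join key plus a raw JSON string column; the
>   // JSON repeats the key so the parsed rows can be matched back up.
>   val df = Seq(
>     (1L, """{"key": 1, "device": "d1"}"""),
>     (2L, """{"key": 2, "device": "d2"}""")
>   ).toDF("key", "payload")
>
>   // Pull the JSON column out and let read.json infer a schema for it.
>   val parsed = spark.read.json(df.select("payload").rdd.map(_.getString(0)))
>
>   // Join the parsed columns back onto the initial DF.
>   df.join(parsed, "key").show()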
>
> On Wed, Sep 28, 2016 at 10:52 AM, Michael Armbrust
> <mich...@databricks.com> wrote:
>
>> Spark SQL has great support for reading text files that contain JSON
>> data. However, in many cases the JSON data is just one column amongst
>> others. This is particularly true when reading from sources such as
>> Kafka. This PR <https://github.com/apache/spark/pull/15274> adds a new
>> function from_json that converts a string column into a nested
>> StructType with a user-specified schema, using the same internal logic
>> as the json Data Source.
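>>
>> A rough usage sketch (column names and schema are invented for
>> illustration; see the PR for the exact signature and options):
>>
>>   import org.apache.spark.sql.SparkSession
>>   import org.apache.spark.sql.functions.{col, from_json}
>>   import org.apache.spark.sql.types._
>>
>>   val spark = SparkSession.builder().master("local[*]").getOrCreate()
>>   import spark.implicits._
>>
>>   // Hypothetical input: a key plus a raw JSON string column, e.g. a
>>   // Kafka value that has already been cast to a string.
>>   val df = Seq((1L, """{"device": "d1", "temp": 21.5}"""))
>>     .toDF("key", "payload")
>>
>>   // User-specified schema for the JSON strings.
>>   val schema = new StructType()
>>     .add("device", StringType)
>>     .add("temp", DoubleType)
>>
>>   // from_json parses the string column into a nested struct column.
>>   val parsed = df.withColumn("event", from_json(col("payload"), schema))
>>   parsed.select(col("key"), col("event.device"), col("event.temp")).show()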
>>
>> Would love to hear any comments / suggestions.
>>
>> Michael
>>
>
>
