Dear fellow Spark users,
I have a multithreaded Java program which launches multiple Spark jobs in
parallel through the *SparkLauncher* API. It also monitors these Spark jobs
and keeps updating information such as job start/end time, current state,
tracking URL, etc. in an audit table. To get this info
Thank you, Armbrust.
On Fri, Mar 24, 2017 at 7:02 PM, Michael Armbrust wrote:
> I'm not sure you can parse this as an Array, but you can hint to the
> parser that you would like to treat source as a map instead of as a
> struct. This is a good strategy when you have dynamic columns in your data.