Hi All,
Processing streaming JSON files with Spark features (Spark Streaming and
Spark SQL) is very efficient and works like a charm.
Below is the code snippet to process JSON files.
windowDStream.foreachRDD(IncomingFiles => {
  val IncomingFilesTable = sqlContext.jsonRDD(IncomingFiles)
  ...
})
But the same approach with xml files is inefficient. Is there any
alternative for processing xml files?
- How to create a Spark SQL table with the above xml data?
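Not part of the original mail, but one possible sketch of the second question, assuming each streamed line holds one small XML record and using a hypothetical `Event` case class (the field names are my assumption, not the real schema); parse with `scala.xml`, convert to a DataFrame, and register a temp table with the same Spark 1.x API as the JSON snippet above:

```scala
import scala.xml.XML

// Hypothetical record type -- adjust the fields to the actual XML schema.
case class Event(id: String, value: String)

windowDStream.foreachRDD { incomingFiles =>
  // incomingFiles is an RDD[String]; assume one XML document per line.
  val events = incomingFiles.map { line =>
    val doc = XML.loadString(line)                // parse one XML record
    Event((doc \ "id").text, (doc \ "value").text)
  }
  import sqlContext.implicits._                    // Spark 1.3+: toDF()
  events.toDF().registerTempTable("IncomingFilesTable")
  sqlContext.sql("SELECT id, value FROM IncomingFilesTable").show()
}
```

Note this parses each record on the executors, so it avoids collecting whole files to the driver; whether it is faster than your current approach depends on the size of the individual XML documents.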
Regards
Vijay Innamuri
On 16 March 2015 at 12:12, Akhil Das wrote:
> One approach would be, If you are using fileStream you can access the
> individual filenames fr
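The quoted reply is cut off above; one common way that suggestion is implemented (my sketch, not Akhil's actual code) is to drop down to the underlying NewHadoopRDD and read the filename out of each partition's input split. This cast assumes the batch RDD is a plain NewHadoopRDD (i.e. roughly one file per batch); with many files per batch the stream produces a UnionRDD and this would need adapting:

```scala
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.{FileSplit, TextInputFormat}
import org.apache.spark.rdd.NewHadoopRDD

val stream = ssc.fileStream[LongWritable, Text, TextInputFormat]("/input/dir")
stream.foreachRDD { rdd =>
  // Each partition of a NewHadoopRDD carries its InputSplit,
  // from which the source filename can be recovered.
  val withNames = rdd.asInstanceOf[NewHadoopRDD[LongWritable, Text]]
    .mapPartitionsWithInputSplit { (split, iter) =>
      val file = split.asInstanceOf[FileSplit].getPath.toString
      iter.map { case (_, text) => (file, text.toString) }
    }
  withNames.take(5).foreach(println)
}
```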
When I run the Spark application (streaming) in local mode, I can see the
execution progress as below:
[Stage 0:>                                                  (1817 + 1) / 3125]
[Stage 2:===>                                                (740 + 1) / 3125]
One of the stages
> ...the Stage tab to see what is taking so long. And yes, a code snippet
> helps us debug.
>
> TD
>
> On Fri, Apr 3, 2015 at 12:47 AM, Akhil Das
> wrote:
>
>> You need to open the page of the Stage which is taking time, and see how
>> long it's spending on GC etc. Also it will
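The quoted reply is cut off above, but for the GC part of the advice two standard checks apply (a sketch; the JVM flags and spark-submit options below are standard ones, and the jar/class names are placeholders): the web UI at http://<driver>:4040 shows per-task "GC Time" on each stage's page, and detailed executor GC logging can be enabled when submitting:

```
spark-submit \
  --conf "spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  --class your.main.Class \
  your-app.jar
```

The GC output then lands in each executor's stdout log, viewable from the Executors tab of the same UI.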