Thank you, everyone. The original question: "Every time I write to Parquet,
the Spark UI shows the stages as succeeded, but the spark-shell holds the
context in a wait state for almost 10 minutes; only then does it clear the
broadcast and accumulator shared variables."

I don't think stopping the context will resolve the current issue.

It is the clearing of broadcast variables, accumulators, etc. that takes the extra time.

Can this be tuned on Spark 1.6.1 (MapR distribution)?
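
If the wait is the ContextCleaner blocking the driver while it cleans up
broadcast variables and accumulators, one setting worth experimenting with
(a sketch only, not verified against the MapR build of 1.6.1) is non-blocking
reference tracking:

  spark-shell --conf spark.cleaner.referenceTracking.blocking=false

or the equivalent when building the context yourself:

  import org.apache.spark.{SparkConf, SparkContext}

  // Sketch: don't block the driver while broadcast/accumulator cleanup runs.
  // spark.cleaner.referenceTracking.blocking defaults to true in Spark 1.6.
  val conf = new SparkConf()
    .setAppName("parquet-write")  // app name is illustrative
    .set("spark.cleaner.referenceTracking.blocking", "false")
  val sc = new SparkContext(conf)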
On Oct 27, 2016 2:34 PM, "Mehrez Alachheb" <lachheb.meh...@gmail.com> wrote:

> I think you should just shut down your SparkContext at the end.
> sc.stop()
>
> 2016-10-21 22:47 GMT+02:00 Chetan Khatri <ckhatriman...@gmail.com>:
>
>> Hello Spark Users,
>>
>> I am writing around 10 GB of processed data to Parquet on a Google Cloud
>> machine with 1 TB of HDD, 102 GB of RAM, and 16 vCores.
>>
>> Every time I write to Parquet, the Spark UI shows the stages as succeeded,
>> but the spark-shell holds the context in a wait state for almost 10
>> minutes; then it clears the broadcast and accumulator shared variables.
>>
>> Can this be sped up?
>>
>> Thanks.
>>
>> --
>> Yours Aye,
>> Chetan Khatri.
>> M.+91 76666 80574
>> Data Science Researcher
>> INDIA
>>
>
>
