og/hadoop-yarn/containers
>>>
>>> Is there a way to clean up these directories while the spark streaming
>>> application is running?
>>>
>>> Thanks
>>>
>>
--
Take Care
Fawze Abujaber
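For long-running streaming jobs, YARN's rolling log aggregation (available in the Hadoop release underlying CDH 5.13) can periodically upload and then clean local container logs while the application is still running. A minimal sketch of the yarn-site.xml settings involved; the values are illustrative, not taken from this thread:

```xml
<!-- Sketch: enable periodic (rolling) log aggregation so local
     container logs are shipped to HDFS and removed while the
     long-running app is still active. -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- Interval at which logs of running apps are aggregated; YARN
       enforces a minimum (3600s by default), so smaller values are
       rounded up. -->
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>
```

On the Spark side, newer releases also offer `spark.yarn.rolledLog.includePattern` to pick which rolled files get aggregated; whether that applies depends on the Spark version in use.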
On Fri, Aug 17, 2018 at 2:01 AM Manu Zhang wrote:
> Hi Fawze,
>
> Sorry but I'm not familiar with CM. Maybe you can look into the logs (or
> turn on DEBUG log).
>
> On Thu, Aug 16, 2018 at 3:05 PM Fawze Abujaber wrote:
>
>> Hi Manu,
>>
>> I'm using
able to log onto the node where UI has been launched, then try
> `ps -aux | grep HistoryServer` and the first column of output should be the
> user.
>
> On Wed, Aug 15, 2018 at 10:26 PM Fawze Abujaber wrote:
>
>> Thanks Manu. Do you know how I can see which user the UI is
) to the group like Spark will do.
>
>
> On Wed, Aug 15, 2018 at 6:38 PM Fawze Abujaber wrote:
>
>> Hi Manu,
>>
>> Thanks for your response.
>>
>> Yes, I see, but it's still interesting to know how I can see these
>> applications from the Spark history UI.
Hi Manu,
Thanks for your response.
Yes, I see, but it's still interesting to know how I can see these
applications from the Spark history UI.
How can I know which user I'm logged in as when navigating the Spark
history UI?
The Spark process is running with cloudera-scm and the events written i
Hi Guys,
Any help here?
On Wed, Aug 8, 2018 at 7:56 AM Fawze Abujaber wrote:
> Hello Community,
>
> I'm using Spark 2.3 and Spark 1.6.0 in my cluster with Cloudera
> distribution 5.13.0.
>
> Both are configured to run on Yarn, but I'm unable to see completed
>
Spark UIs running with different users but
was unable to find it.
Has anyone run into this issue and solved it?
Thanks in advance.
--
Take Care
Fawze Abujaber
where your *.so library resides
>
> On Thursday, May 3, 2018, 5:06:35 AM PDT, Fawze Abujaber <
> fawz...@gmail.com> wrote:
>
>
> Hi Guys,
>
> I'm running into issue where my spark jobs are failing on the below error,
> I'm using Spark 1.6.0 with CDH 5.13.0.
>
oudera-scm cloudera-scm 62268 Oct 4 2017
hadoop-lzo-0.4.15-cdh5.13.0.jar
lrwxrwxrwx 1 cloudera-scm cloudera-scm 31 May 3 07:23 hadoop-lzo.jar ->
hadoop-lzo-0.4.15-cdh5.13.0.jar
drwxr-xr-x 2 cloudera-scm cloudera-scm 4096 Oct 4 2017 native
--
Take Care
Fawze Abujaber
y worry
> about tuning memory when you can let spark take care of it automatically
> based on memory pressure. Will post details when we are ready. So yes we
> are working on memory, but it will not be a tool but a transparent feature.
>
> thanks,
> rohitk
>
>
>
>
> O
e:
>
>> Hi Rohit,
>>
>> Thanks for the analysis.
>>
>> I can use repartition on the slow task. But how can I tell what part of
>> the code is in charge of the slow tasks?
>>
>> It would be great if you could further explain the rest of the output.
added that to my config and ran spark-shell.
>
> $ hdfs dfs -ls /user/spark/applicationHistory | grep
> application_1522085988298_0002
> -rwxrwx--- 3 blah blah 9844 2018-03-26 10:54
> /user/spark/applicationHistory/application_1522085988298_0002.snappy
>
>
>
> On Mo
fferent node, if the setting is
> there, Spark should be using it.
>
> You can also look in the UI's environment page to see the
> configuration that the app is using.
>
> On Mon, Mar 26, 2018 at 10:10 AM, Fawze Abujaber
> wrote:
> > I see this configuration onl
the spark-defaults.conf file in the machine where you're starting
> the Spark app has that config, then that's all that should be needed.
>
> On Mon, Mar 26, 2018 at 10:02 AM, Fawze Abujaber
> wrote:
> > Thanks Marcelo,
> >
> > Yes, I was expecting to see
format.
>
> The SHS doesn't compress existing logs.
>
> On Mon, Mar 26, 2018 at 9:17 AM, Fawze Abujaber wrote:
> > Hi All,
> >
> > I'm trying to compress the logs at the Spark history server, I added
> > spark.eventLog.compress=true to spark-defaults.conf to s
Hi All,
I'm trying to compress the logs at the Spark history server. I
added spark.eventLog.compress=true to spark-defaults.conf via the Spark
Client Advanced Configuration Snippet (Safety Valve) for
spark-conf/spark-defaults.conf,
which I see applied only to the Spark gateway servers' spark conf.
Wh
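For reference, a sketch of the relevant spark-defaults.conf entries (standard Spark property names; the event-log directory shown is the one mentioned earlier in the thread, and compression only affects logs written after the setting takes effect):

```properties
# Sketch: enable and compress Spark event logs for the history server.
spark.eventLog.enabled   true
spark.eventLog.compress  true
spark.eventLog.dir       hdfs:///user/spark/applicationHistory
```

With the default codec, compressed event logs show up with a `.snappy` suffix, which matches the listing quoted later in the thread.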
anch, and also compiled against the specific
> image of Spark we are using (cdh5.7.6).
>
> Now I need to figure out what the output means... :P
>
> Shmuel
>
> On Fri, Mar 23, 2018 at 7:24 PM, Fawze Abujaber wrote:
>
>> Quick question:
>>
>> how to add t
jar should be an HDFS path as I'm using it in
cluster mode.
On Fri, Mar 23, 2018 at 6:33 AM, Fawze Abujaber wrote:
> Hi Shmuel,
>
> Did you compile the code against the right branch for Spark 1.6?
>
> I tested it and it looks to be working, and now I'm testing the branch for a
how it doesn't. Both help.
>>>
>>> Fawaze, just made a few changes to make this work with Spark 1.6. Can you
>>> please try building from branch *spark_1.6*
>>>
>>> thanks,
>>> rohitk
>>>
>>>
>>>
>>> On Thu, Ma
It's super amazing. I see it was tested on Spark 2.0.0 and above; what
about Spark 1.6, which is still part of Cloudera's main versions?
We have a vast number of Spark applications with version 1.6.0.
On Thu, Mar 22, 2018 at 6:38 AM, Holden Karau wrote:
> Super exciting! I look forward to digging throu
It's recommended to use executor-cores of 5.
Each executor here will utilize 20 GB, which means the Spark job will utilize
50 CPU cores and 100 GB of memory.
You cannot run more than 4 executors because your cluster doesn't have
enough memory.
You see 5 executors because 4 are for the job and one is for the
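The sizing arithmetic above can be sketched as follows; the cluster figures are illustrative assumptions, not numbers taken from the original thread:

```python
# Sketch of the executor-sizing arithmetic: how many executors fit in a
# cluster, limited by whichever resource (memory or cores) runs out first.
# All figures below are hypothetical examples.

def executors_that_fit(cluster_mem_gb, cluster_cores,
                       executor_mem_gb, executor_cores):
    """Number of executors the cluster can hold."""
    by_mem = cluster_mem_gb // executor_mem_gb
    by_cores = cluster_cores // executor_cores
    return min(by_mem, by_cores)

# A cluster with 100 GB and 50 cores, executors of 20 GB / 5 cores:
# memory allows 5, cores allow 10, so memory is the limiting resource.
n = executors_that_fit(100, 50, 20, 5)
print(n)  # 5
```

Note that one of those slots typically goes to the driver / application master rather than to task execution, which is why a job can report one more executor than it uses for work.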
Hi all,
I upgraded my Hadoop cluster, which includes Spark 1.6.0. I noticed that
sometimes the job runs with Scala version 2.10.5 and sometimes with
2.10.4; any idea why this is happening?
Hi Soheil,
ResourceManager and NodeManager are enough; of course you also need the
DataNode and NameNode roles to be able to access the data.
On Thu, 18 Jan 2018 at 10:12 Soheil Pourbafrani
wrote:
> I am setting up a Yarn cluster to run Spark applications on that, but I'm
> confused a bit!
>
> Con