> …encountered. Sometimes it goes further, past a shuffle dependency, before
> executing anything. E.g. if there are map steps after the shuffle, it doesn't
> stop at the shuffle to execute anything but goes on to those next map steps
> until it finds a reason (a Spark action) to execute. As a result, the stage
> that Spark is running can internally be a series of (map -> shuffle -> map ->
> map -> collect), and the Spark UI just shows it is currently running the
> 'collect' stage. So if the job fails at that point, the Spark UI just says
> collect failed, but in fact the failure could be in any stage of that chain.
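To make the behavior above concrete, here is a minimal sketch of such a chain
(the input path and the local master are placeholders): Spark builds the whole
lineage lazily, and only the final collect() triggers execution, so the UI
attributes the run to that last stage.

import org.apache.spark.{SparkConf, SparkContext}

object LazyEvalDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("lazy-eval-demo").setMaster("local[2]"))

    // None of these transformations executes anything yet.
    val words  = sc.textFile("input.txt").flatMap(_.split("\\s+")) // map side
    val counts = words.map((_, 1)).reduceByKey(_ + _)              // shuffle dependency
    val upper  = counts.map { case (w, n) => (w.toUpperCase, n) }  // map after the shuffle

    // The action triggers the whole chain; the UI reports progress and
    // failures against this final stage even if the root cause is upstream.
    val result = upper.collect()
    println(result.take(10).mkString(", "))
    sc.stop()
  }
}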
Hi,
I have been noticing that for shuffled tasks (groupBy, join) the reducer tasks
are not evenly loaded. Most of them (90%) finish super fast, but there are
some outliers that take much longer, as you can see from the "Max" value in
the following metric. The metric is from a join operation done on two RDDs.
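One way to confirm this kind of skew from code rather than the UI is to count
records per partition after the join. A quick sketch, assuming an existing
SparkContext `sc`; the hypothetical "hot" key and the toy inputs just simulate
a skewed join:

// Toy inputs simulating a skewed join: one hot key plus many small ones.
val left  = sc.parallelize(Seq.fill(100000)(("hot", 1)) ++ (1 to 1000).map(i => (s"k$i", 1)))
val right = sc.parallelize((1 to 1000).map(i => (s"k$i", 1)) :+ ("hot", 1))

val joined = left.join(right)

// Count records per partition; a slow reducer shows up as one huge partition.
val sizes = joined
  .mapPartitionsWithIndex((idx, it) => Iterator((idx, it.size)))
  .collect()

sizes.sortBy(-_._2).take(5).foreach { case (idx, n) =>
  println(s"partition $idx: $n records")
}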
The Spark UI only shows lines belonging to the py4j lib and doesn't show
actual Python line numbers.
Thanks,
Chaitanya
If an RDD is cached, this RDD is only computed once, and the stages for
computing this RDD in the following jobs are skipped.

On Wed, Mar 16, 2016 at 8:14 AM, Prabhu Joseph <prabhujose.ga...@gmail.com> wrote:
Hi All,
The Spark UI Completed Jobs section shows the information below; what is the
"skipped" value shown for Stages and Tasks?
Job_ID | Description | Submitted | Duration | Stages (Succeeded/Total) | Tasks (for all stages): Succeeded/Total
11 | count | 2016/03
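The caching behavior that produces those "skipped" counts is easy to
reproduce; a minimal sketch, assuming an existing SparkContext `sc` and a
placeholder input path:

val base = sc.textFile("data.txt")
  .map(line => (line.length, line))
  .reduceByKey((a, b) => a)   // introduces a shuffle stage
  .cache()

base.count()  // Job 0: every stage runs, and the result is cached
base.count()  // Job 1: the upstream stages are reported as "skipped" in the UI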
Hi Teng,
I was not asking the question; I was describing what to expect from the
Spark UI depending on how you start your Spark application.
Thanks and Regards,
Gourav
On Tue, Mar 1, 2016 at 8:30 PM, Teng Qiu <teng...@gmail.com> wrote:
> as Gourav said, the application UI on
that" :)
2016-03-01 21:13 GMT+01:00 Gourav Sengupta <gourav.sengu...@gmail.com>:
Hi,
In case you are submitting your Spark jobs, the UI is only available while
the job is running.
If instead you are starting a Spark cluster in standalone mode, or on Hadoop,
etc., then the Spark UI remains alive.
The other way to keep the Spark UI alive is to use the Jupyter notebook for
Python
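Another option for inspecting a job after it finishes is event logging plus
the history server; a hedged sketch (the local event-log directory is a
placeholder and must exist before the application starts):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ui-after-the-fact")
  .set("spark.eventLog.enabled", "true")                 // record UI events as the job runs
  .set("spark.eventLog.dir", "file:///tmp/spark-events") // placeholder; HDFS paths work too

val sc = new SparkContext(conf)

With spark.history.fs.logDirectory pointed at the same directory,
sbin/start-history-server.sh then serves the finished UIs (on port 18080 by
default).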
Date: March 1 (Tuesday), 8:02 AM
> To: "Sumona Routh" <sumos...@gmail.com>
> Cc: "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: Spark UI standalone "crashes" after an application finishes
>
> Do you mean you cannot access Master UI after your app
Sent: February 29, 2016 4:03 PM
To: Sumona Routh
Cc: user@spark.apache.org
Subject: Re: Spark UI standalone "crashes" after an application finishes
Do you mean you cannot access Master UI after your application completes? Could
you check the master log?
On Mon, Feb 29, 2016 at 3:48 PM, Sumo
Do you mean you cannot access Master UI after your application completes?
Could you check the master log?
On Mon, Feb 29, 2016 at 3:48 PM, Sumona Routh wrote:
Hi there,
I've been doing some performance tuning of our Spark application, which is
using Spark 1.2.1 standalone. I have been using the spark metrics to graph
out details as I run the jobs, as well as the UI to review the tasks and
stages.
I notice that after my application completes, or is near
Hi Jeff,
The issue was with the EC2 logs view.
I had to set up SSH tunnels to view the currently running job.
Thanks,
Divya
On 24 February 2016 at 10:33, Jeff Zhang <zjf...@gmail.com> wrote:
Viewing a running job in the Spark UI doesn't matter which master you use.
What do you mean by "I can't see the currently running jobs in Spark WEB UI"?
Do you see a blank Spark UI, or can't you open the Spark UI at all?
On Mon, Feb 15, 2016 at 12:55 PM, Sabarish Sasidharan <
sabarish.sasidha...@manth
Hi Sparklers,
Can you give us elaborate documentation of the Spark UI? There are many
fields in it and we do not know much about them.
When running in YARN, you can use the YARN Resource Manager UI to get to
the ApplicationMaster url, irrespective of client or cluster mode.
Regards
Sab
On 15-Feb-2016 10:10 am, "Divya Gehlot" wrote:
> Hi,
> I have Hortonworks 2.3.4 cluster on EC2 and Have spark jobs as
Hi,
I have a Hortonworks 2.3.4 cluster on EC2 and have Spark jobs as Scala files.
I am a bit confused between the *master* options.
I want to execute this Spark job on YARN.
Currently running as:
spark-shell --properties-file /TestDivya/Spark/Oracle.properties --jars
In the Spark UI, the workers' used memory shows a negative number, as in the
following picture:
Spark version: 1.4.0
How can I solve this problem? I'd appreciate your help!
According to https://spark.apache.org/docs/latest/security.html#web-ui ,
web UI is covered.
FYI
On Thu, Jan 7, 2016 at 6:35 AM, Kostiantyn Kudriavtsev <
kudryavtsev.konstan...@gmail.com> wrote:
> hi community,
>
> do I understand correctly that spark.ui.filters property sets up filters
> only
Can I do it without Kerberos and Hadoop?
Ideally using filters, as for the job UI.
On Jan 7, 2016, at 1:22 PM, Prem Sure wrote:
> you can refer more on https://searchcode.com/codesearch/view/97658783/
>
Without kerberos you don't have true security.
Cheers
I know, but I only need to hide/protect the web UI, at least with the servlet
filter API.
On Jan 7, 2016, at 4:59 PM, Ted Yu wrote:
> Without kerberos you don't have true security.
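For what is being asked here — protecting the web UI with the servlet filter
API — Spark's spark.ui.filters property accepts a comma-separated list of
javax.servlet.Filter class names. A minimal sketch of a Basic Auth filter,
assuming Java 8 for java.util.Base64; the class name, package, and hard-coded
credentials are illustrative only:

import java.util.Base64
import javax.servlet._
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

// Wire in with: spark.ui.filters=com.example.BasicAuthFilter
class BasicAuthFilter extends Filter {
  // Hard-coded credentials, for illustration only.
  private val expected =
    "Basic " + Base64.getEncoder.encodeToString("admin:secret".getBytes("UTF-8"))

  override def init(config: FilterConfig): Unit = ()

  override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
    val httpReq = req.asInstanceOf[HttpServletRequest]
    val httpRes = res.asInstanceOf[HttpServletResponse]
    if (expected == httpReq.getHeader("Authorization")) {
      chain.doFilter(req, res) // credentials match: pass the request through to the UI
    } else {
      httpRes.setHeader("WWW-Authenticate", "Basic realm=\"Spark UI\"")
      httpRes.sendError(HttpServletResponse.SC_UNAUTHORIZED)
    }
  }

  override def destroy(): Unit = ()
}

As Ted notes, this hides the UI but is not real security: the credentials
travel base64-encoded unless the UI is also behind TLS.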
> On Jan 4, 2016, at 10:05 PM, Prasad Ravilla <pras...@slalom.com> wrote:
>>
>> I am seeing negative active tasks in the Spark UI.
>>
>> Is anyone seeing this?
>> How is this possible?
>>
>> Thanks,
>> Prasad.
Which version of Spark do you use ?
This might be related:
https://issues.apache.org/jira/browse/SPARK-8560
Do you use dynamic allocation ?
Cheers
> On Jan 4, 2016, at 10:05 PM, Prasad Ravilla <pras...@slalom.com> wrote:
>
> I am seeing negative active tasks in the Spark UI.
Hi,
We tried to get the streaming tab interface on Spark UI -
https://databricks.com/blog/2015/07/08/new-visualizations-for-understanding-spark-streaming-applications.html
Tested on version 1.5.1, 1.6.0-snapshot, but no such interface for
streaming applications at all. Any suggestions? Do we
…and has been in there since at least 1.0.
HTH,
Duc
On Fri, Dec 4, 2015 at 7:28 AM, patcharee <patcharee.thong...@uni.no> wrote:
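For reference, the Streaming tab only appears while the application has an
active StreamingContext. A minimal sketch, assuming something is writing to
localhost:9999 (e.g. `nc -lk 9999`):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("streaming-tab-demo").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(5))

// A trivial stream, just enough to drive batches through the UI.
val lines = ssc.socketTextStream("localhost", 9999)
lines.map(_.length).print()

ssc.start()            // the "Streaming" tab shows up once the context starts
ssc.awaitTermination()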
Hi,
I wonder what "write time" means exactly.
I run GraphX workloads and noticed that the main bottleneck in most stages is
that one or two tasks take too long in "write time" and delay the whole job.
Enabling speculation helps a little, but I am still interested in knowing how
to fix that.
I use
ion. I am also using Spark Standalone
>>>>> cluster
>>>>> manager so have not had to use the history server.
>>>>>
>>>>>
>>>>> On Mon, Oct 12, 2015 at 8:17 PM, Shixiong Zhu <zsxw...@gmail.com>
>>>>> w
Hello,
I am wondering if anyone else is also facing this issue:
https://issues.apache.org/jira/browse/SPARK-11147
On Mon, Oct 12, 2015 at 8:17 PM, Shixiong Zhu <zsxw...@gmail.com> wrote:

> Could you show how you set the configurations? You need to set these
> configurations before creating SparkContext and SQLContext.
>
> Moreover, the history server doesn't support the SQL UI, so
> "spark.eventLog.enabled=true" doesn't work now.
Hi,
In my application, the Spark UI is consuming a lot of memory, especially the
SQL tab. I have set the following configurations to reduce the memory
consumption:
- spark.ui.retainedJobs=20
- spark.ui.retainedStages=40
- spark.sql.ui.retainedExecutions=0
However, I still get OOM errors
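A sketch of applying Shixiong's advice — the retention settings only take
effect when they are in place before the contexts are constructed (the values
here are the ones from the question):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val conf = new SparkConf()
  .setAppName("ui-memory-capped")
  .set("spark.ui.retainedJobs", "20")
  .set("spark.ui.retainedStages", "40")
  .set("spark.sql.ui.retainedExecutions", "0")

// Construct the contexts only after the settings above are applied.
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)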
Hi All,
I am having a problem accessing the Spark UI while running in spark-client
mode. It works fine in local mode.
It keeps redirecting back to itself by adding /null at the end, ultimately
running over the URL size limit and returning 500. Look at the response
below.
I have a feeling that I might be missing
…investigated it myself yet, but if you instead go to the YARN
ResourceManager UI (port 8088 if you are using emr-4.x; port 9026 for 3.x, I
believe), then you should be able to click on the ApplicationMaster link (or
the History link for completed applications) to get to the Spark UI from
there. The ApplicationMaster link will use the YARN Proxy Service (port 20888
on emr-4.x; not sure about 3.x) to proxy through to the Spark application's
UI, regardless of what port it's running on. For completed applications, the
History link
SUCCESS! I set SPARK_DNS_HOME=ec2_publicdns, which makes it available to
access the Spark UI directly. The application proxy was still getting in the
way by the way it creates the URL, so I manually filled in the
/stage?id=#attempt=# and that worked. I'm still having trouble with the CSS
as the UI looks horrid, but I'll tackle that next :)
So, how can I access the Spark UI when running a spark shell on AWS YARN?
Running Spark on YARN in yarn-client mode, everything appears to be working
just fine except I can't access the Spark UI via the Application Master link
in the YARN UI, or directly at http://driverip:4040/jobs. I get error 500,
and the driver log shows the error pasted below.
When running the same job
Thanks!
I was getting a little confused by this partitioner business, I thought
that by default a pairRDD would be partitioned by a HashPartitioner? Was
this possibly the case in 0.9.3 but not in 1.x?
In any case, I tried your suggestion and the shuffle was removed. Cheers.
One small question
Hi.
I have an RDD that I use repeatedly through many iterations of an
algorithm. To prevent recomputation, I persist the RDD (and incidentally I
also persist and checkpoint its parents):
val consCostConstraintMap = consCost.join(constraintMap).map {
  case (cid, (costs, (mid1, _, mid2, _, _))) =>
    // ... (body truncated in the original message)
}
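The fix discussed above — pre-partitioning both sides so the join stops
shuffling — would look roughly like this sketch (the partition count of 64 is
illustrative):

import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(64) // illustrative partition count

// Pre-partition and persist both sides with the same partitioner; the join
// can then reuse the co-located partitions instead of reshuffling each iteration.
val consCostPart      = consCost.partitionBy(partitioner).persist()
val constraintMapPart = constraintMap.partitionBy(partitioner).persist()

val joined = consCostPart.join(constraintMapPart) // no extra shuffle stage in the UI

On the partitioner confusion: a pair RDD only has a partitioner after an
operation that sets one (partitionBy, reduceByKey, etc.); it is not
hash-partitioned merely by being a pair RDD, which is why the explicit
partitionBy removes the shuffle.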
Is there a way to tunnel the Spark UI?
I tried to tunnel client-node:4040, but my browser was redirected from
localhost to some domain name only visible inside the cluster.
Maybe there is some startup option to make the Spark UI fully accessible
through a single endpoint (address:port)?
Serg
Did you try ssh -L 4040:127.0.0.1:4040 user@host
Thanks
Best Regards
Hi,
1. When I run my application with --master yarn-cluster, or --master yarn
--deploy-mode cluster, I cannot see the Spark UI at masternode:4040, even
while the job is running.
2. When I run with --master yarn --deploy-mode client, I see
Thanks a lot for the help
-AJ
On Mon, Mar 2, 2015 at 3:50 PM, Marcelo Vanzin van...@cloudera.com wrote:

What are you calling masternode? In yarn-cluster mode, the driver is running
somewhere in your cluster, not on the machine where you run spark-submit.

The easiest way to get to the Spark UI when using YARN is to use the YARN
RM's web UI. That will give you a link to the application's UI regardless of
whether it's running in client or cluster mode.
Hi All,
Is there a way to disable the Spark UI? What I really need is to stop the
startup of the Jetty server.
--
Thanks & regards,
Nirmal
Senior Software Engineer- Platform Technologies Team, WSO2 Inc.
Mobile: +94715779733
Blog: http://nirmalfdo.blogspot.com/
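One way to do this, assuming a Spark version where the spark.ui.enabled
setting is honored — a minimal sketch:

import org.apache.spark.{SparkConf, SparkContext}

// With spark.ui.enabled=false, the web UI (and its embedded Jetty server)
// is never started for this application.
val conf = new SparkConf()
  .setAppName("headless-app")
  .set("spark.ui.enabled", "false")

val sc = new SparkContext(conf)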
I'm deploying Spark using the Click to Deploy Hadoop - Install Apache
Spark on Google Compute Engine.
I can run Spark jobs on the REPL and read data from Google storage.
However, I'm not sure how to access the Spark UI in this deployment. Can
anyone help?
Also, it deploys Spark 1.1
Hello,
The accumulator documentation says that if the accumulator is named, it will
be displayed in the WebUI. However, I cannot find it anywhere.
Do I need to specify anything in the Spark UI config?
Thanks.
Justin
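For reference, a minimal sketch using the Spark 1.x API of this era, assuming
a Spark version where the display is implemented (the input path is a
placeholder). A named accumulator shows up in the accumulators table on the
detail page of the stages that update it, not on a dedicated tab:

// The name passed here is what appears in the stage detail page of the UI.
val errorCount = sc.accumulator(0L, "errorCount")

sc.textFile("logs.txt").foreach { line =>
  if (line.contains("ERROR")) errorCount += 1L
}

println(s"errors: ${errorCount.value}")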
Sorry- replace ### with an actual number. What does a skipped stage mean?
I'm running a series of jobs and it seems like after a certain point, the
number of skipped stages is larger than the number of actual completed
stages.
On Wed, Jan 7, 2015 at 3:28 PM, Ted Yu yuzhih...@gmail.com wrote:
+Josh, who added the Job UI page.
I've seen this as well and was a bit confused about what it meant. Josh, is
there a specific scenario that creates these skipped stages in the Job UI ?
Thanks
Shivaram
On Wed, Jan 7, 2015 at 12:32 PM, Corey Nolet cjno...@gmail.com wrote:
Looks like the number of skipped stages couldn't be formatted.
Cheers
On Wed, Jan 7, 2015 at 12:08 PM, Corey Nolet cjno...@gmail.com wrote:
We just upgraded to Spark 1.2.0 and we're seeing this in the UI.
That's what you want to see. The computation of a stage is skipped if the
results for that stage are still available from the evaluation of a prior
job run:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala#L163
On Wed, Jan 7, 2015
Hi Manoj,
I've noticed that the storage tab only shows RDDs that have been cached.
Did you call .cache() or .persist() on any of the RDDs?
Andrew
On Tue, Jan 6, 2015 at 6:48 PM, Manoj Samel manojsamelt...@gmail.com
wrote:
Hi,
I create a bunch of RDDs, including schema RDDs. When I run the program and
go to the UI on xxx:4040, the storage tab does not show any RDDs.
Spark version is 1.1.1 (Hadoop 2.3)
Any thoughts?
Thanks,
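As Andrew notes above, only persisted RDDs appear there. A minimal sketch
(the input path is a placeholder), including a programmatic check that
mirrors the Storage tab:

val rdd = sc.textFile("data.txt").map(_.toUpperCase).cache()

rdd.count() // an action must run before any blocks are actually stored

// Programmatic view of the same information the Storage tab shows;
// only persisted RDDs are listed here.
sc.getRDDStorageInfo.foreach { info =>
  println(s"${info.name}: ${info.numCachedPartitions}/${info.numPartitions} partitions cached")
}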
Hi,
The Spark documentation states that If accumulators are created with a
name, they will be displayed in Spark’s UI
http://spark.apache.org/docs/latest/programming-guide.html#accumulators
Where exactly are they shown? I may be dense, but I can't find them on the
UI from http://localhost:4040
Hello folks,
I'm trying to deploy a Spark driver on Amazon EMR in yarn-cluster mode
expecting to be able to access the Spark UI from the spark-master-ip:4040
address (default port). The problem here is that the Spark UI port is
always defined randomly at runtime, although I also tried to specify
It shows the amount of memory used to store RDD blocks, which are created
when you run .cache()/.persist() on an RDD.
On Wed, Oct 22, 2014 at 10:07 PM, Haopu Wang hw...@qilinsoft.com wrote:
From: Patrick Wendell [mailto:pwend...@gmail.com]
Sent: October 23, 2014 14:00
To: Haopu Wang
Cc: user
Subject: Re: About Memory usage in the Spark UI
The memory usage of blocks of data received through Spark Streaming is not
reflected in the Spark UI. It only shows the memory usage due to cached
RDDs.
I didn't find a JIRA for this, so I opened a new one.
https://issues.apache.org/jira/browse/SPARK-4072
TD
On Thu, Oct 23, 2014 at 12:47 AM
Hi, please take a look at the attached screenshot. I wonder what the
"Memory Used" column means.
I give 2GB memory to the driver process and 12GB memory to the executor
process.
Thank you!
I set up the Spark port to a different one and the connection seems
successful, but I get a 302 to /proxy on port 8100? Nothing is listening on
that port either.
…Then, if you are running standalone mode, you can access the finished
SparkUI through the Master UI. Otherwise, you can start a HistoryServer to
display finished UIs.
-Andrew
2014-09-25 12:55 GMT-07:00 Harsha HN 99harsha.h@gmail.com:
Hi,
The details laid out in the Spark UI for the job in progress are really
interesting and very useful.
But they vanish once the job is done.
Is there a way to get the job details after processing?
I'm looking for the Spark UI data, not the standard input, output, and error info.
Thanks,
Harsha
My YARN environment does have less memory for the executors.
I am checking whether the RDDs are cached by calling sc.getRDDStorageInfo, which
shows an RDD as fully cached in memory, yet it does not show up in the UI.
On Sun, Jul 13, 2014 at 1:49 AM, Matei Zaharia matei.zaha...@gmail.com
wrote:
The
Hi Koert,
Just curious: did you find any information like CANNOT FIND ADDRESS after
clicking into some stage? I've seen similar problems due to loss of
executors.
Best,
On Fri, Jul 11, 2014 at 4:42 PM, Koert Kuipers ko...@tresata.com wrote:
I just tested a long lived application (that we
Hey Shuo,
So far all stage links work fine for me.
I did some more testing, and it seems kind of random what shows up on the
GUI and what does not. Some partially cached RDDs make it to the GUI, while
some fully cached ones do not. I have not been able to detect a pattern.
Is the codebase for the UI the same in both?
The UI code is the same in both, but one possibility is that your executors
were given less memory on YARN. Can you check that? Or otherwise, how do you
know that some RDDs were cached?
Matei
On Jul 12, 2014, at 4:12 PM, Koert Kuipers ko...@tresata.com wrote:
hey shuo,
so far all stage