I don't think Spark keeps historical metrics. You can use something like
SPM for that -
http://blog.sematext.com/2014/01/30/announcement-apache-storm-monitoring-in-spm/
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
I think you can use SPM - http://sematext.com/spm - it will give you all
Spark and all Kafka metrics, including offsets broken down by topic, etc.
out of the box. I see more and more people using it to monitor various
components in data processing pipelines, a la
Hi,
If you are getting ES responses back in 1-5 seconds, that's pretty slow. Are these
ES aggregation queries? Costin may be right about GC possibly causing
timeouts. SPM http://sematext.com/spm/ can give you all Spark and all
key Elasticsearch metrics, including various JVM metrics. If the problem
is
Regards
On Tue, Mar 17, 2015 at 3:26 AM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi,
I've been trying to run a simple SparkWordCount app on EC2, but it looks
like my apps are not succeeding/completing. I'm suspecting some sort of
communication issue. I used the SparkWordCount app from
http://blog.cloudera.com/blog/2014/04/how-to-run-a-simple-apache-spark-app-in-cdh-5/
Hi Josh,
SPM will show you this info. I see you use Kafka, too, whose numerous metrics
you can also see in SPM side by side with your Spark metrics. It sounds like
trends are what you're after, so I hope this helps. See http://sematext.com/spm
Otis
On Feb 24, 2015, at 11:59, Josh J
Hi,
I'll be showing our Spark monitoring
http://blog.sematext.com/2014/10/07/apache-spark-monitoring/ at the
upcoming Spark Summit in NYC. I'd like to run some/any Spark job that
really exercises Spark and makes it emit all its various metrics (so the
metrics charts are full of data and not
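For context, what a job emits is controlled by Spark's conf/metrics.properties. A minimal sketch that turns on periodic reporting (the sink and source class names are Spark's standard ones from metrics.properties.template; the 10-second period is just an example value) could look like:

```properties
# Report metrics from all instances to the console every 10 seconds
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
*.sink.console.period=10
*.sink.console.unit=seconds

# Also expose JVM metrics for the master and worker instances
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```

With something like this in place, even a simple word-count job should produce a steady stream of metrics while it runs.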
Hi Judy,
SPM monitors Spark. Here are some screenshots:
http://blog.sematext.com/2014/10/07/apache-spark-monitoring/
Otis
On Mon, Dec 8, 2014 at 2:35 AM, Judy Nash
Hi Isca,
I think SPM can do that for you:
http://blog.sematext.com/2014/10/07/apache-spark-monitoring/
Otis
On Tue, Dec 2, 2014 at 11:57 PM, Isca Harmatz
Hi everyone,
We've recently added indexing of all Spark resources to
http://search-hadoop.com/spark .
Everything is nicely searchable:
* user and dev mailing lists
* JIRA issues
* web site
* wiki
* source code
* javadoc.
Maybe it's worth adding to http://spark.apache.org/community.html ?
Enjoy!
Hi Mahsa,
Use SPM http://sematext.com/spm/. See
http://blog.sematext.com/2014/10/07/apache-spark-monitoring/ .
Otis
On Fri, Oct 31, 2014 at 1:00 PM, mahsa
Hi,
If using Ganglia is not an absolute requirement, check out SPM
http://sematext.com/spm/ for Spark --
http://blog.sematext.com/2014/10/07/apache-spark-monitoring/
It monitors all Spark metrics (i.e. you don't need to figure out what you
need to monitor, how to get it, how to graph it, etc.).
Hi,
Jerry said "I'm guessing", so maybe the thing to try is to check whether his
guess is correct.
What about running "sudo lsof | grep metrics.properties"? I imagine you
should be able to see it if the file was found and read. If Jerry is
right, then I think you will NOT see it.
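As a concrete illustration of that lsof check (a sketch only: the tail process stands in for a JVM holding its config file open, and the temp file name is made up):

```shell
# Create a stand-in for metrics.properties and hold it open,
# the way a running Spark JVM would hold its config file.
tmp=$(mktemp /tmp/metrics.properties.XXXXXX)
tail -f "$tmp" > /dev/null 2>&1 &
pid=$!
sleep 1

# If the process has the file open, lsof lists it; if Spark never
# read the file, the equivalent grep would come back empty.
if lsof -p "$pid" 2>/dev/null | grep -q "$(basename "$tmp")"; then
  echo "file is open"
else
  echo "file not found among open files"
fi

# Clean up the stand-in process and file.
kill "$pid"
rm -f "$tmp"
```

The same idea applies to the real check: grep the lsof output for metrics.properties and see whether any Spark process shows up holding it.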
Next, how about
Hi,
I'm trying to determine which Spark deployment models are the most popular
- Standalone, YARN, Mesos, or SIMR. Does anyone know?
I thought I'd use search-hadoop.com to help me figure this out, and this is
what I found:
1) Standalone
14 matches