Re: Spark metrics when running with YARN?

2016-08-30 Thread Vijay Kiran
Hi Otis,

Did you check the REST API as documented at
http://spark.apache.org/docs/latest/monitoring.html ?
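
For instance, against a running application (host and application ID below are
placeholders):

  curl http://<driver-host>:4040/api/v1/applications
  curl http://<driver-host>:4040/api/v1/applications/<app-id>/executors

For applications on YARN, the same /api/v1 endpoints are also served by the
Spark history server, on port 18080 by default.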

Regards,
Vijay

> On 30 Aug 2016, at 14:43, Otis Gospodnetić <otis.gospodne...@gmail.com> wrote:
> 
> Hi Mich and Vijay,
> 
> Thanks!  I forgot to include an important bit - I'm looking for a 
> programmatic way to get Spark metrics when running Spark under YARN - so JMX 
> or API of some kind.
> 
> Thanks,
> Otis
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
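
A minimal sketch of the JMX option, assuming the standard metrics.properties
mechanism (nothing here is specific to YARN):

  # metrics.properties: enable the built-in JMX sink for all instances
  # (driver, executors, master, worker)
  *.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

Shipping the file with --files metrics.properties and pointing Spark at it with
--conf spark.metrics.conf=metrics.properties makes the metrics visible to any
JMX client attached to the driver and executor JVMs.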
> 
> 
> On Tue, Aug 30, 2016 at 6:59 AM, Mich Talebzadeh <mich.talebza...@gmail.com> 
> wrote:
> The Spark UI, regardless of deployment mode (Standalone, YARN, etc.), runs on 
> port 4040 by default and can be accessed directly
> 
> Otherwise one can specify a specific port with --conf "spark.ui.port=<port>"
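
For example, at submit time (the class and jar names here are hypothetical):

  spark-submit --conf spark.ui.port=4041 --class com.example.MyApp my-app.jar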
> 
> HTH
> 
> Dr Mich Talebzadeh
>  
> LinkedIn  
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  
> http://talebzadehmich.wordpress.com
> 
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
> damage or destruction of data or any other property which may arise from 
> relying on this email's technical content is explicitly disclaimed. The 
> author will in no case be liable for any monetary damages arising from such 
> loss, damage or destruction.
>  
> 
> On 30 August 2016 at 11:48, Vijay Kiran <m...@vijaykiran.com> wrote:
> 
> From the YARN RM UI, find the Spark application ID, and in the application 
> details, click on the “Tracking URL”, which should take you to the Spark 
> UI.
> 
> ./Vijay
> 
> > On 30 Aug 2016, at 07:53, Otis Gospodnetić <otis.gospodne...@gmail.com> 
> > wrote:
> >
> > Hi,
> >
> > When Spark is run on top of YARN, where/how can one get Spark metrics?
> >
> > Thanks,
> > Otis
> > --
> > Monitoring - Log Management - Alerting - Anomaly Detection
> > Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> >
> 
> 





Re: Spark metrics when running with YARN?

2016-08-30 Thread Vijay Kiran

From the YARN RM UI, find the Spark application ID, and in the application details, 
click on the “Tracking URL”, which should take you to the Spark UI.

./Vijay
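
The same information is available from the command line; a sketch (the
application ID below is a placeholder):

  # Lists running applications, including each one's Tracking-URL
  yarn application -list
  # Or query a single application directly
  yarn application -status application_1472512345678_0001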

> On 30 Aug 2016, at 07:53, Otis Gospodnetić wrote:
> 
> Hi,
> 
> When Spark is run on top of YARN, where/how can one get Spark metrics?
> 
> Thanks,
> Otis
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> 





Re: ROSE: Spark + R on the JVM.

2016-01-12 Thread Vijay Kiran
I think it would be this: https://github.com/onetapbeyond/opencpu-spark-executor

> On 12 Jan 2016, at 18:32, Corey Nolet wrote:
> 
> David,
> 
> Thank you very much for announcing this! It looks like it could be very 
> useful. Would you mind providing a link to the github?
> 
> On Tue, Jan 12, 2016 at 10:03 AM, David wrote:
> Hi all,
> 
> I'd like to share news of the recent release of a new Spark package, ROSE. 
> 
> ROSE is a Scala library offering access to the full scientific computing 
> power of the R programming language to Apache Spark batch and streaming 
> applications on the JVM. Where Apache SparkR lets data scientists use Spark 
> from R, ROSE is designed to let Scala and Java developers use R from Spark. 
> 
> The project is available and documented on GitHub and I would encourage you 
> to take a look. Any feedback, questions etc very welcome.
> 
> David
> 
> "All that is gold does not glitter, Not all those who wander are lost."
> 






Re: Fat jar can't find jdbc

2015-12-22 Thread Vijay Kiran
Can you paste your libraryDependencies from build.sbt?

./Vijay
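
For comparison, a minimal sketch of the lines that usually cover this case
(versions are illustrative, matching the Spark 1.x era of this thread):

  libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-sql" % "1.5.2" % "provided",
    "mysql" % "mysql-connector-java" % "5.1.38"
  )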

> On 22 Dec 2015, at 06:12, David Yerrington wrote:
> 
> Hi Everyone,
> 
> I'm building a prototype that fundamentally grabs data from a MySQL instance, 
> crunches some numbers, and then moves it on down the pipeline. I've been 
> using SBT with the sbt-assembly plugin to build a single jar for deployment.
> 
> I've gone through the paces of stomping out many dependency problems and have 
> come down to one last (hopefully) zinger.  
> 
> java.lang.ClassNotFoundException: Failed to load class for data source: jdbc.
>   at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:67)
>   at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:87)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
>   at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1203)
>   at her.recommender.getDataframe(her.recommender.scala:45)
>   at her.recommender.getRecommendations(her.recommender.scala:60)
> 
> I'm assuming this has to do with mysql-connector, because this is the problem 
> I run into when I'm working with spark-shell and forget to put my 
> mysql-connector jar on the classpath.
> 
> I've tried:
>   • Using different versions of mysql-connector-java in my build.sbt file
>   • Copying the connector jar to my_project/src/main/lib
>   • Copying the connector jar to my_project/lib <-- (this is where I keep 
> my build.sbt)
> Everything loads fine and works, except my call that does 
> "sqlContext.load("jdbc", myOptions)".  I know this is a total newbie question 
> but in my defense, I'm fairly new to Scala, and this is my first go at 
> deploying a fat jar with sbt-assembly.
> 
> Thanks for any advice!
> 
> -- 
> David Yerrington
> yerrington.net
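
A common workaround, assuming the missing driver jar is the cause (not a
confirmed diagnosis): keep the MySQL connector out of the assembly and hand it
to spark-submit instead, e.g.

  spark-submit --jars /path/to/mysql-connector-java-5.1.38.jar \
    --class her.recommender my-assembly.jar

where my-assembly.jar is a placeholder for the sbt-assembly output. --jars
distributes the connector with the job; depending on deploy mode,
--driver-class-path may also be needed on the driver side.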

