How can I get the application belonging to a driver?

2016-12-26 Thread John Fang
I would like to get the application by its driverId, but I can't find a REST API in Spark for this. How can I get the application that belongs to a given driver?
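There is indeed no documented endpoint that maps a driverId to its application. As a workaround, here is a minimal sketch that lists all applications through Spark's monitoring REST API and leaves the correlation to the client; the host and port are assumptions for a default local setup:

    import scala.io.Source

    // The monitoring REST API lists all known applications as JSON.
    // Port 18080 is the history server's default; a live application
    // serves the same API on its own UI port (4040 by default).
    val apps = Source.fromURL("http://localhost:18080/api/v1/applications").mkString
    println(apps)  // match application ids against your driver client-side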

Spark GraphX with Database

2016-12-26 Thread balaji9058
Hi All, I would like to know about Spark GraphX execution/processing with a database. Yes, I understand Spark GraphX is in-memory processing, and to some extent we can manage querying, but I would like to do much more complex queries or processing. Please suggest a use case or steps for this.
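A common starting point is to load edges from a relational table through the JDBC data source and build the graph from them. A minimal sketch, with hypothetical connection details and table layout (src, dst, weight columns):

    import org.apache.spark.graphx.{Edge, Graph}
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("graphx-jdbc").getOrCreate()

    // Hypothetical "edges" table with columns src, dst, weight.
    val edgesDF = spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://dbhost:5432/mydb")  // assumed URL
      .option("dbtable", "edges")
      .option("user", "user")
      .option("password", "secret")
      .load()

    val edges = edgesDF.rdd.map(r =>
      Edge(r.getAs[Long]("src"), r.getAs[Long]("dst"), r.getAs[Double]("weight")))

    // Build the graph; vertices not listed explicitly get the default attribute.
    val graph = Graph.fromEdges(edges, defaultValue = 1)
    println(s"edges: ${graph.numEdges}")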

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Chawla,Sumit
What is the expected effect of reducing mesosExecutor.cores to zero? What functionality of the executor is impacted? Is the impact just that it behaves like a regular process? Regards Sumit Chawla On Mon, Dec 26, 2016 at 9:25 AM, Michael Gummelt wrote:

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Jacek Laskowski
Thanks a LOT, Michael! Pozdrawiam, Jacek Laskowski https://medium.com/@jaceklaskowski/ Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark Follow me at https://twitter.com/jaceklaskowski On Mon, Dec 26, 2016 at 10:04 PM, Michael Gummelt wrote:

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt
In fine-grained mode (which is deprecated), Spark tasks (which are threads) were implemented as Mesos tasks. When a Mesos task starts and stops, its underlying cgroup, and therefore the resources it is consuming on the cluster, grows or shrinks based on the resources allocated to the tasks, which is what lets CPU usage grow and shrink elastically.
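For reference, the settings under discussion, as a sketch: fine-grained mode is selected by disabling coarse-grained mode, and spark.mesos.mesosExecutor.cores is the number of cores the executor keeps for itself on top of its tasks:

    import org.apache.spark.SparkConf

    // Fine-grained Mesos mode (deprecated): each Spark task runs as a Mesos task.
    val conf = new SparkConf()
      .set("spark.mesos.coarse", "false")
      // Cores the executor holds even while running no tasks;
      // the thread discusses setting this to 0.
      .set("spark.mesos.mesosExecutor.cores", "0")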

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Jacek Laskowski
Hi Michael, That caught my attention... Could you please elaborate on "elastically grow and shrink CPU usage" and how it really works under the covers? It seems that CPU usage is just a "label" for an executor on Mesos. Where's this in the code? Pozdrawiam, Jacek Laskowski

Re: Spark Storage Tab is empty

2016-12-26 Thread Jacek Laskowski
Hi David, Can you use persist instead? Perhaps with some other StorageLevel? It works with the Spark 2.2.0-SNAPSHOT I use; I don't remember how it behaved back in 1.6.2. You could also check the Executors tab and see how many blocks you have in their BlockManagers. Pozdrawiam, Jacek Laskowski
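For example, a minimal sketch of that suggestion, using MEMORY_AND_DISK as one possible alternative level (cache() is shorthand for persist with MEMORY_ONLY):

    import org.apache.spark.storage.StorageLevel

    val myrdd = sc.parallelize(1 to 100)
    myrdd.setName("my_rdd")
    // Explicit storage level instead of cache()
    myrdd.persist(StorageLevel.MEMORY_AND_DISK)
    myrdd.collect()  // materializes the RDD so blocks can show up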

Spark Storage Tab is empty

2016-12-26 Thread David Hodeffi
I have tried the following code but didn't see anything on the Storage tab:

    val myrdd = sc.parallelize(1 to 100)
    myrdd.setName("my_rdd")
    myrdd.cache()
    myrdd.collect()

The Storage tab is empty, though I can see the stage of collect(). I am using 1.6.2, HDP 2.5, Spark on YARN. Thanks David
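One way to check whether the RDD was registered as persistent at all, a small sketch using the SparkContext API:

    // RDDs marked as persistent in this SparkContext, keyed by RDD id;
    // an empty map means nothing is cached.
    sc.getPersistentRDDs.foreach { case (id, rdd) =>
      println(s"id=$id name=${rdd.name} level=${rdd.getStorageLevel}")
    }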

[Spark 2.0.2 HDFS]: no data locality

2016-12-26 Thread Karamba
Hi, I am running a couple of Docker hosts, each with an HDFS node and a Spark worker in a Spark standalone cluster. In order to get data-locality awareness, I would like to configure racks for each host, so that a Spark worker container knows from which HDFS node container it should load its data.
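On the HDFS side, rack awareness is usually configured with a topology script. A sketch, where the property name is Hadoop's standard one but the script path, subnets, and rack names are assumptions for this Docker setup:

    <!-- core-site.xml -->
    <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/conf/topology.sh</value>
    </property>

with a matching script that maps each address to a rack:

    #!/bin/bash
    # topology.sh: Hadoop passes one or more IPs/hostnames as arguments
    # and expects one rack path per argument on stdout.
    for host in "$@"; do
      case "$host" in
        172.18.0.*) echo /rack1 ;;
        172.19.0.*) echo /rack2 ;;
        *)          echo /default-rack ;;
      esac
    done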

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt
> Using 0 for spark.mesos.mesosExecutor.cores is better than dynamic allocation

Maybe for CPU, but definitely not for memory. Executors never shut down in fine-grained mode, which means you only elastically grow and shrink CPU usage, not memory. On Sat, Dec 24, 2016 at 10:14 PM, Davies Liu wrote:
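For comparison, the dynamic-allocation alternative being discussed, as a sketch; on Mesos it also requires the external shuffle service to be running on each agent:

    import org.apache.spark.SparkConf

    // Coarse-grained mode with dynamic allocation: idle executors are
    // released, returning both CPU and memory to the cluster.
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")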