Hi,
I'm working on an RDD of tuples of objects that represent trees (a Node
containing a HashMap of child Nodes). I'm trying to aggregate these trees
over the RDD.
Let's take an example with two graphs:
C - D - B - A - D - B - E
F - E - B - A - D - B - F
I'm splitting each graph according to the vertex A re
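Since the post is cut off, here is only a minimal sketch of the combine step such an aggregation would need, assuming a Node holds a HashMap from child label to child Node (the names `Node`, `label`, and `merge` are placeholders, not taken from the post):

```scala
// Hypothetical tree shape: a label plus a map from child label to child Node.
// Each Node() call gets its own fresh HashMap (default args are re-evaluated).
case class Node(label: String,
                children: scala.collection.mutable.HashMap[String, Node] =
                  scala.collection.mutable.HashMap.empty)

// Merge t2 into t1: children with the same label are merged recursively,
// children only present in t2 are attached as-is. Returns t1 for chaining.
def merge(t1: Node, t2: Node): Node = {
  for ((label, child2) <- t2.children) {
    t1.children.get(label) match {
      case Some(child1) => merge(child1, child2)
      case None         => t1.children += (label -> child2)
    }
  }
  t1
}
```

With Spark, a function like this could serve as the combine operation, e.g. something along the lines of `rdd.map(_._2).reduce(merge)`; the exact call depends on the tuple shape, which the post doesn't show.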
Hi,
I don't have any history server running. As SK already pointed out in a
previous post, the history server seems to be required only in Mesos or YARN
mode, not in standalone mode.
https://spark.apache.org/docs/1.1.1/monitoring.html
"If Spark is run on Mesos or YARN, it is still possible to recons
Hi,
I have a similar problem. I want to see the detailed logs of Completed
Applications, so I've set in my program:
set("spark.eventLog.enabled","true").
set("spark.eventLog.dir","file:/tmp/spark-events")
but when I click on the application in the web UI, I get a page with the
message:
Application
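For what it's worth, the same two properties can also be set outside the program, in conf/spark-defaults.conf, which avoids rebuilding the application when tweaking event logging:

```
spark.eventLog.enabled  true
spark.eventLog.dir      file:/tmp/spark-events
```

Note that the directory has to exist and be readable by the process serving the web UI.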
Hi everyone,
I'm writing a program that updates a Cassandra table.
I've written a first version where I update the table row by row from an RDD
through a map.
Now I want to build a batch of updates using the same kind of syntax as in
this thread:
https://groups.google.com/forum/#!msg/spark-users/LUb
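The linked thread is cut off here, so this is only a sketch of the usual batching pattern rather than that thread's exact code: instead of one write per row in a `map`, group rows into fixed-size batches inside each partition. `Update` and `executeBatch` are hypothetical stand-ins for the real row type and Cassandra call:

```scala
// Hypothetical row type standing in for whatever the RDD actually holds.
case class Update(key: String, value: Int)

// Stand-in for the real Cassandra call; in the actual job this would
// issue one batched statement per group.
def executeBatch(batch: Seq[Update]): Unit =
  println(s"flushing ${batch.size} updates")

// Flush fixed-size batches from one partition's iterator; returns the
// total number of rows flushed. In Spark this would typically run via
// rdd.foreachPartition so each executor batches its own rows.
def processPartition(rows: Iterator[Update], batchSize: Int = 100): Int = {
  var flushed = 0
  rows.grouped(batchSize).foreach { batch =>
    executeBatch(batch)
    flushed += batch.size
  }
  flushed
}
```

The point of doing this per partition is that the connection and batch buffer live on the executor, instead of being rebuilt for every row as in the row-by-row `map` version.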