Hi all,
I'd like to use an octree data structure to simplify several
computations on a big data set. I've been wondering whether Spark has any
built-in options for such structures (the only thing I could find is the
DecisionTree), especially ones that make use of RDDs.
I've also been exploring
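As far as I know Spark has no built-in octree, so a common workaround is to key each point by an octant code and then group or aggregate those keys with ordinary RDD operations (groupByKey, reduceByKey). A minimal sketch of the key computation itself, in plain Java (the point coordinates and the cubic domain bounds are hypothetical; the Spark side would just be `points.mapToPair(p -> new Tuple2<>(key(...), p))`):

```java
public class OctantKey {
    // Returns the child octant (0-7) of a point relative to a cell centre.
    static int octant(double x, double y, double z,
                      double cx, double cy, double cz) {
        return (x >= cx ? 1 : 0) | (y >= cy ? 2 : 0) | (z >= cz ? 4 : 0);
    }

    // A depth-d key packs one octant per level (a Morton-style code),
    // assuming a cubic domain [min, max]^3.
    static long key(double x, double y, double z,
                    double min, double max, int depth) {
        long code = 0;
        double cx = (min + max) / 2, cy = cx, cz = cx;
        double half = (max - min) / 4;
        for (int level = 0; level < depth; level++) {
            int o = octant(x, y, z, cx, cy, cz);
            code = (code << 3) | o;
            // Move the centre into the chosen child cell.
            cx += ((o & 1) != 0 ? half : -half);
            cy += ((o & 2) != 0 ? half : -half);
            cz += ((o & 4) != 0 ? half : -half);
            half /= 2;
        }
        return code;
    }

    public static void main(String[] args) {
        // (0.9, 0.9, 0.9) in [0,1]^3 falls in the high octant at every level.
        System.out.println(key(0.9, 0.9, 0.9, 0.0, 1.0, 2)); // prints 63
    }
}
```

Points sharing a key prefix live in the same octree cell, which is often enough to localise the computation per partition.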
Hi everyone!
I'm really new to Spark and I'm trying to figure out the proper
way to do the following:
1.- Read a file header (a single line)
2.- Build with it a configuration object
3.- Use that object in a function that will be called by map()
I thought about using filter()
I think someone mentioned before that this is a good use case for
having a tail() method on RDDs too, to skip the header for subsequent
processing. But you can ignore it with a filter, or with logic in your
map method.
On Wed, Jul 16, 2014 at 11:01 AM, Silvina Caíno Lores
silvi.ca...@gmail.com wrote:
Hi everyone,
I am new to Spark and I'm having problems making my code compile. I have
the feeling I might be misunderstanding the functions, so I would be very
glad to get some insight into what could be wrong.
The problematic code is the following:
JavaRDD<Body> bodies = lines.map(l -> { Body b =
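The snippet above is cut off (and the archive appears to have eaten the angle brackets and lambda arrow), but one common compile error with this shape is forgetting that a block-bodied lambda needs an explicit return statement. A self-contained sketch of the same pattern, with a hypothetical `Body` record and plain streams standing in for `JavaRDD.map`:

```java
import java.util.List;
import java.util.stream.Collectors;

public class BodyParse {
    // Hypothetical record type for one parsed line.
    record Body(double x, double y, double z) {}

    static List<Body> parse(List<String> lines) {
        // With Spark this would be:
        //   JavaRDD<Body> bodies = lines.map(l -> { Body b = ...; return b; });
        // The block body must end with a return, or it won't compile.
        return lines.stream()
                    .map(l -> {
                        String[] f = l.split(" ");
                        Body b = new Body(Double.parseDouble(f[0]),
                                          Double.parseDouble(f[1]),
                                          Double.parseDouble(f[2]));
                        return b;
                    })
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parse(List.of("1 2 3")).get(0));
    }
}
```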