you could do the deep check only when the hashcodes are the same, and design
the hashcodes so they do not take all elements into account (keeping them
cheap to compute).
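a rough sketch of that idea in plain Scala (illustrative only; the `Index` class, the sampling scheme, and `sameIndex` are assumptions for the sketch, not GraphX's actual implementation):

```scala
import java.util.Arrays

// A toy index: the backing array plus a cheap, precomputed hash that
// deliberately samples only a few elements rather than the whole array.
final class Index(val data: Array[Long]) {
  val cheapHash: Int = {
    val n = data.length
    if (n == 0) 0
    else (n.toLong ^ data(0) ^ data(n / 2) ^ data(n - 1)).hashCode
  }
}

// The O(n) deep comparison is attempted only when the cheap hashes
// already match, so mismatched indexes are usually rejected in O(1).
def sameIndex(a: Index, b: Index): Boolean =
  (a eq b) || (a.cheapHash == b.cheapHash && Arrays.equals(a.data, b.data))
```

the trade-off is that indexes differing only in unsampled elements still pay for the full deep comparison, so the sampling has to be chosen to make that rare.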
the alternative seems to be putting cache statements all over graphx, as is
currently the case, which is trouble for any long-lived application where
caching is
probably a dumb question, but why is reference equality used for the
indexes?
On Sun, Jul 6, 2014 at 12:43 AM, Ankur Dave ankurd...@gmail.com wrote:
When joining two VertexRDDs with identical indexes, GraphX can use a fast
code path (a zip join without any hash lookups). However, the check
Well, the alternative is to do a deep equality check on the index arrays,
which would be somewhat expensive since these are pretty large arrays (one
element per vertex in the graph). But, in case the reference equality check
fails, it actually might be a good idea to do the deep check before
thanks for replying. why is joining two vertexrdds without caching slow?
what is recomputed unnecessarily?
i am not sure what is different here from joining 2 regular RDDs (where
nobody seems to recommend caching before joining, i think...)
On Thu, Jul 3, 2014 at 10:52 PM, Ankur Dave wrote:
When joining two VertexRDDs with identical indexes, GraphX can use a fast
code path (a zip join without any hash lookups). However, the check for
identical indexes is performed using reference equality.
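in plain Scala, the kind of check being described is just object identity via `eq` (a simplified model; the function and its return values are illustrative, not GraphX internals):

```scala
// Simplified model of the join dispatch: two VertexRDDs whose index is
// the *same object* (reference equality) take the fast zip path;
// anything else falls back to the slow hash join.
def joinPath(indexA: Array[Long], indexB: Array[Long]): String =
  if (indexA eq indexB) "zip join (fast path)"
  else "hash join (slow path, triggers the warning)"
```

because `eq` compares object identity, two separately computed but element-wise identical indexes still take the slow path, which is exactly what happens when an uncached VertexRDD is recomputed.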
Without caching, two copies of the index are created. Although the two
indexes are
A common reason for the "Joining ... is slow" message is that you're
joining VertexRDDs without having cached them first. This will cause Spark
to recompute unnecessarily, and as a side effect, the same index will get
created twice and GraphX won't be able to do an efficient zip join.
For example,
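as a stand-in illustration (plain Scala, no Spark involved; the names are invented for the sketch), here is a model of why an uncached, recomputed index yields two distinct objects while a cached one yields a single shared object:

```scala
// Model of a lazily computed index: each call to compute() rebuilds it,
// the way an uncached RDD's partitions are rebuilt on every action.
var builds = 0
def compute(): Array[Long] = { builds += 1; Array(1L, 2L, 3L) }

// Uncached: each use recomputes. The two results have equal contents
// but are distinct objects, so a reference-equality check fails.
val a = compute()
val b = compute()

// "Cached": compute once and reuse the same object, so reference
// equality succeeds and the fast zip-join path would be taken.
builds = 0
val cached = compute()
val c = cached
val d = cached
```

this is why the advice is to cache a VertexRDD before joining it: caching pins down one index object, and identity of that object is what enables the fast path.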
lately i am seeing a lot of this warning in graphx:
org.apache.spark.graphx.impl.ShippableVertexPartitionOps: Joining two
VertexPartitions with different indexes is slow.
i am using Graph.outerJoinVertices to join in data from a regular RDD (that
is co-partitioned). i would like this operation to