>
>
> Udbhav.
>
> *From:* Robin East [mailto:robin.e...@xense.co.uk]
> *Sent:* Thursday, February 25, 2016 7:42 PM
>
>
> *To:* Udbhav Agarwal
> *Cc:* user@spark.apache.org
> *Subject:* Re: Reindexing in graphx
>
>
>
So first up, GraphX is not really designed for real-time graph mutation
situations. That's not to say it can't be done, but you may be butting up
against some of the design limitations in that area. As a first point of
interrogation you should look at
…system which I am trying to build, so vertices
> will keep on coming.
>
> Thanks.
> From: Robin East [mailto:robin.e...@xense.co.uk]
> Sent: Wednesday, February 24, 2016 3:54 PM
> To: Udbhav Agarwal
> Cc: user@spark.apache.org
> Subject: Re: Reindexing in graphx
It looks like you are adding vertices one-by-one; you definitely don't want to
do that. What happens when you batch together 400 vertices into an RDD and then
add all 400 in one go?
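The batching advice above can be sketched roughly as follows (a minimal sketch, not the list poster's actual code; the names `addVertexBatch` and `newVerts` are hypothetical, and the vertex/edge attribute types are assumed to be `String`): build the whole incoming batch as one RDD and construct the new graph in a single operation.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.graphx.{Graph, VertexId, Edge}
import org.apache.spark.rdd.RDD

// Hypothetical helper: add a batch of (say 400) vertices in one go,
// rather than rebuilding the graph once per vertex.
def addVertexBatch(
    sc: SparkContext,
    graph: Graph[String, String],
    batch: Seq[(VertexId, String)],
    defaultUser: String): Graph[String, String] = {
  // Parallelize the entire batch at once...
  val newVerts: RDD[(VertexId, String)] = sc.parallelize(batch)
  // ...then union with the existing vertices and build the new graph
  // in a single Graph() construction.
  Graph(graph.vertices.union(newVerts), graph.edges, defaultUser)
}
```

One graph construction per batch keeps the cost amortized over 400 vertices instead of paying the full rebuild cost 400 times.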
> inputGraph = Graph(gVertices, gEdges, defaultUser)
> inputGraph.cache()
> gVertices = inputGraph.vertices
> gVertices.cache()
> val count = gVertices.count
> println(count);
>
> return 1;
> }
>
>
> From: Robin East [mailto:robin.e...@xense.co.uk]
Subject: Re: Reindexing in graphx
Hi
Well this is the line that is failing in VertexRDDImpl:
require(partitionsRDD.partitioner.isDefined)
But really you shouldn't need to be calling the reindex() function, as it deals
with some internals of the GraphX implementation - it looks to me like it ought
to be a private method.
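To illustrate the point about avoiding reindex() (a minimal sketch under assumed types, with a hypothetical `rebuild` helper, not the GraphX internals themselves): rather than calling reindex() on a VertexRDD, which trips the `require(partitionsRDD.partitioner.isDefined)` check when the partitioner is missing, rebuild the graph from plain RDDs and let the Graph() constructor set up its own vertex partitioning and indexing.

```scala
import org.apache.spark.graphx.{Graph, VertexId, Edge}
import org.apache.spark.rdd.RDD

// Hypothetical helper: reconstruct the graph from raw RDDs.
// Graph() partitions and indexes the vertices internally, so no
// explicit reindex() call is needed on the resulting VertexRDD.
def rebuild(vertices: RDD[(VertexId, String)],
            edges: RDD[Edge[String]],
            defaultUser: String): Graph[String, String] =
  Graph(vertices, edges, defaultUser)
```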