At 2014-09-05 12:13:18 +0200, Yifan LI wrote:
Thank you, Ankur! :)

But how do I assign a storage level to a new vertex RDD that is mapped from
an existing vertex RDD?

e.g.

    val newVertexRDD = graph.collectNeighborIds(EdgeDirection.Out)
      .map { case (id: VertexId, a: Array[VertexId]) => (id, initialHashMap(a)) }

the new one will be combined with th [...]
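(For reference, a minimal sketch of what is being asked: since the mapped RDD is a fresh, not-yet-persisted RDD, a storage level can be set on it directly before any action materialises it. `initialHashMap` is the helper assumed from the question above.)

```scala
import org.apache.spark.graphx._
import org.apache.spark.storage.StorageLevel

// Sketch only: initialHashMap is the (assumed) helper from the question that
// builds the per-vertex state out of the neighbour-id array.
val newVertexRDD =
  graph.collectNeighborIds(EdgeDirection.Out)
    .map { case (id: VertexId, a: Array[VertexId]) => (id, initialHashMap(a)) }
    .persist(StorageLevel.MEMORY_AND_DISK) // set the level before the first action
```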
At 2014-09-03 17:58:09 +0200, Yifan LI wrote:
Hi Ankur,

Thanks so much for your advice.

But it failed when I tried to set the storage level while constructing the graph:

    val graph = GraphLoader.edgeListFile(sc, edgesFile, minEdgePartitions = numPartitions)
      .partitionBy(PartitionStrategy.EdgePartition2D)
      .persist(StorageLevel.MEMORY_AND_DISK)

Error: java.lang.UnsupportedOperationException: Cannot change storage l [...]
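(The exception arises because `GraphLoader` already persists the underlying RDDs at the default level, and an RDD's storage level cannot be changed once set. If you are on Spark 1.1 or later, `GraphLoader.edgeListFile` takes the target storage levels directly, which avoids the problem; note the partition-count parameter is named `numEdgePartitions` there. A sketch under that assumption:)

```scala
import org.apache.spark.graphx._
import org.apache.spark.storage.StorageLevel

// Assumes Spark 1.1+: pass the desired levels at load time so the graph is
// never first persisted at the default MEMORY_ONLY level.
val graph = GraphLoader.edgeListFile(sc, edgesFile,
    numEdgePartitions = numPartitions,
    edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
    vertexStorageLevel = StorageLevel.MEMORY_AND_DISK)
  .partitionBy(PartitionStrategy.EdgePartition2D)
```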
On Mon, Aug 18, 2014 at 6:29 AM, Yifan LI wrote:
Hi,

I am testing our application (similar to "personalised page rank" using Pregel;
note that each vertex property needs considerably more space to store after
each new iteration). It works correctly on a small graph. (We have one single
machine: 8 cores, 16 GB memory.)

But when we ran it on a larger graph (e.g. LiveJournal), it [...]
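(For context, a hypothetical skeleton of the kind of computation described: each vertex carries a Map of per-source ranks, so the property grows with every Pregel superstep, which is exactly what drives the memory pressure. The damping factor and the use of out-neighbour count are illustrative assumptions, not the poster's actual code.)

```scala
import org.apache.spark.graphx._

// Hypothetical sketch: vertex state is a Map[VertexId, Double] of per-source
// ranks that grows as mass from more sources reaches each vertex.
val ranked = graph.mapVertices((id, _) => Map(id -> 1.0)).pregel(
    initialMsg = Map.empty[VertexId, Double], maxIterations = 10)(
  // vprog: fold the incoming summed mass into the vertex's rank map
  vprog = (id, attr, msg) =>
    (attr.keySet ++ msg.keySet).map { k =>
      k -> (attr.getOrElse(k, 0.0) + 0.85 * msg.getOrElse(k, 0.0))
    }.toMap,
  // sendMsg: forward each source's mass, split evenly over out-edges
  sendMsg = triplet => {
    val out = triplet.srcAttr.map { case (k, v) => k -> v / math.max(1, triplet.srcAttr.size) }
    Iterator((triplet.dstId, out))
  },
  // mergeMsg: sum mass arriving from different in-neighbours, key by key
  mergeMsg = (a, b) =>
    (a.keySet ++ b.keySet).map(k => k -> (a.getOrElse(k, 0.0) + b.getOrElse(k, 0.0))).toMap)
```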