GitHub user octavian-ganea commented on the pull request:
https://github.com/apache/spark/pull/1297#issuecomment-69365816
Thanks for the nice work!
I am trying to use this IndexedRDD as a distributed hash map, and I would
like to insert and update many entries (tens of millions). However, with the
following code I quickly get a StackOverflowError on an 8-node cluster with
120 GB of RAM per node:
    val rdd = sc.parallelize((1 to 10).map(x => (x.toLong, 1)))
    var indexed = IndexedRDD(rdd).cache()
    for (i <- 1 until 10000000) {
      indexed = indexed.put(i, 0)
      if (i % 1000 == 0) {
        println("i = " + i + " val = " + indexed.get(i))
      }
    }
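To avoid building one new RDD per inserted key, I also considered batching
the updates, along the lines of the sketch below (assuming the PR exposes a
multiput that accepts a Map[Long, Int] of updates; I may be misreading the
API):

    // Sketch: apply updates in batches rather than one put per key,
    // assuming multiput(kvs: Map[Long, Int]) is the batched form of put.
    val batchSize = 100000
    for (batch <- (1 to 10000000).grouped(batchSize)) {
      val updates = batch.map(i => (i.toLong, 0)).toMap
      indexed = indexed.multiput(updates)
    }

Would that be the intended way to do bulk inserts?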
I also tried reassigning with an explicit cache, i.e.
indexed = indexed.put(i, 0).cache(), and keeping a second reference so the
old RDD can be unpersisted:

    indexed2 = indexed.put(i, 0).cache()
    indexed.unpersist()
    indexed = indexed2

With both variants I get the StackOverflowError after fewer than 1000
iterations.
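I also wondered whether periodically checkpointing would truncate the
lineage and avoid the deep recursion. A minimal sketch of what I mean (the
checkpoint directory path is just a placeholder, and I am assuming the
growing lineage is indeed the cause):

    // Sketch: checkpoint every 500 puts so the lineage chain stays short.
    sc.setCheckpointDir("/tmp/indexedrdd-checkpoints")  // placeholder path
    var indexed = IndexedRDD(rdd).cache()
    for (i <- 1 until 10000000) {
      indexed = indexed.put(i, 0)
      if (i % 500 == 0) {
        indexed.cache()
        indexed.checkpoint()
        indexed.count()  // action to force the checkpoint to materialize
      }
    }

Is that the recommended pattern here, or is there a cheaper way?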
Can you please help me with this issue?
Many thanks!