Re: spark rdd grouping

2015-12-25 Thread Shushant Arora
Hi, I have created a JIRA for this feature: https://issues.apache.org/jira/browse/SPARK-12524. Please vote for this feature if you think it is necessary. I would like to implement it. Thanks, Shushant

Re: spark rdd grouping

2015-12-01 Thread Jacek Laskowski
Hi Rajat, My quick test has shown that groupBy will preserve the partitions: scala> sc.parallelize(Seq(0,0,0,0,1,1,1,1),2).map((_,1)).mapPartitionsWithIndex { case (idx, iter) => val s = iter.toSeq; println(idx + " with " + s.size + " elements: " + s); s.toIterator
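The snippet above is cut off by the archive. A completed version of the same kind of test might look as follows; this is a hedged reconstruction, not Jacek's exact code, and it assumes a spark-shell session with a `SparkContext` bound to `sc`:

```scala
// Pairs with two distinct keys across two partitions.
val rdd = sc.parallelize(Seq(0, 0, 0, 0, 1, 1, 1, 1), 2).map((_, 1))

// Print what each partition holds before grouping.
rdd.mapPartitionsWithIndex { case (idx, iter) =>
  val s = iter.toSeq
  println(idx + " with " + s.size + " elements: " + s)
  s.toIterator
}.count() // count() just forces evaluation so the println runs

// groupBy with the same partition count. With this particular data each
// key hashes back to the partition it already occupies, so the observed
// layout is preserved; in general groupBy still plans a shuffle stage.
val grouped = rdd.groupBy((kv: (Int, Int)) => kv._1, 2)
grouped.mapPartitionsWithIndex { case (idx, iter) =>
  val s = iter.toSeq
  println(idx + " holds keys: " + s.map(_._1))
  s.toIterator
}.count()
```

Note that "preserved" here is a property of this data and the hash partitioner lining up, not a guarantee that groupBy avoids a shuffle.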

Re: spark rdd grouping

2015-12-01 Thread ayan guha
I believe reduceByKeyLocally was introduced for this purpose.
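For reference, reduceByKeyLocally merges values within each partition first and then combines the per-partition results on the driver, so no shuffled RDD is produced; the result is a plain Scala Map rather than an RDD. A minimal spark-shell sketch (assumes a `SparkContext` bound to `sc`):

```scala
val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)), 2)

// Returns scala.collection.Map[String, Int] on the driver -- suitable
// only when the set of distinct keys fits in driver memory.
val counts: scala.collection.Map[String, Int] =
  pairs.reduceByKeyLocally(_ + _)
// counts == Map("a" -> 3, "b" -> 3)
```

The trade-off versus a partition-preserving groupBy is that the grouped result lives on the driver, not distributed across the cluster.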

spark rdd grouping

2015-11-30 Thread Rajat Kumar
Hi, I have a JavaPairRDD rdd1. I want to group rdd1 by keys but preserve the partitions of the original RDD, to avoid a shuffle, since I know all records with the same key are already in the same partition. The PairRDD is basically constructed using a Kafka streaming low-level consumer, which has all records with
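When every occurrence of a key is known to live in a single partition (as with a low-level Kafka consumer mapping one topic-partition to one Spark partition), grouping can be done inside mapPartitions with no shuffle at all. A hypothetical sketch, not from the thread, written against the Scala API and assuming `rdd1: RDD[(K, V)]`:

```scala
// Group within each partition only: correct precisely because no key
// spans two partitions. The per-partition contents are materialized
// in memory via toSeq, so each partition must fit in executor memory.
val groupedNoShuffle = rdd1.mapPartitions { iter =>
  iter.toSeq
    .groupBy(_._1)                              // Scala-collections groupBy
    .iterator
    .map { case (k, kvs) => (k, kvs.map(_._2)) } // key -> its values
}
```

Because mapPartitions is a narrow transformation, the resulting RDD keeps the original partitioning, which is exactly the behavior being asked for.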