var partiallyAggregated = mapPartitions(it => Iterator(aggregatePartition(it)))
var numPartitions = partiallyAggregated.partitions.length
val scale = math.max(math.ceil(math.pow(numPartitions, 1.0 / depth)).toInt, 2)
// If creating an extra level doesn't help reduce
// the wall-clock time, we stop tree aggregation.
while (numPartitions > scale + numPartitions / scale) {
  numPartitions /= scale
  val curNumPartitions = numPartitions
  partiallyAggregated = partiallyAggregated.mapPartitionsWithIndex {
    (i, iter) => iter.map((i % curNumPartitions, _))
  }.reduceByKey(new HashPartitioner(curNumPartitions), cleanCombOp).values
}
partiallyAggregated.reduce(cleanCombOp)

I am completely lost about what is happening in this function. I would
greatly appreciate some sort of explanation.
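
For concreteness, here is a minimal, self-contained sketch of how
treeAggregate gets called; the app name, master URL, partition count, and
data are made up for illustration:

    import org.apache.spark.{SparkConf, SparkContext}

    object TreeAggregateDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("tree-aggregate-demo").setMaster("local[4]"))
        // 64 partitions, so the while loop above actually has work to do.
        val rdd = sc.parallelize(1L to 1000L, numSlices = 64)
        // Sum with a two-level tree: seqOp folds each partition locally,
        // combOp merges the per-partition partial sums.
        val sum = rdd.treeAggregate(0L)(
          seqOp = (acc, x) => acc + x,
          combOp = (a, b) => a + b,
          depth = 2)
        println(sum) // 500500
        sc.stop()
      }
    }

Plugging these numbers into the quoted code: with numPartitions = 64 and
depth = 2, scale = ceil(64^(1/2)) = 8. The loop condition 64 > 8 + 64/8 = 16
holds, so the 64 partial sums are keyed by (i % 8) and merged by reduceByKey
into 8 partitions; then 8 > 8 + 8/8 = 9 fails, the loop stops, and the final
reduce combines the remaining 8 values.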
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/TreeReduce-Functionality-in-Spark-tp23147.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.