Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2847#issuecomment-71420350
@zhangyouhua2014
> We refer to the PFP paper, but reduce the process of building the tree;
> by omitting this process we can use that time to do other things.
By "reduce", did you mean skipping the process of growing trees? The
FP-Growth algorithm reduces memory requirement using the tree representation of
candidate sets. If we skip this step, it is hard to call it `FPGrowth`. Did you
do any performance comparison between your version and the PFP implementation?
> 2. The reduce step is performed with the `groupByKey` operator, which brings all
> conditional sequences (conditionSEQ) with the same key onto one machine and merges
> the conditionSEQ sets produced by each worker for that key. Frequent itemsets are
> then mined from these conditionSEQ.
It is important to grow the tree on the mapper side to save communication
cost. `groupByKey` doesn't do that. I was suggesting using `aggregateByKey`.
For each key, we start with an empty tree, with `seqOp` growing the tree and
`combOp` merging two trees. Besides, the partition key is the hash value of the
last item of the sequence. We should be able to reduce communication cost (see
my inline comments at L135).
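
To make the suggestion concrete, here is a rough sketch of the `aggregateByKey` idea, not the implementation in this PR: `SimpleTree`, `buildTrees`, and the `conditional` RDD are hypothetical names, and the input is assumed to already be keyed by the hash of the last item of each conditional transaction.

```scala
import scala.collection.mutable
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD

// Hypothetical stand-in for the FP-tree: it only counts prefix paths, which is
// enough to illustrate growing on the mapper side and merging on the reducer side.
class SimpleTree extends Serializable {
  val pathCounts = mutable.HashMap.empty[List[Int], Long]

  // seqOp: add one conditional transaction to the local tree
  def add(transaction: Array[Int]): this.type = {
    var prefix = List.empty[Int]
    transaction.foreach { item =>
      prefix = prefix :+ item
      pathCounts(prefix) = pathCounts.getOrElse(prefix, 0L) + 1L
    }
    this
  }

  // combOp: merge another partial tree into this one
  def merge(other: SimpleTree): this.type = {
    other.pathCounts.foreach { case (path, cnt) =>
      pathCounts(path) = pathCounts.getOrElse(path, 0L) + cnt
    }
    this
  }
}

// conditional: (partition key = hash of the last item, conditional transaction)
def buildTrees(conditional: RDD[(Int, Array[Int])]): RDD[(Int, SimpleTree)] =
  conditional.aggregateByKey(new SimpleTree)(
    (tree, txn) => tree.add(txn),   // grow the tree on the mapper side
    (t1, t2) => t1.merge(t2))       // merge partial trees for the same key
```

Compared to `groupByKey`, this never shuffles the raw conditional sequences for a key to one machine; only the partially grown trees are shuffled and merged.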