Github user hhbyyh commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17130#discussion_r108600468
  
    --- Diff: docs/ml-frequent-pattern-mining.md ---
    @@ -0,0 +1,75 @@
    +---
    +layout: global
    +title: Frequent Pattern Mining
    +displayTitle: Frequent Pattern Mining
    +---
    +
    +Mining frequent items, itemsets, subsequences, or other substructures is usually among the
    +first steps to analyze a large-scale dataset, which has been an active research topic in
    +data mining for years.
    +We refer users to Wikipedia's [association rule learning](http://en.wikipedia.org/wiki/Association_rule_learning)
    +for more information.
    +
    +**Table of Contents**
    +
    +* This will become a table of contents (this text will be scraped).
    +{:toc}
    +
    +## FP-Growth
    +
    +The FP-growth algorithm is described in the paper
    +[Han et al., Mining frequent patterns without candidate generation](http://dx.doi.org/10.1145/335191.335372),
    +where "FP" stands for frequent pattern.
    +Given a dataset of transactions, the first step of FP-growth is to calculate item frequencies and identify frequent items.
    +Different from [Apriori-like](http://en.wikipedia.org/wiki/Apriori_algorithm) algorithms designed for the same purpose,
    +the second step of FP-growth uses a suffix tree (FP-tree) structure to encode transactions without generating candidate sets
    +explicitly, which are usually expensive to generate.
    +After the second step, the frequent itemsets can be extracted from the FP-tree.
    +In `spark.mllib`, we implemented a parallel version of FP-growth called PFP,
    +as described in [Li et al., PFP: Parallel FP-growth for query recommendation](http://dx.doi.org/10.1145/1454008.1454027).
    +PFP distributes the work of growing FP-trees based on the suffixes of transactions,
    +and hence is more scalable than a single-machine implementation.
    +We refer users to the papers for more details.
    +
    +`spark.ml`'s FP-growth implementation takes the following (hyper-)parameters:
    +
    +* `minSupport`: the minimum support for an itemset to be identified as frequent.
    +  For example, if an item appears in 3 out of 5 transactions, it has a support of 3/5=0.6.
    +* `minConfidence`: the minimum confidence for generating association rules. The parameter will not affect the mining
    +  for frequent itemsets, but specifies the minimum confidence for generating association rules from frequent itemsets.
    +* `numPartitions`: the number of partitions used to distribute the work. By default the param is not set, and
    +  the number of partitions of the input dataset is used.
    +
    +The `FPGrowthModel` provides:
    +
    +* `freqItemsets`: frequent itemsets in the format of DataFrame("items"[Array], "freq"[Long])
    +* `associationRules`: association rules generated with confidence above `minConfidence`, in the format of
    +  DataFrame("antecedent"[Array], "consequent"[Array], "confidence"[Double]).
    +* `transform`: The transform method examines the input items in `itemsCol` against all the association rules and
    +  summarize the consequents as prediction. The prediction column has the same data type as the
    --- End diff --
    
    Thanks for the suggestion. I do wish to have a better illustration here. But the two "containing"s in your version make it not that straightforward, and actually it should be that the items in `itemsCol` contain the antecedents of the association rules.
    
    I extended it to a longer version:
    
    For each record in `itemsCol`, the `transform` method will compare its items against the antecedents of each association rule. If the record contains all the antecedents of a specific association rule, the rule will be considered applicable and its consequents will be added to the prediction result. The `transform` method will summarize the consequents from all the applicable rules as the prediction.
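    
    As a rough illustration (just a toy sketch with made-up transactions, assuming a `SparkSession` named `spark`; not meant as the doc example itself), the behavior looks like this:
    
    ```scala
    import org.apache.spark.ml.fpm.FPGrowth
    
    import spark.implicits._
    
    // Toy transactions: each row holds an array of items.
    val dataset = spark.createDataset(Seq(
      "1 2 5",
      "1 2 3 5",
      "1 2")
    ).map(_.split(" ")).toDF("items")
    
    val model = new FPGrowth()
      .setItemsCol("items")
      .setMinSupport(0.5)
      .setMinConfidence(0.6)
      .fit(dataset)
    
    // DataFrame("items"[Array], "freq"[Long])
    model.freqItemsets.show()
    
    // DataFrame("antecedent"[Array], "consequent"[Array], "confidence"[Double])
    model.associationRules.show()
    
    // For each record, every rule whose antecedents are all contained in the
    // record's items is applied; the consequents of the applicable rules are
    // collected into the prediction column.
    model.transform(dataset).show()
    ```
    
    Roughly speaking, a record like ["1", "2"] picks up consequents from every rule whose antecedents are a subset of {1, 2}.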


