GitHub user thunterdb commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10602#discussion_r49140273
  
    --- Diff: python/pyspark/mllib/fpm.py ---
    @@ -130,15 +133,21 @@ def train(cls, data, minSupport=0.1, maxPatternLength=10, maxLocalProjDBSize=320
             """
             Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
     
    -        :param data: The input data set, each element contains a sequence of itemsets.
    -        :param minSupport: the minimal support level of the sequential pattern, any pattern that appears
    -            more than (minSupport * size-of-the-dataset) times will be output (default: `0.1`)
    -        :param maxPatternLength: the maximal length of the sequential pattern, any pattern that appears
    -            less than maxPatternLength will be output. (default: `10`)
    -        :param maxLocalProjDBSize: The maximum number of items (including delimiters used in
    -            the internal storage format) allowed in a projected database before local
    -            processing. If a projected database exceeds this size, another
    -            iteration of distributed prefix growth is run. (default: `32000000`)
    +        :param data:
    +          The input data set, each element contains a sequence of itemsets.
    +        :param minSupport:
    +          The minimal support level of the sequential pattern, any pattern that appears more than
    --- End diff ---
    
    the lines below have indentation issues
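
    For reference, a minimal usage sketch of the train API documented above (illustrative only: the input data and SparkContext setup are assumptions modeled on the class doctest, and the parameter values are not taken from this PR):

        from pyspark import SparkContext
        from pyspark.mllib.fpm import PrefixSpan

        sc = SparkContext(appName="PrefixSpanSketch")

        # Each element of the input RDD is one sequence, i.e. a list of itemsets.
        sequences = sc.parallelize([
            [["a", "b"], ["c"]],
            [["a"], ["c", "b"], ["a", "b"]],
            [["a", "b"], ["e"]],
            [["f"]],
        ], 2)

        # minSupport=0.5 keeps patterns occurring in at least half of the sequences;
        # maxPatternLength bounds the length of the mined patterns.
        model = PrefixSpan.train(sequences, minSupport=0.5, maxPatternLength=5)

        for freq_seq in sorted(model.freqSequences().collect()):
            print(freq_seq)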

