Github user animenon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19632#discussion_r148620630
  
    --- Diff: examples/src/main/python/pi.py ---
    @@ -27,12 +27,16 @@
     if __name__ == "__main__":
         """
             Usage: pi [partitions]
    +        
    +        Monte Carlo method is used to estimate Pi in the below example.
         """
         spark = SparkSession\
             .builder\
             .appName("PythonPi")\
             .getOrCreate()
    -
    +    
     +    # If no arguments are passed (i.e. `len(sys.argv) <= 1`)
    --- End diff ---
    
    This is actually the first example in the Spark docs, and I wanted to know 
how the `pi` calculation was done. There was no mention of which algorithm is 
used, so it took me a while to figure out that a Monte Carlo estimator is 
used: the logic randomly generates on the order of 100000 points and uses 
them to estimate the value of Pi.
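
    For reference, here is a minimal standalone sketch of that Monte Carlo 
estimator (plain Python, no Spark; the 100000-sample count is only 
illustrative): sample points uniformly in the unit square, count the fraction 
that land inside the unit circle, and multiply by 4.

    ```python
    import random

    def estimate_pi(num_samples=100000):
        """Monte Carlo estimate of Pi: sample points uniformly in the
        square [-1, 1) x [-1, 1) and count how many fall inside the
        unit circle."""
        inside = 0
        for _ in range(num_samples):
            x = random.random() * 2 - 1  # x in [-1, 1)
            y = random.random() * 2 - 1  # y in [-1, 1)
            if x * x + y * y <= 1:
                inside += 1
        # Ratio of areas (circle / square) is pi/4, so pi ~= 4 * inside / total.
        return 4.0 * inside / num_samples

    if __name__ == "__main__":
        print("Pi is roughly %f" % estimate_pi())
    ```

    The Spark example distributes the same idea: each sampled point becomes a 
map task returning 0 or 1, and the results are summed with a reduce before 
applying the `4 * count / n` formula.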


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
