BryanCutler commented on issue #24070: [SPARK-23961][PYTHON] Fix error when toLocalIterator goes out of scope
URL: https://github.com/apache/spark/pull/24070#issuecomment-472129446
 
 
   ### Timings for DataFrame.toLocalIterator and RDD.toLocalIterator
   
   These tests illustrate the slowdown introduced by this change, comparing 
current master against this PR. Wall-clock time to fully consume the local 
iterator is measured, and the average of 5 runs is shown (in seconds):
   
    _ | DataFrame | RDD
   --- | --- | ---
   *master* | 10.26016583 | 4.354181528
   *this PR* | 12.14033799 | 3.823320436
   
   #### Test Script
   
   ```python
   import time
   from pyspark.sql import SparkSession
   
   spark = SparkSession\
           .builder\
           .appName("toLocalIterator_timing")\
           .getOrCreate()
   
   num = 1 << 22
   numParts = 32
   
   def run(df):
     print("Starting iterator:")
     start = time.time()
   
      # Fully consume the iterator, counting rows as a sanity check
      count = 0
      for row in df.toLocalIterator():
        count += 1
   
     if count != num:
       raise RuntimeError("Expected {} but got {}".format(num, count))
   
     elapsed = time.time() - start
     print("completed in {}".format(elapsed))
   
   run(spark.range(num, numPartitions=numParts))
   run(spark.sparkContext.range(num, numSlices=numParts))
   
    spark.stop()
    ```
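
   The script above times a single pass per call; the 5-run averaging behind the table was presumably done by repeating the run externally. A minimal helper along these lines could produce averaged figures directly (a sketch, not the harness actually used; `average_time` is a hypothetical name):

   ```python
   import time

   def average_time(fn, runs=5):
       """Call fn `runs` times and return the mean wall-clock time in seconds."""
       elapsed = []
       for _ in range(runs):
           start = time.perf_counter()
           fn()
           elapsed.append(time.perf_counter() - start)
       return sum(elapsed) / len(elapsed)

   # In the benchmark above, fn would be a closure that fully consumes
   # df.toLocalIterator(), e.g.:
   #   mean = average_time(lambda: sum(1 for _ in df.toLocalIterator()))
   mean = average_time(lambda: sum(range(100000)))
   print("average over 5 runs: {:.6f}s".format(mean))
   ```

   `time.perf_counter()` is used here instead of `time.time()` because it is monotonic and has higher resolution for short intervals.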
