abhishekd0907 commented on a change in pull request #29242:
URL: https://github.com/apache/spark/pull/29242#discussion_r463762567



##########
File path: python/pyspark/sql/dataframe.py
##########
@@ -674,7 +674,7 @@ def cache(self):
         .. note:: The default storage level has changed to `MEMORY_AND_DISK` 
to match Scala in 2.0.
         """
         self.is_cached = True
-        self._jdf.cache()
+        self.persist(StorageLevel.MEMORY_AND_DISK)

Review comment:
      Using PySpark's `self.persist(StorageLevel.MEMORY_AND_DISK)` or PySpark's `self.persist()` has the same effect, since in both cases Python's default storage level is used, i.e. `StorageLevel.MEMORY_AND_DISK = StorageLevel(True, True, False, False)`, which has `deserialized=false`. Hence both changes are equivalent and either will fix the bug.
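   For reference, a minimal sketch (pure Python side, no JVM call) showing why the two calls are equivalent; the field order is `useDisk, useMemory, useOffHeap, deserialized, replication`:

```python
from pyspark.storagelevel import StorageLevel

# PySpark's MEMORY_AND_DISK is StorageLevel(True, True, False, False), i.e.
# useDisk=True, useMemory=True, useOffHeap=False, deserialized=False,
# because Python objects are always stored in pickled (serialized) form.
print(StorageLevel.MEMORY_AND_DISK.deserialized)  # False

# DataFrame.persist() defaults to this same level, so df.persist() and
# df.persist(StorageLevel.MEMORY_AND_DISK) hand an identical level to the JVM.
```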
   
   Note that calling `self._sc._getJavaStorageLevel(StorageLevel.MEMORY_AND_DISK)` in PySpark does not return Scala's default storage level. Instead, it returns `new StorageLevel(true, true, false, false)`, which has `deserialized=false` and therefore also fixes the bug.
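   A quick way to observe this (a sketch only; it assumes an active `SparkSession` named `spark`, and `_getJavaStorageLevel` is a private helper whose behaviour may change between versions):

```python
from pyspark.sql import SparkSession
from pyspark.storagelevel import StorageLevel

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Convert the Python StorageLevel into its JVM counterpart.
jlevel = sc._getJavaStorageLevel(StorageLevel.MEMORY_AND_DISK)

# The resulting JVM StorageLevel carries deserialized=false, unlike
# Scala's own StorageLevel.MEMORY_AND_DISK (which has deserialized=true).
print(jlevel.deserialized())  # False
```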
   
   However, in current master, since Scala's cache is invoked via `self._jdf.cache()`, the storage level used is Scala's default, i.e. `val MEMORY_AND_DISK = new StorageLevel(true, true, false, true)`, which has `deserialized=true` and leads to the bug.
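   To make the difference visible from the Python side, a sketch of the check (again assuming an active `spark` session; `DataFrame.storageLevel` simply reflects whatever level the JVM recorded, so the exact values depend on which code path `cache()` takes):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10)
df.cache()

# On current master, cache() delegates to self._jdf.cache(), so the JVM
# applies Scala's default MEMORY_AND_DISK, which has deserialized=true.
print(df.storageLevel.deserialized)  # True on master; False with this change

df.unpersist()
```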



