abhishekd0907 commented on a change in pull request #29242:
URL: https://github.com/apache/spark/pull/29242#discussion_r463228142



##########
File path: python/pyspark/sql/dataframe.py
##########
@@ -674,7 +674,7 @@ def cache(self):
         .. note:: The default storage level has changed to `MEMORY_AND_DISK` to match Scala in 2.0.
         """
         self.is_cached = True
-        self._jdf.cache()
+        self.persist(StorageLevel.MEMORY_AND_DISK)

Review comment:
       @srowen 
   Do you mean changing `self.persist(StorageLevel.MEMORY_AND_DISK)` to `self.persist()` at line 677? I believe the effect would be the same, because the default storage level for PySpark's `persist` is also `MEMORY_AND_DISK`.
   
   ```python
   @since(1.3)
   def persist(self, storageLevel=StorageLevel.MEMORY_AND_DISK):
       """Sets the storage level to persist the contents of the :class:`DataFrame` across
       operations after the first time it is computed. This can only be used to assign
       a new storage level if the :class:`DataFrame` does not have a storage level set yet.
       If no storage level is specified defaults to (`MEMORY_AND_DISK`).

       .. note:: The default storage level has changed to `MEMORY_AND_DISK` to match Scala in 2.0.
       """
       self.is_cached = True
       javaStorageLevel = self._sc._getJavaStorageLevel(storageLevel)
       self._jdf.persist(javaStorageLevel)
       return self
   ```
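   
   For illustration, here is a minimal sketch (not part of the PR) of what the equivalence looks like from user code, assuming this change to `cache()` is applied. The local-mode session setup and app name are assumptions for demonstration only:
   
   ```python
   # Hypothetical demonstration script, assuming the PR's change to cache()
   # is applied; the session setup below is illustrative only.
   from pyspark.sql import SparkSession
   
   spark = SparkSession.builder.master("local[1]").appName("cache-vs-persist").getOrCreate()
   
   df1 = spark.range(10).cache()    # with the PR: delegates to persist(StorageLevel.MEMORY_AND_DISK)
   df2 = spark.range(10).persist()  # storageLevel defaults to StorageLevel.MEMORY_AND_DISK
   
   # Both calls go through the same persist() path, so the storage
   # levels reported by the JVM should be identical.
   print(df1.storageLevel)
   print(df2.storageLevel)
   assert str(df1.storageLevel) == str(df2.storageLevel)
   
   spark.stop()
   ```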



