abhishekd0907 commented on a change in pull request #29242:
URL: https://github.com/apache/spark/pull/29242#discussion_r464821795



##########
File path: python/pyspark/sql/dataframe.py
##########
@@ -674,7 +674,7 @@ def cache(self):
         .. note:: The default storage level has changed to `MEMORY_AND_DISK` to match Scala in 2.0.
         """
         self.is_cached = True
-        self._jdf.cache()
+        self.persist(StorageLevel.MEMORY_AND_DISK)

Review comment:
       @cloud-fan @srowen
   Okay, so currently `scalaDataFrame.cache()` and `pySparkDataFrame.cache()` show consistent behavior. But `scalaDataFrame.persist()` and `pySparkDataFrame.persist()` are not consistent, because `pySparkDataFrame.persist()` uses `StorageLevel(true, true, false, false)` (the fourth flag, `deserialized`, is false, i.e. serialized) while `scalaDataFrame.persist()` uses `StorageLevel(true, true, false, true)` (deserialized). Should we change `pySparkDataFrame.persist()` to call `self._jdf.persist()` directly?
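
   For reference, here is a minimal sketch of the two levels using PySpark's `StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication)` constructor (the Scala counterpart is `org.apache.spark.storage.StorageLevel.MEMORY_AND_DISK`, which stores deserialized objects):

```python
from pyspark.storagelevel import StorageLevel

# PySpark's built-in MEMORY_AND_DISK keeps the JVM-side blocks serialized:
# the fourth flag, `deserialized`, is False.
pyspark_level = StorageLevel(True, True, False, False, 1)

# What Scala's StorageLevel.MEMORY_AND_DISK corresponds to:
# the fourth flag is True, so the JVM stores deserialized objects.
scala_level = StorageLevel(True, True, False, True, 1)

print(pyspark_level.deserialized)  # False
print(scala_level.deserialized)    # True
```

   Routing `persist()` through `self._jdf.persist()` would make PySpark pick up the JVM-side default instead of the Python one.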



