Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13780#discussion_r67724699
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DatasetCacheSuite.scala ---
@@ -21,11 +21,32 @@ import scala.language.postfixOps
import org.apache.spark.sql.functions._
import org.apache.spark.sql.test.SharedSQLContext
+import org.apache.spark.storage.StorageLevel
class DatasetCacheSuite extends QueryTest with SharedSQLContext {
import testImplicits._
+  test("get storage level") {
+    val ds1 = Seq("1", "2").toDS().as("a")
+    val ds2 = Seq(2, 3).toDS().as("b")
+
+    // default storage level
+    ds1.persist()
+    ds2.cache()
+    assert(ds1.storageLevel() == StorageLevel.MEMORY_AND_DISK)
+    assert(ds2.storageLevel() == StorageLevel.MEMORY_AND_DISK)
+    // unpersist
+    ds1.unpersist()
+    assert(ds1.storageLevel() == StorageLevel.NONE)
+    // non-default storage level
+    ds1.persist(StorageLevel.MEMORY_ONLY_2)
--- End diff --
When writing a black-box test, I would just try all the levels in the test
case; we could even include a customized StorageLevel that is different from
the predefined ones.
```scala
Seq(StorageLevel.OFF_HEAP, StorageLevel.MEMORY_AND_DISK_SER_2, ...,
    StorageLevel.NONE).foreach { level =>
  ds1.persist(level)
  assert(ds1.storageLevel() == level)
  ds1.unpersist()
  assert(ds1.storageLevel() == StorageLevel.NONE)
}
```
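For the customized case, a minimal sketch of what that could look like, assuming the public `StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication)` factory and the `storageLevel()` accessor proposed in this PR (the particular flag combination here is just an illustration):
```scala
// Hypothetical customized level that is not one of the predefined constants:
// disk + memory, serialized, replicated twice.
val customLevel = StorageLevel(useDisk = true, useMemory = true,
  useOffHeap = false, deserialized = false, replication = 2)
ds1.persist(customLevel)
assert(ds1.storageLevel() == customLevel)
ds1.unpersist()
assert(ds1.storageLevel() == StorageLevel.NONE)
```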