holdenk commented on code in PR #53232:
URL: https://github.com/apache/spark/pull/53232#discussion_r2566922029
##########
mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala:
##########
@@ -1142,4 +1143,31 @@ private[spark] object ReadWriteUtils {
spark.read.parquet(path).as[T].collect()
}
}
+
+ def saveDataFrame(path: String, df: DataFrame): Unit = {
+ if (localSavingModeState.get()) {
+ val filePath = Paths.get(path)
+ Files.createDirectories(filePath.getParent)
+
+ df match {
+ case d: org.apache.spark.sql.classic.DataFrame =>
+ ArrowFileReadWrite.save(d, path)
+        case _ => throw new UnsupportedOperationException("Unsupported dataframe type")
+ }
+ } else {
+ df.write.parquet(path)
+ }
+ }
+
+ def loadDataFrame(path: String, spark: SparkSession): DataFrame = {
+ if (localSavingModeState.get()) {
+ spark match {
+ case s: org.apache.spark.sql.classic.SparkSession =>
+ ArrowFileReadWrite.load(s, path)
+        case _ => throw new UnsupportedOperationException("Unsupported session type")
+ }
+ } else {
+ spark.read.parquet(path)
+ }
+ }
Review Comment:
So if we have `localSavingModeState` set to true, this will write out an Arrow file, which is not a stable format. It does look like `localSavingModeState` is only set to true in internal methods on the Scala side, but looking at the PySpark docstrings I see we tell people to use this API, so I remain -0.9.
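For anyone skimming the thread: the gating pattern at issue is a thread-local flag that defaults to false, so callers get the stable Parquet path unless an internal method flips the flag for the current thread. A minimal self-contained sketch of that pattern (names here are illustrative, not Spark's actual internals):

```scala
// Sketch of the thread-local gate: the flag defaults to false, so the
// stable "parquet" branch is taken unless the current thread has
// explicitly opted in to local (Arrow) saving mode.
object LocalSavingModeSketch {
  private val localSavingModeState: ThreadLocal[java.lang.Boolean] =
    ThreadLocal.withInitial(() => java.lang.Boolean.FALSE)

  // Stand-in for the save/load branch in ReadWriteUtils.
  def chooseFormat(): String =
    if (localSavingModeState.get()) "arrow" else "parquet"

  // Internal-only helper: enable local saving mode for the duration of
  // `body`, restoring the default afterwards even if `body` throws.
  def withLocalSavingMode[T](body: => T): T = {
    localSavingModeState.set(true)
    try body finally localSavingModeState.set(false)
  }

  def main(args: Array[String]): Unit = {
    println(chooseFormat())                      // parquet (default)
    println(withLocalSavingMode(chooseFormat())) // arrow (opted in)
    println(chooseFormat())                      // parquet (restored)
  }
}
```

The concern above is exactly that the `true` branch emits Arrow files, whose on-disk format carries no stability guarantee across versions, while the API is documented for external use.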
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]