srowen commented on a change in pull request #31090:
URL: https://github.com/apache/spark/pull/31090#discussion_r559645279



##########
File path: mllib/src/main/scala/org/apache/spark/ml/classification/DecisionTreeClassifier.scala
##########
@@ -288,7 +288,9 @@ object DecisionTreeClassificationModel extends MLReadable[DecisionTreeClassifica
       DefaultParamsWriter.saveMetadata(instance, path, sc, Some(extraMetadata))
       val (nodeData, _) = NodeData.build(instance.rootNode, 0)
       val dataPath = new Path(path, "data").toString
-      sparkSession.createDataFrame(nodeData).write.parquet(dataPath)
+      // 2,000,000 nodes is about 40MB
+      val numDataParts = (instance.numNodes / 2000000.0).ceil.toInt

Review comment:
       OK - my rule of thumb for partition sizes is "128MB", going back to the days of Hadoop. Any number in that range is about as good as the next, but I might increase this one.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
