Github user xuchuanyin commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2864#discussion_r228702949
--- Diff:
integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
---
@@ -833,4 +833,26 @@ object CommonUtil {
})
}
}
+
+ /**
+ * This method will validate single node minimum load data volume of
+ table specified by the user
+ *
+ * @param tableProperties table property specified by user
+ * @param propertyName property name
+ */
+ def validateLoadMinSize(tableProperties: Map[String, String],
+ propertyName: String): Unit = {
+ var size: Integer = 0
+ if (tableProperties.get(propertyName).isDefined) {
+ val loadSizeStr: String =
+ parsePropertyValueStringInMB(tableProperties(propertyName))
+ try {
+ size = Integer.parseInt(loadSizeStr)
--- End diff ---
What about checking the range bounds here? Can this value be negative or zero?
I think in the exception scenario you can set this value to 0, so that later
you can use it as a flag (whether the value is zero) to determine whether to
enable size-based block assignment.
---
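
The suggestion above could be sketched as follows. This is a hedged, self-contained illustration, not the actual CarbonData implementation: `parsePropertyValueStringInMB` is replaced by a trivial stand-in parser, and the method is shown returning the validated size instead of `Unit` so the fallback-to-0 behavior is visible.

```scala
// Minimal sketch of the reviewer's suggestion: validate the range of the
// load-min-size property, and fall back to 0 on any invalid input so that
// 0 can later act as the flag disabling size-based block assignment.
object LoadMinSizeSketch {

  // Stand-in for CommonUtil.parsePropertyValueStringInMB (assumption):
  // strips an optional "MB" suffix and surrounding whitespace.
  private def parsePropertyValueStringInMB(value: String): String =
    value.trim.stripSuffix("MB").trim

  /**
   * Returns the validated minimum load size in MB. Any invalid value
   * (non-numeric or negative) yields 0, the sentinel meaning
   * "size-based block assignment disabled".
   */
  def validateLoadMinSize(tableProperties: Map[String, String],
      propertyName: String): Int = {
    tableProperties.get(propertyName) match {
      case Some(value) =>
        try {
          val size = Integer.parseInt(parsePropertyValueStringInMB(value))
          // Range-bound check suggested in the review: reject negatives.
          if (size < 0) 0 else size
        } catch {
          case _: NumberFormatException => 0 // exception scenario -> flag off
        }
      case None => 0 // property not set -> flag off
    }
  }
}
```

With this shape, callers only need to test `validateLoadMinSize(...) > 0` to decide whether size-based block assignment is enabled.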