cloud-fan commented on code in PR #42667:
URL: https://github.com/apache/spark/pull/42667#discussion_r1305257429
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/BadRecordException.scala:
##########
@@ -65,3 +93,25 @@ case class StringAsDataTypeException(
     fieldName: String,
     fieldValue: String,
     dataType: DataType) extends RuntimeException()
+
+/**
+ * No-stacktrace equivalent of `QueryExecutionErrors.cannotParseJSONFieldError`.
+ * Used for code control flow in the parser without overhead of creating a full exception.
+ */
+case class CannotParseJSONFieldException(
+    fieldName: String,
+    fieldValue: String,
+    jsonType: JsonToken,
+    dataType: DataType) extends RuntimeException() {
+  override def getStackTrace(): Array[StackTraceElement] = new Array[StackTraceElement](0)
+  override def fillInStackTrace(): Throwable = this
+}
+
+/**
+ * No-stacktrace equivalent of `QueryExecutionErrors.emptyJsonFieldValueError`.
+ * Used for code control flow in the parser without overhead of creating a full exception.
Review Comment:
So we still use exceptions in the control flow, but use a trick to avoid the
stacktrace overhead?
Can we avoid using exceptions in the control flow, or at least not in the
critical code path? e.g. `fieldConverters` should return null directly if the
input is an empty string and `enablePartialResults` is true.
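The suggested alternative can be sketched as follows. This is a hypothetical illustration, not the actual Spark code: `PartialResultsSketch`, `convertIntField`, and `CannotParseFieldLite` are made-up names, and the real parser's converters are far more general. It contrasts the no-stacktrace exception trick with the cheaper option of returning null directly on the hot path:

```scala
// Sketch (assumed names): handle the common empty-input case without any
// exception when partial results are enabled, keeping a no-stacktrace
// exception only for the genuinely exceptional path.
object PartialResultsSketch {
  // Lightweight exception: overriding fillInStackTrace makes construction
  // cheap because the JVM never captures the stack trace.
  final class CannotParseFieldLite(val fieldName: String)
      extends RuntimeException(s"Cannot parse field: $fieldName") {
    override def fillInStackTrace(): Throwable = this
  }

  // Hypothetical field converter: on empty input, return null directly
  // (fast path, no exception) when enablePartialResults is true; otherwise
  // fall back to the lightweight exception.
  def convertIntField(
      raw: String,
      enablePartialResults: Boolean): java.lang.Integer = {
    if (raw == null || raw.isEmpty) {
      if (enablePartialResults) null
      else throw new CannotParseFieldLite("intField")
    } else {
      Integer.valueOf(raw.trim.toInt)
    }
  }
}
```

The design point is that a null return costs nothing on the critical path, whereas even a stacktrace-free `throw` still pays for unwinding; reserving the exception for the strict-mode path keeps the partial-results path allocation- and throw-free.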
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]