IvanVergiliev commented on a change in pull request #23766: [SPARK-26859][SQL]
Fix field writer index bug in non-vectorized ORC deserializer
URL: https://github.com/apache/spark/pull/23766#discussion_r256893397
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcDeserializer.scala
##########
@@ -47,18 +47,20 @@ class OrcDeserializer(
}.toArray
}
- private val validColIds = requestedColIds.filterNot(_ == -1)
-
def deserialize(orcStruct: OrcStruct): InternalRow = {
- var i = 0
- while (i < validColIds.length) {
- val value = orcStruct.getFieldValue(validColIds(i))
- if (value == null) {
- resultRow.setNullAt(i)
- } else {
- fieldWriters(i)(value)
+ var fieldWriterIndex = 0
+ var targetColumnIndex = 0
+ while (targetColumnIndex < requestedColIds.length) {
Review comment:
I considered this as well. The currently implemented version allowed me to
keep the change more isolated (a single method only), which seemed less risky
and easier to review.
However, I agree that changing `fieldWriters` as well is probably better for
readability in the long run. I’m definitely open to implementing it this way if
there’s consensus that this is better.
(Minor note: if we do switch to that implementation, I would make the
missing field writers `None` instead of `null`.)
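To make the alternative concrete, here is a minimal, self-contained sketch of the `Option`-based design described above: instead of compacting the requested column ids and walking two indices in `deserialize`, the writers are aligned one-to-one with the requested schema, with `None` marking columns absent from the ORC file (`requestedColIds` entry of -1). The names and simplified types here are illustrative, not the actual Spark classes.

```scala
object OptionWriterSketch {
  // Deserialize a flat array of physical ORC field values into a result row,
  // using one Option-wrapped writer per requested column.
  def deserialize(requestedColIds: Array[Int], orcFields: Array[Any]): Array[Any] = {
    val resultRow = new Array[Any](requestedColIds.length)

    // One writer slot per requested column; None where the file lacks the column.
    val fieldWriters: Array[Option[Any => Unit]] =
      requestedColIds.zipWithIndex.map { case (colId, ordinal) =>
        if (colId == -1) None
        else Some((value: Any) => resultRow(ordinal) = value)
      }

    // With writers aligned to the requested schema, a single index suffices.
    var i = 0
    while (i < fieldWriters.length) {
      fieldWriters(i) match {
        case Some(writer) =>
          val value = orcFields(requestedColIds(i))
          if (value != null) writer(value) // null keeps the slot's default (null)
        case None => // column missing from the file: leave null
      }
      i += 1
    }
    resultRow
  }

  def main(args: Array[String]): Unit = {
    // Column 1 is missing from the file; columns 0 and 2 map to physical ids 0 and 1.
    println(deserialize(Array(0, -1, 1), Array[Any]("a", 42)).mkString(","))
  }
}
```

The `match` on each slot replaces the separate `fieldWriterIndex`/`targetColumnIndex` bookkeeping, which is the readability gain being traded against the larger diff.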
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services