Yicong-Huang commented on code in PR #53822:
URL: https://github.com/apache/spark/pull/53822#discussion_r2697008144
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowWriter.scala:
##########
@@ -387,6 +387,11 @@ private[arrow] class ArrayWriter(
val valueVector: ListVector,
val elementWriter: ArrowFieldWriter) extends ArrowFieldWriter {
+  // SPARK-55056: Arrow format requires the ListArray offset buffer to have N+1 entries.
+  // Even when N=0, the buffer must contain [0]. Initialize the offset buffer at construction
+  // to ensure it exists even if no elements are written.
+  valueVector.getOffsetBuffer.setInt(0, 0)
Review Comment:
I think we don't need to check the allocated size.
The offset buffer is guaranteed to be allocated at this point. In
ArrowWriter.create(), we call vector.allocateNew() before createFieldWriter():
```
def create(root: VectorSchemaRoot): ArrowWriter = {
  val children = root.getFieldVectors().asScala.map { vector =>
    vector.allocateNew() // allocates all buffers, including nested children
    createFieldWriter(vector)
  }
  ...
}
```
Arrow's ListVector.allocateNew() recursively allocates buffers for all
nested child vectors, so when the ArrayWriter constructor runs, the offset
buffer already exists with sufficient capacity.
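   As a rough illustration (not part of this PR; the standalone vector setup and names below are only assumptions for the sketch), the offset buffer is already usable immediately after `allocateNew()`, so the constructor-time `setInt(0, 0)` cannot hit an unallocated buffer:
   ```
   import org.apache.arrow.memory.RootAllocator
   import org.apache.arrow.vector.complex.ListVector

   // Sketch: prepare a ListVector the same way ArrowWriter.create() does,
   // i.e. allocate buffers before any field writer touches the vector.
   val allocator = new RootAllocator(Long.MaxValue)
   val listVector = ListVector.empty("values", allocator)
   listVector.allocateNew() // allocates validity, offset, and child buffers

   // The offset buffer exists now, so writing the mandatory leading 0 is safe
   // even if no list elements are ever appended.
   listVector.getOffsetBuffer.setInt(0, 0)

   listVector.close()
   allocator.close()
   ```
   In the writer path the ListVector comes from the VectorSchemaRoot rather than being created standalone, but the allocation order is the same.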
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]