alamb commented on code in PR #7096:
URL: https://github.com/apache/arrow-rs/pull/7096#discussion_r1949946651
##########
arrow-buffer/src/buffer/boolean.rs:
##########
@@ -59,17 +59,31 @@ impl BooleanBuffer {
///
/// This method will panic if `buffer` is not large enough
pub fn new(buffer: Buffer, offset: usize, len: usize) -> Self {
+ Self::try_new(buffer, offset, len).unwrap()
+ }
+
+ /// Create a new [`BooleanBuffer`] from a [`Buffer`], an `offset` and `length` in bits
+ ///
+ /// # Returns
+ /// * `Ok(Self)` if the buffer is large enough
+ /// * `Err(String)` if the buffer is not large enough, with an appropriate message
+ ///
+ /// Note the `arrow-buffer` crate does not depend on `arrow-schema` and thus
+ /// does not have access to `ArrowError`, hence the use of a `String` for the error type.
+ pub fn try_new(buffer: Buffer, offset: usize, len: usize) -> Result<Self, String> {
Review Comment:
I updated the validation in `ArrayData` to actually return an error in this case rather than panic, in 0b88bbccb8.
I think this is in line with the guidance at https://github.com/apache/arrow-rs?tab=readme-ov-file#guidelines-for-panic-vs-result.
However, I am not super happy with the number of changes required to the code. I'll see if I can get the test to pass with a smaller change.
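For context, the check boils down to something like the following standalone sketch. This is a paraphrase, not the exact PR code; `check_bits_in_bounds` is a hypothetical helper and the error message wording is made up for illustration:

```rust
// Standalone paraphrase of the kind of bounds check `try_new` performs.
// `check_bits_in_bounds` is a hypothetical helper, not part of arrow-buffer.
fn check_bits_in_bounds(buffer_len_bytes: usize, offset: usize, len: usize) -> Result<(), String> {
    let required_bits = offset.saturating_add(len);
    let available_bits = buffer_len_bytes.saturating_mul(8);
    if required_bits > available_bits {
        return Err(format!(
            "buffer has {available_bits} bits, not enough for offset {offset} + length {len}"
        ));
    }
    Ok(())
}

fn main() {
    // 8 bits fit in a 1 byte buffer, 100 bits do not
    assert!(check_bits_in_bounds(1, 0, 8).is_ok());
    assert!(check_bits_in_bounds(1, 0, 100).is_err());
}
```

Callers that do have `arrow-schema` in scope (such as the `ArrayData` validation) can then map the `String` into an `ArrowError` with something like `.map_err(ArrowError::InvalidArgumentError)?`.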
##########
arrow-ipc/src/reader.rs:
##########
@@ -2492,4 +2539,109 @@ mod tests {
assert_eq!(decoded_batch.expect("Failed to read RecordBatch"), batch);
});
}
+
+ #[test]
+ fn test_validation_of_invalid_list_array() {
Review Comment:
I concluded it was impossible to create an invalid `PrimitiveArray` -- when I
made one that had invalid nulls, I got a panic in the `BooleanBuffer`
constructor here:
https://github.com/apache/arrow-rs/blob/d4432c023de6bd0ce841bab3605708a20502161a/arrow-buffer/src/buffer/boolean.rs#L65
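For anyone following along, here is a minimal (hypothetical) reproduction of that panic. It is not the PR's test, and the exact panic message may differ:

```rust
use arrow_buffer::{BooleanBuffer, Buffer};

fn main() {
    // Only 8 bits of backing storage
    let buffer = Buffer::from(vec![0u8]);
    // Requesting 100 bits trips the length check in `BooleanBuffer::new` and panics,
    // so an "invalid nulls" PrimitiveArray never gets built through the safe API.
    let _nulls = BooleanBuffer::new(buffer, 0, 100);
}
```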
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]