jhorstmann opened a new issue, #2856:
URL: https://github.com/apache/arrow-rs/issues/2856

   **Describe the bug**
   
   I think this is not a new bug, but rather an existing one that Miri is now 
able to detect.
   
   As far as I understand, the underlying bug is in packed_simd_2:
   
   
https://github.com/rust-lang/packed_simd/blob/e4ec7ce86ba5e6479409c91b9a9a6af25536b047/src/api/slice/write_to_slice.rs#L63
   
   ```rust
   pub unsafe fn write_to_slice_unaligned_unchecked(self, slice: &mut [$elem_ty]) {
       debug_assert!(slice.len() >= $elem_count);
       let target_ptr = slice.get_unchecked_mut(0) as *mut $elem_ty as *mut u8; // <-- here
       let self_ptr = &self as *const Self as *const u8;
       crate::ptr::copy_nonoverlapping(self_ptr, target_ptr, crate::mem::size_of::<Self>());
   }
   ```
   
   The marked line takes a mutable reference to only the first element of the 
slice and then uses the resulting pointer to write all `$elem_count` elements. 
The write therefore stays within the bounds of the underlying allocation, but 
the pointer's provenance covers only that first element, so the access violates 
the Stacked Borrows model.
   
   Since packed_simd_2 is unlikely to receive bugfix releases, this might be 
another reason to migrate to portable_simd (`std::simd`) instead.
   
   **To Reproduce**
   
   ```
   $ cargo --version
   cargo 1.66.0-nightly (0b84a35c2 2022-10-03)
   ```
   
   ```
   $ cargo miri test --features simd -- test_primitive_array_sum
   test compute::kernels::aggregate::tests::test_primitive_array_sum ... error: Undefined Behavior: attempting a write access using <5683393> at alloc2379099[0x4], but that tag does not exist in the borrow stack for this location
      --> /home/i526205/.cargo/registry/src/github.com-1ecc6299db9ec823/packed_simd_2-0.3.8/src/v512.rs:50:1
       |
   50  | / impl_i!([i32; 16]: i32x16, m32x16 | i32, u16 | test_v512 |
   51  | |         x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15 |
   52  | |         From: i8x16, u8x16, i16x16, u16x16 |
   53  | |         /// A 512-bit vector with 16 `i32` lanes.
   54  | | );
       | | ^
       | | |
       | |_attempting a write access using <5683393> at alloc2379099[0x4], but that tag does not exist in the borrow stack for this location
       |   this error occurs as part of an access at alloc2379099[0x0..0x40]
       |
       = help: this indicates a potential bug in the program: it performed an invalid operation, but the Stacked Borrows rules it violated are still experimental
       = help: see https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/stacked-borrows.md for further information
   help: <5683393> was created by a SharedReadWrite retag at offsets [0x0..0x4]
      --> arrow/src/datatypes/numeric.rs:323:26
       |
   323 |                 unsafe { simd_result.write_to_slice_unaligned_unchecked(slice) };
       |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   ...
   342 | make_numeric_type!(Int32Type, i32, i32x16, m32x16);
       | -------------------------------------------------- in this macro invocation
       = note: BACKTRACE:
       = note: inside `packed_simd_2::v512::<impl packed_simd_2::Simd<[i32; 16]>>::write_to_slice_unaligned_unchecked` at /home/i526205/.cargo/registry/src/github.com-1ecc6299db9ec823/packed_simd_2-0.3.8/src/api/slice/write_to_slice.rs:65:17
   note: inside `<datatypes::types::Int32Type as datatypes::numeric::ArrowNumericType>::write` at arrow/src/datatypes/numeric.rs:323:26
      --> arrow/src/datatypes/numeric.rs:323:26
       |
   323 |                 unsafe { simd_result.write_to_slice_unaligned_unchecked(slice) };
       |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   ...
   342 | make_numeric_type!(Int32Type, i32, i32x16, m32x16);
       | -------------------------------------------------- in this macro invocation
   note: inside `<compute::kernels::aggregate::simd::SumAggregate<datatypes::types::Int32Type> as compute::kernels::aggregate::simd::SimdAggregate<datatypes::types::Int32Type>>::reduce` at arrow/src/compute/kernels/aggregate.rs:390:13
      --> arrow/src/compute/kernels/aggregate.rs:390:13
       |
   390 |             T::write(simd_accumulator, slice);
       |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   note: inside `compute::kernels::aggregate::simd::simd_aggregation::<datatypes::types::Int32Type, compute::kernels::aggregate::simd::SumAggregate<datatypes::types::Int32Type>>` at arrow/src/compute/kernels/aggregate.rs:641:9
      --> arrow/src/compute/kernels/aggregate.rs:641:9
       |
   641 |         A::reduce(chunk_acc, rem_acc)
       |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   note: inside `compute::kernels::aggregate::sum::<datatypes::types::Int32Type>` at arrow/src/compute/kernels/aggregate.rs:655:5
      --> arrow/src/compute/kernels/aggregate.rs:655:5
       |
   655 |     simd::simd_aggregation::<T, SumAggregate<T>>(&array)
       |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   note: inside `compute::kernels::aggregate::tests::test_primitive_array_sum` at arrow/src/compute/kernels/aggregate.rs:692:24
      --> arrow/src/compute/kernels/aggregate.rs:692:24
       |
   692 |         assert_eq!(15, sum(&a).unwrap());
       |                        ^^^^^^^
   note: inside closure at arrow/src/compute/kernels/aggregate.rs:690:5
      --> arrow/src/compute/kernels/aggregate.rs:690:5
       |
   689 |       #[test]
       |       ------- in this procedural macro expansion
   690 | /     fn test_primitive_array_sum() {
   691 | |         let a = Int32Array::from(vec![1, 2, 3, 4, 5]);
   692 | |         assert_eq!(15, sum(&a).unwrap());
   693 | |     }
       | |_____^
   ```
   
   
   **Expected behavior**
   
   Tests should run without Miri failures.
   
   **Additional context**
   
   Migration to portable_simd was previously discussed in #1492.
   

