raulcd commented on code in PR #46286:
URL: https://github.com/apache/arrow/pull/46286#discussion_r2071242705
##########
python/pyarrow/tests/test_compute.py:
##########
@@ -3786,19 +3786,52 @@ def test_list_slice_bad_parameters():
pc.list_slice(arr, 0, 1, step=-1)
-def check_run_end_encode_decode(run_end_encode_opts=None):
-    arr = pa.array([1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3])
+def check_run_end_encode_decode(value_type, run_end_encode_opts=None):
+    values = [1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3]
+
+    # When value_type is floating point, we need to explicitly convert to
+    # create the array. There might be a better way to do this.
+    if value_type is pa.float16():
+        values = np.float16(values)
+    elif value_type is pa.float32():
+        values = np.float32(values)
+    elif value_type is pa.float64():
+        values = np.float64(values)
Review Comment:
The failure on the Python `no_numpy` job is related. Could you also add a
guard so those conversions only run if `np is not None`?
```suggestion
    if np is not None:
        if value_type is pa.float16():
            values = np.float16(values)
        elif value_type is pa.float32():
            values = np.float32(values)
        elif value_type is pa.float64():
            values = np.float64(values)
```
I think this is better than adding a `@pytest.mark.numpy` marker to the test,
because we still test the rest of the types.
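
For contrast, a rough sketch of the marker-based alternative (the test name and
parametrization below are illustrative, not the actual test): with the marker,
the `no_numpy` job skips the whole test, including value types that never touch
NumPy, whereas the inline `np is not None` guard only skips the float
conversions.

```python
import pytest
import pyarrow as pa

# Illustrative sketch only: marking the entire test means no_numpy builds
# skip every value type, not just the floating-point ones that need the
# explicit NumPy conversion.
@pytest.mark.numpy
@pytest.mark.parametrize("value_type", [pa.int64(), pa.float32(), pa.float64()])
def test_run_end_encode_decode(value_type):
    check_run_end_encode_decode(value_type)
```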