westonpace commented on code in PR #36266:
URL: https://github.com/apache/arrow/pull/36266#discussion_r1259944135


##########
python/pyarrow/tests/test_csv.py:
##########
@@ -1968,3 +1968,28 @@ def test_write_csv_decimal(tmpdir, type_factory):
     out = read_csv(tmpdir / "out.csv")
 
     assert out.column('col').cast(type) == table.column('col')
+
+
[email protected]("data_size", (
+    int(1E2),
+    int(1E4),
+    int(1E6)
+))
+def test_large_binary_write_to_csv(tmpdir, data_size):
+    file_name = tmpdir / "fixedsize_"+str(data_size)+".csv"
+
+    nparr = np.frombuffer(np.random.randint(65, 91, data_size, 'u1'), 'S4')
+
+    fixed_arr = pa.array(nparr, pa.string())

Review Comment:
   ```suggestion
       fixed_arr = pa.array(nparr, pa.binary(4))
   ```
   This test, as written, doesn't actually reproduce the error. Changing this to be fixed size binary does trigger the error (assuming you compile without your fix). You will need to update the comparison though, because fixed size binary will be read back as string without any other options. So you will also need to specify the column type (using convert_options) when reading from CSV.
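
   To illustrate the suggestion, here is a minimal round-trip sketch of what the adjusted test could look like: build a fixed-size-binary column, write it with `write_csv`, and read it back with an explicit `column_types` entry in `ConvertOptions` so the comparison holds. The `'col'` column name and the temp-file handling are illustrative, and this assumes a pyarrow build that includes the fix under review.

   ```python
   import os
   import tempfile

   import numpy as np
   import pyarrow as pa
   from pyarrow import csv

   data_size = 100  # illustrative; the test parametrizes 1E2 / 1E4 / 1E6

   # Random uppercase ASCII bytes, reinterpreted as 4-byte chunks.
   nparr = np.frombuffer(np.random.randint(65, 91, data_size, 'u1'), 'S4')
   fixed_arr = pa.array(nparr.tolist(), pa.binary(4))
   table = pa.table({'col': fixed_arr})

   path = os.path.join(tempfile.mkdtemp(), "fixedsize_%d.csv" % data_size)
   csv.write_csv(table, path)

   # Without convert_options, fixed size binary is read back as string,
   # so specify the column type explicitly when reading.
   convert_options = csv.ConvertOptions(column_types={'col': pa.binary(4)})
   out = csv.read_csv(path, convert_options=convert_options)

   assert out.column('col') == table.column('col')
   ```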



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
