joosthooz commented on code in PR #13709:
URL: https://github.com/apache/arrow/pull/13709#discussion_r932621048


##########
python/pyarrow/_dataset.pyx:
##########
@@ -1237,6 +1238,40 @@ cdef class CsvFileFormat(FileFormat):
         return f"<CsvFileFormat parse_options={self.parse_options}>"
 
 
+# From io.pxi
+def py_buffer(object obj):
+    """
+    Construct an Arrow buffer from a Python bytes-like or buffer-like object.
+
+    Parameters
+    ----------
+    obj : object
+        The object from which the buffer should be constructed.
+    """
+    cdef shared_ptr[CBuffer] buf
+    buf = GetResultValue(PyBuffer.FromPyObject(obj))
+    return pyarrow_wrap_buffer(buf)
+
+
+# From io.pxi
+cdef void _cb_transform(transform_func, const shared_ptr[CBuffer]& src,
+                        shared_ptr[CBuffer]* dest) except *:
+    # Apply the Python-level transform to the wrapped source buffer and
+    # store the result, re-wrapped as a CBuffer, in the destination slot.
+    py_dest = transform_func(pyarrow_wrap_buffer(src))
+    dest[0] = pyarrow_unwrap_buffer(py_buffer(py_dest))
+
+
+# From io.pxi
+class Transcoder:

Review Comment:
   I hope to add that as an option in the future, but for now I wanted to 
mimic the behavior of `read_csv` as closely as possible. We'll have to see 
how much of a bottleneck this creates, but for scanning a single file it 
shouldn't matter, and that is good enough for my use case: I just want to be 
able to deal with files that are larger than memory, which `pyarrow.dataset` 
allows me to do and `read_csv` does not.


