jorisvandenbossche commented on code in PR #43818:
URL: https://github.com/apache/arrow/pull/43818#discussion_r1732717576


##########
python/pyarrow/parquet/core.py:
##########
@@ -1163,13 +1163,13 @@ def _get_pandas_index_columns(keyvalues):
 buffer_size : int, default 0
     If positive, perform read buffering when deserializing individual
     column chunks. Otherwise IO calls are unbuffered.
-partitioning : pyarrow.dataset.Partitioning or str or list of str, \
+partitioning : :py:class:`pyarrow.dataset.Partitioning` or str or list of str, \

Review Comment:
   OK, looking at 
https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_to_dataset.html,
 I see this works automatically for several other pyarrow objects, but indeed 
not for `Partitioning`.
   
   The page does exist 
(https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Partitioning.html,
 although it's not that informative ... it's the subclasses that have more 
information), so I'm not entirely sure why the auto-link doesn't work. Maybe 
something with case insensitivity (and clashing with `partitioning()`)?
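   For reference, a minimal sketch of the pattern the diff moves to: spelling 
out the explicit `:py:class:` role in the numpydoc type line, so Sphinx is told 
unambiguously to link the `Partitioning` class rather than guessing (and 
possibly colliding with the lowercase `pyarrow.dataset.partitioning()` factory 
function). The function here is hypothetical, just to carry the docstring:

```python
def read_table_example(partitioning=None):
    """Hypothetical function illustrating the explicit cross-reference role.

    Parameters
    ----------
    partitioning : :py:class:`pyarrow.dataset.Partitioning` or str or list of str
        With the explicit ``:py:class:`` role, Sphinx resolves the link to the
        ``Partitioning`` class directly, instead of relying on the automatic
        name lookup that fails (or clashes) here.
    """
    return partitioning
```

   Whether the explicit role is needed at all is exactly the open question in 
this thread; the sketch only shows the syntax being proposed.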



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
