westonpace commented on code in PR #35568:
URL: https://github.com/apache/arrow/pull/35568#discussion_r1251302128


##########
python/pyarrow/dataset/protocol.py:
##########
@@ -0,0 +1,180 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Protocol definitions for pyarrow.dataset
+
+These provide the abstract interface for a dataset. Other libraries may implement
+this interface to expose their data, without having to extend PyArrow's classes.
+
+Applications and libraries that want to consume datasets should accept datasets
+that implement these protocols, rather than requiring the specific
+PyArrow classes.
+
+See Extending PyArrow Datasets for more information:
+
+https://arrow.apache.org/docs/python/integration/dataset.html
+"""
+import sys
+from abc import abstractmethod, abstractproperty
+from typing import Iterator, List, Optional
+
+# TODO: remove once we drop support for Python 3.7
+if sys.version_info >= (3, 8):
+    from typing import Protocol, runtime_checkable
+else:
+    from typing_extensions import Protocol, runtime_checkable
+
+from pyarrow.dataset import Expression
+from pyarrow import Table, RecordBatchReader, Schema
+
+
+@runtime_checkable
+class Scanner(Protocol):
+    """
+    A scanner implementation for a dataset.
+
+    This may be a scan of a whole dataset, or a scan of a single fragment.
+    """
+    @abstractmethod
+    def count_rows(self) -> int:
+        """
+        Count the number of rows in this dataset.
+
+        Implementors may provide optimized code paths that compute this from metadata.
+
+        Returns
+        -------
+        int
+            The number of rows in the dataset.
+        """
+        ...
+
+    @abstractmethod
+    def head(self, num_rows: int) -> Table:
+        """
+        Get the first ``num_rows`` rows of the dataset.
+
+        Parameters
+        ----------
+        num_rows : int
+            The number of rows to return.
+
+        Returns
+        -------
+        Table
+            A table containing the first ``num_rows`` rows of the dataset.
+        """
+        ...
+
+    @abstractmethod
+    def to_reader(self) -> RecordBatchReader:
+        """
+        Create a Record Batch Reader for this scan.
+
+        This is used to read the data in chunks.
+
+        Returns
+        -------
+        RecordBatchReader
+        """
+        ...
+
+
+@runtime_checkable
+class Scannable(Protocol):
+    @abstractmethod
+    def scanner(self, columns: Optional[List[str]] = None,
+                filter: Optional[Expression] = None, batch_size: Optional[int] = None,
+                use_threads: bool = True,
+                **kwargs) -> Scanner:
+        """Create a scanner for this dataset.
+
+        Parameters
+        ----------
+        columns : List[str], optional
+            Names of columns to include in the scan. If None, all columns are
+            included.
+        filter : Expression, optional
+            Filter expression to apply to the scan. If None, no filter is applied.
+        batch_size : int, optional
+            The number of rows to include in each batch. If None, the default
+            value is used. The default value is implementation specific.

Review Comment:
   This parameter has lost importance in Arrow C++ datasets. It used to be an important tuning parameter that affected the size of the batches used internally by the C++ implementation. However, it didn't make sense for the user to pick the correct value: there are multiple batch sizes inside the C++ implementation, and the right value might even depend on the schema and be quite difficult to calculate.

   I think it still has value, especially as a "max batch size". The user needs some way to say "don't give me 20GB of data all at once".

   So I think it needs to be a hard upper limit, but it can be a soft lower limit. We could either call it `max_batch_size` (and ignore it as a lower limit entirely) or `preferred_batch_size` (and explain that only the upper limit is strictly enforced). I don't think using this as an upper limit is overly burdensome, since slicing tables/batches should be pretty easy and lightweight. The reverse (concatenating batches) is more complicated and expensive.
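
   A minimal sketch of the slicing this refers to, assuming an implementation that wraps a source `RecordBatchReader`; the `bounded_batches` helper name is hypothetical:

   ```python
   import pyarrow as pa

   def bounded_batches(reader, max_batch_size):
       """Yield record batches with at most max_batch_size rows each."""
       for batch in reader:
           # RecordBatch.slice is zero-copy, so enforcing the hard upper
           # limit is cheap; batches already under the limit pass through
           # unchanged, making the value only a soft lower limit.
           for offset in range(0, batch.num_rows, max_batch_size):
               yield batch.slice(offset, max_batch_size)

   table = pa.table({"x": list(range(10))})
   assert all(b.num_rows <= 3
              for b in bounded_batches(table.to_reader(), max_batch_size=3))
   ```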



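For illustration, a minimal sketch of a third-party class that structurally satisfies the proposed `Scannable` and `Scanner` protocols; the `InMemoryDataset` and `InMemoryScanner` names are hypothetical:

```python
import pyarrow as pa


class InMemoryScanner:
    """Toy Scanner over a fixed table (hypothetical, for illustration)."""

    def __init__(self, table: pa.Table):
        self._table = table

    def count_rows(self) -> int:
        return self._table.num_rows

    def head(self, num_rows: int) -> pa.Table:
        return self._table.slice(0, num_rows)

    def to_reader(self) -> pa.RecordBatchReader:
        return self._table.to_reader()


class InMemoryDataset:
    """Toy dataset exposing only the Scannable protocol."""

    def __init__(self, table: pa.Table):
        self._table = table

    def scanner(self, columns=None, filter=None, batch_size=None,
                use_threads=True, **kwargs):
        table = self._table
        if columns is not None:
            table = table.select(columns)
        if filter is not None:
            # Table.filter accepts a dataset Expression in recent
            # pyarrow versions.
            table = table.filter(filter)
        return InMemoryScanner(table)
```

Because the protocols are decorated with `@runtime_checkable`, `isinstance(InMemoryDataset(table), Scannable)` passes purely structurally, with no need to inherit from any PyArrow class.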