wjones127 commented on code in PR #35568:
URL: https://github.com/apache/arrow/pull/35568#discussion_r1230089142


##########
python/pyarrow/dataset/protocol.py:
##########
@@ -0,0 +1,77 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Protocol definitions for pyarrow.dataset
+
+These provide the abstract interface for a dataset. Other libraries may implement
+this interface to expose their data, without having to extend PyArrow's classes.
+
+Applications and libraries that want to consume datasets should accept datasets
+that implement these protocols, rather than requiring the specific
+PyArrow classes.
+"""
+from abc import abstractmethod
+from typing import Iterator, List, Optional, Protocol
+
+from pyarrow.dataset import Expression
+from pyarrow import Table, IntegerArray, RecordBatch, RecordBatchReader, Schema
+
+
+class Scanner(Protocol):
+    @abstractmethod
+    def count_rows(self) -> int:
+        ...
+    
+    @abstractmethod
+    def head(self, num_rows: int) -> Table:
+        ...
+
+    @abstractmethod
+    def take(self, indices: IntegerArray) -> Table:
+        ...
+    
+    @abstractmethod
+    def to_table(self) -> Table:
+        ...
+    
+    @abstractmethod
+    def to_batches(self) -> Iterator[RecordBatch]:
+        ...
+
+    @abstractmethod
+    def to_reader(self) -> RecordBatchReader:
+        ...
+
+
+class Scannable(Protocol):
+    @abstractmethod
+    def scanner(self, columns: Optional[List[str]] = None,
+                filter: Optional[Expression] = None, **kwargs) -> Scanner:
+        ...
+    
+    @abstractmethod
+    def schema(self) -> Schema:
+        ...
+
+
+class Fragment(Scannable):

Review Comment:
   Okay. I think the expectation is that residual filtering needs to be done in this API. The way this is handled in PyArrow's implementation is that each fragment has a "guarantee". So the fragments look like:
   
   ```
   <fragment 1 guarantee="date(ts) = '2022-05-03'">
   <fragment 2 guarantee="date(ts) = '2022-05-04'">
   <fragment 3 guarantee="date(ts) = '2022-05-05'">
   ```
   
   When the consumer calls `fragments = dataset.get_fragments(filter)`, fragment 1 is eliminated, because it cannot match. All the fragments that might match are returned.
   
   The consumer then passes the same filter to the scan of each fragment. When it is passed to fragment 2, the filter is executed, because there are some rows it might still filter out. When it is passed to fragment 3, the filter is simplified to `true`, since we know all rows already satisfy it based on the guarantee `date(ts) = '2022-05-05'`.
   
   In the end, the stream produced by the dataset scanner must not contain any rows that the filter would remove.
   
   Does that API make sense now?
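
   To make the pruning/residual-filtering behavior concrete, here is a runnable sketch using PyArrow's existing dataset implementation (not this PR's protocol classes). The column names, the single-column hive partitioning, and the temp-directory layout are all illustrative assumptions:

   ```python
   import tempfile
   from pathlib import Path

   import pyarrow as pa
   import pyarrow.dataset as ds

   # Illustrative table: one hive partition directory (and thus one
   # fragment) per distinct "date" value.
   table = pa.table({
       "date": ["2022-05-03", "2022-05-04", "2022-05-04", "2022-05-05"],
       "value": [1, 2, 3, 4],
   })
   root = Path(tempfile.mkdtemp())
   ds.write_dataset(table, root, format="parquet",
                    partitioning=["date"], partitioning_flavor="hive")

   dataset = ds.dataset(root, format="parquet", partitioning="hive")

   # Filter on the partition column. The fragment for date=2022-05-03
   # carries a guarantee (its partition expression) that contradicts the
   # filter, so get_fragments() eliminates it up front.
   filt = ds.field("date") > "2022-05-03"
   fragments = list(dataset.get_fragments(filter=filt))
   print(len(fragments))  # 2 fragments survive
   for frag in fragments:
       # Each fragment's guarantee, e.g. (date == "2022-05-04")
       print(frag.partition_expression)

   # Passing the same filter to the scan applies any residual filtering;
   # the result contains no rows the filter would remove.
   result = dataset.to_table(filter=filt)
   print(sorted(result.column("value").to_pylist()))  # [2, 3, 4]
   ```

   The protocol proposed in this PR would let a third-party dataset plug into the same consumer-side pattern: prune with `get_fragments(filter)`, then pass the filter again to each fragment's scan.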


