westonpace commented on code in PR #35568:
URL: https://github.com/apache/arrow/pull/35568#discussion_r1197393156


##########
python/pyarrow/dataset/protocol.py:
##########
@@ -0,0 +1,77 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Protocol definitions for pyarrow.dataset
+
+These provide the abstract interface for a dataset. Other libraries may implement
+this interface to expose their data, without having to extend PyArrow's classes.
+
+Applications and libraries that want to consume datasets should accept datasets
+that implement these protocols, rather than requiring the specific
+PyArrow classes.
+"""
+from abc import abstractmethod
+from typing import Iterator, List, Optional, Protocol
+
+from pyarrow.dataset import Expression
+from pyarrow import Table, IntegerArray, RecordBatch, RecordBatchReader, Schema
+
+
+class Scanner(Protocol):
+    @abstractmethod
+    def count_rows(self) -> int:
+        ...
+
+    @abstractmethod
+    def head(self, num_rows: int) -> Table:
+        ...
+
+    @abstractmethod
+    def take(self, indices: IntegerArray) -> Table:
+        ...
+
+    @abstractmethod
+    def to_table(self) -> Table:
+        ...
+
+    @abstractmethod
+    def to_batches(self) -> Iterator[RecordBatch]:
+        ...
+
+    @abstractmethod
+    def to_reader(self) -> RecordBatchReader:
+        ...
+
+
+class Scannable(Protocol):
+    @abstractmethod
+    def scanner(self, columns: Optional[List[str]] = None,
+                filter: Optional[Expression] = None, **kwargs) -> Scanner:
+        ...
+
+    @abstractmethod
+    def schema(self) -> Schema:
+        ...
+
+
+class Fragment(Scannable):

Review Comment:
   Perhaps "fragment" isn't the right word here.  If this is "something that 
can be scanned" and it can maybe be scanned in parts then we could go back to 
some old wording we used to have which is "scan tasks".
   
   At the very least, we should be clear that a fragment is not
necessarily a file.
   
   I do think it's useful for parallelization.  However, I agree with
@Fokko: the fragments should not be scannable in the same way the dataset
is.  It doesn't make sense to provide them with separate filters.  They
should essentially have `to_reader` and nothing else.
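   
   Roughly, the fragment protocol could then be as small as this (just a
sketch; the names are illustrative, not a concrete proposal):
   
   ```python
   from abc import abstractmethod
   from typing import Protocol
   
   from pyarrow import RecordBatchReader
   
   
   class Fragment(Protocol):
       """A unit of a dataset that can be read, but not re-filtered."""
   
       @abstractmethod
       def to_reader(self) -> RecordBatchReader:
           """Read this fragment's data as a stream of record batches."""
           ...
   ```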
   
   This does bring up concerns regarding threading and resource
contention.  For example, if DataFusion were using PyArrow as a data
source and spreading the work across many fragments, it probably wouldn't
want those individual fragment scanners using their own threads.
   
   Perhaps we can add `use_threads`?  It's a rather coarse control, but
it may be sufficient.
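   
   Something like the following, as a sketch (`use_threads` here is just a
proposed name, not an existing parameter):
   
   ```python
   from abc import abstractmethod
   from typing import List, Optional, Protocol
   
   from pyarrow.dataset import Expression
   
   
   class Scannable(Protocol):
       @abstractmethod
       def scanner(self, columns: Optional[List[str]] = None,
                   filter: Optional[Expression] = None,
                   # Proposed knob: lets a caller that manages its own
                   # parallelism (e.g. DataFusion) disable internal threads.
                   use_threads: bool = True,
                   **kwargs) -> "Scanner":  # the Scanner protocol above
           ...
   ```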


