amol- commented on code in PR #13155:
URL: https://github.com/apache/arrow/pull/13155#discussion_r876856035


##########
python/pyarrow/_dataset.pyx:
##########
@@ -405,6 +405,27 @@ cdef class Dataset(_Weakrefable):
                                               use_threads=use_threads, coalesce_keys=coalesce_keys,
                                               output_type=InMemoryDataset)
 
+    def filter(self, expr):
+        """
+        Select rows from the Dataset.
+
+        The Dataset can be filtered based on a boolean :class:`Expression` filter.
+
+        Parameters
+        ----------
+        expr : Expression
+            The boolean :class:`Expression` to filter the dataset with.
+
+        Returns
+        -------
+        filtered : InMemoryDataset
+            An InMemoryDataset with the same schema, containing only the rows
+            selected by the applied filter.
+
+        """
+        return _pc()._exec_plan._filter_table(self, expr,

Review Comment:
   Actually, I double-checked this: apart from reusing `Scanner`, the end result would still be materialised into a table, since you would have to do
   ```python
   InMemoryDataset(dataset.scanner(filter=X).to_batches())
   ```
   The `InMemoryDataset` constructor will consume all the batches to create a `Table`, doing exactly what `_filter_table` does.
   
   So I'm not sure there would be any benefit to using `Scanner` instead of `_filter_table`.
   
   It would probably make sense to add a way to build a `Dataset` back from a `Scanner`, so that we can defer evaluation and chain multiple operations on the dataset.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
