Fokko commented on code in PR #6645:
URL: https://github.com/apache/iceberg/pull/6645#discussion_r1085961480
##########
python/pyiceberg/io/pyarrow.py:
##########
@@ -470,13 +471,54 @@ def expression_to_pyarrow(expr: BooleanExpression) -> pc.Expression:
return boolean_expression_visit(expr, _ConvertToArrowExpression())
+def _file_to_table(
+    fs: FileSystem,
+    task: FileScanTask,
+    bound_row_filter: BooleanExpression,
+    projected_schema: Schema,
+    projected_field_ids: Set[int],
+    case_sensitive: bool,
+) -> pa.Table:
+    _, path = PyArrowFileIO.parse_location(task.file.file_path)
+
+    # Get the schema
+    with fs.open_input_file(path) as fout:
+        parquet_schema = pq.read_schema(fout)
+        schema_raw = parquet_schema.metadata.get(ICEBERG_SCHEMA)
+        if schema_raw is None:
+            raise ValueError(
+                "Iceberg schema is not embedded into the Parquet file, see https://github.com/apache/iceberg/issues/6505"
+            )
+        file_schema = Schema.parse_raw(schema_raw)
+
+    if file_schema is None:
+        raise ValueError(f"Missing Iceberg schema in Metadata for file: {path}")
+
+    pyarrow_filter = None
+    if bound_row_filter is not AlwaysTrue():
+        translated_row_filter = translate_column_names(bound_row_filter, file_schema, case_sensitive=case_sensitive)
+        bound_file_filter = bind(file_schema, translated_row_filter, case_sensitive=case_sensitive)
+        pyarrow_filter = expression_to_pyarrow(bound_file_filter)
+
+    file_project_schema = prune_columns(file_schema, projected_field_ids, select_full_types=False)
+
+    # Prune the stuff that we don't need anyway
+    file_project_schema_arrow = schema_to_pyarrow(file_project_schema)
+
+    arrow_table = ds.dataset(
Review Comment:
Nice, let me add that! Looking at the docs, that makes a lot of sense.
Also on my list is to test reading tables instead of datasets:
https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html#pyarrow-parquet-read-table
A dataset is a high-level construct for reading data in a lazy fashion. Since we
create a dataset per file, this might not be optimal; a table might be more
efficient. I haven't tried this yet because a dataset allows you to pass in an
expression, while for a table you need to pass the filter in DNF format, but we
have that conversion available now.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]