milesgranger commented on code in PR #14574:
URL: https://github.com/apache/arrow/pull/14574#discussion_r1019245324


##########
python/pyarrow/parquet/core.py:
##########
@@ -3423,13 +3427,15 @@ def write_metadata(schema, where, metadata_collector=None, **kwargs):
     ...     table.schema, 'dataset_metadata/_metadata',
     ...     metadata_collector=metadata_collector)
     """
-    writer = ParquetWriter(where, schema, **kwargs)
+    filesystem, where = _resolve_filesystem_and_path(where, filesystem)
+
+    writer = ParquetWriter(where, schema, filesystem, **kwargs)
     writer.close()
 
     if metadata_collector is not None:
         # ParquetWriter doesn't expose the metadata until it's written. Write
         # it and read it again.
-        metadata = read_metadata(where)
+        metadata = read_metadata(where, filesystem=filesystem)
         for m in metadata_collector:
             metadata.append_row_groups(m)
         metadata.write_metadata_file(where)

Review Comment:
   Removing the `if filesystem is not None` at the top of this function 
resolved the previous error with file URIs, where `write_metadata_file` was 
unable to open "file:///...". But I suppose you're right on that point; I've 
added that condition back and updated the test. I can add the s3 variant as 
well.
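
   For illustration, the guard being discussed can be sketched roughly as 
follows. This is a simplified stand-in, not pyarrow's actual code: 
`resolve_filesystem_and_path` here only mimics the behavior of pyarrow's 
internal `_resolve_filesystem_and_path`, and the string filesystem names are 
placeholders rather than real pyarrow filesystem objects.

```python
from urllib.parse import urlparse


def resolve_filesystem_and_path(where, filesystem=None):
    """Simplified stand-in: infer a filesystem from a URI only when the
    caller did not pass one explicitly (the `is not None` guard)."""
    if filesystem is not None:
        # An explicit filesystem was supplied: leave the path untouched.
        return filesystem, where
    parsed = urlparse(str(where))
    if parsed.scheme == "file":
        # Strip the URI scheme so the local filesystem sees a plain path,
        # avoiding attempts to open the literal string "file:///...".
        return "LocalFileSystem", parsed.path
    if parsed.scheme in ("s3", "s3a"):
        return "S3FileSystem", parsed.netloc + parsed.path
    # No scheme at all: treat the input as a plain local path.
    return "LocalFileSystem", str(where)


# A "file://" URI is resolved to a plain path on the local filesystem,
# while an explicitly passed filesystem short-circuits the inference.
fs, path = resolve_filesystem_and_path("file:///data/_metadata")
```

The point of the guard is that an explicitly passed filesystem must win over 
URI inference, so existing callers that pass `filesystem=` keep their paths 
unmodified.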



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
