samredai commented on a change in pull request #4081:
URL: https://github.com/apache/iceberg/pull/4081#discussion_r806126150



##########
File path: python/src/iceberg/io/pyarrow.py
##########
@@ -0,0 +1,215 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""FileIO implementation for reading and writing table files that uses pyarrow.fs
+
+This file contains a FileIO implementation that relies on the filesystem interface provided
+by PyArrow. It relies on PyArrow's `from_uri` method that infers the correct filesystem
+type to use. Theoretically, this allows the supported storage types to grow naturally
+with the pyarrow library.
+"""
+
+from typing import Union
+from urllib.parse import ParseResult, urlparse
+
+from pyarrow import NativeFile
+from pyarrow.fs import FileSystem, FileType
+
+from iceberg.io.base import FileIO, InputFile, InputStream, OutputFile, OutputStream
+
+
+class PyArrowInputFile(InputFile):
+    """An InputFile implementation that uses a pyarrow filesystem to generate pyarrow.lib.NativeFile instances for reading
+
+    Args:
+        location(str): A URI or a path to a local file
+
+    Attributes:
+        location(str): The URI or path to a local file for a PyArrowInputFile instance
+        parsed_location(urllib.parse.ParseResult): The parsed location with attributes `scheme`, `netloc`, `path`, `params`,
+          `query`, and `fragment`
+        exists(bool): Whether the file exists or not
+
+    Examples:
+        >>> from iceberg.io.pyarrow import PyArrowInputFile
+        >>> input_file = PyArrowInputFile("s3://foo/bar.txt")
+        >>> file_content = input_file.open().read()
+    """
+
+    def __init__(self, location: str):
+        parsed_location = urlparse(location)  # Create a ParseResult from the uri
+
+        if parsed_location.scheme and parsed_location.scheme not in (
+            "file",
+            "mock",
+            "s3fs",
+            "hdfs",
+            "viewfs",
+        ):  # Validate that the scheme, if present, is one of the supported schemes
+            raise ValueError("PyArrowInputFile location must have a scheme of `file`, `mock`, `s3fs`, `hdfs`, or `viewfs`")

Review comment:
       Based on the response on dev@arrow, it looks like caching here is
reasonable. I updated the PR to do this, and it actually streamlined a few
things (it removed the need for `parsed_location` to be a class attribute and
made some patching in tests a bit easier, since you can now just override
`self._filesystem`).
   
   The approach I took was adding a module level `_FILESYSTEM_INSTANCES` 
dictionary to use as a cache and a module level function `get_filesystem`. The 
function takes a location, pulls out the scheme, and then checks 
`_FILESYSTEM_INSTANCES` to determine if a cached filesystem already exists for 
that scheme. If the key doesn't exist, it instantiates a new filesystem using 
`FileSystem.from_uri(location)` and adds that filesystem instance to 
`_FILESYSTEM_INSTANCES`. The constructor for `PyArrowFile` then just does 
`self._filesystem = get_filesystem(location)` and uses that attribute 
throughout the other methods.
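   To make the shape of this concrete, here is a rough sketch of a scheme-keyed, module-level cache. `_filesystem_from_uri` is a hypothetical stand-in for `pyarrow.fs.FileSystem.from_uri` (which returns a `(filesystem, path)` pair) so the example runs without pyarrow installed; the real code would call `FileSystem.from_uri(location)` directly:
   ```python
   from urllib.parse import urlparse

   # Hypothetical stand-in for pyarrow.fs.FileSystem.from_uri so this sketch
   # runs without pyarrow; it mimics the (filesystem, path) return shape.
   def _filesystem_from_uri(location: str):
       parsed = urlparse(location)
       return object(), parsed.path  # placeholder filesystem instance and path

   # Module-level cache of filesystem instances, keyed by URI scheme
   _FILESYSTEM_INSTANCES: dict = {}

   def get_filesystem(location: str):
       """Return a cached filesystem for the location's scheme, creating one on a cache miss"""
       scheme = urlparse(location).scheme or "file"  # No scheme implies a local file
       if scheme not in _FILESYSTEM_INSTANCES:
           filesystem, _ = _filesystem_from_uri(location)
           _FILESYSTEM_INSTANCES[scheme] = filesystem
       return _FILESYSTEM_INSTANCES[scheme]
   ```
   A `PyArrowFile.__init__` would then just assign `self._filesystem = get_filesystem(location)`, which is also what makes patching in tests easier.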

##########
File path: python/src/iceberg/io/pyarrow.py
##########
@@ -0,0 +1,178 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""FileIO implementation for reading and writing table files that uses pyarrow.fs
+
+This file contains a FileIO implementation that relies on the filesystem interface provided
+by PyArrow. It relies on PyArrow's `from_uri` method that infers the correct filesystem
+type to use. Theoretically, this allows the supported storage types to grow naturally
+with the pyarrow library.
+"""
+
+import os
+from typing import Union
+from urllib.parse import urlparse
+
+from pyarrow.fs import FileSystem, FileType
+
+from iceberg.io.base import FileIO, InputFile, InputStream, OutputFile, OutputStream
+
+_FILESYSTEM_INSTANCES: dict = {}
+
+
+def get_filesystem(location: str):
+    """Retrieve a pyarrow.fs.FileSystem instance
+
+    This method checks _FILESYSTEM_INSTANCES for an existing filesystem based on the location's
+    scheme, e.g. s3, hdfs, viewfs. If an existing filesystem has not been cached, it instantiates a new
+    filesystem using `pyarrow.fs.FileSystem.from_uri(location)`, caches the returned filesystem, and
+    also returns that filesystem. If a path with no scheme is provided, it's assumed to be a path to
+    a local file.
+
+    Args:
+        location(str): The location of the file
+
+    Returns:
+        pyarrow.fs.FileSystem: An implementation of the FileSystem base class inferred from the location
+
+    Raises:
+        ArrowInvalid: A suitable FileSystem implementation cannot be found based on the location provided
+    """
+    parsed_location = urlparse(location)  # Create a ParseResult from the uri
+    if not parsed_location.scheme:  # If no scheme, assume the path is to a local file
+        if _FILESYSTEM_INSTANCES.get("local"):
+            filesystem = _FILESYSTEM_INSTANCES["local"]
+        else:
+            filesystem, _ = FileSystem.from_uri(os.path.abspath(location))
+            _FILESYSTEM_INSTANCES["local"] = filesystem
+    elif _FILESYSTEM_INSTANCES.get(parsed_location.scheme):  # Check for a cached filesystem

Review comment:
       Hmm, I see, I just read Weston's response. I'll revert this commit and
just call `from_uri` in the `PyArrowFile` constructor. Maybe we can hold off
on optimizing that and think through some approaches when we get there. How
does that sound?
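   For reference, the reverted, cache-free version might look something like the sketch below. `_from_uri` is a hypothetical placeholder for `pyarrow.fs.FileSystem.from_uri` (which returns a `(filesystem, path)` tuple) so the sketch runs standalone:
   ```python
   from urllib.parse import urlparse

   # Hypothetical placeholder for pyarrow.fs.FileSystem.from_uri so the sketch
   # runs without pyarrow; the real call infers the filesystem type from the URI.
   def _from_uri(location: str):
       parsed = urlparse(location)
       return f"<{parsed.scheme or 'local'}-filesystem>", parsed.path

   class PyArrowFile:
       """Sketch: resolve the filesystem eagerly in the constructor, with no caching"""

       def __init__(self, location: str):
           self.location = location
           # One from_uri call per file instance; caching is deferred as a later optimization
           self._filesystem, self._path = _from_uri(location)
   ```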

##########
File path: python/src/iceberg/io/pyarrow.py
##########
@@ -0,0 +1,178 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""FileIO implementation for reading and writing table files that uses pyarrow.fs
+
+This file contains a FileIO implementation that relies on the filesystem interface provided
+by PyArrow. It relies on PyArrow's `from_uri` method that infers the correct filesystem
+type to use. Theoretically, this allows the supported storage types to grow naturally
+with the pyarrow library.
+"""
+
+import os
+from typing import Union
+from urllib.parse import urlparse
+
+from pyarrow.fs import FileSystem, FileType
+
+from iceberg.io.base import FileIO, InputFile, InputStream, OutputFile, OutputStream
+
+_FILESYSTEM_INSTANCES: dict = {}
+
+
+def get_filesystem(location: str):
+    """Retrieve a pyarrow.fs.FileSystem instance
+
+    This method checks _FILESYSTEM_INSTANCES for an existing filesystem based on the location's
+    scheme, e.g. s3, hdfs, viewfs. If an existing filesystem has not been cached, it instantiates a new
+    filesystem using `pyarrow.fs.FileSystem.from_uri(location)`, caches the returned filesystem, and
+    also returns that filesystem. If a path with no scheme is provided, it's assumed to be a path to
+    a local file.
+
+    Args:
+        location(str): The location of the file
+
+    Returns:
+        pyarrow.fs.FileSystem: An implementation of the FileSystem base class inferred from the location
+
+    Raises:
+        ArrowInvalid: A suitable FileSystem implementation cannot be found based on the location provided
+    """
+    parsed_location = urlparse(location)  # Create a ParseResult from the uri
+    if not parsed_location.scheme:  # If no scheme, assume the path is to a local file
+        if _FILESYSTEM_INSTANCES.get("local"):
+            filesystem = _FILESYSTEM_INSTANCES["local"]
+        else:
+            filesystem, _ = FileSystem.from_uri(os.path.abspath(location))
+            _FILESYSTEM_INSTANCES["local"] = filesystem
+    elif _FILESYSTEM_INSTANCES.get(parsed_location.scheme):  # Check for a cached filesystem

Review comment:
       I just realized urlparse actually parses the bucket name as `netloc` 
which makes sense since the bucket name is the authority in the URI. For 
example:
   ```py
   from urllib.parse import urlparse
   urlparse("s3://foobucket/test.txt")
    # ParseResult(scheme='s3', netloc='foobucket', path='/test.txt', params='', query='', fragment='')
   ```
   So we could cache for S3 only based on bucket like you suggested by using a 
tuple as the key. Something like this:
    ```py
    parsed_location = urlparse(location)
    if parsed_location.scheme == "s3":
      s3_fs_lookup = (parsed_location.scheme, parsed_location.netloc)
      if s3_fs_lookup not in _FILESYSTEM_INSTANCES:
        filesystem, _ = FileSystem.from_uri(location)
        _FILESYSTEM_INSTANCES[s3_fs_lookup] = filesystem
      return _FILESYSTEM_INSTANCES[s3_fs_lookup]

    filesystem, _ = FileSystem.from_uri(location)
    return filesystem
    ```

##########
File path: python/src/iceberg/io/pyarrow.py
##########
@@ -0,0 +1,178 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""FileIO implementation for reading and writing table files that uses pyarrow.fs
+
+This file contains a FileIO implementation that relies on the filesystem interface provided
+by PyArrow. It relies on PyArrow's `from_uri` method that infers the correct filesystem
+type to use. Theoretically, this allows the supported storage types to grow naturally
+with the pyarrow library.
+"""
+
+import os
+from typing import Union
+from urllib.parse import urlparse
+
+from pyarrow.fs import FileSystem, FileType

Review comment:
       Yes, I would say it's a matter of not importing this module. If we import
from this file anywhere else in the library, then I think we'd have to fail
gracefully there (but I'm expecting that we'll just use the FileIO ABCs
throughout the library).
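   One common shape for that kind of graceful failure, treating pyarrow as an optional dependency, is sketched below (names like `_HAS_PYARROW` and `require_pyarrow` are illustrative, not from the PR):
   ```python
   # Record pyarrow availability at import time instead of failing the whole module
   try:
       import pyarrow.fs  # noqa: F401
       _HAS_PYARROW = True
   except ImportError:
       _HAS_PYARROW = False

   def require_pyarrow() -> None:
       """Raise a helpful error if the pyarrow-backed FileIO is used without pyarrow"""
       if not _HAS_PYARROW:
           raise ImportError(
               "The pyarrow FileIO requires the 'pyarrow' package; install it with `pip install pyarrow`"
           )
   ```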




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


