kunwp1 opened a new pull request, #4100:
URL: https://github.com/apache/texera/pull/4100
<!--
Thanks for sending a pull request (PR)! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines:
[Contributing to
Texera](https://github.com/apache/texera/blob/main/CONTRIBUTING.md)
2. Ensure you have added or run the appropriate tests for your PR
3. If the PR is work in progress, mark it a draft on GitHub.
4. Please write your PR title to summarize what this PR proposes; we
follow the Conventional Commits style for PR titles as well.
5. Be sure to keep the PR description updated to reflect all changes.
-->
### What changes were proposed in this PR?
<!--
Please clarify what changes you are proposing. The purpose of this section
is to outline the changes. Here are some tips for you:
1. If you propose a new API, clarify the use case for a new API.
2. If you fix a bug, you can clarify why it is a bug.
3. If it is a refactoring, clarify what has been changed.
4. It would be helpful to include a before-and-after comparison using
screenshots or GIFs.
5. Please consider writing useful notes for better and faster reviews.
-->
This PR introduces Python support for the `big_object` attribute type,
enabling Python UDF operators to process data larger than 2 GB. Data is
offloaded to MinIO (S3), and the tuple retains only a pointer (URI). This
mirrors the existing Java BigObject implementation, ensuring cross-language
compatibility. (See #4067 for the system diagram.)
## Key Features
### 1. MinIO/S3 Integration
- Utilizes the shared `texera-big-objects` bucket.
- Implements lazy initialization of S3 clients and automatic bucket creation.
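The lazy-initialization idea can be sketched in plain Python. In the real code the client would be a `boto3` S3 client configured from `StorageConfig`; the placeholder object and the helper name below are illustrative only:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_s3_client():
    # Stand-in for boto3.client("s3", ...) built from StorageConfig.
    # lru_cache means the client is created on the first call and
    # reused on every later call (a lazy singleton).
    return object()

# Nothing is constructed until first use; afterwards the same
# instance is returned.
assert get_s3_client() is get_s3_client()
```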
### 2. Streaming I/O
- **`BigObjectOutputStream`:** Streams data to S3 via multipart uploads
(64 KB chunks) without blocking the main execution thread.
- **`BigObjectInputStream`:** Downloads data lazily, only when a read
operation begins; implements the standard Python `io.IOBase` interface.
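The lazy-download behavior can be illustrated with a toy `io.RawIOBase` subclass (this is a sketch of the idea, not the actual `BigObjectInputStream` code; `io.BytesIO` stands in for the S3 download):

```python
import io

class LazyInputStream(io.RawIOBase):
    """Toy sketch: the fetch callable (standing in for the S3
    download) is only invoked when reading actually begins."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable returning the full payload
        self._buf = None      # nothing downloaded yet

    def readable(self):
        return True

    def readinto(self, b):
        if self._buf is None:            # first read triggers the "download"
            self._buf = io.BytesIO(self._fetch())
        return self._buf.readinto(b)

# Demonstrate laziness: nothing is fetched until read() is called.
calls = []
def fake_s3_download():
    calls.append(1)
    return b"hello world"

stream = LazyInputStream(fake_s3_download)
assert calls == []                       # no download yet
data = stream.read()
assert data == b"hello world" and calls == [1]
```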
### 3. Tuple & Iceberg Compatibility
- `BigObject` instances are automatically serialized to URI strings for
Iceberg storage and Arrow tables.
- Uses a magic suffix (`__texera_big_obj_ptr`) to distinguish pointers from
standard strings.
### 4. Serialization
- Pointers are stored as strings with metadata (`texera_type: BIG_OBJECT`).
Auto-conversion ensures UDFs always see `BigObject` instances, not raw strings.
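The pointer-detection and auto-conversion idea can be sketched as follows. Only the magic suffix comes from this PR; `StubBigObject`, `deserialize_field`, and the row fields are hypothetical stand-ins:

```python
SUFFIX = "__texera_big_obj_ptr"  # magic suffix from this PR

class StubBigObject:
    """Illustrative stand-in for the real BigObject handle."""
    def __init__(self, uri):
        self.uri = uri

def deserialize_field(value):
    # A string ending in the magic suffix is treated as a big-object
    # pointer and wrapped back into a handle; anything else passes
    # through untouched, so ordinary strings are unaffected.
    if isinstance(value, str) and value.endswith(SUFFIX):
        return StubBigObject(value)
    return value

row = {"name": "a", "blob": "s3://texera-big-objects/key" + SUFFIX}
converted = {k: deserialize_field(v) for k, v in row.items()}
assert isinstance(converted["blob"], StubBigObject)
assert converted["name"] == "a"          # plain strings untouched
```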
## User API Usage
### 1. Creating & Writing (Output)
Use `BigObjectOutputStream` to stream large data into a new object.
```python
from pytexera import BigObject, BigObjectOutputStream

# Create a new handle
big_object = BigObject()

# Stream data to S3 (accepts bytes-like objects such as bytes and bytearray)
with BigObjectOutputStream(big_object) as out:
    out.write(my_large_data_bytes)
```
### 2. Reading (Input)
Use `BigObjectInputStream` to read data back. It supports all standard
Python stream methods.
```python
from pytexera import BigObjectInputStream

with BigObjectInputStream(big_object) as stream:
    # Option A: read everything at once
    all_data = stream.read()

    # Option B: chunked reading
    chunk = stream.read(1024)

    # Option C: line-by-line iteration
    for line in stream:
        process(line)
```
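Because the stream implements `io.IOBase`, it should also compose with standard library helpers such as `shutil.copyfileobj`. The sketch below shows the pattern with `io.BytesIO` standing in for `BigObjectInputStream`, so it runs without S3:

```python
import io
import shutil

# io.BytesIO stands in for BigObjectInputStream here; any object
# implementing the standard stream interface works the same way.
src = io.BytesIO(b"x" * 10_000)
dst = io.BytesIO()

# Copy in 1 KB chunks without loading everything into memory at once.
shutil.copyfileobj(src, dst, length=1024)
assert dst.getvalue() == b"x" * 10_000
```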
## Dependencies
- `boto3`: Required for S3 interactions.
- `StorageConfig`: Uses existing configuration for endpoints/credentials.
### Any related issues, documentation, discussions?
<!--
Please use this section to link other resources if not mentioned already.
1. If this PR fixes an issue, please include `Fixes #1234`, `Resolves
#1234`
or `Closes #1234`. If it is only related, simply mention the issue
number.
2. If there is design documentation, please add the link.
3. If there is a discussion in the mailing list, please add the link.
-->
### How was this PR tested?
<!--
If tests were added, say they were added here. Or simply mention that if the
PR
is tested with existing test cases. Make sure to include/update test cases
that
check the changes thoroughly including negative and positive cases if
possible.
If it was tested in a way different from regular unit tests, please clarify
how
you tested step by step, ideally copy and paste-able, so that other
reviewers can
test and check, and descendants can verify in the future. If tests were not
added,
please describe why they were not added and/or why it was difficult to add.
-->
### Was this PR authored or co-authored using generative AI tooling?
<!--
If generative AI tooling has been used in the process of authoring this PR,
please include the phrase: 'Generated-by: ' followed by the name of the tool
and its version. If no, write 'No'.
Please refer to the [ASF Generative Tooling
Guidance](https://www.apache.org/legal/generative-tooling.html) for details.
-->