The GitHub Actions job "Build and push images" on texera.git/main has failed.
Run started by GitHub user bobbai00 (triggered by bobbai00).

Head commit for run:
08c43343353aa88e572bd7796d5086c36ee8aa19 / Chris 
<[email protected]>
feat: Add BigObject Support for Handling Data Larger Than 2GB in Java (#4067)


### What changes were proposed in this PR?
This PR introduces a new attribute type, `big_object`, that lets Java
operators pass data larger than 2 GB to downstream operators. Instead of
storing large data directly in the tuple, the data is uploaded to MinIO,
and the tuple stores a pointer to that object. Future PRs will add
support for Python and R UDF operators.

#### Main changes:
1. MinIO
- Added a new bucket: `texera-big-objects`.
- Implemented multipart upload (separate from LakeFS) to handle large uploads efficiently (a minimal sketch follows this list).
2. BigObjectManager (Internal Java API)
- `create()` → generates a unique S3 URI, registers it in the database, and returns the URI string.
- `deleteAllObjects()` → deletes all big objects from S3 (please check the Note section below).
3. Streaming I/O Classes
- `BigObjectOutputStream`: streams data to S3 using a background multipart upload.
- `BigObjectInputStream`: lazily streams data from S3 when reading.
4. Iceberg Integration
- BigObject pointers are stored as strings in Iceberg.
- A magic suffix is added to attribute names to differentiate them from normal strings (see the second sketch after this list).
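
The PR's multipart implementation is internal to `BigObjectOutputStream`, but the underlying S3 flow it builds on can be pictured with a minimal sketch using the AWS SDK for Java v2; the bucket, key, and chunking below are illustrative assumptions, not the PR's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload;
import software.amazon.awssdk.services.s3.model.CompletedPart;
import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.UploadPartRequest;

public class MultipartUploadSketch {
    // Uploads `chunks` to bucket/key as one S3 multipart upload.
    // Note: S3 requires every part except the last to be at least 5 MiB.
    static void upload(S3Client s3, String bucket, String key, List<byte[]> chunks) {
        // 1. Open the multipart upload and remember its id.
        String uploadId = s3.createMultipartUpload(
            CreateMultipartUploadRequest.builder().bucket(bucket).key(key).build())
            .uploadId();

        // 2. Upload each part (part numbers start at 1) and collect its ETag.
        List<CompletedPart> parts = new ArrayList<>();
        int partNumber = 1;
        for (byte[] chunk : chunks) {
            String eTag = s3.uploadPart(
                UploadPartRequest.builder()
                    .bucket(bucket).key(key)
                    .uploadId(uploadId).partNumber(partNumber).build(),
                RequestBody.fromBytes(chunk)).eTag();
            parts.add(CompletedPart.builder().partNumber(partNumber).eTag(eTag).build());
            partNumber++;
        }

        // 3. Complete the upload so S3 stitches the parts into one object.
        s3.completeMultipartUpload(b -> b
            .bucket(bucket).key(key).uploadId(uploadId)
            .multipartUpload(CompletedMultipartUpload.builder().parts(parts).build()));
    }
}
```

Likewise, the magic-suffix convention for Iceberg column names can be sketched as follows; the actual suffix string is internal to the PR, so `"#@bigObject"` below is only a placeholder assumption:

```java
// Hedged sketch of the suffix convention; "#@bigObject" is a placeholder,
// not the suffix actually used by the PR.
static final String BIG_OBJECT_SUFFIX = "#@bigObject";

// Encode: the Iceberg column for a big_object attribute carries the suffix.
static String toIcebergColumnName(String attributeName) {
    return attributeName + BIG_OBJECT_SUFFIX;
}

// Decode: a suffixed string column holds a BigObject pointer, not a normal string.
static boolean isBigObjectColumn(String icebergColumnName) {
    return icebergColumnName.endsWith(BIG_OBJECT_SUFFIX);
}
```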

#### User API
##### Creating and Writing a BigObject:
```java
// In an OperatorExecutor
BigObject bigObject = new BigObject();
try (BigObjectOutputStream out = new BigObjectOutputStream(bigObject)) {
    out.write(myLargeDataBytes);
    // or: out.write(byteArray, offset, length);
}
// bigObject is now ready to be added to tuples
```
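
For context, attaching the finished object to an output tuple could look like the following; the builder calls here are hypothetical and the actual Tuple/Schema API may differ:

```java
// Hypothetical sketch only; the real Tuple builder signature may differ.
Tuple outputTuple = Tuple.builder(outputSchema)           // assumed entry point
    .add("payload", AttributeType.BIG_OBJECT, bigObject)  // "payload" is an example name
    .build();
```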

##### Reading a BigObject:
```java
// Option 1: Read all data at once
try (BigObjectInputStream in = new BigObjectInputStream(bigObject)) {
    byte[] allData = in.readAllBytes();
    // ... process data
}

// Option 2: Read a specific amount
try (BigObjectInputStream in = new BigObjectInputStream(bigObject)) {
    byte[] chunk = in.readNBytes(1024); // Read 1KB
    // ... process chunk
}

// Option 3: Use as a standard InputStream
try (BigObjectInputStream in = new BigObjectInputStream(bigObject)) {
    int bytesRead = in.read(buffer, offset, length);
    // ... process data
}
```
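
One practical note: `readAllBytes()` materializes the whole payload in memory, and a single Java array cannot exceed `Integer.MAX_VALUE` bytes (about 2 GB), so for the objects this feature targets a chunked read loop is the pattern that scales. A minimal sketch using only the standard `InputStream` interface shown above:

```java
// Option 4 (sketch): stream the object in fixed-size chunks instead of
// materializing all of it, which avoids the ~2 GB Java array limit.
try (BigObjectInputStream in = new BigObjectInputStream(bigObject)) {
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = in.read(buffer, 0, buffer.length)) != -1) {
        // ... process buffer[0 .. bytesRead)
    }
}
```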

#### Note
This PR does NOT handle lifecycle management for big objects. For now,
when a workflow or workflow execution is deleted, all related big
objects in S3 are deleted immediately. We will add proper lifecycle
management in a future update.

#### System Diagram
<img width="3444" height="2684" alt="BigObject-Page-1 drawio (4)"
src="https://github.com/user-attachments/assets/98eded06-03b2-41be-b50b-0520a654ddca"
/>


### Any related issues, documentation, discussions?
Related to #3787. 


### How was this PR tested?

Tested by running this workflow multiple times and checking the MinIO
dashboard to verify that three big objects are created and then deleted.
Set the file scan operator's property to any file larger than 2 GB.
[Big Object Java
UDF.json](https://github.com/user-attachments/files/23666312/Big.Object.Java.UDF.json)


### Was this PR authored or co-authored using generative AI tooling?
Yes.

---------

Signed-off-by: Chris <[email protected]>

Report URL: https://github.com/apache/texera/actions/runs/19690607510

With regards,
GitHub Actions via GitBox
