Hello everyone,

Recently, while developing a new feature, we implemented a function to copy data on S3 from one bucket to another. To do this, we combined two existing methods: BlobStoreDAO.read and BlobStoreDAO.save. This approach is suboptimal: the data is downloaded from S3 and then uploaded right back, consuming unnecessary traffic and resources.
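For context, the current approach looks roughly like the following sketch (I am assuming the readBytes and save variants of BlobStoreDAO here; the real code may differ slightly):

```java
import org.apache.james.blob.api.BlobId;
import org.apache.james.blob.api.BlobStoreDAO;
import org.apache.james.blob.api.BucketName;
import reactor.core.publisher.Mono;

class CopySketch {
    // Copy by streaming the payload through the application:
    // one full download from S3 followed by one full upload back to S3.
    static Mono<BlobId> copyViaReadAndSave(BlobStoreDAO dao, BucketName source,
                                           BucketName target, BlobId blobId) {
        return Mono.from(dao.readBytes(source, blobId))
            .flatMap(bytes -> Mono.from(dao.save(target, blobId, bytes))
                .thenReturn(blobId));
    }
}
```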

The S3 API supports server-side object copying between buckets, which eliminates the external traffic entirely. This led me to consider introducing a new API, copy, to the BlobStoreDAO interface. It could look like:

`public Mono<BlobId> copy(BucketName sourceBucketName, BlobId sourceBlobId, BucketName targetBucketName);`
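In S3BlobStoreDAO, this method could delegate to the CopyObject operation of the AWS SDK v2 that James already uses. A rough sketch (the wrapping class is illustrative, and I left out the usual bucket-name resolution the real DAO would apply):

```java
import org.apache.james.blob.api.BlobId;
import org.apache.james.blob.api.BucketName;
import reactor.core.publisher.Mono;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;

// CopyObject performs the copy entirely within S3,
// so no payload crosses the network to James.
class S3CopySketch {
    private final S3AsyncClient client;

    S3CopySketch(S3AsyncClient client) {
        this.client = client;
    }

    Mono<BlobId> copy(BucketName sourceBucketName, BlobId sourceBlobId,
                      BucketName targetBucketName) {
        CopyObjectRequest request = CopyObjectRequest.builder()
            .sourceBucket(sourceBucketName.asString())
            .sourceKey(sourceBlobId.asString())
            .destinationBucket(targetBucketName.asString())
            .destinationKey(sourceBlobId.asString())
            .build();
        return Mono.fromFuture(() -> client.copyObject(request))
            .thenReturn(sourceBlobId);
    }
}
```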

However, after some consideration, this would weaken the abstraction of BlobStoreDAO: a server-side copy is an S3-specific capability, and other implementations have no native equivalent.

Benoit suggested an alternative: create an S3ClientFactory that provides the S3Client, and inject it into S3BlobStoreDAO (currently, we instantiate the S3Client directly within the S3BlobStoreDAO constructor). Code that needs S3-specific operations could then obtain the client from the factory, which would resolve this issue without touching the BlobStoreDAO interface.
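To make the idea concrete, here is a rough sketch of the shape I have in mind. The factory name comes from Benoit's suggestion; everything else (constructor, injection annotations, how the client is built) is my assumption:

```java
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import software.amazon.awssdk.services.s3.S3AsyncClient;

// Sketch: one factory owns the S3AsyncClient and can be injected anywhere
// S3-specific operations (copy, lifecycle configuration, ...) are needed.
@Singleton
class S3ClientFactory {
    private final S3AsyncClient client;

    @Inject
    S3ClientFactory() {
        // In James this would be built from the S3 blob store configuration;
        // the default builder is used here only to keep the sketch short.
        this.client = S3AsyncClient.builder().build();
    }

    S3AsyncClient get() {
        return client;
    }
}

class S3BlobStoreDAOSketch {
    private final S3AsyncClient client;

    @Inject
    S3BlobStoreDAOSketch(S3ClientFactory factory) {
        // The DAO no longer builds its own client.
        this.client = factory.get();
    }
}
```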

The copy API is not the only feature that would benefit from this change. In the past, I also wanted to configure S3 lifecycle settings (e.g., automatic deletion of objects after a certain period), but I postponed it due to similar obstacles.
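For illustration, with direct access to the client, a lifecycle rule could be applied along these lines (a sketch only; the method name and the expiration period are made-up examples):

```java
import reactor.core.publisher.Mono;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.BucketLifecycleConfiguration;
import software.amazon.awssdk.services.s3.model.ExpirationStatus;
import software.amazon.awssdk.services.s3.model.LifecycleExpiration;
import software.amazon.awssdk.services.s3.model.LifecycleRule;
import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter;
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationRequest;

class LifecycleSketch {
    // Ask S3 to delete every object in the bucket `days` days after creation.
    static Mono<Void> expireAfter(S3AsyncClient client, String bucket, int days) {
        LifecycleRule rule = LifecycleRule.builder()
            .id("expire-after-" + days + "-days")
            .filter(LifecycleRuleFilter.builder().prefix("").build()) // whole bucket
            .expiration(LifecycleExpiration.builder().days(days).build())
            .status(ExpirationStatus.ENABLED)
            .build();
        return Mono.fromFuture(() -> client.putBucketLifecycleConfiguration(
                PutBucketLifecycleConfigurationRequest.builder()
                    .bucket(bucket)
                    .lifecycleConfiguration(
                        BucketLifecycleConfiguration.builder().rules(rule).build())
                    .build()))
            .then();
    }
}
```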

I would appreciate your feedback and thoughts on this proposal.


Regards,
Tung, Tran Van

