Hi,

I have a quick question about the Hadoop S3A fast upload feature.

When using fast upload to upload a file of, let's say, 100G with disk-based
buffering set to 128MB blocks (active blocks = 1 for simplicity), will my
disk usage be capped at some limit, or can it grow to the full 100G? i.e. will
the S3A client delete older blocks it has buffered on disk once they have been
uploaded? We are trying to understand how to size our volumes.
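
For context, this is roughly the setup I have in mind (just a sketch, set
programmatically here but it would be the same keys in core-site.xml; I'm
assuming the standard s3a property names):

    import org.apache.hadoop.conf.Configuration;

    public class S3AFastUploadConfig {
        public static Configuration build() {
            Configuration conf = new Configuration();
            // enable the incremental (fast) upload path
            conf.setBoolean("fs.s3a.fast.upload", true);
            // buffer pending blocks on local disk rather than in memory
            conf.set("fs.s3a.fast.upload.buffer", "disk");
            // each buffered block / multipart part is 128MB
            conf.setLong("fs.s3a.multipart.size", 128L * 1024 * 1024);
            // only one block queued/uploading per stream, for simplicity
            conf.setInt("fs.s3a.fast.upload.active.blocks", 1);
            return conf;
        }
    }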

I know that without fast upload, the full 100G of disk usage is to be expected.

Thanks
Faiz
