> On 23 Sep 2015, at 14:56, Michal Čizmazia <mici...@gmail.com> wrote:
>
> To get around the fact that flush does not work in S3, my custom WAL implementation stores a separate S3 object for each WriteAheadLog.write call. Do you see any gotchas with this approach?

Nothing obvious. The blob is PUT in the close() call; once that operation has completed, it is in S3. Any attempt to open that file for reading will then succeed immediately, now even in US-east if you set the right endpoint:

https://forums.aws.amazon.com/ann.jspa?annID=3112
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions

If you can avoid listing operations and overwrites, you avoid the fun there. You do have to bear in mind that the duration of stream.close() is now O(bytes) and may fail; a lot of code assumes that it is instant and always works...
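To make the shape of the per-object-per-write idea concrete, here is a minimal sketch. It is not the poster's actual implementation: a ConcurrentHashMap stands in for the S3 bucket, the class name, key scheme, and method signatures are all made up for illustration, and the real PUT would be the O(bytes), can-fail step noted above. The point it demonstrates is that each write produces its own immutable object under a fresh key, so reads never depend on listing consistency or overwrites.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a per-write log. An in-memory map stands in
// for the S3 bucket; every write becomes its own immutable object.
public class PerWriteLog {
    private final Map<String, byte[]> store = new ConcurrentHashMap<>();
    private final AtomicLong seq = new AtomicLong();

    /**
     * Each write is stored as a fresh object under a new key.
     * In real S3 the put here is the close()/PUT that takes O(bytes)
     * and can fail, so callers must be prepared to retry it.
     */
    public String write(byte[] record) {
        String key = String.format("wal/%020d", seq.getAndIncrement());
        store.put(key, record.clone());
        return key; // plays the role of a record handle for later reads
    }

    public byte[] read(String key) {
        return store.get(key);
    }

    public static void main(String[] args) {
        PerWriteLog log = new PerWriteLog();
        String k1 = log.write("record-1".getBytes(StandardCharsets.UTF_8));
        String k2 = log.write("record-2".getBytes(StandardCharsets.UTF_8));
        // read-after-write: once the put completes, the object is readable
        System.out.println(new String(log.read(k1), StandardCharsets.UTF_8));
        // zero-padded sequence keys sort lexicographically in write order
        System.out.println(k1.compareTo(k2) < 0);
    }
}
```

Because no key is ever overwritten and recovery can iterate keys in order without needing an up-to-date listing at write time, the eventual-consistency pitfalls mentioned above (lists and overwrites) are avoided by construction.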