Turns out s3:GetObjectVersion was the one that was necessary.
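For anyone hitting the same 403, a minimal IAM policy statement along these lines should cover it. This is only a sketch: the bucket name is a placeholder, and since the thread didn't pin down the exact minimal set, it grants the plain and versioned object reads together.

```python
import json

# Hypothetical bucket name -- substitute your own.
BUCKET = "my-backup-bucket"

# Minimal policy sketch: s3:GetObject plus s3:GetObjectVersion, the
# permission this thread found wal-e's streaming fetch needed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::%s/*" % BUCKET,
        }
    ],
}

print(json.dumps(policy, indent=2))
```

You can paste the printed JSON straight into the IAM console as an inline policy for the role the replica uses.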

On Thu, Aug 21, 2014 at 9:48 AM, <[email protected]> wrote:

> I've resolved this issue. Here's a brief summary of the process I followed:
>
> 1) I verified I could download the file with boto in a separate Python
> script, using the wal-e env files as credentials (I used
> key.get_contents_to_filename).
> 2) I reverted from the latest version of wal-e to a previous version
> (0.6.6).
>
> Reverting to the older version exposed an S3 403 Forbidden exception that
> the newer version does not surface. I ended up adding several permissions
> to my IAM role, and that appears to have solved my problem. My guess is
> that streaming from S3 requires an additional permission that
> get_contents_to_filename does not need. I'm not sure which one is
> required, but these are the permissions I granted:
>
> "s3:GetBucketAcl",
> "s3:GetBucketPolicy",
> "s3:GetBucketVersioning",
> "s3:GetObjectAcl",
> "s3:GetObjectVersion"
>
>
>
> On Wednesday, August 20, 2014 11:14:05 AM UTC-4, Mike Patterson wrote:
>>
>> I'm pushing wal files and base backups from my postgres master server to
>> S3. I believe this is functioning successfully, as I've verified the files
>> exist in S3 where I would expect them to be. However, when I try to
>> download a backup from a new machine - using wal-e backup-fetch - I'm
>> getting the error message pasted below repeatedly for each tar partition.
>>
>> It seems to indicate that the .tar.lzo files are not actually lzop files,
>> and additionally that they are empty. S3's file properties show this is
>> not the case. I've verified that my S3 prefix is correct, and that
>> the absolute S3 key exists in the S3 console.
>>
>> I'm using Postgres 9.1 and the latest version of wal-e (my master was on
>> an older version of wal-e, but I have upgraded to ensure that both master
>> and replica are running the same version). My master is running
>> Ubuntu 12.04, and my replica is running Ubuntu 14.04.
>>
>> If anyone can help I'd greatly appreciate it!
>>
>> -----
>> S3 File Properties:
>>  Bucket: ***** Folder: tar_partitions Name: part_00000003.tar.lzo Link:
>> https://s3.amazonaws.com/*****/basebackups_005/base_
>> 00000001000002570000008B_00000032/tar_partitions/part_00000003.tar.lzo
>>  Size: 308910681
>>
>> -----
>> Log message:
>>
>> wal_e.worker.s3.s3_worker INFO     MSG: beginning partition download
>>         DETAIL: The partition being downloaded is part_00000003.tar.lzo.
>>         HINT: The absolute S3 key is basebackups_005/base_
>> 00000001000002570000008B_00000032/tar_partitions/part_00000003.tar.lzo.
>>         STRUCTURED: time=2014-08-20T13:16:43.641803-00 pid=28773
>>  *lzop: <stdin>: not a lzop file*
>> wal_e.retries WARNING  MSG: retrying after encountering exception
>>         DETAIL: Exception information dump:
>>         Traceback (most recent call last):
>>           File "/usr/local/lib/python2.7/dist-packages/wal_e/retries.py",
>> line 62, in shim
>>             return f(*args, **kwargs)
>>           File 
>> "/usr/local/lib/python2.7/dist-packages/wal_e/worker/s3/s3_worker.py",
>> line 78, in fetch_partition
>>             TarPartition.tarfile_extract(pl.stdout, self.local_root)
>>           File 
>> "/usr/local/lib/python2.7/dist-packages/wal_e/tar_partition.py",
>> line 260, in tarfile_extract
>>             bufsize=pipebuf.PIPE_BUF_BYTES)
>>           File "/usr/lib/python2.7/tarfile.py", line 1690, in open
>>             **kwargs)
>>           File "/usr/lib/python2.7/tarfile.py", line 1574, in __init__
>>             self.firstmember = self.next()
>>           File "/usr/lib/python2.7/tarfile.py", line 2338, in next
>>             raise ReadError("empty file")
>>         *ReadError: empty file*
>>
>>         HINT: A better error message should be written to handle this
>> exception.  Please report this output and, if possible, the situation under
>> which it arises.
>>         STRUCTURED: time=2014-08-20T13:16:43.644180-00 pid=28773
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"wal-e" group.
