My fault on this one. I mistakenly thought the environment variables (AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY) would override values set in hadoop-site.xml; I now see that this is not the case for the Hadoop FS shell commands.
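For anyone hitting the same issue: the FS shell reads the S3 Native credentials from the Hadoop configuration, so they can be set in hadoop-site.xml directly. A minimal sketch (the bucket name and key values here are placeholders, not real credentials):

```xml
<!-- hadoop-site.xml: credentials for the s3n:// (S3 Native) filesystem.
     fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey are the property
     names used by the S3 Native filesystem; YOUR-ACCESS-KEY-ID and
     YOUR-SECRET-KEY are placeholders. -->
<configuration>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR-ACCESS-KEY-ID</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR-SECRET-KEY</value>
  </property>
</configuration>
```

Alternatively, since the shell accepts generic options, the credentials can also be embedded in the URI (s3n://ACCESS:SECRET@bucket/path), though putting secrets on the command line or in URIs is generally discouraged.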
John

On Wed, Mar 4, 2009 at 5:18 PM, S D <sd.codewarr...@gmail.com> wrote:
> I'm using Hadoop 0.19.0 with S3 Native. Up until a few days ago I was
> able to use the various shell functions successfully; e.g.,
>
>   hadoop dfs -ls .
>
> To ensure access to my Amazon S3 Native data store I set the following
> environment variables: AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY.
> Up until this morning, this was working fine. Today I tried invoking the
> above shell command (along with several others) and received the following
> error:
>
> Exception in thread "main" org.apache.hadoop.fs.s3.S3Exception:
> org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error
> Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidAccessKeyId</Code><Message>The AWS
> Access Key Id you provided does not exist in our
> records.</Message><RequestId>9294E59A199FB551</RequestId><HostId>HPdkhanFEvbz50k5FdwCgQRdOjO4aIBODXiMcBwy6DX7Ve3Jx2uXnujT6oEBhJpN</HostId><AWSAccessKeyId>882</AWSAccessKeyId></Error>...
>
> I have not changed my Amazon access identifiers, and I can still use my S3
> Firefox Organizer to access my S3 data (using the same AMAZON_ACCESS_KEY_ID
> and AMAZON_SECRET_ACCESS_KEY as before).
>
> Has anyone else experienced this problem? Has something changed with the
> API that I am unaware of?
>
> Thanks,
> John