[ https://issues.apache.org/jira/browse/HDDS-5458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17384990#comment-17384990 ]

George Jahad commented on HDDS-5458:
------------------------------------


I'll start taking a look at the fixes, and let you know if there are any that 
seem too hard for a newbie.

Here is the branch you asked about:
https://github.com/GeorgeJahad/ozone/tree/HDDS-5458
and the diffs:
https://github.com/GeorgeJahad/ozone/compare/7b8bf983a1284b370f783f890cb7c06bbc717cc4..HDDS-5458

This change:
https://github.com/GeorgeJahad/ozone/compare/7b8bf983a1284b370f783f890cb7c06bbc717cc4..HDDS-5458#diff-059753a1485b65ba3db032f291cf8f747b3dc19cfc449f0df8f44ab39ce6b55cL91

modified the test to accept just the bucket-name, instead of 
${ENDPOINT_URL}/bucket-name. It seems innocuous, but maybe we should change 
the gateway there too.
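
Roughly, the aws cli call changes like this (illustrative only; head-bucket 
stands in for whatever operation that test line actually makes):

   # before: the endpoint URL was prepended to the bucket argument
   aws s3api head-bucket --bucket ${ENDPOINT_URL}/bucket-name

   # after: just the bucket name
   aws s3api head-bucket --bucket bucket-name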


If you want to run the test in the branch:
first, install the aws cli:
   apt-get install awscli


set up your aws credentials:
   export AWS_ACCESS_KEY_ID=dummy
   export AWS_SECRET_ACCESS_KEY=dummy
   export AWS_DEFAULT_REGION=us-east-1

add env vars:
   export OZONE_TEST_S3_BUCKET1=bucket-gbj1
   export OZONE_TEST_S3_BUCKET2=bucket-gbj2
   export OZONE_TEST_S3_REGION=us-east-1

create dummy buckets in aws:
   aws s3api create-bucket --bucket $OZONE_TEST_S3_BUCKET1
   aws s3api create-bucket --bucket $OZONE_TEST_S3_BUCKET2

and finally to run the test:
   mvn clean install -DskipShade -DskipTests
   cd hadoop-ozone/dist/target/ozone-1.2.0-SNAPSHOT/smoketest/s3/
   ./s3_compatbility_check.sh


> s3_compatbility_check.sh/aws compatibility issues
> -------------------------------------------------
>
>                 Key: HDDS-5458
>                 URL: https://issues.apache.org/jira/browse/HDDS-5458
>             Project: Apache Ozone
>          Issue Type: Bug
>          Components: S3, test
>            Reporter: George Jahad
>            Assignee: George Jahad
>            Priority: Minor
>              Labels: newbie
>
> *Summary*:
>  The smoketest/s3/s3_compatbility_check.sh script was incomplete. I added the 
> rest of the robot scripts and got 4 failures.
> *Details*:
> I added the following smoketest/s3 robot files to the s3 compatibility test 
> script:
>  awss3.robot
>  bucketcreate.robot
>  bucketdelete.robot
>  buckethead.robot
>  bucketlist.robot
> The only s3 robot files I didn't include are:
>  __init__.robot
>  commonawslib.robot
>  boto3.robot
> I get the following failures from the new files. All except #4 fail because 
> our error messages differ from aws.
>  
>  
> *1. "Create bucket with invalid bucket name"* 
>  It expects: "InvalidBucketName" but gets:
>  _"An error occurred (BucketAlreadyExists) when calling the CreateBucket 
> operation: The requested bucket name is not available. The bucket namespace 
> is shared by all users of the system. Please select a different name and try 
> again."_
> It currently uses "bucket_1" as the bad bucket name. Changing that name to 
> "BadBucketName_1" causes it to pass.
> It seems that even though "bucket_1" is an invalid name, it collides with an 
> existing bucket first, which generates a different error.
> I'm thinking the bad bucket name should be randomized.
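> A minimal sketch of that randomization (shell; the name is illustrative, and 
> the underscore keeps it invalid):
>    # $RANDOM makes a collision with an existing bucket unlikely
>    BAD_BUCKET="bad_bucket_${RANDOM}"
>    aws s3api create-bucket --bucket "$BAD_BUCKET"
>    # expected: An error occurred (InvalidBucketName) ...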
>  
>  
> *2. "Delete non-existent bucket"* 
>  It expects: "NoSuchBucket" but gets:
>  _"An error occurred (AccessDenied) when calling the DeleteBucket operation: 
> Access Denied"_
> So the error message here differs from the one returned by our s3gateway. Do 
> we want to fix the gateway or change the test?
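> To reproduce (bucket name is illustrative; run once against AWS and once 
> against the gateway via --endpoint-url):
>    aws s3api delete-bucket --bucket no-such-bucket-12345
>    # AWS:              An error occurred (AccessDenied) ...
>    # the test expects: NoSuchBucket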
>  
>  
> *3. "Head Bucket not existent"*
>  It expects a 404 status code and a "Not Found" message. Instead it gets a 
> 400 status code and this message:
>  _"An error occurred (400) when calling the HeadBucket operation: Bad 
> Request"_
> Again, do we fix the gateway or change the test?
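> Same kind of reproduction, with head-bucket (name again illustrative):
>    aws s3api head-bucket --bucket no-such-bucket-12345
>    # AWS:              An error occurred (400) ... Bad Request
>    # the test expects: a 404 and "Not Found"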
>  
>  
> *4. "Test Multipart Upload Put With Copy and range with IfModifiedSince"*
> Without any of my changes, the current s3 script fails on this test, where a 
> file is uploaded and then tested with "IfModifiedSince".
> The original file hasn't been modified, so the upload is expected to fail. 
> But on AWS, the upload succeeds. The problem is that the test sets 
> "IfModifiedSince" to a time in the future.
> In that case AWS ignores the precondition and does the upload even though 
> "IfModifiedSince" is false.
> This is a known issue with how the API works on AWS: 
> [https://forums.aws.amazon.com/thread.jspa?threadID=88985]
> Currently the test sets the "IfModifiedSince" time to a full day in the 
> future. To fix it, we could modify the test to set the "IfModifiedSince" 
> time to a few seconds after the creation time, and pause until that time has 
> passed.
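> A rough sketch of that fix (shell; assumes GNU date, and the bucket, key, 
> and upload-id values are placeholders):
>    # set "IfModifiedSince" a few seconds ahead, then wait until that time
>    # has passed, so AWS evaluates the precondition instead of ignoring a
>    # future date
>    IF_MODIFIED_SINCE=$(date -u -d "+3 seconds" "+%Y-%m-%dT%H:%M:%SZ")
>    sleep 5
>    aws s3api upload-part-copy --bucket $BUCKET --key $KEY \
>      --upload-id $UPLOAD_ID --part-number 1 \
>      --copy-source $BUCKET/$SRC_KEY \
>      --copy-source-if-modified-since $IF_MODIFIED_SINCE
>    # expected: PreconditionFailed, since the source object wasn't modified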


