[ 
https://issues.apache.org/jira/browse/HDDS-5458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Jahad updated HDDS-5458:
-------------------------------
    Description: 
*Summary*:
 The smoketest/s3/s3_compatbility_check.sh script was incomplete. I added the 
rest of the robot scripts and got 4 failures.

*Details*:

I added the following smoketest/s3 robot files to the s3 compatibility test 
script:
 awss3.robot
 bucketcreate.robot
 bucketdelete.robot
 buckethead.robot
 bucketlist.robot

The only s3 robot files I didn't include are:
 __init__.robot
 commonawslib.robot
 boto3.robot

I get the following failures from the new files. All except #4 fail because our 
error messages differ from AWS's.

*1. "Create bucket with invalid bucket name"* 
 It expects: "InvalidBucketName" but gets:
 _"An error occurred (BucketAlreadyExists) when calling the CreateBucket 
operation: The requested bucket name is not available. The bucket namespace is 
shared by all users of the system. Please select a different name and try 
again."_

It currently uses "bucket_1" as the bad bucket name. Changing that name to 
"BadBucketName_1" causes it to pass.

It seems that even though "bucket_1" is an invalid name, it collides with an 
existing bucket name first, and so generates a different error.

I'm thinking the bad bucket name should be randomized.
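Randomizing the name could look something like the sketch below (in Python for illustration; the actual robot test would do the equivalent with its own keywords, and `random_bad_bucket_name` is a hypothetical helper, not an existing one):

```python
# Sketch: generate a bucket name that is both invalid and effectively
# collision-free. Underscores are illegal in S3 bucket names, so any name
# containing one should trigger InvalidBucketName rather than colliding
# with a real bucket and returning BucketAlreadyExists.
import uuid

def random_bad_bucket_name() -> str:
    # uuid4 makes a clash with an existing bucket vanishingly unlikely;
    # the underscore keeps the name invalid per S3 naming rules.
    return f"bucket_{uuid.uuid4().hex}"

name = random_bad_bucket_name()
```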

*2. "Delete non-existent bucket"* 
 It expects: "NoSuchBucket" but gets:
 _"An error occurred (AccessDenied) when calling the DeleteBucket operation: 
Access Denied"_

So the error message here differs from the one returned by our s3gateway. 
Do we want to fix the gateway or change the test?

*3. "Head Bucket not existent"*
 It expects a 404 exit code and a "Not Found" message. Instead it gets a 400 
exit code and this message:
 _"An error occurred (400) when calling the HeadBucket operation: Bad Request"_

Again, do we fix the gateway or change the test?

*4. Test Multipart Upload Put With Copy and range with IfModifiedSince*

Without any of my changes, the current s3 script fails on this test, in which a 
file is uploaded and then tested with "IfModifiedSince".

The original file hasn't been modified, so the upload is expected to fail. But 
on AWS, the upload succeeds. The problem is that the test sets 
"IfModifiedSince" to a time in the future.

In that case AWS ignores the Precondition and does the upload even though 
"IfModifiedSince" is false.

This is a known issue with how the API works on AWS: 
[https://forums.aws.amazon.com/thread.jspa?threadID=88985]

Currently the test sets the "IfModifiedSince" time to a full day in the future. 
To fix it, we could modify the test to set the "IfModifiedSince" time to a few 
seconds after the creation time, and pause until that time has passed.
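The proposed fix could be sketched like this (Python for illustration; the robot test would do the same with its own date/sleep keywords, and the 2-second offset is an arbitrary stand-in for "a few seconds"):

```python
# Sketch: choose an IfModifiedSince timestamp only slightly after the
# object's creation time, then wait until that moment has passed before
# issuing the conditional request, so the precondition is genuinely in
# the past and AWS's "future timestamp" quirk never kicks in.
import time
from datetime import datetime, timedelta, timezone

OFFSET_SECONDS = 2  # hypothetical choice of "a few seconds"

creation_time = datetime.now(timezone.utc)  # stand-in for the upload time
if_modified_since = creation_time + timedelta(seconds=OFFSET_SECONDS)

# Pause until the IfModifiedSince timestamp is in the past.
remaining = (if_modified_since - datetime.now(timezone.utc)).total_seconds()
if remaining > 0:
    time.sleep(remaining)
```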

> s3_compatbility_check.sh/aws compatibility issues
> -------------------------------------------------
>
>                 Key: HDDS-5458
>                 URL: https://issues.apache.org/jira/browse/HDDS-5458
>             Project: Apache Ozone
>          Issue Type: Bug
>          Components: S3, test
>            Reporter: George Jahad
>            Priority: Minor
>              Labels: newbie
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
