On 14 May 2018, at 13:43, Pete Zaitcev wrote:

On Thu, 10 May 2018 20:07:03 +0800
Yuxin Wang <wang.yu...@ostorage.com.cn> wrote:

I'm working on a Swift project. Our customer cares a great deal about S3 compatibility. I tested our Swift cluster with ceph/s3-tests and analyzed the failed cases. It turns out that many of the failures are related to bucket name uniqueness: S3 bucket names are globally unique, but as we know, Swift containers are only unique within a tenant/project.
[...]
Do you have any ideas on how to do this, or perhaps reasons not to? I'd highly appreciate any suggestions.

I don't have a recipe, but here's a thought: try making all the accounts that need the interoperability with S3 belong to the same Keystone tenant. As long as you do not give those accounts the owner role (one of the roles listed in the keystoneauth middleware's operator_roles= setting), they will not be able to access each other's buckets (Swift containers). Unfortunately, I think they will not be able to create any buckets either, but perhaps it's something that can be tweaked - for sure if you're willing to go far enough to write new middleware.

-- Pete



Pete's idea is interesting. The upstream Swift community has talked about what it would take to support this sort of S3 compatibility, and we've got some pretty good ideas. We'd love your help implementing something. You can find us in #openstack-swift on freenode IRC.

As a general overview, swift3 (which has now been integrated into Swift's repo as the "s3api" middleware) maps an S3 bucket to a unique (account, container) pair in Swift. This mapping is critical because the Swift account plays a part in Swift's data placement algorithm. It's also what allows both you and me to have an "images" container in the same Swift cluster, each in our respective accounts. However, AWS doesn't expose anything analogous to the Swift account. In order to fill in this missing information, we have to map the S3 bucket name to the appropriate (account, container) pair in Swift. Currently, the s3api middleware does this by encoding the account name into the auth token. This way, when you and I each access our own "images" container as a bucket via the S3 API, our requests go to the right place and do the right thing.
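To make that concrete, here's a tiny sketch of the idea (illustration only, not the actual s3api code): the account comes out of the auth token, and the bucket name becomes the container name under that account.

    # Illustration only; not the real s3api implementation.
    # The account is whatever the auth middleware recovered from the
    # S3 credentials/token; the bucket name becomes the container.

    def s3_to_swift_path(bucket, key, token_account):
        path = '/v1/%s/%s' % (token_account, bucket)
        if key:
            path += '/%s' % key
        return path

    # Both of us can have an "images" bucket, because the account from
    # our respective tokens keeps the requests apart:
    #   s3_to_swift_path('images', 'cat.jpg', 'AUTH_john')
    #     -> '/v1/AUTH_john/images/cat.jpg'
    #   s3_to_swift_path('images', 'cat.jpg', 'AUTH_yuxin')
    #     -> '/v1/AUTH_yuxin/images/cat.jpg'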

This mapping technique has a couple of significant limits. First, we can't do the mapping without the token, so unauthenticated (i.e. public) S3 API calls can never work. Second, bucket names are not globally unique; they're only unique within an account. This second issue may or may not be a bug. In your case it's a problem, but it may be a benefit to others. Either way, it's a difference from the way S3 works.

In order to fix this, we need a new way to do the bucket->(account, container) mapping. One idea is to have a key-value registry. There may be other ways to solve this too, but it's not a trivial change. We'd welcome your help in figuring out the right solution!
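As a very rough sketch of what a registry might look like (everything below is hypothetical; nothing like this exists in Swift today), the middleware would consult a shared mapping keyed only by bucket name, which would enforce global uniqueness and would also let unauthenticated requests find the right account:

    # Hypothetical sketch of a bucket registry; nothing like this exists
    # in Swift today. A real implementation would need durable storage,
    # atomic "create if absent" semantics, and cleanup on bucket deletion.

    class BucketRegistry(object):
        def __init__(self):
            # bucket name -> (account, container)
            self._map = {}

        def register(self, bucket, account, container):
            # Enforce S3-style global uniqueness of bucket names.
            if bucket in self._map:
                raise ValueError('BucketAlreadyExists: %s' % bucket)
            self._map[bucket] = (account, container)

        def lookup(self, bucket):
            # No auth token needed, so even anonymous/public S3
            # requests could be routed to the right Swift account.
            return self._map.get(bucket)

    registry = BucketRegistry()
    registry.register('images', 'AUTH_john', 'images')
    print(registry.lookup('images'))  # ('AUTH_john', 'images')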

--John



