[ 
https://issues.apache.org/jira/browse/HADOOP-17198?focusedWorklogId=642568&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-642568
 ]

ASF GitHub Bot logged work on HADOOP-17198:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 26/Aug/21 20:31
            Start Date: 26/Aug/21 20:31
    Worklog Time Spent: 10m 
      Work Description: bogthe commented on pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260#issuecomment-906722483


   That's an interesting idea 🤔. It definitely sounds like a great way to 
improve security in a VPN from the software stack. What's the usual way to 
approach this: add it to this PR, or come back later with another PR and only 
then cherry-pick both into the release branch? I prefer smaller changes, but I 
don't mind adding it here either.
   
   > had another thought. What if we had an option to require access points? 
You could then set that globally and it would be an error to try to connect to 
any bucket which didn't have an AP ARN defined, something like 
`fs.s3a.access.point.required`.
   > The idea being you could have a policy in a VPN that you weren't allowed 
to talk to anything except through an AP; any mistyped/misreferenced bucket 
would fail to initialise. If you really need to talk to a bucket externally you 
could disable the switch on a bucket-by-bucket basis.
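   Purely as an illustration of that switch (a sketch against Hadoop's 
`Configuration` API, not code from this PR; the per-bucket opt-out relies on 
the standard `fs.s3a.bucket.<bucket>.` override pattern and the bucket name is 
made up):

```java
import org.apache.hadoop.conf.Configuration;

public class RequireAccessPointSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Proposed global switch: refuse to initialise any S3A bucket that has
    // no access point ARN configured for it.
    conf.setBoolean("fs.s3a.access.point.required", true);

    // Per-bucket opt-out for a bucket that genuinely has to be reached
    // directly; "external-data" is an invented name, and the override uses
    // the standard fs.s3a.bucket.<bucket>.* mechanism.
    conf.setBoolean("fs.s3a.bucket.external-data.access.point.required",
        false);
  }
}
```
   With that in place, initialising a filesystem for any other bucket without 
an AP ARN would fail fast, which matches the behaviour described above.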
   
   You don't need to be inside a VPN to test access points. You only need an 
access point created for a bucket and you're good to go. Of course, if you 
have the setup and want to test it there, that's great, but what you're 
testing then is more the AWS integration than the `S3A` changes. And yes! 
Update to the docs coming right up! 
   > I'm happy with this; don't see any obvious regressions.
   >
   > One thing (and I've suggested it to mehakmeet for the CSE work) is 
mentioning AP testing in the testing docs, especially qualifying SDK updates.
   >
   > It's going to be hard as you'll need one set up (I don't have one 
locally... not sure if we have one on our VPN), so it should be something like:
   >
   > You SHOULD run tests against an S3 access point if you have the setup to 
do so.
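
   As a rough sketch of what such a test setup might bind, assuming a 
per-bucket access point ARN property along these lines (the property name, 
bucket name, and ARN below are illustrative placeholders, not confirmed 
values from this PR):

```java
import org.apache.hadoop.conf.Configuration;

public class AccessPointTestSetupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Bind the test bucket to an access point rather than the raw bucket
    // name; the property name and ARN are placeholders for illustration.
    conf.set("fs.s3a.bucket.my-test-bucket.accesspoint.arn",
        "arn:aws:s3:eu-west-1:123456789012:accesspoint/my-test-ap");
  }
}
```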



Issue Time Tracking
-------------------

    Worklog Id:     (was: 642568)
    Time Spent: 8h 50m  (was: 8h 40m)

> Support S3 Access Points
> ------------------------
>
>                 Key: HADOOP-17198
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17198
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Steve Loughran
>            Assignee: Bogdan Stolojan
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> Improve VPC integration by supporting access points for buckets
> https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html
> Not sure how to do this *at all*; 


