steveloughran commented on issue #24934: [SPARK-28124] [SS] SQS source for Structured Streaming
URL: https://github.com/apache/spark/pull/24934#issuecomment-515414620
 
 
   ## General comments
   
   It's good to see this; I've thought of adding this for a while.
   
   1. If you pull in the `spark-hadoop-cloud` module then for Hadoop > 2.9 you get the full AWS SDK, shaded to avoid transitive classpath grief (see the sbt sketch after this list).
   1. You get the `hadoop-aws` module's S3A filesystem, which should be the only ASF-open-source one to play with. Even on EMR it ends up on the classpath, so you can use classes marked as `@Public` on it.
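
   A minimal sbt sketch for the first point; the `spark-hadoop-cloud` coordinates and the versions below are assumptions to verify against your Spark/Hadoop build (historically the module sat behind the `hadoop-cloud` Maven profile):
   ```
   // build.sbt sketch; versions are placeholders, not recommendations
   val sparkVersion = "3.0.0"
   val hadoopVersion = "3.2.0"

   libraryDependencies ++= Seq(
     // shaded AWS SDK, cloud committers etc.
     "org.apache.spark" %% "spark-hadoop-cloud" % sparkVersion,
     // the S3A connector itself; keep in lockstep with the Hadoop version
     "org.apache.hadoop" % "hadoop-aws" % hadoopVersion
   )
   ```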
   
   One thing the S3A connector does is per-bucket config: when it instantiates the filesystem for a bucket, it maps `fs.s3a.bucket.BUCKET.*` to `fs.s3a.*`. You can pick that up from the FS instance simply by using the filesystem's configuration to bind to SQS, e.g.
   
   ```
   val fs = sourcePath.getFileSystem(conf)
   val sqsConf = fs.getConf
   ```
   For all other filesystems that will work; you just don't get the bucket-override mechanism.
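
   To make the remapping concrete, a minimal sketch (the bucket name and endpoint values are purely illustrative):
   ```
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.Path

   val conf = new Configuration()
   conf.set("fs.s3a.endpoint", "s3.amazonaws.com")                           // global default
   conf.set("fs.s3a.bucket.mybucket.endpoint", "s3.us-west-2.amazonaws.com") // per-bucket override

   val sourcePath = new Path("s3a://mybucket/events/")
   val fs = sourcePath.getFileSystem(conf)
   // at instantiation, fs.s3a.bucket.mybucket.* was remapped onto fs.s3a.*
   fs.getConf.get("fs.s3a.endpoint")   // "s3.us-west-2.amazonaws.com"
   ```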
   
   
   
   ## Auth
   
   You can use the hadoop-aws credential providers declared as public/stable:
   ```
   org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
   org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
   ```
   *If you want to use some of the others we don't tag `@Public`, talk to me and I'll see whether we can mark them as such.*
   
   Plus the standard AWS ones:
   ```
   com.amazonaws.auth.EnvironmentVariableCredentialsProvider
   com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper
   ```
   The latter is relatively recent; it can pick up credentials from containers as well as EC2 instances.
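
   Wiring any of these up is a single config option; a sketch of a full fallback chain, using the standard S3A key:
   ```
   import org.apache.hadoop.conf.Configuration

   val conf = new Configuration()
   // ordered fallback chain; the first provider to yield credentials wins
   conf.set("fs.s3a.aws.credentials.provider",
     "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider," +
     "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider," +
     "com.amazonaws.auth.EnvironmentVariableCredentialsProvider," +
     "com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper")
   ```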
   
   If you are just going to read the Hadoop config values, know that the recommended way to store them is actually in a Hadoop credential provider; `Configuration.getPassword()` retrieves them from a credential provider/file, falling back to an inline string value.
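
   For example (the key here is the standard S3A secret; `getPassword()` returns `null` when the key is absent):
   ```
   import org.apache.hadoop.conf.Configuration

   val conf = new Configuration()
   // checks the stores listed in hadoop.security.credential.provider.path first,
   // then falls back to any inline fs.s3a.secret.key value in the configuration
   val secret: Array[Char] = conf.getPassword("fs.s3a.secret.key")
   val secretText = Option(secret).map(new String(_))
   ```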
   
   We have been doing some really fancy stuff in Hadoop 3.3 with delegation tokens, so that you can submit a Spark job with local credentials and have a set of session/role credentials attached as a delegation token, then extracted at the far end. This lets you submit Spark jobs with more rights than the deployed EC2 cluster has, yet avoid passing in any long-lived secrets.
   
   We let the S3Guard DynamoDB client at these through `S3AFileSystem.shareCredentials()`, though that's a Hadoop 3.2+ API. I can backport it if you want to pick it up.
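
   A sketch of what that sharing could look like for the SQS client (Hadoop 3.2+ only; the region handling is an assumption, and the returned provider list is reference-counted, so close it when the source stops):
   ```
   import com.amazonaws.services.sqs.{AmazonSQS, AmazonSQSClientBuilder}
   import org.apache.hadoop.fs.s3a.S3AFileSystem

   val fs = sourcePath.getFileSystem(conf).asInstanceOf[S3AFileSystem]
   val credentials = fs.shareCredentials("sqs-source")  // shared, ref-counted provider list
   val sqs: AmazonSQS = AmazonSQSClientBuilder.standard()
     .withCredentials(credentials)  // AWSCredentialProviderList implements AWSCredentialsProvider
     .withRegion("us-east-1")       // assumption: derive the real region from config
     .build()
   ```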
   
   Hope this helps.

   ### Tests
   
   Ignoring real-world integration tests (FWIW, I keep my downstream-of-Spark tests in [hortonworks-spark/cloud-integration](https://github.com/hortonworks-spark/cloud-integration)), you should be able to add some which test the JSON parsing of responses.
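
   As a sketch of the kind of unit test meant, here's a parse of a trimmed S3 event notification of the form SQS delivers, using json4s since it's already on Spark's classpath; the field layout follows the AWS S3 event format, and whatever parser class this PR adds would replace the inline extraction:
   ```
   import org.json4s._
   import org.json4s.jackson.JsonMethods._

   // a trimmed S3 "ObjectCreated" notification, as delivered in an SQS message body
   val sample =
     """{"Records": [{"eventName": "ObjectCreated:Put",
       |  "s3": {"bucket": {"name": "mybucket"},
       |         "object": {"key": "data/part-0000.json", "size": 1024}}}]}""".stripMargin

   val JArray(records) = parse(sample) \ "Records"
   val record = records.head
   assert((record \ "eventName") == JString("ObjectCreated:Put"))
   assert((record \ "s3" \ "bucket" \ "name") == JString("mybucket"))
   assert((record \ "s3" \ "object" \ "key") == JString("data/part-0000.json"))
   ```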
