georgew5656 opened a new pull request, #15630:
URL: https://github.com/apache/druid/pull/15630
### Description
Currently the "azure" input source schema only supports ingesting files that
are stored in the Azure storage account specified in `druid.azure.account`. To
support ingesting data from other storage accounts, this PR adds a new
"azureStorage" input source schema for Azure with a slightly different spec.
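For illustration, an ingestion spec using the new input source might look roughly like the sketch below. The `azureStorage://{storageAccount}/{container}/{path}` URI shape and the field names are assumptions based on this description, not a confirmed final schema:

```json
{
  "type": "azureStorage",
  "uris": [
    "azureStorage://exampleAccount/exampleContainer/path/to/file.json"
  ]
}
```

The key difference from the existing `azure` input source is that the storage account is part of the spec itself rather than coming from `druid.azure.account`.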
1. Added AzureStorageAccountInputSource
I would have preferred to keep using the existing AzureInputSource class, but
that class assumes CloudObjectLocation.bucket is the container of the file
and CloudObjectLocation.path is the path of the file within that container. I
couldn't think of a way to both keep the behavior backwards compatible with
existing ingestion specs and support multiple storage accounts.
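To make the incompatibility concrete, here is a hedged sketch of how the two sources would interpret a `CloudObjectLocation` differently (the exact field interpretation in the new source is an assumption drawn from the rationale above):

```json
{
  "azure":        { "bucket": "exampleContainer", "path": "path/to/file.json" },
  "azureStorage": { "bucket": "exampleAccount",   "path": "exampleContainer/path/to/file.json" }
}
```

Reusing one class for both meanings of `bucket` would silently change how existing specs resolve, which is why a separate input source class was added.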
2. Create a new AzureStorage instance with an AzureClientFactory, with
credentials passed in from AzureStorageAccountInputSource.
Since the AzureStorage instance is an abstraction over Azure Blob Storage
clients, I thought it made sense to create an instance of it per Azure
ingestion spec, since we do something similar with S3 (we create an S3 client
for each S3 ingestion spec).
It would also have been possible to pass the AzureInputSourceConfig to the
relevant functions in AzureStorage and generate different clients there, but I
thought this would have been confusing.
3. Auth methods
I added support for key, SAS token, and app registration authentication when
ingesting from external storage accounts. Managed/workload identity auth can
also work: omit these properties and make sure the identity the cluster is
deployed with can access the external storage account.
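As a sketch, per-spec credentials might be supplied like this. The property names below are illustrative assumptions based on the auth methods listed above, and only one auth method would be configured at a time:

```json
{
  "type": "azureStorage",
  "uris": ["azureStorage://exampleAccount/exampleContainer/file.json"],
  "properties": {
    "key": "<storage account key>",
    "sharedAccessStorageToken": "<sas token>",
    "appRegistrationClientId": "<client id>",
    "appRegistrationClientSecret": "<client secret>",
    "tenantId": "<tenant id>"
  }
}
```

Leaving out the `properties` block entirely would fall back to the identity the cluster is deployed with (managed/workload identity).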
#### Release note
Support Azure ingestion from multiple Storage Accounts.
<hr>
##### Key changed/added classes in this PR
* `AzureStorageAccountInputSource`
* `AzureInputSourceConfig`
* `AzureClientFactory`
* `AzureStorage`
* `AzureEntity`
<hr>
This PR has:
- [X] been self-reviewed.
- [ ] using the [concurrency
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
(Remove this item if the PR doesn't have any relation to concurrency.)
- [ ] added documentation for new or modified features or behaviors.
- [ ] a release note entry in the PR description.
- [ ] added Javadocs for most classes and all non-trivial methods. Linked
related entities via Javadoc links.
- [ ] added or updated version, license, or notice information in
[licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
- [X] added comments explaining the "why" and the intent of the code
wherever would not be obvious for an unfamiliar reader.
- [X] added unit tests or modified existing tests to cover new code paths,
ensuring the threshold for [code
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
is met.
- [ ] added integration tests.
- [X] been tested in a test Druid cluster.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]