[ https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17613701#comment-17613701 ]

ASF GitHub Bot commented on HADOOP-17912:
-----------------------------------------

steveloughran commented on code in PR #3440:
URL: https://github.com/apache/hadoop/pull/3440#discussion_r989213630


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -228,16 +231,65 @@ List<AbfsHttpHeader> createDefaultHeaders() {
     return requestHeaders;
   }
 
-  private void addCustomerProvidedKeyHeaders(
-      final List<AbfsHttpHeader> requestHeaders) {
-    if (clientProvidedEncryptionKey != null) {
-      requestHeaders.add(
-          new AbfsHttpHeader(X_MS_ENCRYPTION_KEY, clientProvidedEncryptionKey));
-      requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_KEY_SHA256,
-          clientProvidedEncryptionKeySHA));
-      requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_ALGORITHM,
-          SERVER_SIDE_ENCRYPTION_ALGORITHM));
+  private void addEncryptionKeyRequestHeaders(String path,

Review Comment:
   1. needs javadocs
   2. this may now do a HEAD request on any operation, and seems to be called 
everywhere, including getPathStatus. Which methods actually need the header and 
how can we minimise that IO?



##########
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##########
@@ -888,6 +888,38 @@ specified SSL channel mode. Value should be of the enum
 DelegatingSSLSocketFactory.SSLChannelMode. The default value will be
 DelegatingSSLSocketFactory.SSLChannelMode.Default.
 
+### <a name="encryptionconfigoptions"></a> Encryption Options
+Only one of the following two options can be configured. If config values of
+both types are set, ABFS driver will throw an exception. If using the global
+key type, ensure both pre-computed values are provided.
+
+#### <a name="globalcpkconfigoptions"></a> Customer-Provided Global Key
+A global encryption key can be configured by providing the following
+pre-computed values. The key will be applied to any new files created post
+setting the configuration, and will be required in the requests to read ro

Review Comment:
   nit: "ro" should be "or"



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -346,24 +398,29 @@ public AbfsRestOperation deleteFilesystem(TracingContext tracingContext) throws
     return op;
   }
 
-  public AbfsRestOperation createPath(final String path, final boolean isFile, final boolean overwrite,
-                                      final String permission, final String umask,
-                                      final boolean isAppendBlob, final String eTag,
-                                      TracingContext tracingContext) throws AzureBlobFileSystemException {
+  public AbfsRestOperation createPath(final String path, final boolean isFile,
+      final boolean overwrite, final Permissions permissions,
+      final boolean isAppendBlob, final String eTag,
+      EncryptionAdapter encryptionAdapter, TracingContext tracingContext)

Review Comment:
   there's a lot of args here
   1. add javadocs
   2. put one parameter to a line
   3. make these two final
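
   A sketch of what the three points might look like together. The javadoc
   text and the stub types below are illustrative stand-ins, not the real
   ABFS classes:

```java
// Illustrative only: Permissions, EncryptionAdapter, TracingContext and
// AbfsRestOperation are stubbed here so the sketch compiles on its own.
class CreatePathSketch {
  static class Permissions { }
  static class EncryptionAdapter { }
  static class TracingContext { }
  static class AbfsRestOperation { }

  /**
   * Creates a path (file or directory) on the store.
   *
   * @param path path to create.
   * @param isFile true for a file, false for a directory.
   * @param overwrite overwrite an existing entry at the path?
   * @param permissions permission and umask to apply.
   * @param isAppendBlob create the file as an append blob?
   * @param eTag etag to require a match on, or null.
   * @param encryptionAdapter supplies per-file encryption keys; may be null.
   * @param tracingContext tracing context for the request.
   * @return the executed REST operation.
   */
  public AbfsRestOperation createPath(
      final String path,
      final boolean isFile,
      final boolean overwrite,
      final Permissions permissions,
      final boolean isAppendBlob,
      final String eTag,
      final EncryptionAdapter encryptionAdapter,
      final TracingContext tracingContext) {
    // the real implementation builds headers and issues the request
    return new AbfsRestOperation();
  }
}
```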



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -228,16 +231,65 @@ List<AbfsHttpHeader> createDefaultHeaders() {
     return requestHeaders;
   }
 
-  private void addCustomerProvidedKeyHeaders(
-      final List<AbfsHttpHeader> requestHeaders) {
-    if (clientProvidedEncryptionKey != null) {
-      requestHeaders.add(
-          new AbfsHttpHeader(X_MS_ENCRYPTION_KEY, clientProvidedEncryptionKey));
-      requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_KEY_SHA256,
-          clientProvidedEncryptionKeySHA));
-      requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_ALGORITHM,
-          SERVER_SIDE_ENCRYPTION_ALGORITHM));
+  private void addEncryptionKeyRequestHeaders(String path,
+      List<AbfsHttpHeader> requestHeaders, boolean isCreateFileRequest,
+      EncryptionAdapter encryptionAdapter, TracingContext tracingContext)
+      throws IOException {
+    String encodedKey, encodedKeySHA256;
+    boolean encryptionAdapterCreated = false;
+    switch (encryptionType) {
+    case GLOBAL_KEY:
+      encodedKey = clientProvidedEncryptionKey;
+      encodedKeySHA256 = clientProvidedEncryptionKeySHA;
+      break;
+
+    case ENCRYPTION_CONTEXT:
+      if (isCreateFileRequest) {
+        // get new context for create file request
+        requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_CONTEXT,
+            encryptionAdapter.getEncodedContext()));
+      } else if (encryptionAdapter == null) {
+        // get encryption context from GetPathStatus response header
+        byte[] encryptionContext;
+        try {
+          encryptionContext = getPathStatus(path, false, tracingContext)
+              .getResult().getResponseHeader(X_MS_ENCRYPTION_CONTEXT)
+              .getBytes(StandardCharsets.UTF_8);
+        } catch (NullPointerException e) {

Review Comment:
   i would prefer checking the result or response header for being null, rather 
than relying on exceptions and downgrading. it's a lot less efficient
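
   A sketch of the explicit null checks the comment asks for, replacing the
   `catch (NullPointerException)` path. The `Result` class and the header
   map are stand-ins for the real AbfsRestOperation result:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Illustrative only: Result stands in for the real AbfsRestOperation result.
class EncryptionContextLookup {
  static class Result {
    private final Map<String, String> headers;
    Result(Map<String, String> headers) { this.headers = headers; }
    String getResponseHeader(String name) { return headers.get(name); }
  }

  // Explicit null checks on the result and on the header value replace
  // exception-driven control flow: no NullPointerException is ever raised
  // and downgraded, and the failure reason is stated at the point it occurs.
  static byte[] extractEncryptionContext(Result result) throws IOException {
    if (result == null) {
      throw new IOException("GetPathStatus returned no result");
    }
    // literal used here in place of the X_MS_ENCRYPTION_CONTEXT constant
    String context = result.getResponseHeader("x-ms-encryption-context");
    if (context == null) {
      throw new IOException(
          "EncryptionContext not present in GetPathStatus response");
    }
    return context.getBytes(StandardCharsets.UTF_8);
  }
}
```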



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsCustomEncryption.java:
##########
@@ -0,0 +1,330 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.*;
+
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;

Review Comment:
   review ordering and use of .*
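
   For the imports shown above, the grouping the comment points at would
   look like this (the `java.util.*` wildcard should be expanded to only
   the classes the test actually uses, which the hunk does not show, so it
   is left out here):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
```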



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java:
##########
@@ -26,6 +26,7 @@
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.Permissions;

Review Comment:
   move to L38



##########
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##########
@@ -888,6 +888,38 @@ specified SSL channel mode. Value should be of the enum
 DelegatingSSLSocketFactory.SSLChannelMode. The default value will be
 DelegatingSSLSocketFactory.SSLChannelMode.Default.
 
+### <a name="encryptionconfigoptions"></a> Encryption Options
+Only one of the following two options can be configured. If config values of
+both types are set, ABFS driver will throw an exception. If using the global
+key type, ensure both pre-computed values are provided.
+
+#### <a name="globalcpkconfigoptions"></a> Customer-Provided Global Key

Review Comment:
   there needs to be a way to set different keys for different stores
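
   One shape that could take, borrowing the account-suffix convention other
   `fs.azure.account.*` options already use. The property name below is
   hypothetical, not something this patch defines:

```xml
<!-- Hypothetical per-account override: the key is scoped to one storage
     account by suffixing the account host name; an unsuffixed property
     would act as the global fallback. -->
<property>
  <name>fs.azure.encryption.client-provided-key.myaccount.dfs.core.windows.net</name>
  <value>BASE64-ENCODED-AES-256-KEY</value>
</property>
```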



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -228,16 +231,65 @@ List<AbfsHttpHeader> createDefaultHeaders() {
     return requestHeaders;
   }
 
-  private void addCustomerProvidedKeyHeaders(
-      final List<AbfsHttpHeader> requestHeaders) {
-    if (clientProvidedEncryptionKey != null) {
-      requestHeaders.add(
-          new AbfsHttpHeader(X_MS_ENCRYPTION_KEY, clientProvidedEncryptionKey));
-      requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_KEY_SHA256,
-          clientProvidedEncryptionKeySHA));
-      requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_ALGORITHM,
-          SERVER_SIDE_ENCRYPTION_ALGORITHM));
+  private void addEncryptionKeyRequestHeaders(String path,
+      List<AbfsHttpHeader> requestHeaders, boolean isCreateFileRequest,
+      EncryptionAdapter encryptionAdapter, TracingContext tracingContext)
+      throws IOException {
+    String encodedKey, encodedKeySHA256;
+    boolean encryptionAdapterCreated = false;
+    switch (encryptionType) {
+    case GLOBAL_KEY:
+      encodedKey = clientProvidedEncryptionKey;
+      encodedKeySHA256 = clientProvidedEncryptionKeySHA;
+      break;
+
+    case ENCRYPTION_CONTEXT:
+      if (isCreateFileRequest) {
+        // get new context for create file request
+        requestHeaders.add(new AbfsHttpHeader(X_MS_ENCRYPTION_CONTEXT,
+            encryptionAdapter.getEncodedContext()));
+      } else if (encryptionAdapter == null) {
+        // get encryption context from GetPathStatus response header
+        byte[] encryptionContext;
+        try {
+          encryptionContext = getPathStatus(path, false, tracingContext)
+              .getResult().getResponseHeader(X_MS_ENCRYPTION_CONTEXT)
+              .getBytes(StandardCharsets.UTF_8);
+        } catch (NullPointerException e) {
+          LOG.debug("EncryptionContext not present in GetPathStatus response.");
+          throw new IOException(
+              "EncryptionContext not present in GetPathStatus response", e);
+        }
+        try {
+          encryptionAdapter = new EncryptionAdapter(encryptionContextProvider,
+              new Path(path).toUri().getPath(), encryptionContext);
+          encryptionAdapterCreated = true;
+        } catch (IOException e) {
+          LOG.debug("Could not initialize EncryptionAdapter");
+          throw e;
+        }
+      }
+      // else use cached encryption keys from input/output streams
+      encodedKey = encryptionAdapter.getEncodedKey();
+      encodedKeySHA256 = encryptionAdapter.getEncodedKeySHA();
+      break;
+
+    default: return; // no client-provided encryption keys
+    }
+
+    if (encryptionAdapterCreated && encryptionAdapter != null) {
+      try {
+        encryptionAdapter.destroy();
+      } catch (DestroyFailedException e) {
+        throw new IOException(

Review Comment:
   can cut this once encryptionAdapter changes signature





> ABFS: Support for Encryption Context
> ------------------------------------
>
>                 Key: HADOOP-17912
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17912
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sumangala Patki
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding 
> the global (account-level) key used in HADOOP-17536.
> ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information, the implementation for which should be provided by 
> the client. The keys/context retrieved will be sent via request headers to 
> the server, which will store the encryption context. Subsequent REST calls to 
> the server that access data/user metadata of the file will require fetching the 
> encryption context through a GetFileProperties call and retrieving the key 
> from the custom provider, before sending the request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)