[GitHub] [hadoop] bshashikant closed pull request #1989: HDFS-15313. Ensure inodes in active filesystem are not deleted during snapshot delete
bshashikant closed pull request #1989: URL: https://github.com/apache/hadoop/pull/1989
[jira] [Updated] (HADOOP-16852) ABFS: Send error back to client for Read Ahead request failure
[ https://issues.apache.org/jira/browse/HADOOP-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sneha Vijayarajan updated HADOOP-16852:
---------------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> ABFS: Send error back to client for Read Ahead request failure
> ---------------------------------------------------------------
>
>                 Key: HADOOP-16852
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16852
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>
> Issue seen by a customer:
> The failed requests we were seeing in the AbfsClient logging actually never
> made it out over the wire. We have found that there’s an issue with ADLS
> passthrough and the 8 read ahead threads that ADLSv2 spawns in
> ReadBufferManager.java. We depend on thread local storage in order to get the
> right JWT token and those threads do not have the right information in their
> thread local storage. Thus, when they pick up a task from the read ahead
> queue they fail by throwing an AzureCredentialNotFoundException exception in
> AbfsRestOperation.executeHttpOperation() where it calls
> client.getAccessToken(). This exception is silently swallowed by the read
> ahead threads in ReadBufferWorker.run(). As a result, every read ahead
> attempt results in a failed executeHttpOperation(), but still calls
> AbfsClientThrottlingIntercept.updateMetrics() and contributes to throttling
> (despite not making it out over the wire). After the read aheads fail, the
> main task thread performs the read with the right thread local storage
> information and succeeds, but first sleeps for up to 10 seconds due to the
> throttling.
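For readers unfamiliar with the failure mode above, the following minimal, self-contained sketch reproduces the general pattern: a credential kept in thread-local storage on the application thread is invisible to a separately created prefetch pool, so background requests fail while the caller's own read succeeds. The names here (`TOKEN`, `PREFETCH_POOL`, `getAccessToken`) are hypothetical stand-ins, not the actual ABFS/ADLS classes.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReadAheadTokenDemo {
  // Hypothetical stand-in for the per-request credential that a passthrough
  // auth layer keeps in thread-local storage.
  private static final ThreadLocal<String> TOKEN = new ThreadLocal<>();

  // Worker pool created independently of the request thread, analogous to a
  // fixed set of read-ahead threads.
  private static final ExecutorService PREFETCH_POOL = Executors.newFixedThreadPool(8);

  private static String getAccessToken() {
    String token = TOKEN.get();
    if (token == null) {
      // Mirrors the credential-not-found path: the worker thread has no
      // credential in its own thread-local slot.
      throw new IllegalStateException("no credential in thread-local storage");
    }
    return token;
  }

  public static void main(String[] args) throws Exception {
    TOKEN.set("jwt-for-current-user"); // set on the application thread only

    PREFETCH_POOL.submit(() -> {
      try {
        getAccessToken(); // fails: the pool thread never saw the token
      } catch (IllegalStateException e) {
        // If this is swallowed inside the worker, the caller never learns the
        // prefetch failed, yet throttling metrics may still be updated for a
        // request that never went out over the wire.
        System.out.println("read-ahead failed silently: " + e.getMessage());
      }
    });

    System.out.println("main thread token: " + getAccessToken()); // succeeds
    PREFETCH_POOL.shutdown();
    PREFETCH_POOL.awaitTermination(5, TimeUnit.SECONDS);
  }
}
```

When such failures are swallowed inside the worker, the only externally visible symptom is the throttling delay described above, which is why this issue is about surfacing the read-ahead error back to the client instead.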
[jira] [Updated] (HADOOP-17053) ABFS: FS initialize fails for incompatible account-agnostic Token Provider setting
[ https://issues.apache.org/jira/browse/HADOOP-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sneha Vijayarajan updated HADOOP-17053:
---------------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> ABFS: FS initialize fails for incompatible account-agnostic Token Provider
> setting
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-17053
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17053
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.2.1
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>             Fix For: 3.4.0
>
> When the AuthType and auth token provider configs are set at both the generic
> (account-agnostic) and the account-specific level, as below:
>
> // account agnostic
> fs.azure.account.auth.type=CUSTOM
> fs.azure.account.oauth.provider.type=ClassExtendingCustomTokenProviderAdapter
>
> // account specific
> fs.azure.account.auth.type.account_name=OAuth
> fs.azure.account.oauth.provider.type.account_name=ClassExtendingAccessTokenProvider
>
> then for account_name, OAuth with ClassExtendingAccessTokenProvider as the
> provider is expected to be in effect.
>
> When the token provider class is read from the config, the account-agnostic
> setting is read first, on the assumption that it can serve as a default if
> the account-specific setting is absent. But this logic fails when the
> account-specific AuthType differs from the account-agnostic one, because the
> interface the token provider has to implement differs across Auth Types.
> This leads to a runtime exception when trying to create the OAuth access
> token provider.
>
> This Jira tracks the fix for it.
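The precedence problem is easier to see in isolation. Below is a minimal sketch (plain Java, with a hypothetical `CONF` map and method names, not the actual AbfsConfiguration code) of the rule the fix needs: prefer the account-specific provider class, and fall back to the account-agnostic one only when the account-agnostic AuthType matches the AuthType in effect for the account.

```java
import java.util.HashMap;
import java.util.Map;

public class TokenProviderLookupSketch {
  enum AuthType { OAuth, Custom }

  // Hypothetical flattened view of the settings from the description above.
  private static final Map<String, String> CONF = new HashMap<>();
  static {
    CONF.put("fs.azure.account.auth.type", "Custom");
    CONF.put("fs.azure.account.oauth.provider.type", "ClassExtendingCustomTokenProviderAdapter");
    CONF.put("fs.azure.account.auth.type.account_name", "OAuth");
    CONF.put("fs.azure.account.oauth.provider.type.account_name", "ClassExtendingAccessTokenProvider");
  }

  /**
   * Resolve the provider class name for an account. The account-agnostic
   * value is a valid fallback only when the account-agnostic AuthType matches
   * the AuthType in effect for the account; otherwise the two settings expect
   * different provider interfaces and the fallback must be skipped.
   */
  static String resolveProvider(String account, AuthType effectiveAuthType) {
    String specific = CONF.get("fs.azure.account.oauth.provider.type." + account);
    if (specific != null) {
      return specific;
    }
    AuthType agnosticAuthType =
        AuthType.valueOf(CONF.getOrDefault("fs.azure.account.auth.type", "OAuth"));
    if (agnosticAuthType == effectiveAuthType) {
      return CONF.get("fs.azure.account.oauth.provider.type");
    }
    return null; // nothing compatible configured
  }

  public static void main(String[] args) {
    // account_name has OAuth in effect, so the CUSTOM account-agnostic
    // provider must not be picked up as a default for it.
    System.out.println(resolveProvider("account_name", AuthType.OAuth));
    System.out.println(resolveProvider("other_account", AuthType.Custom));
  }
}
```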
[GitHub] [hadoop] hadoop-yetus commented on pull request #2036: HADOOP-17052. Throw UnknownHostException in NetUtils.connect when host is not resolvable
hadoop-yetus commented on pull request #2036:
URL: https://github.com/apache/hadoop/pull/2036#issuecomment-635053610

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 22m 18s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 19m 18s | trunk passed |
| +1 :green_heart: | compile | 17m 3s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 52s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 27s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 41s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 5s | trunk passed |
| +0 :ok: | spotbugs | 2m 6s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 4s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 49s | the patch passed |
| +1 :green_heart: | compile | 16m 24s | the patch passed |
| +1 :green_heart: | javac | 16m 24s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 49s | the patch passed |
| +1 :green_heart: | mvnsite | 1m 28s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 45s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 4s | the patch passed |
| +1 :green_heart: | findbugs | 2m 15s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 8s | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 55s | The patch does not generate ASF License warnings. |
| | | 128m 33s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2036/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2036 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 79a5ee2fd48e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4c5cd751e39 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2036/1/testReport/ |
| Max. process+thread count | 3259 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2036/1/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] dhirajh opened a new pull request #2036: HADOOP-17052 Throw UnknownHostException in NetUtils.connect when host is not resolvable
dhirajh opened a new pull request #2036:
URL: https://github.com/apache/hadoop/pull/2036

## NOTICE

Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Commented] (HADOOP-17053) ABFS: FS initialize fails for incompatible account-agnostic Token Provider setting
[ https://issues.apache.org/jira/browse/HADOOP-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118109#comment-17118109 ]

Hudson commented on HADOOP-17053:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18302 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18302/])
HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing (github: rev 4c5cd751e3911e350c7437dcb28c0ed67735f635)
* (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
* (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
[jira] [Commented] (HADOOP-16852) ABFS: Send error back to client for Read Ahead request failure
[ https://issues.apache.org/jira/browse/HADOOP-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118095#comment-17118095 ]

Hudson commented on HADOOP-16852:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18301 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18301/])
HADOOP-16852: Report read-ahead error back (github: rev 53b993e6048ffaaf98e460690211fc08efb20cf2)
* (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBuffer.java
* (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
* (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
* (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferWorker.java
* (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/TestCachedSASToken.java
[GitHub] [hadoop] DadanielZ merged pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
DadanielZ merged pull request #2034: URL: https://github.com/apache/hadoop/pull/2034
[GitHub] [hadoop] DadanielZ merged pull request #1898: HADOOP-16852: Report read-ahead error back
DadanielZ merged pull request #1898: URL: https://github.com/apache/hadoop/pull/1898
[GitHub] [hadoop] goiri commented on a change in pull request #2035: HDFS-15374. Add documentation for command `fedbalance`.
goiri commented on a change in pull request #2035:
URL: https://github.com/apache/hadoop/pull/2035#discussion_r431392232

## File path: hadoop-tools/hadoop-federation-balance/src/site/markdown/FederationBalance.md
##########
@@ -0,0 +1,156 @@
+
+Federation Balance Guide
+=
+
+---
+
+  - [Overview](#Overview)
+  - [Usage](#Usage)
+    - [Basic Usage](#Basic_Usage)
+    - [Command Options](#Command_Options)
+    - [Configuration Options](#Configuration_Options)
+  - [Architecture of Federation Balance](#Architecture_of_Federation_Balance)
+
+---
+
+Overview
+
+  Federation Balance is a tool balancing data across different federation
+  namespaces. It uses DistCp to copy data from the source path to the target
+  path. First it creates a snapshot at the source path and submit the initial
+  distcp. Then it uses distcp diff to do the incremental copy. Finally when the
+  source and the target are the same, it updates the mount table in Router and
+  move the source to trash.
+
+  This document aims to describe the design and usage of the Federation Balance.
+
+Usage
+-
+
+### Basic Usage
+
+  The federation balance tool supports both normal federation cluster and
+  router-based federation cluster. Taking rbf for example. Supposing we have a
+  mount entry in Router:
+
+    /foo/src --> hdfs://nn0:8020/foo/src
+
+  Submit a federation balance job locally. The first parameter should be a mount
+  entry. The second parameter is the target path. The target path must includes
+  the target cluster.
+
+    bash$ /bin/hadoop fedbalance submit /foo/src hdfs://nn1:8020/foo/dst
+
+  This will copy data from hdfs://nn0:8020/foo/src to hdfs://nn1:8020/foo/dst
+  incrementally and finally update the mount entry to:
+
+    /foo/src --> hdfs://nn1:8020/foo/dst
+
+  If the hadoop shell process exits unexpectedly and we want to continue the
+  unfinished job, we can use command:
+
+    bash$ /bin/hadoop fedbalance continue

Review comments:

* On `path. First it creates a snapshot at the source path and submit the initial`: submits
* On `namespaces. It uses DistCp to copy data from the source path to the target`: link to distcp
* On `bash$ /bin/hadoop fedbalance continue`: Should this have code format like 73 (I think there might even be a better way).
[GitHub] [hadoop] snvijaya commented on a change in pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
snvijaya commented on a change in pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#discussion_r431382991

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
##########
@@ -325,31 +325,91 @@ public String getPasswordString(String key) throws IOException {
   }
 
   /**
-   * Returns the account-specific Class if it exists, then looks for an
-   * account-agnostic value, and finally tries the default value.
+   * Returns account-specific token provider class if it exists, else checks if
+   * an account-agnostic setting is present for token provider class if AuthType
+   * matches with authType passed.
+   * @param authType AuthType effective on the account
    * @param name Account-agnostic configuration key
    * @param defaultValue Class returned if none is configured
    * @param xface Interface shared by all possible values
+   * @param Interface class type
    * @return Highest-precedence Class object that was found
    */
-  public Class getClass(String name, Class defaultValue, Class xface) {
+  public Class getTokenProviderClass(AuthType authType,

Review comment:
  Inputs AuthType, name of the relevant TokenProvider config key, xface (interface) are derived by the caller of getTokenProviderClass based on account-specific config settings. As it applies to all the inputs equally and is clear from the calling method's perspective, will retain the naming.
[jira] [Commented] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.
[ https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117880#comment-17117880 ]

Hadoop QA commented on HADOOP-17047:
------------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 25s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 22m 31s | trunk passed |
| +1 | compile | 18m 18s | trunk passed |
| +1 | checkstyle | 0m 47s | trunk passed |
| -1 | mvnsite | 0m 32s | hadoop-common in trunk failed. |
| +1 | shadedclient | 16m 51s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 54s | trunk passed |
| 0 | spotbugs | 2m 13s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 11s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 17m 40s | the patch passed |
| +1 | javac | 17m 40s | the patch passed |
| +1 | checkstyle | 0m 46s | the patch passed |
| -1 | mvnsite | 0m 33s | hadoop-common in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 0s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 2s | the patch passed |
| +1 | findbugs | 2m 53s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 44s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 49s | The patch does not generate ASF License warnings. |
| | | 114m 30s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16954/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17047 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004145/HADOOP-17047.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 50c77e6a285c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 593af878c00 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16954/artifact/out/branch-mvnsite-hadoop-common-project_h
[jira] [Updated] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.
[ https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rungroj Maipradit updated HADOOP-17047:
---------------------------------------
    Attachment: HADOOP-17047.001.patch
        Status: Patch Available  (was: In Progress)

> TODO comments exist in trunk while the related issues are already fixed.
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-17047
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17047
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Rungroj Maipradit
>            Assignee: Rungroj Maipradit
>            Priority: Trivial
>         Attachments: HADOOP-17047.001.patch, HADOOP-17047.001.patch
>
> In a research project, we analyzed the source code of Hadoop looking for
> comments with on-hold SATDs (self-admitted technical debt) that could be
> fixed already. An on-hold SATD is a TODO/FIXME comment blocked by an issue.
> If this blocking issue is already resolved, the related todo can be
> implemented (or sometimes it is already implemented, but the comment is left
> in the code, causing confusion). As we found a few instances of these in
> Hadoop, we decided to collect them in a ticket, so they are documented and
> can be addressed sooner or later.
>
> A list of code comments that mention already closed issues:
>
> * A code comment suggests making the setJobConf method deprecated along with
> the mapred package (HADOOP-1230). HADOOP-1230 was closed a long time ago,
> but the method is still not annotated as deprecated.
> {code:java}
>  /**
>   * This code is to support backward compatibility and break the compile
>   * time dependency of core on mapred.
>   * This should be made deprecated along with the mapred package HADOOP-1230.
>   * Should be removed when mapred package is removed.
>   */ {code}
> Comment location:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88
>
> * A comment mentions that the return type of the getDefaultFileSystem method
> should be changed to AFS when HADOOP-6223 is completed.
> Indeed, this change was done in the related commit of HADOOP-6223:
> (https://github.com/apache/hadoop/commit/3f371a0a644181b204111ee4e12c995fc7b5e5f5#diff-cd86a2b9ce3efd2232c2ace0e9084508L395)
> Thus, the comment could be removed.
> {code:java}
> @InterfaceStability.Unstable /* return type will change to AFS once
> HADOOP-6223 is completed */
> {code}
> Comment location:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java#L512
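For the first item in the list above, resolving the on-hold SATD mostly means replacing the stale TODO with a real deprecation marker. A hypothetical sketch (not the actual ReflectionUtils code; the class and method bodies here are stand-ins) of what that looks like:

```java
public class LegacyConfSupport {
  /**
   * Hypothetical legacy hook kept for backward compatibility with the old
   * mapred package. Once the blocking issue referenced by the original TODO
   * is resolved, the stale comment can be replaced by a deprecation marker
   * so that compilers, IDEs, and downstream users are actually warned.
   *
   * @deprecated retained only for backward compatibility; remove together
   *             with the mapred package.
   */
  @Deprecated
  public static void setJobConf(Object theObject, Object conf) {
    // The reflection-based bridge between core and mapred would live here
    // in the real code; it is intentionally left empty in this sketch.
  }
}
```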
[jira] [Work started] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.
[ https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HADOOP-17047 started by Rungroj Maipradit.
--------------------------------------------------
[GitHub] [hadoop] hadoop-yetus commented on pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
hadoop-yetus commented on pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#issuecomment-634605214

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 9s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 21m 11s | trunk passed |
| +1 :green_heart: | compile | 0m 27s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 22s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 31s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 14s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | trunk passed |
| +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 49s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 27s | the patch passed |
| +1 :green_heart: | compile | 0m 22s | the patch passed |
| +1 :green_heart: | javac | 0m 22s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 14s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 24s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 25s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 20s | the patch passed |
| +1 :green_heart: | findbugs | 0m 53s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 18s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 62m 28s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2034/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2034 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 53fa973bce84 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c30c23cb665 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2034/3/testReport/ |
| Max. process+thread count | 308 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2034/3/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] bilaharith commented on a change in pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
bilaharith commented on a change in pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#discussion_r431013165

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java

Review comment:
  I meant for the parameter currently named authType
[GitHub] [hadoop] snvijaya commented on a change in pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
snvijaya commented on a change in pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#discussion_r431011571

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java

Review comment:
  This method fetches the TokenProviderClass instance based on the input AuthType. It is in sync with the naming followed by other methods that resolve account-specific config and, in its absence, default to the account-agnostic value. Will retain the naming.
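To make the incompatibility behind this change concrete, here is a small, hypothetical sketch (the interface and class names are stand-ins, not the real ABFS types) of the runtime failure HADOOP-17053 describes: a CUSTOM-style provider class picked up from the account-agnostic setting cannot be used where the OAuth path expects its own provider interface.

```java
public class ProviderMismatchSketch {
  // Stand-ins for the unrelated provider interfaces used by different AuthTypes.
  interface AccessTokenProvider { String getToken(); }           // OAuth-style
  interface CustomTokenProviderAdaptee { String obtainToken(); } // CUSTOM-style

  static class CustomProvider implements CustomTokenProviderAdaptee {
    public String obtainToken() { return "custom-token"; }
  }

  public static void main(String[] args) throws Exception {
    // The account-agnostic config hands back a CUSTOM provider class ...
    Class<?> configured = CustomProvider.class;

    try {
      // ... but the OAuth code path expects an AccessTokenProvider, so the
      // cast of the freshly created instance fails at runtime.
      Object instance = configured.getDeclaredConstructor().newInstance();
      AccessTokenProvider oauthProvider = (AccessTokenProvider) instance;
      System.out.println(oauthProvider.getToken());
    } catch (ClassCastException e) {
      // This is the kind of runtime failure the Jira reports: the configured
      // class does not implement the interface the OAuth path expects.
      System.out.println("incompatible provider: " + e.getMessage());
    }
  }
}
```

Gating the account-agnostic fallback on a matching AuthType, as the patch's getTokenProviderClass does, avoids reaching this cast in the first place.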
[GitHub] [hadoop] snvijaya commented on a change in pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
snvijaya commented on a change in pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#discussion_r431009863

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
##########
@@ -20,13 +20,25 @@
 
 import java.io.IOException;
 
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+
 import org.apache.hadoop.conf.Configuration;
+

Review comment:
  Fixed.
[jira] [Commented] (HADOOP-17052) NetUtils.connect() throws an exception that prevents any retries when hostname resolution fails
[ https://issues.apache.org/jira/browse/HADOOP-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117602#comment-17117602 ]

hemanthboyina commented on HADOOP-17052:
----------------------------------------

Thanks for providing more details [~dhegde].
{quote}The code change could be made a level above in places like newConnectedPeer()
{quote}
I don't think this will cover the write-call scenario. IMO it would be better to handle this in NetUtils.connect() by catching the exception, checking whether it is an instance of UnresolvedAddressException, and throwing the required exception so that a retry happens.
[~aajisaka] [~liuml07] thoughts?

> NetUtils.connect() throws an exception that prevents any retries when hostname
> resolution fails
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-17052
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17052
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.10.0, 2.9.2, 3.2.1, 3.1.3
>            Reporter: Dhiraj Hegde
>            Assignee: Dhiraj Hegde
>            Priority: Major
>         Attachments: stack_trace2
>
> Hadoop components are increasingly being deployed on VMs and containers. One
> aspect of this environment is that DNS is dynamic. Hostname records get
> modified (or deleted/recreated) as a container in Kubernetes (or even a VM) is
> being created/recreated. In such dynamic environments, the initial DNS
> resolution request might briefly return a resolution failure, as the DNS
> client doesn't always get the latest records. This has been observed in
> Kubernetes in particular. In such cases NetUtils.connect() appears to throw
> java.nio.channels.UnresolvedAddressException. In much of the Hadoop code (like
> DFSInputStream and DFSOutputStream), the code is designed to retry
> IOException. However, since UnresolvedAddressException is not a child of
> IOException, no retry happens and the code aborts immediately. It would be much
> better if NetUtils.connect() threw java.net.UnknownHostException, as that is
> derived from IOException and the code would treat this as a retry-able error.
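A minimal sketch of the wrapping approach suggested in the comment above, assuming the conversion happens at the connect call; this is illustrative only, not the patch in PR #2036, and the class and method names here are hypothetical.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.net.UnknownHostException;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class ConnectWrapperSketch {
  /**
   * Connect helper that converts UnresolvedAddressException (a RuntimeException,
   * invisible to IOException-based retry loops) into UnknownHostException,
   * which callers such as DFSInputStream/DFSOutputStream already retry.
   */
  public static void connect(SocketChannel channel, SocketAddress endpoint) throws IOException {
    try {
      channel.connect(endpoint);
    } catch (UnresolvedAddressException e) {
      InetSocketAddress addr = (endpoint instanceof InetSocketAddress)
          ? (InetSocketAddress) endpoint : null;
      String host = (addr != null) ? addr.getHostString() : endpoint.toString();
      UnknownHostException uhe = new UnknownHostException("Cannot resolve host: " + host);
      uhe.initCause(e);
      throw uhe; // IOException subtype, so existing retry logic applies
    }
  }

  public static void main(String[] args) throws IOException {
    try (SocketChannel ch = SocketChannel.open()) {
      // An unresolvable name yields an unresolved InetSocketAddress, which
      // the wrapper surfaces as a retry-able UnknownHostException.
      connect(ch, new InetSocketAddress("no-such-host.invalid", 8020));
    } catch (UnknownHostException e) {
      System.out.println("retry-able failure: " + e.getMessage());
    }
  }
}
```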
[GitHub] [hadoop] bilaharith commented on a change in pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
bilaharith commented on a change in pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#discussion_r431001729

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java

Review comment:
  Nit: This new line can be removed
[GitHub] [hadoop] bilaharith commented on a change in pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
bilaharith commented on a change in pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#discussion_r431000224

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java

Review comment:
  nit: I could see it from the javadocs, still naming it something like authTypeForAccount could improve readability
[GitHub] [hadoop] hadoop-yetus commented on pull request #2035: HDFS-15374. Add documentation for command `fedbalance`.
hadoop-yetus commented on pull request #2035:
URL: https://github.com/apache/hadoop/pull/2035#issuecomment-634496678

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 32s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 10s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 6s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 36s | trunk passed |
| +1 :green_heart: | shadedclient | 37m 55s | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 25s | the patch passed |
| +1 :green_heart: | mvnsite | 2m 23s | the patch passed |
| -1 :x: | whitespace | 0m 0s | The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 13m 50s | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 59m 48s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2035/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2035 |
| Optional Tests | dupname asflicense mvnsite xml markdownlint |
| uname | Linux c9caa8420966 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c30c23cb665 |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-2035/1/artifact/out/whitespace-eol.txt |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-tools U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2035/1/console |
| versions | git=2.17.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2034: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing
hadoop-yetus commented on pull request #2034:
URL: https://github.com/apache/hadoop/pull/2034#issuecomment-634482497

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 27m 46s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 22m 5s | trunk passed |
| +1 :green_heart: | compile | 0m 29s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 22s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 32s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 26s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | trunk passed |
| +0 :ok: | spotbugs | 0m 50s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 48s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 26s | the patch passed |
| +1 :green_heart: | compile | 0m 24s | the patch passed |
| +1 :green_heart: | javac | 0m 24s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 14s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 25s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 21s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 22s | the patch passed |
| +1 :green_heart: | findbugs | 0m 55s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 19s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 90m 21s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2034/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2034 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 12ff492686ae 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c30c23cb665 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2034/2/testReport/ |
| Max. process+thread count | 308 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2034/2/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.