[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497445#comment-16497445
 ] 

genericqa commented on HADOOP-15407:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 7s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
27s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m  
2s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
14s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  8s{color} | {color:orange} root: The patch generated 194 new + 0 unchanged 
- 0 fixed = 194 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-tools/hadoop-azure generated 17 new + 0 
unchanged - 0 fixed = 17 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 24s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  Hard coded reference to an absolute pathname in org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getHomeDirectory()  At AzureBlobFileSystem.java:[line 435] |
|  |  Should org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem$FileSystemOperation be a _static_ inner class?  At AzureBlobFileSystem.java:[lines 661-671] |
|  |  org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes.ABFS_SCHEMES is a mutable array  At FileSystemUriSchemes.java:[line 32] |
|  |  org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException.toString() may return null  At AzureBlobFileSystemException.java:[line 41] |
|  |  Format string should use %n rather than \n in org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureServiceErrorResponseException.formatMessage(AbfsHttpOperation)  At AzureServiceErrorResponseException.java:[line 75] |
|  |  Redundant nullcheck of stream, which is known to be non-null in org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processResponse(byte[], int, int)  Redundant null check at AbfsHttpOperation.java:[line 267] |
|  |  Boxing/unboxing to parse a primitive in org.apache.hadoop.fs.azurebfs.services.AbfsHttpServiceImpl.openFileForRead(AzureBlobFileSystem, Path, FileSystem$Statistics)  At AbfsHttpServiceImpl.java:[line 259] |
|  |  Boxing/unboxing to parse a primitive in org.apache.hadoop.fs.azurebfs.services.AbfsHttpServiceImpl.parseContentLength(String)  At AbfsHttpServiceImpl.java:[line 527] |
|  |  org.apache.hadoop.fs.azurebfs.services.AbfsHttpServiceImpl.convertXmsPropertiesToCommaSeparatedString(Hashtable) concatenates strings using + in a loop  At AbfsHttpServiceImpl.java:[line 561] |
|  |  org.apache.hadoop.fs.azurebfs.services.AbfsHttpServiceImpl.convertXmsPropertiesToCommaSeparatedString(Hashtable) makes inefficient use of keySet iterator instead of entrySet iterator (see the illustrative sketch after this table)  At AbfsHttpServiceImpl.java:[line 550] |
|  |  org.apache.hadoop.fs.azurebfs.services.AbfsHttpServiceImpl$VersionedFileStatus doesn't override org.apache.hadoop.fs.FileStatus.equals(Object)  At AbfsHttpServiceImpl.java:[line 1] |
|  |  Should org.apache.hadoop.fs.azurebfs.services.AbfsHttpServiceImpl$VersionedFileStatus be a _static_ inner class?  At AbfsHttpServiceImpl.java:[lines 634-642] |
|  |  Dead store to op in org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream$1.call()  At AbfsOutputStream.java:[line 239] |
|  |  Should org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream$WriteOperation be a _static_ inner class?  At AbfsOutputStream.java:[lines 320-333] |
|  |  org.apache.hadoop.fs.azurebfs.services.ConfigurationServiceImpl.validateStorageAccountKeys() makes inefficient use of keySet iterator instead of entrySet iterator  At ConfigurationServiceImpl.java:[line 243] |
|  |  Unread field:ReadBufferWorker.java:[line 31] |
|  |  Useless object stored in variable newValues of method org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials.parseQueryString(String)  At SharedKeyCredentials.java:[line 378] |
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
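
To illustrate the two "keySet iterator instead of entrySet iterator" findings 
above, here is a generic sketch of the flagged pattern and the usual fix. This 
is illustrative code only, not code from the patch.

{code:java}
// Generic illustration of the keySet-vs-entrySet finding; not code from the patch.
import java.util.HashMap;
import java.util.Map;

public class MapIterationExample {
  public static void main(String[] args) {
    Map<String, String> properties = new HashMap<>();
    properties.put("key1", "value1");

    // Flagged pattern: iterating keySet() and calling get(key) does an
    // extra map lookup per entry.
    for (String key : properties.keySet()) {
      System.out.println(key + "=" + properties.get(key));
    }

    // Preferred: iterate entrySet() and read the key and value directly.
    for (Map.Entry<String, String> entry : properties.entrySet()) {
      System.out.println(entry.getKey() + "=" + entry.getValue());
    }
  }
}
{code}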
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15407 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926002/HADOOP-15407-HADOOP-15407.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 4d6eef22b22b 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 51ce02b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14707/artifact/out/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14707/artifact/out/whitespace-tabs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14707/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14707/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14707/testReport/ |
| Max. process+thread count | 1355 (vs. ulimit of 10000) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-azure . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14707/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support Windows Azure Storage - Blob file system in Hadoop
> ----------------------------------------------------------
>
>                 Key: HADOOP-15407
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15407
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs/azure
>    Affects Versions: 3.2.0
>            Reporter: Esfandiar Manii
>            Assignee: Esfandiar Manii
>            Priority: Major
>         Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<filesystem>@<account>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
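>  As a rough illustrative sketch only (not code from the patch; the file 
> system name "myfs" and account name "myaccount" below are hypothetical), a 
> client would address such a path through the standard Hadoop FileSystem API 
> along these lines:
> {code:java}
> // Hypothetical names; credential configuration for the account is omitted.
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class AbfsListingSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // abfs:// goes over HTTP, abfss:// over HTTPS, as described above.
>     FileSystem fs = FileSystem.get(
>         URI.create("abfs://myfs@myaccount.dfs.core.windows.net/"), conf);
>     for (FileStatus status : fs.listStatus(new Path("/data"))) {
>       System.out.println(status.getPath());
>     }
>   }
> }
> {code}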
>  {color:#212121}ABFS is intended as a replacement for WASB. WASB is not 
> deprecated but is in pure maintenance mode, and customers should upgrade to 
> ABFS once it reaches General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big Data 
> and Analytics workloads, by allowing higher limits on storage accounts{color}
>  {color:#212121}· Removing any ramp-up time caused by Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
>  {color:#212121}    · This avoids the need for temporary/intermediate files, 
> which add cost (and framework complexity around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on single 
> files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features customers 
> are familiar with and expect, and gaining the benefits of future Blob 
> features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
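>  As an illustration only, and assuming the ABFS counters surface through the 
> generic FileSystem#getStorageStatistics() API available in Hadoop 2.8+/3.x, a 
> sketch for dumping such per-instance counters (not specific to ABFS) could 
> look like the following:
> {code:java}
> // Generic sketch, not ABFS-specific: dump a FileSystem's long-valued counters.
> import java.util.Iterator;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.StorageStatistics;
>
> public final class StatsDumpSketch {
>   public static void dump(FileSystem fs) {
>     StorageStatistics stats = fs.getStorageStatistics();
>     Iterator<StorageStatistics.LongStatistic> it = stats.getLongStatistics();
>     while (it.hasNext()) {
>       StorageStatistics.LongStatistic s = it.next();
>       System.out.println(s.getName() + " = " + s.getValue());
>     }
>   }
> }
> {code}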
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures, including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver can run either sequentially or in 
> parallel, the latter to reduce testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight, and Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used, but not as the default file system.) A 
> variety of customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run for scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the code tested and used in our 
> production environment.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
