[ 
https://issues.apache.org/jira/browse/HDFS-17316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815466#comment-17815466
 ] 

ASF GitHub Bot commented on HDFS-17316:
---------------------------------------

hadoop-yetus commented on PR #6535:
URL: https://github.com/apache/hadoop/pull/6535#issuecomment-1933237018

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 13 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m  7s |  |  trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 23s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   2m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  12m  9s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 33s |  |  trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 50s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |  16m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 51s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  17m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 47s |  |  the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   7m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 16s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   7m 16s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   6m 38s |  |  the patch passed  |
   | -1 :x: |  shellcheck  |   0m  0s | [/results-shellcheck.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/artifact/out/results-shellcheck.txt) |  The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 57s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |  16m 31s | [/new-spotbugs-root.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/artifact/out/new-spotbugs-root.html) |  root generated 19 new + 0 unchanged - 0 fixed = 19 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  19m 16s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 628m 32s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/artifact/out/patch-unit-root.txt) |  root in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 48s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/artifact/out/results-asflicense.txt) |  The patch generated 18 ASF License warnings.  |
   |  |   | 826m 36s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:root |
   |  |  Random object created and used only once in org.apache.hadoop.compat.AbstractHdfsCompatCase.getUniquePath(Path)  At AbstractHdfsCompatCase.java:only once in org.apache.hadoop.compat.AbstractHdfsCompatCase.getUniquePath(Path)  At AbstractHdfsCompatCase.java:[line 60] |
   |  |  org.apache.hadoop.compat.HdfsCompatEnvironment.getStoragePolicyNames() may expose internal representation by returning HdfsCompatEnvironment.defaultStoragePolicyNames  At HdfsCompatEnvironment.java:by returning HdfsCompatEnvironment.defaultStoragePolicyNames  At HdfsCompatEnvironment.java:[line 115] |
   |  |  Call to method of static java.text.DateFormat in org.apache.hadoop.compat.HdfsCompatEnvironment.init()  At HdfsCompatEnvironment.java:java.text.DateFormat in org.apache.hadoop.compat.HdfsCompatEnvironment.init()  At HdfsCompatEnvironment.java:[line 64] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.HdfsCompatShellScope.readLines(File):in org.apache.hadoop.compat.HdfsCompatShellScope.readLines(File): new java.io.FileReader(File)  At HdfsCompatShellScope.java:[line 356] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.HdfsCompatibility.printReport(HdfsCompatReport, OutputStream):in org.apache.hadoop.compat.HdfsCompatibility.printReport(HdfsCompatReport, OutputStream): new java.io.OutputStreamWriter(OutputStream)  At HdfsCompatibility.java:[line 209] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.HdfsCompatibility.printReport(HdfsCompatReport, OutputStream):in org.apache.hadoop.compat.HdfsCompatibility.printReport(HdfsCompatReport, OutputStream): String.getBytes()  At HdfsCompatibility.java:[line 208] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.function.HdfsCompatCreate.createFile():in org.apache.hadoop.compat.cases.function.HdfsCompatCreate.createFile(): String.getBytes()  At HdfsCompatCreate.java:[line 100] |
   |  |  Random object created and used only once in org.apache.hadoop.compat.cases.function.HdfsCompatLocal.prepare()  At HdfsCompatLocal.java:only once in org.apache.hadoop.compat.cases.function.HdfsCompatLocal.prepare()  At HdfsCompatLocal.java:[line 54] |
   |  |  Random object created and used only once in org.apache.hadoop.compat.cases.function.HdfsCompatTpcds.create()  At HdfsCompatTpcds.java:only once in org.apache.hadoop.compat.cases.function.HdfsCompatTpcds.create()  At HdfsCompatTpcds.java:[line 56] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.getXAttr():in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.getXAttr(): String.getBytes()  At HdfsCompatXAttr.java:[line 57] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.getXAttrs():in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.getXAttrs(): String.getBytes()  At HdfsCompatXAttr.java:[line 65] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.listXAttrs():in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.listXAttrs(): String.getBytes()  At HdfsCompatXAttr.java:[line 77] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.removeXAttr():in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.removeXAttr(): String.getBytes()  At HdfsCompatXAttr.java:[line 87] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.setXAttr():in org.apache.hadoop.compat.cases.function.HdfsCompatXAttr.setXAttr(): String.getBytes()  At HdfsCompatXAttr.java:[line 48] |
   |  |  Found reliance on default encoding in org.apache.hadoop.compat.cases.implement.HdfsCompatFileSystemImpl.lambda$setXAttr$77():in org.apache.hadoop.compat.cases.implement.HdfsCompatFileSystemImpl.lambda$setXAttr$77(): String.getBytes()  At HdfsCompatFileSystemImpl.java:[line 611] |
   |  |  org.apache.hadoop.compat.suites.HdfsCompatSuiteForAll.getApiCases() may expose internal representation by returning HdfsCompatSuiteForAll.API_CASES  At HdfsCompatSuiteForAll.java:by returning HdfsCompatSuiteForAll.API_CASES  At HdfsCompatSuiteForAll.java:[line 61] |
   |  |  org.apache.hadoop.compat.suites.HdfsCompatSuiteForAll.getShellCases() may expose internal representation by returning HdfsCompatSuiteForAll.SHELL_CASES  At HdfsCompatSuiteForAll.java:by returning HdfsCompatSuiteForAll.SHELL_CASES  At HdfsCompatSuiteForAll.java:[line 66] |
   |  |  org.apache.hadoop.compat.suites.HdfsCompatSuiteForShell.getShellCases() may expose internal representation by returning HdfsCompatSuiteForShell.SHELL_CASES  At HdfsCompatSuiteForShell.java:by returning HdfsCompatSuiteForShell.SHELL_CASES  At HdfsCompatSuiteForShell.java:[line 50] |
   |  |  org.apache.hadoop.compat.suites.HdfsCompatSuiteForTpcds.getApiCases() may expose internal representation by returning HdfsCompatSuiteForTpcds.API_CASES  At HdfsCompatSuiteForTpcds.java:by returning HdfsCompatSuiteForTpcds.API_CASES  At HdfsCompatSuiteForTpcds.java:[line 37] |
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6535 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint compile javac javadoc mvninstall unit shadedclient xmllint shellcheck shelldocs spotbugs checkstyle |
   | uname | Linux 3c19973f82c5 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d822ff5dae91e367d3bcae21c0e8bb03d38a8ca6 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/testReport/ |
   | Max. process+thread count | 4784 (vs. ulimit of 5500) |
   | modules | C: . hadoop-compat-bench U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6535/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
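
For context on the 19 new SpotBugs findings listed above, the three recurring patterns (reliance on the platform default encoding, exposing an internal array, and a single-use Random) are usually resolved along the lines of the following sketch. The class, method, and field names here are illustrative only and are not taken from the patch under review.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: names do not come from the patch under review.
class SpotBugsPatternExamples {

  // "Found reliance on default encoding ... String.getBytes()":
  // pass an explicit charset instead of the platform default.
  static byte[] toBytes(String value) {
    return value.getBytes(StandardCharsets.UTF_8);
  }

  // "Found reliance on default encoding ... new java.io.FileReader(File)":
  // wrap a FileInputStream in an InputStreamReader with an explicit charset.
  static BufferedReader open(File file) throws IOException {
    return new BufferedReader(
        new InputStreamReader(new FileInputStream(file), StandardCharsets.UTF_8));
  }

  // "may expose internal representation by returning ...":
  // return a defensive copy rather than the internal array itself.
  private static final String[] POLICY_NAMES = {"HOT", "COLD", "ALL_SSD"};

  static String[] getStoragePolicyNames() {
    return Arrays.copyOf(POLICY_NAMES, POLICY_NAMES.length);
  }

  // "Random object created and used only once":
  // reuse ThreadLocalRandom instead of allocating a new Random per call.
  static long uniqueSuffix() {
    return ThreadLocalRandom.current().nextLong();
  }
}
```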
   
   




> Compatibility Benchmark over HCFS Implementations
> -------------------------------------------------
>
>                 Key: HDFS-17316
>                 URL: https://issues.apache.org/jira/browse/HDFS-17316
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Han Liu
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS Compatibility Benchmark Design.pdf
>
>
> *Background:* Hadoop-Compatible File System (HCFS) is a core concept in the 
> big data storage ecosystem. It provides unified interfaces and generally clear 
> semantics, and has become the de facto standard for industry storage systems 
> to follow and conform with. There have been a series of HCFS implementations 
> in Hadoop, such as S3AFileSystem for Amazon's S3 Object Store, WASB for 
> Microsoft's Azure Blob Storage, and the OSS connector for Alibaba Cloud Object 
> Storage, with more from storage service providers on their own.
> *Problems:* However, as indicated by introduction.md, there is no formal 
> suite to assess the compatibility of a file system across all such HCFS 
> implementations. Thus, whether the functionality is well implemented and 
> meets the core compatibility expectations mainly relies on the service 
> provider's own report. Meanwhile, Hadoop keeps evolving, and new features are 
> continuously contributed to HCFS interfaces for existing implementations to 
> follow and adopt; Hadoop therefore also needs a tool to quickly assess 
> whether these features are supported by a specific HCFS implementation. 
> Besides, the well-known hadoop command line tool, or hdfs shell, is used to 
> directly interact with an HCFS storage system, where most commands correspond 
> to specific HCFS interfaces and work well. Still, there are cases that are 
> more involved and may not work, such as the expunge command. To check such 
> commands for an HCFS, we also need an approach to figure them out.
> *Proposal:* Accordingly, we propose to define a formal HCFS compatibility 
> benchmark and provide a corresponding tool to perform the compatibility 
> assessment for an HCFS storage system. The benchmark and tool should cover 
> both HCFS interfaces and hdfs shell commands. Different scenarios require 
> different kinds of compatibility, so we could define different suites in the 
> benchmark (a minimal illustrative case is sketched after this description).
> *Benefits:* We intend the benchmark and tool to be useful for both storage 
> providers and storage users. For end users, it can be used to evaluate the 
> compatibility level and determine whether the storage system in question is 
> suitable for the required scenarios. For storage providers, it helps to 
> quickly generate an objective and reliable report about the core functions of 
> the storage service. For instance, if an HCFS scores 100% on a suite named 
> 'tpcds', it demonstrates that all functions needed by a tpcds program are 
> well supported. It is also a guide indicating how storage service 
> capabilities can map to HCFS interfaces, such as storage class on S3.
> Any thoughts? Comments and feedback are most welcome. Thanks in advance.
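
As a rough illustration of the proposal above, here is a minimal sketch of what a single compatibility case over the Hadoop FileSystem API might look like. This is an assumption-laden sketch: the class name, paths, and checks are purely illustrative and are not taken from the hadoop-compat-bench code in the pull request.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Illustrative compatibility check: exercises a few core HCFS operations
// (mkdirs, create, xattr) against whichever FileSystem implementation
// fs.defaultFS points at. Not the actual benchmark code.
public class IllustrativeHcfsCase {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);          // resolves fs.defaultFS
    Path dir = new Path("/tmp/hcfs-compat-check"); // hypothetical scratch path
    Path file = new Path(dir, "probe.txt");

    // mkdirs + create are baseline operations every HCFS should support.
    fs.mkdirs(dir);
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("probe".getBytes(StandardCharsets.UTF_8));
    }

    // Optional features such as extended attributes may be unsupported;
    // a suite would record that as "not compatible" for this case rather
    // than failing the whole run.
    try {
      fs.setXAttr(file, "user.probe", "1".getBytes(StandardCharsets.UTF_8));
      System.out.println("xattr supported");
    } catch (UnsupportedOperationException e) {
      System.out.println("xattr not supported");
    }

    fs.delete(dir, true);
  }
}
```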



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
