[ https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740212#comment-17740212 ]
ASF GitHub Bot commented on HDFS-17069:
---------------------------------------
hadoop-yetus commented on PR #5808:
URL: https://github.com/apache/hadoop/pull/5808#issuecomment-1621765572
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 50m 46s | | trunk passed |
| +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | mvnsite | 1m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | shadedclient | 96m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 19s | | the patch passed |
| +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 1m 23s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 1m 24s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 31s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | shadedclient | 37m 39s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 215m 2s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. |
| | | 360m 16s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5808/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5808 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
| uname | Linux b090c96bf77c 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e6e6ee0c1fdf2e0b99cf40973d0740fb93eea1d3 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5808/1/testReport/ |
| Max. process+thread count | 3115 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5808/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -------------------------------------------------------------------------
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: dfs, documentation
> Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
> Reporter: ECFuzz
> Priority: Major
> Labels: pull-request-available
>
> My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation mode.
> core-site.xml is as below.
> {code:java}
> <configuration>
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://localhost:9000</value>
> </property>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/home/hadoop/Mutil_Component/tmp</value>
> </property>
>
> </configuration>{code}
> hdfs-site.xml is as below.
> {code:java}
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> <property>
> <name>dfs.blocksize</name>
> <value>128k</value>
> </property>
>
> </configuration>{code}
> Then format the NameNode and start HDFS.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs namenode -format
> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(many info)
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ sbin/start-dfs.sh
> Starting namenodes on [localhost]
> Starting datanodes
> Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
> Finally, use dfs to put a file. Then I get an error message saying that the
> specified block size of 128k (131072 bytes) is less than the configured minimum of 1M (1048576 bytes).
>
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -mkdir -p /user/hadoop
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -mkdir input
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
> put: Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
> {code}
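> As a side note for anyone reproducing this, here is a minimal workaround sketch, assuming the check may be relaxed for local testing via dfs.namenode.fs-limits.min-block-size (the property named in the error above, whose configured minimum is 1048576 here); the 131072 value simply matches the 128k from this report and is illustrative, not a recommendation:
> {code:java}
> <!-- hdfs-site.xml: lower the NameNode minimum so a 128k block size passes the check. -->
> <property>
>   <name>dfs.namenode.fs-limits.min-block-size</name>
>   <value>131072</value>
> </property>
> <property>
>   <name>dfs.blocksize</name>
>   <value>128k</value>
> </property>{code}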
> But I find that, per the documentation in hdfs-default.xml, dfs.blocksize can
> be set to 128k and other suffixed values:
> {code:java}
> The default block size for new files, in bytes. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in bytes (such as 134217728 for 128 MB).{code}
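> For example, a suffixed value that does satisfy the minimum (illustrative only; 128m equals 134217728 bytes, the value the documentation quote above cites for 128 MB):
> {code:java}
> <property>
>   <name>dfs.blocksize</name>
>   <value>128m</value>
> </property>{code}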
> So, is there an issue with the documentation here? Or should users be advised
> to set this configuration to a value no smaller than 1M?