[ https://issues.apache.org/jira/browse/HDFS-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ECFuzz updated HDFS-17238:
--------------------------
    Description: 
My Hadoop version is 3.3.6, and I am running it in Pseudo-Distributed Operation (a single-node cluster).

My core-site.xml is as follows:
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>{code}
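(With this configuration, NameNode metadata and DataNode blocks both live under {{hadoop.tmp.dir}}, since {{dfs.namenode.name.dir}} and {{dfs.datanode.data.dir}} default to {{file://${hadoop.tmp.dir}/dfs/name}} and {{file://${hadoop.tmp.dir}/dfs/data}}.)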
My hdfs-site.xml is as follows:
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>1342177280000</value>
    </property>
</configuration>{code}
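For scale: 1342177280000 bytes is exactly 10,000 times the default {{dfs.blocksize}} of 134217728 (128 MiB), i.e. roughly 1.22 TiB. Note that {{dfs.blocksize}} also accepts suffixed values such as {{128m}} or {{1g}}.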
Then format the NameNode and start HDFS.
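
For reproducibility, the steps below are a sketch of the standard single-node setup commands, run from the Hadoop installation root; the final {{put}} is an illustrative write attempt (paths are mine), not copied from the original report.
{code:bash}
# Format the NameNode (wipes any existing metadata under hadoop.tmp.dir)
bin/hdfs namenode -format

# Start the NameNode, DataNode, and SecondaryNameNode daemons
sbin/start-dfs.sh

# With dfs.blocksize set as above, a client write such as this one
# is expected to fail per the issue summary
bin/hdfs dfs -put etc/hadoop/core-site.xml /
{code}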

> Setting the value of "dfs.blocksize" too large will cause HDFS to be unable 
> to write to files
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17238
>                 URL: https://issues.apache.org/jira/browse/HDFS-17238
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.3.6
>            Reporter: ECFuzz
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
