[ 
https://issues.apache.org/jira/browse/HDFS-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16653:
--------------------------
    Description: 
 
{code:java}
<property>
  <name>dfs.client.mmap.cache.size</name>
  <value>256</value>
  <description>
    When zero-copy reads are used, the DFSClient keeps a cache of recently used
    memory mapped regions.  This parameter controls the maximum number of
    entries that we will keep in that cache.
    The larger this number is, the more file descriptors we will potentially
    use for memory-mapped files.  mmaped files also use virtual address space.
    You may need to increase your ulimit virtual address space limits before
    increasing the client mmap cache size.
    
    Note that you can still do zero-copy reads when this size is set to 0.
  </description>
</property>
{code}
When the configuration item "dfs.client.mmap.cache.size" is set to a negative 
number, every operation option provided by hdfs dfsadmin -safemode (enter, 
leave, get, wait and forceExit) fails: the terminal reports "safemode: null" 
and no exception is thrown.

In summary, I think we need to improve the validation of this configuration 
item: add an error message to the Precondition check on maxEvictableMmapedSize 
(the field that holds "dfs.client.mmap.cache.size"), so that a clear 
indication is given when the configuration is abnormal. That would let users 
diagnose the problem promptly and reduce the impact on safe mode related 
operations.

The details are as follows.

The constructor of the ShortCircuitCache class in ShortCircuitCache.java 
already uses Preconditions.checkArgument() to verify that the configured value 
is greater than or equal to zero. So when the value is negative, the creation 
of the ShortCircuitCache object in ClientContext.java fails.

However, because that Preconditions.checkArgument() call is given no error 
message, the hdfs dfsadmin script only shows the following in the terminal:
{code:java}
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode leave
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode enter
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode get
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode forceExit
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
{code}
Neither the HDFS logs nor the terminal show the underlying exception.
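For illustration, the "null" text comes from the no-message form of checkArgument(): the IllegalArgumentException it throws carries a null message, which dfsadmin then prints verbatim. Below is a minimal, self-contained sketch of that behavior, mimicking Guava's Preconditions with plain JDK code; the class and method names here are hypothetical, not the actual Hadoop source:

```java
public class MmapCacheSizeCheckDemo {
    // Mimics Guava's no-message Preconditions.checkArgument(boolean):
    // the IllegalArgumentException it throws carries a null message.
    static void checkArgument(boolean expression) {
        if (!expression) {
            throw new IllegalArgumentException();
        }
    }

    // Validates a hypothetical dfs.client.mmap.cache.size value and returns
    // the observable text: "ok" on success, or the exception's message
    // rendered the way dfsadmin would print it.
    static String validate(int maxEvictableMmapedSize) {
        try {
            checkArgument(maxEvictableMmapedSize >= 0);
            return "ok";
        } catch (IllegalArgumentException e) {
            return String.valueOf(e.getMessage()); // "null" for the bare check
        }
    }

    public static void main(String[] args) {
        System.out.println("safemode: " + validate(-1)); // safemode: null
        System.out.println(validate(256));               // ok
    }
}
```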

Therefore, after adding an error message to the original 
Preconditions.checkArgument(), the cause of the failure can be seen directly:
{code:java}
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode leave
safemode: Invalid argument: dfs.client.mmap.cache.size must be greater than zero.
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
{code}
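A sketch of what the message-bearing check could look like, again using a JDK-only stand-in for Guava's two-argument overload; the exact message wording in the actual patch may differ:

```java
public class MmapCacheSizeCheckFixed {
    // Mimics Guava's Preconditions.checkArgument(boolean, Object):
    // the message is attached to the thrown IllegalArgumentException.
    static void checkArgument(boolean expression, Object errorMessage) {
        if (!expression) {
            throw new IllegalArgumentException(String.valueOf(errorMessage));
        }
    }

    // With a message on the check, a misconfigured value now yields a
    // diagnosable error instead of "null".
    static String validate(int maxEvictableMmapedSize) {
        try {
            checkArgument(maxEvictableMmapedSize >= 0,
                "dfs.client.mmap.cache.size must be greater than or equal to zero.");
            return "ok";
        } catch (IllegalArgumentException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println("safemode: " + validate(-1));
    }
}
```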
 

  was:
 
{code:java}
<property>
  <name>dfs.client.mmap.cache.size</name>
  <value>256</value>
  <description>
    When zero-copy reads are used, the DFSClient keeps a cache of recently used
    memory mapped regions.  This parameter controls the maximum number of
    entries that we will keep in that cache.
    The larger this number is, the more file descriptors we will potentially
    use for memory-mapped files.  mmaped files also use virtual address space.
    You may need to increase your ulimit virtual address space limits before
    increasing the client mmap cache size.
    
    Note that you can still do zero-copy reads when this size is set to 0.
  </description>
</property>
{code}
When the configuration item “dfs.client.mmap.cache.size” is set to a negative 
number, it will cause /hadoop/bin hdfs dfsadmin -safemode provides all the 
operation options including enter, leave, get, wait and forceExit are invalid, 
the terminal returns security mode is null and no exceptions are thrown.
{code:java}
hadoop@ljq1:~/hadoop-3.1.3-work/etc/hadoop$ hdfs dfsadmin -safemode leave
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit] {code}
In summary, I think we need to improve the check mechanism related to this 
configuration item, add maxEvictableMmapedSize that is 
"dfs.client.mmap.cache.size" related Precondition check suite error message,and 
give a clear indication when the configuration is abnormal in order to solve 
the problem in time and reduce the impact on the safe mode related operations.


> Safe mode related operations cannot be performed when 
> “dfs.client.mmap.cache.size” is set to a negative number
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16653
>                 URL: https://issues.apache.org/jira/browse/HDFS-16653
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: dfsadmin
>    Affects Versions: 3.1.3
>         Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
>            Reporter: ECFuzz
>            Assignee: ECFuzz
>            Priority: Major
>              Labels: pull-request-available



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
