Elek, Marton created HDFS-12718:
-----------------------------------

             Summary: Ozone: fix thread number calculation in CBlockManager
                 Key: HDFS-12718
                 URL: https://issues.apache.org/jira/browse/HDFS-12718
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
    Affects Versions: HDFS-7240
            Reporter: Elek, Marton


When starting the CBlock server or during the unit tests I got many IllegalArgumentExceptions:

{code}
testCliInfoVolume(org.apache.hadoop.cblock.TestCBlockCLI)  Time elapsed: 0.004 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: java.lang.IllegalArgumentException
        at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1307)
        at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1265)
        at org.apache.hadoop.cblock.storage.StorageManager.createVolumeContainers(StorageManager.java:212)
        at org.apache.hadoop.cblock.storage.StorageManager.createVolume(StorageManager.java:304)
        at org.apache.hadoop.cblock.CBlockManager.createVolume(CBlockManager.java:257)
        at org.apache.hadoop.cblock.protocolPB.CBlockServiceProtocolServerSideTranslatorPB.createVolume(CBlockServiceProtocolServerSideTranslatorPB.java:57)
        at org.apache.hadoop.cblock.protocol.proto.CBlockServiceProtocolProtos$CBlockServiceProtocolService$2.callBlockingMethod(CBlockServiceProtocolProtos.java:6056)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
        at org.apache.hadoop.ipc.Client.call(Client.java:1437)
        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy10.createVolume(Unknown Source)
        at org.apache.hadoop.cblock.client.CBlockServiceProtocolClientSideTranslatorPB.createVolume(CBlockServiceProtocolClientSideTranslatorPB.java:64)
        at org.apache.hadoop.cblock.client.CBlockVolumeClient.createVolume(CBlockVolumeClient.java:64)
        at org.apache.hadoop.cblock.cli.CBlockCli.createVolume(CBlockCli.java:239)
        at org.apache.hadoop.cblock.cli.CBlockCli.run(CBlockCli.java:173)
        at org.apache.hadoop.cblock.TestCBlockCLI.testCliInfoVolume(TestCBlockCLI.java:232)
{code}

The root cause is how CBlockManager creates its ThreadPoolExecutor:

{code}
    ThreadPoolExecutor executor = new ThreadPoolExecutor(numThreads,
        MAX_THREADS, 1, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(MAX_QUEUE_CAPACITY),
        new ThreadPoolExecutor.CallerRunsPolicy());
{code}

Here numThreads (the number of always-active core threads) comes from the config and defaults to 16, while MAX_THREADS is `Runtime.getRuntime().availableProcessors() * 2`.

My problem was that MAX_THREADS was lower than numThreads (as I have only 2 processors, shame on me), so the ThreadPoolExecutor constructor threw an IllegalArgumentException.
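The constructor precondition can be reproduced in isolation. The sketch below (class and method names are mine, and the queue size is illustrative) passes a core pool size of 16 against a maximum of 4, which is what a 2-processor machine produces with the defaults above:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeCheck {
    // Returns "ok" if the pool can be built, or the exception class name if not.
    static String tryBuild(int corePoolSize, int maxPoolSize) {
        try {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(corePoolSize,
                maxPoolSize, 1, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1024),
                new ThreadPoolExecutor.CallerRunsPolicy());
            executor.shutdown();
            return "ok";
        } catch (IllegalArgumentException e) {
            // The constructor rejects corePoolSize > maximumPoolSize.
            return "IllegalArgumentException";
        }
    }

    public static void main(String[] args) {
        // numThreads = 16 (config default), MAX_THREADS = 2 processors * 2 = 4
        System.out.println(tryBuild(16, 4));  // prints IllegalArgumentException
        System.out.println(tryBuild(16, 32)); // prints ok
    }
}
```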

In the fix I suggest:
 * Limit the maximum number of threads, not the number of always-active threads, as the maximum is usually the value that needs tuning.
 * Use the processor-dependent number as the number of core threads (but if numThreads is smaller, use that instead), so the core pool size can never exceed the maximum.
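A minimal sketch of that sizing, under my reading of the proposal (class, method names, and queue size are mine, not the actual patch): the configured value bounds the maximum pool size, and the processor-derived value, clamped to that maximum, becomes the core size:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    // Hypothetical helper: returns {corePoolSize, maxPoolSize}.
    static int[] poolSizes(int configuredThreads, int processors) {
        int maxPoolSize = configuredThreads;              // config bounds the maximum
        int corePoolSize = Math.min(processors * 2,       // processor-derived core size,
            maxPoolSize);                                 // clamped to the maximum
        return new int[]{corePoolSize, maxPoolSize};
    }

    static ThreadPoolExecutor build(int configuredThreads) {
        int[] sizes = poolSizes(configuredThreads,
            Runtime.getRuntime().availableProcessors());
        // corePoolSize <= maximumPoolSize holds by construction, so the
        // constructor can no longer throw IllegalArgumentException.
        return new ThreadPoolExecutor(sizes[0], sizes[1], 1, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1024),
            new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```

With 2 processors and the default config of 16 this yields a core size of 4 and a maximum of 16, instead of the invalid 16/4 pair that triggered the exception.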



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
