[ https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841461#comment-16841461 ]

Hudson commented on HDDS-1297:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16564 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16564/])
HDDS-1297. Fix IllegalArgumentException thrown with MiniOzoneCluster (elek: rev 
03ea8ea92e641a3fa2ce0cc1ef38022e0f6d8f20)
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestHddsServerUtils.java


> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> ------------------------------------------------------------------------
>
>                 Key: HDDS-1297
>                 URL: https://issues.apache.org/jira/browse/HDDS-1297
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>    Affects Versions: 0.3.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Yiqun Lin
>            Priority: Major
>             Fix For: 0.4.1
>
>         Attachments: HDDS-1297.001.patch, HDDS-1297.002.patch, 
> HDDS-1297.003.patch, HDDS-1297.004.patch, HDDS-1297.05.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> java.lang.IllegalArgumentException: 300000 is not within min = 500 or max = 
> 100000
>       at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>       at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>       at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.<init>(NodeStateManager.java:136)
>       at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:105)
>       at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>       at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:286)
>       at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:218)
>       at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>       at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>       at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>       at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>       at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>       at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>       at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
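>
> For context, a minimal sketch of the kind of range check that produces this error, using hypothetical names (the real logic lives in ServerUtils.sanitizeUserArgs and HddsServerUtil.getStaleNodeInterval): the configured stale-node interval (300000 ms, i.e. 5 minutes) is validated against bounds assumed here to be derived from the heartbeat interval, and an out-of-range value raises IllegalArgumentException with the message seen above.
> {code}
> // Hypothetical illustration of the failing validation path; class name,
> // method name, and bound derivation are assumptions, not the actual HDDS code.
> public final class IntervalCheckSketch {
>
>   /** Throws if value is outside [min, max], mirroring the reported message. */
>   static long sanitize(long value, long min, long max) {
>     if (value < min || value > max) {
>       throw new IllegalArgumentException(
>           value + " is not within min = " + min + " or max = " + max);
>     }
>     return value;
>   }
>
>   public static void main(String[] args) {
>     long staleNodeIntervalMs = 300_000L;  // 5 minutes, as configured
>     long heartbeatIntervalMs = 1_000L;    // assumed test heartbeat interval
>     long min = heartbeatIntervalMs / 2;   // 500 ms lower bound (assumed)
>     long max = heartbeatIntervalMs * 100; // 100000 ms upper bound (assumed)
>     // Reproduces: "300000 is not within min = 500 or max = 100000"
>     sanitize(staleNodeIntervalMs, min, max);
>   }
> }
> {code}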


