[jira] [Resolved] (HDDS-1292) Fix nightly run findbugs and checkstyle issues

2019-03-15 Thread Supratim Deka (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-1292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Supratim Deka resolved HDDS-1292.
-
Resolution: Duplicate

> Fix nightly run findbugs and checkstyle issues
> --
>
> Key: HDDS-1292
> URL: https://issues.apache.org/jira/browse/HDDS-1292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> [https://ci.anzix.net/job/ozone/3775/findbugs/]
>  
> https://ci.anzix.net/job/ozone/3775/checkstyle/






[jira] [Resolved] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread Bharat Viswanadham (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-1263.
--
Resolution: Fixed
Fix Version/s: 0.5.0

Thank you, [~vivekratnavel], for the contribution.

I have committed this to trunk.

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  # Create two containers
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.
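
The symptom points at an off-by-one in how the --start value is interpreted. A minimal, self-contained sketch of that hypothesis (the TreeMap stands in for SCM's container store; the names are illustrative, not the real CLI internals):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class ListStartSketch {
  public static void main(String[] args) {
    NavigableMap<Long, String> containers = new TreeMap<>();
    containers.put(1L, "container-1");
    containers.put(2L, "container-2");

    // Buggy behavior: --start is validated as > 0 but applied as an
    // exclusive lower bound, so container 1 can never appear in the output.
    System.out.println(listExclusive(containers, 1)); // [container-2]

    // One possible fix: treat --start as inclusive (or let 0 mean "from the
    // beginning"), which makes container 1 reachable again.
    System.out.println(listInclusive(containers, 1)); // [container-1, container-2]
  }

  static List<String> listExclusive(NavigableMap<Long, String> c, long start) {
    if (start <= 0) {
      throw new IllegalArgumentException(
          "Container ID should be a positive long. " + start);
    }
    return new ArrayList<>(c.tailMap(start, /* inclusive= */ false).values());
  }

  static List<String> listInclusive(NavigableMap<Long, String> c, long start) {
    return new ArrayList<>(c.tailMap(start, /* inclusive= */ true).values());
  }
}
{code}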






[jira] [Resolved] (HDDS-595) Add robot test for OM Delegation Token

2019-03-15 Thread Ajay Kumar (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar resolved HDDS-595.
-
Resolution: Won't Fix

> Add robot test for OM Delegation Token 
> ---
>
> Key: HDDS-595
> URL: https://issues.apache.org/jira/browse/HDDS-595
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>







[jira] [Resolved] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2019-03-15 Thread Ajay Kumar (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar resolved HDDS-600.
-
Resolution: Not A Problem

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>  Labels: app-compat, test-badlands
>
> Set up a Hadoop cluster where Ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> HDFS is also set up fine, as shown below:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run the MapReduce example job against Ozone (o3):
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ...{code}
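
The stack trace suggests the host:port authority of the o3 URI leaked into the volume name before validation. A rough sketch of DNS-style name checking in the spirit of HddsClientUtils#verifyResourceName (the exact rules and messages in the real method may differ):
{code:java}
import java.util.regex.Pattern;

public class ResourceNameCheck {
  // DNS-label-like names: lowercase alphanumerics, '.', '-', length 3-63.
  private static final Pattern VALID =
      Pattern.compile("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");

  static void verifyResourceName(String name) {
    for (char ch : name.toCharArray()) {
      if (!(Character.isLowerCase(ch) || Character.isDigit(ch)
          || ch == '.' || ch == '-')) {
        // Mirrors the error in the report: the ':' from "host:port" trips this.
        throw new IllegalArgumentException(
            "Bucket or Volume name has an unsupported character : " + ch);
      }
    }
    if (!VALID.matcher(name).matches()) {
      throw new IllegalArgumentException("Invalid resource name: " + name);
    }
  }

  public static void main(String[] args) {
    verifyResourceName("volume1");          // passes
    verifyResourceName("xx.xx.xx.xx:9889"); // throws: unsupported character ':'
  }
}
{code}
The name rules themselves work as intended, so the practical fix is on the client side (configuring the o3 filesystem so the authority is parsed as host and port rather than as part of the volume path), which is consistent with this being resolved as Not A Problem under the documentation component.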

[jira] [Resolved] (HDDS-859) Fix NPE ServerUtils#getOzoneMetaDirPath

2019-03-15 Thread Ajay Kumar (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar resolved HDDS-859.
-
Resolution: Not A Problem

> Fix NPE ServerUtils#getOzoneMetaDirPath
> ---
>
> Key: HDDS-859
> URL: https://issues.apache.org/jira/browse/HDDS-859
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: test-badlands
>
> This can be reproduced with "mvn test" under the hadoop-ozone project, but not 
> with an individual test run under IntelliJ.
>  
> {code:java}
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.33 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestOmUtils
> testNoOmDbDirConfigured(org.apache.hadoop.ozone.TestOmUtils)  Time elapsed: 
> 0.028 s  <<< FAILURE!
> java.lang.AssertionError:
>  
> Expected: an instance of java.lang.IllegalArgumentException
>      but:  is a java.lang.NullPointerException
> Stacktrace was: java.lang.NullPointerException
>         at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>         at 
> org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath(ServerUtils.java:130)
>         at org.apache.hadoop.ozone.OmUtils.getOmDbDir(OmUtils.java:141)
>         at 
> org.apache.hadoop.ozone.TestOmUtils.testNoOmDbDirConfigured(TestOmUtils.java:89)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  
> {code}
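
The mismatch is a bare precondition check surfacing as a NullPointerException where the test expects an IllegalArgumentException. A hedged sketch of the distinction (the resolution here was Not A Problem; this only illustrates why the assertion failed, and the method shape is simplified from ServerUtils#getOzoneMetaDirPath):
{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;

public final class MetaDirSketch {
  // Variant A: a bare checkNotNull throws NullPointerException, which is
  // what the failing assertion observed.
  static File withPrecondition(Configuration conf) {
    String dir = conf.get("ozone.metadata.dirs");
    com.google.common.base.Preconditions.checkNotNull(dir);
    return new File(dir);
  }

  // Variant B: an explicit check raises the IllegalArgumentException the
  // test expects, with an actionable message.
  static File withExplicitCheck(Configuration conf) {
    String dir = conf.get("ozone.metadata.dirs");
    if (dir == null || dir.isEmpty()) {
      throw new IllegalArgumentException(
          "ozone.metadata.dirs must be configured for OM metadata.");
    }
    return new File(dir);
  }
}
{code}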






[jira] [Created] (HDFS-14374) Expose total number of delegation tokens in AbstractDelegationTokenSecretManager

2019-03-15 Thread CR Hota (JIRA)
CR Hota created HDFS-14374:
--

 Summary: Expose total number of delegation tokens in 
AbstractDelegationTokenSecretManager
 Key: HDFS-14374
 URL: https://issues.apache.org/jira/browse/HDFS-14374
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: CR Hota
Assignee: CR Hota


AbstractDelegationTokenSecretManager should expose the total number of active 
delegation tokens, so that specific implementations can track it for observability.
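
A minimal sketch of the kind of accessor this implies, assuming it lands inside AbstractDelegationTokenSecretManager next to the existing currentTokens map (the actual method name in the eventual patch may differ):
{code:java}
// Inside AbstractDelegationTokenSecretManager (hypothetical accessor):
// currentTokens is the class's existing map of active delegation tokens,
// so its size is the count a metrics source (e.g. in the Router) could publish.
public synchronized int getCurrentTokensSize() {
  return currentTokens.size();
}
{code}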






[jira] [Created] (HDDS-1292) Fix nightly run findbugs and checkstyle issues

2019-03-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1292:


 Summary: Fix nightly run findbugs and checkstyle issues
 Key: HDDS-1292
 URL: https://issues.apache.org/jira/browse/HDDS-1292
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao


https://ci.anzix.net/job/ozone/3775/findbugs/






[jira] [Resolved] (HDDS-1138) OzoneManager should return the pipeline info of the allocated block along with block info

2019-03-15 Thread Xiaoyu Yao (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao resolved HDDS-1138.
--
Resolution: Fixed
Assignee: Xiaoyu Yao  (was: Mukul Kumar Singh)
Fix Version/s: 0.4.0

> OzoneManager should return the pipeline info of the allocated block along 
> with block info
> -
>
> Key: HDDS-1138
> URL: https://issues.apache.org/jira/browse/HDDS-1138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-1138.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, when a block is allocated from OM, the request is forwarded to 
> SCM. However, even though the pipeline information is already available to the 
> OM during block allocation, it is not passed through to the client.
> This optimization will help reduce the number of hops for the client by 
> saving one RPC round trip for each block allocated.
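
In other words, the client previously needed a second RPC to SCM to resolve the pipeline for each block; with this change OM fills the pipeline into the location it already returns. A simplified sketch (the interfaces are stand-ins, not the real OM/SCM protocol signatures):
{code:java}
interface Pipeline {}
interface KeyLocation { long containerId(); Pipeline pipeline(); }
interface OmClient { KeyLocation lookupKey(String volume, String bucket, String key); }
interface ScmClient { Pipeline getPipeline(long containerId); }

class BlockResolution {
  // Before HDDS-1138: two round trips per block.
  Pipeline before(OmClient om, ScmClient scm, String v, String b, String k) {
    KeyLocation loc = om.lookupKey(v, b, k);    // RPC 1: OM
    return scm.getPipeline(loc.containerId());  // RPC 2: SCM
  }

  // After HDDS-1138: OM embeds the pipeline it already knows,
  // saving the SCM round trip.
  Pipeline after(OmClient om, String v, String b, String k) {
    return om.lookupKey(v, b, k).pipeline();    // single RPC
  }
}
{code}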






[jira] [Created] (HDDS-1291) Set OmKeyArgs#refreshPipeline flag properly when client reads a stale pipeline

2019-03-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1291:


 Summary: Set OmKeyArgs#refreshPipeline flag properly when client 
reads a stale pipeline
 Key: HDDS-1291
 URL: https://issues.apache.org/jira/browse/HDDS-1291
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


After HDDS-1138, the OM client no longer talks to SCM directly to fetch the 
pipeline info. Instead, the pipeline info is returned as part of the keyLocation 
cached by OM.

In case the SCM pipeline has changed (e.g. it was closed), the client may get an 
invalid pipeline exception. In that case, the client needs to call getKeyLocation 
with OmKeyArgs#refreshPipeline = true to force OM to update its pipeline cache 
for this key.

An optimization could be to queue a background task that updates all the affected 
keyLocations when OM does a refreshPipeline (this part can be done in 0.5):
{code:java}
oldpipeline->newpipeline{code}
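
A hedged sketch of the retry behavior described above (the types are simplified stand-ins, not the real Ozone client classes):
{code:java}
class RefreshingReader {
  interface Om { Loc lookupKey(String key, boolean refreshPipeline); }
  interface Loc { void connect() throws InvalidPipelineException; }
  static class InvalidPipelineException extends Exception {}

  Loc read(Om om, String key) throws InvalidPipelineException {
    Loc loc = om.lookupKey(key, false);      // fast path: OM's cached pipeline
    try {
      loc.connect();
    } catch (InvalidPipelineException e) {   // e.g. the pipeline was closed
      loc = om.lookupKey(key, true);         // refreshPipeline=true: OM
      loc.connect();                         // re-resolves from SCM
    }
    return loc;
  }
}
{code}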
 






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1076/

[Mar 14, 2019 9:48:06 AM] (msingh) HDDS-1241. Update ozone to latest ratis 
snapshot build
[Mar 14, 2019 11:01:25 AM] (elek) HDDS-1247. Bump trunk ozone version to 0.5.0. 
Contributed by Elek,
[Mar 14, 2019 11:19:43 AM] (msingh) HDDS-1237. Fix test 
TestSecureContainerServer.testClientServerRatisGrpc.
[Mar 14, 2019 2:02:36 PM] (shashikant) HDDS-1257. Incorrect object because of 
mismatch in block lengths.
[Mar 14, 2019 7:39:00 PM] (ebadger) YARN-8376. Separate white list for 
docker.trusted.registries and
[Mar 14, 2019 7:41:52 PM] (bharat) HDDS-917. Expose NodeManagerMXBean as a 
MetricsSource. Contributed by
[Mar 15, 2019 12:21:06 AM] (bharat) HDDS-1265. ozone sh s3 getsecret throws 
Null Pointer Exception for




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setEvents(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:[line 159] 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setMetrics(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:[line 142] 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 
   Switch statement found in 
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregate(TimelineMetric,
 TimelineMetric) where default case is missing At 
FlowRunDocument.java:TimelineMetric) where default case is missing At 
FlowRunDocument.java:[lines 121-136] 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregateMetrics(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
FlowRunDocument.java:keySet iterator instead of entrySet iterator At 
FlowRunDocument.java:[line 103] 
   Possible doublecheck on 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader.client
 in new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration)
 At CosmosDBDocumentStoreReader.java:new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration)
 At CosmosDBDocumentStoreReader.java:[lines 73-75] 
   Possible doublecheck on 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter.client
 in new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration)
 At CosmosDBDocumentStoreWriter.java:new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration)
 At CosmosDBDocumentStoreWriter.java:[lines 66-68] 

FindBugs :

   module:hadoop-hdds/container-service 
   Unread field:KeyValueContainerCheck.java:[line 68] 
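
For reference, the keySet-vs-entrySet pattern flagged in the documentstore module looks like the following in isolation (illustrative types, not the timelineservice code itself):
{code:java}
import java.util.Map;

class IteratorStyles {
  // Flagged form: iterating keySet() and calling get(key) performs a second
  // hash lookup for every entry.
  long sumFlagged(Map<String, Long> metrics) {
    long total = 0;
    for (String key : metrics.keySet()) {
      total += metrics.get(key); // extra lookup per key
    }
    return total;
  }

  // Preferred form: entrySet() yields each mapping once.
  long sumPreferred(Map<String, Long> metrics) {
    long total = 0;
    for (Map.Entry<String, Long> e : metrics.entrySet()) {
      total += e.getValue();
    }
    return total;
  }
}
{code}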

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1076/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1076/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1076/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   

[jira] [Created] (HDDS-1290) ozone.log is not getting created in logs directory

2019-03-15 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-1290:


 Summary: ozone.log is not getting created in logs directory
 Key: HDDS-1290
 URL: https://issues.apache.org/jira/browse/HDDS-1290
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi


ozone.log is not getting created in the logs directory of the client or any other 
node of the Ozone cluster.

ozone version :

Source code repository git@github.com:hortonworks/ozone.git -r 
67b7c4fd071b3f557bdb54be2a266b8a611cbce6
Compiled by jenkins on 2019-03-06T22:02Z
Compiled with protoc 2.5.0
From source with checksum 65be9a337d178cd3855f5c5a2f111

Using HDDS 0.4.0.3.0.100.0-348
Source code repository git@github.com:hortonworks/ozone.git -r 
67b7c4fd071b3f557bdb54be2a266b8a611cbce6
Compiled by jenkins on 2019-03-06T22:01Z
Compiled with protoc 2.5.0
From source with checksum 324109cb3e8b188c1b89dc0b328c3a

[root@ctr-e139-1542663976389-86524-01-06 hdfs]# hadoop version
Hadoop 3.1.1.3.0.100.0-348
Source code repository git@github.com:hortonworks/hadoop.git -r 
484434b1c2480bdc9314a7ee1ade8a0f4db1758f
Compiled by jenkins on 2019-03-06T22:14Z
Compiled with protoc 2.5.0
From source with checksum ba6aad94c14256ef3ad8634e3b5086
This command was run using 
/usr/hdp/3.0.100.0-348/hadoop/hadoop-common-3.1.1.3.0.100.0-348.jar






[jira] [Created] (HDDS-1289) get Key failed on SCM restart

2019-03-15 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-1289:


 Summary: get Key failed on SCM restart
 Key: HDDS-1289
 URL: https://issues.apache.org/jira/browse/HDDS-1289
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi
 Attachments: hadoop-hdfs-scm-ctr-e139-1542663976389-86524-01-03.log

Seeing a ContainerNotFoundException in the SCM log when a get-key operation is 
tried after SCM restart.

scm.log:

[^hadoop-hdfs-scm-ctr-e139-1542663976389-86524-01-03.log]

 

 
{noformat}
2019-03-13 17:00:54,348 ERROR container.ContainerReportHandler 
(ContainerReportHandler.java:processContainerReplicas(173)) - Received 
container report for an unknown container 22 from datanode 
80f046cb-6fe2-4a05-bb67-9bf46f48723b{ip: 172.27.69.155, host: 
ctr-e139-1542663976389-86524-01-05.hwx.site} {} 
org.apache.hadoop.hdds.scm.container.ContainerNotFoundException: #22 at 
org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.checkIfContainerExist(ContainerStateMap.java:543)
 at 
org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.updateContainerReplica(ContainerStateMap.java:230)
 at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.updateContainerReplica(ContainerStateManager.java:565)
 at 
org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerReplica(SCMContainerManager.java:393)
 at 
org.apache.hadoop.hdds.scm.container.ReportHandlerHelper.processContainerReplica(ReportHandlerHelper.java:74)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:159)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:110)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:51)
 at 
org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748) 2019-03-13 17:00:54,349 ERROR 
container.ContainerReportHandler 
(ContainerReportHandler.java:processContainerReplicas(173)) - Received 
container report for an unknown container 23 from datanode 
80f046cb-6fe2-4a05-bb67-9bf46f48723b{ip: 172.27.69.155, host: 
ctr-e139-1542663976389-86524-01-05.hwx.site} {} 
org.apache.hadoop.hdds.scm.container.ContainerNotFoundException: #23 at 
org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.checkIfContainerExist(ContainerStateMap.java:543)
 at 
org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.updateContainerReplica(ContainerStateMap.java:230)
 at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.updateContainerReplica(ContainerStateManager.java:565)
 at 
org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerReplica(SCMContainerManager.java:393)
 at 
org.apache.hadoop.hdds.scm.container.ReportHandlerHelper.processContainerReplica(ReportHandlerHelper.java:74)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:159)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:110)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:51)
 at 
org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748) 2019-03-13 17:01:24,230 ERROR 
container.ContainerReportHandler 
(ContainerReportHandler.java:processContainerReplicas(173)) - Received 
container report for an unknown container 22 from datanode 
076fd0d8-ab5f-4fbe-ad10-b71a1ccb19bf{ip: 172.27.39.88, host: 
ctr-e139-1542663976389-86524-01-04.hwx.site} {} 
org.apache.hadoop.hdds.scm.container.ContainerNotFoundException: #22 at 
org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.checkIfContainerExist(ContainerStateMap.java:543)
 at 
org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.updateContainerReplica(ContainerStateMap.java:230)
 at 
org.apache.hadoop.hdds.scm.container.ContainerStateManager.updateContainerReplica(ContainerStateManager.java:565)
 at 
org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerReplica(SCMContainerManager.java:393)
 at 
org.apache.hadoop.hdds.scm.container.ReportHandlerHelper.processContainerReplica(ReportHandlerHelper.java:74)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:159)
 ...
{noformat}
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
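
The boxed-then-reboxed pattern flagged above, in isolation (illustrative code, not the actual ColumnRWHelper):
{code:java}
import java.util.Map;

class BoxingStyles {
  // Flagged form: longValue() unboxes the Long, and the map put immediately
  // reboxes the result back into a Long key.
  static void putFlagged(Map<Long, Object> results, Long timestamp, Object v) {
    results.put(timestamp.longValue(), v); // unbox + immediate rebox
  }

  // Preferred form: keep the existing Long box.
  static void putPreferred(Map<Long, Object> results, Long timestamp, Object v) {
    results.put(timestamp, v);
  }
}
{code}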

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/261/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [8.0K]