[jira] [Created] (HDFS-14400) Namenode ExpiredHeartbeats metric

2019-03-29 Thread Karthik Palanisamy (JIRA)
Karthik Palanisamy created HDFS-14400:
-

 Summary: Namenode ExpiredHeartbeats metric
 Key: HDFS-14400
 URL: https://issues.apache.org/jira/browse/HDFS-14400
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.2
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Noticed an incorrect value in the ExpiredHeartbeats metric under the NameNode JMX.

We increment the ExpiredHeartbeats count when a DataNode is marked dead, but we
never decrement it when the DataNode comes back alive.

{code}

{ "name" : "Hadoop:service=NameNode,name=FSNamesystem", "modelerType" : 
"FSNamesystem", "tag.Context" : "dfs", "tag.TotalSyncTimes" : "7 ", 
"tag.HAState" : "active", ... "ExpiredHeartbeats" : 2, ... }

{code}
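
A minimal sketch of the missing half of the bookkeeping, assuming a hypothetical helper class (illustrative only, not the actual FSNamesystem/HeartbeatManager code): the counter that is bumped when a heartbeat expires should also be decremented when that DataNode heartbeats or registers again.

{code}
// Illustrative sketch only; class and method names are hypothetical.
import java.util.concurrent.atomic.AtomicLong;

class ExpiredHeartbeatTracker {
  private final AtomicLong expiredHeartbeats = new AtomicLong();

  // Called when a DataNode's heartbeat times out and it is marked dead.
  void onHeartbeatExpired(String datanodeUuid) {
    expiredHeartbeats.incrementAndGet();   // this side exists today
  }

  // Called when a previously dead DataNode heartbeats/registers again.
  void onDatanodeAliveAgain(String datanodeUuid) {
    expiredHeartbeats.decrementAndGet();   // this side is missing today
  }

  // Value exposed as the ExpiredHeartbeats metric via JMX.
  long getExpiredHeartbeats() {
    return expiredHeartbeats.get();
  }
}
{code}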

 






[jira] [Created] (HDDS-1359) In OM HA getDelegation call should happen only on leader OM

2019-03-29 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1359:


 Summary: In OM HA getDelegation call should happen only on leader OM
 Key: HDDS-1359
 URL: https://issues.apache.org/jira/browse/HDDS-1359
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In OM HA, the getS3Secret call should happen only on the leader OM.

The reason is similar to initiateMultipartUpload. For more info, refer to HDDS-1319.
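
A rough sketch of the intended guard, under the assumption of a hypothetical leader-check interface (the real OM HA/Ratis API may look different):

{code}
// Hypothetical sketch only; LeaderCheck and NotLeaderException are illustrative
// placeholders, not the actual OM HA types.
interface LeaderCheck {
  boolean isLeader();
}

class NotLeaderException extends RuntimeException {
  NotLeaderException(String msg) { super(msg); }
}

class S3SecretEndpoint {
  private final LeaderCheck raftState;

  S3SecretEndpoint(LeaderCheck raftState) {
    this.raftState = raftState;
  }

  String getS3Secret(String kerberosId) {
    // Only the leader OM should serve (and persist) the secret so that the
    // write goes through the Ratis leader, as with initiateMultipartUpload.
    if (!raftState.isLeader()) {
      throw new NotLeaderException("getS3Secret must be handled by the leader OM");
    }
    return "secret-for-" + kerberosId;   // placeholder for the real lookup
  }
}
{code}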

 






[jira] [Created] (HDFS-14399) Backport HDFS-10536 to branch-2

2019-03-29 Thread Chao Sun (JIRA)
Chao Sun created HDFS-14399:
---

 Summary: Backport HDFS-10536 to branch-2
 Key: HDFS-14399
 URL: https://issues.apache.org/jira/browse/HDFS-14399
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chao Sun
Assignee: Chao Sun


As the multi-SBN feature is already backported to branch-2, this is a follow-up to 
backport HDFS-10536.








[jira] [Created] (HDDS-1358) Recon Server REST API not working as expected.

2019-03-29 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1358:
---

 Summary: Recon Server REST API not working as expected.
 Key: HDDS-1358
 URL: https://issues.apache.org/jira/browse/HDDS-1358
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


The Guice-Jetty integration used for the Recon Server API layer is not working as 
expected. This JIRA fixes that.
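
For reference, a minimal sketch of how Guice and embedded Jetty are typically wired together (a generic illustration, not Recon's actual wiring; the port and bindings are placeholders):

{code}
// Generic Guice + embedded Jetty wiring sketch, for illustration only.
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceFilter;
import com.google.inject.servlet.GuiceServletContextListener;
import com.google.inject.servlet.ServletModule;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;

import javax.servlet.DispatcherType;
import java.util.EnumSet;

public class GuiceJettyExample {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    ServletContextHandler context =
        new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
    context.setContextPath("/");

    // Every request must pass through GuiceFilter, otherwise the servlets
    // bound in the ServletModule are never reached.
    context.addFilter(GuiceFilter.class, "/*", EnumSet.allOf(DispatcherType.class));

    // The listener creates the injector that backs GuiceFilter.
    context.addEventListener(new GuiceServletContextListener() {
      @Override
      protected Injector getInjector() {
        return Guice.createInjector(new ServletModule() {
          @Override
          protected void configureServlets() {
            // serve("/api/*").with(SomeApiServlet.class);  // hypothetical binding
          }
        });
      }
    });

    // Jetty still needs a terminal servlet even when Guice handles the routing.
    context.addServlet(DefaultServlet.class, "/");

    server.setHandler(context);
    server.start();
    server.join();
  }
}
{code}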






[jira] [Reopened] (HDDS-1134) OzoneFileSystem#create should allocate at least one block for future writes.

2019-03-29 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reopened HDDS-1134:
---

Reopening issue as it was not fixed in HDDS-1300.

> OzoneFileSystem#create should allocate at least one block for future writes.
> ---
>
> Key: HDDS-1134
> URL: https://issues.apache.org/jira/browse/HDDS-1134
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-1134.001.patch
>
>
> While opening a new key, the OM should allocate at least one block for the 
> key in case the client is not sure about the number of blocks. However, for 
> OzoneFS users, if the key is being created for a directory, no blocks should 
> be allocated.
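
A tiny illustrative sketch of the allocation rule described above (names are hypothetical, not the actual OM code):

{code}
// Hypothetical sketch of the expected pre-allocation rule, not actual OM code.
class OpenKeyAllocationPolicy {
  /** How many blocks to pre-allocate when a key is opened. */
  static long blocksToPreallocate(boolean isDirectoryKey, long requestedBlocks) {
    if (isDirectoryKey) {
      return 0;                          // directory keys never need data blocks
    }
    // Even if the client did not ask for any blocks, reserve at least one so
    // the first write does not need an extra allocateBlock round trip.
    return Math.max(1, requestedBlocks);
  }
}
{code}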






[jira] [Created] (HDDS-1357) ozone s3 shell command has confusing subcommands

2019-03-29 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1357:
--

 Summary: ozone s3 shell command has confusing subcommands
 Key: HDDS-1357
 URL: https://issues.apache.org/jira/browse/HDDS-1357
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


Let's check the potential subcommands of ozone sh:

{code}
[hadoop@om-0 keytabs]$ ozone sh
Incomplete command
Usage: ozone sh [-hV] [--verbose] [-D=]... [COMMAND]
Shell for Ozone object store
  --verbose   More verbose output. Show the stack trace of the errors.
  -D, --set=

  -h, --help  Show this help message and exit.
  -V, --version   Print version information and exit.
Commands:
  volume, vol  Volume specific operations
  bucket   Bucket specific operations
  key  Key specific operations
  tokenToken specific operations
{code}

This is fine, but for ozone s3:

{code}
[hadoop@om-0 keytabs]$ ozone s3
Incomplete command
Usage: ozone s3 [-hV] [--verbose] [-D=]... [COMMAND]
Shell for S3 specific operations
  --verbose   More verbose output. Show the stack trace of the errors.
  -D, --set=

  -h, --help  Show this help message and exit.
  -V, --version   Print version information and exit.
Commands:
  getsecretReturns s3 secret for current user
  path Returns the ozone path for S3Bucket
  volume, vol  Volume specific operations
  bucket   Bucket specific operations
  key  Key specific operations
  tokenToken specific operations
{code}

This list should contain only the getsecret/path commands and not the 
volume/bucket/key subcommands.
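
A hedged picocli-style sketch of the intended command tree (handler class names are illustrative, not Ozone's actual classes): only the S3-specific handlers are registered under the s3 command, so the shared volume/bucket/key handlers no longer show up in its help.

{code}
// Illustrative picocli sketch; class names are placeholders.
import picocli.CommandLine;
import picocli.CommandLine.Command;

@Command(name = "s3",
    description = "Shell for S3 specific operations",
    subcommands = {
        GetSecretHandler.class,   // getsecret
        S3PathHandler.class       // path
        // volume/bucket/key handlers intentionally not registered here
    })
public class S3Shell implements Runnable {
  @Override
  public void run() {
    // With no subcommand given, just print the usage text.
    new CommandLine(this).usage(System.out);
  }
}

@Command(name = "getsecret", description = "Returns s3 secret for current user")
class GetSecretHandler implements Runnable {
  @Override public void run() { /* placeholder */ }
}

@Command(name = "path", description = "Returns the ozone path for S3Bucket")
class S3PathHandler implements Runnable {
  @Override public void run() { /* placeholder */ }
}
{code}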






[jira] [Created] (HDDS-1356) Wrong response code in s3g in case of an invalid access key

2019-03-29 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1356:
--

 Summary: Wrong response code in s3g in case of an invalid access 
key
 Key: HDDS-1356
 URL: https://issues.apache.org/jira/browse/HDDS-1356
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Reporter: Elek, Marton


In case of a wrong AWS credential, the s3g returns HTTP 500:

{code}
[hadoop@om-0 keytabs]$ aws s3api --endpoint=http://s3g-0.s3g:9878 create-bucket 
--bucket qwe

An error occurred (500) when calling the CreateBucket operation (reached max 
retries: 4): Internal Server Error
{code}

And it throws an exception on the server side:

{code}
s3g-0 s3g 3ff4582bec94fee02ae4babcd4294c5a1c46cf7a6f750bfd5de4e894e41663c5, 
signature=73ea5e939f47de1389e26624c91444d6b88fa70c64e5ee1e39e6804269736a99, 
awsAccessKeyId=scm/om-0.om.perf.svc.cluster.lo...@example.co
s3g-0 s3g at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
s3g-0 s3g at org.apache.hadoop.ipc.Client.call(Client.java:1457)
s3g-0 s3g at org.apache.hadoop.ipc.Client.call(Client.java:1367)
s3g-0 s3g at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
s3g-0 s3g at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
s3g-0 s3g at com.sun.proxy.$Proxy77.submitRequest(Unknown Source)
s3g-0 s3g at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown 
Source)
s3g-0 s3g at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
s3g-0 s3g at java.lang.reflect.Method.invoke(Method.java:498)
s3g-0 s3g at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
s3g-0 s3g at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
s3g-0 s3g at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
s3g-0 s3g at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
s3g-0 s3g at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
s3g-0 s3g at com.sun.proxy.$Proxy77.submitRequest(Unknown Source)
s3g-0 s3g at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
s3g-0 s3g at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
s3g-0 s3g at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
s3g-0 s3g at java.lang.reflect.Method.invoke(Method.java:498)
s3g-0 s3g at 
org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
s3g-0 s3g at com.sun.proxy.$Proxy77.submitRequest(Unknown Source)
s3g-0 s3g at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:284)
s3g-0 s3g at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1097)
s3g-0 s3g at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:219)
s3g-0 s3g at 
org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:148)
s3g-0 s3g at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
s3g-0 s3g at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
s3g-0 s3g at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
s3g-0 s3g at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
s3g-0 s3g at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
s3g-0 s3g at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:92)
s3g-0 s3g at 
org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:108)
s3g-0 s3g at 
org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:68)
s3g-0 s3g at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
s3g-0 s3g at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
s3g-0 s3g at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
s3g-0 s3g at java.lang.reflect.Method.invoke(Method.java:498)
s3g-0 s3g at 
org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
s3g-0 s3g ... 92 more
{code}
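
One possible direction, sketched as a generic JAX-RS exception mapper (the exception type and error body below are placeholders, not the actual s3g classes): translate the authentication failure into an S3-style error instead of letting it escape as HTTP 500.

{code}
// Hypothetical sketch: map an auth failure to an S3-style InvalidAccessKeyId
// response. The mapped exception type is a placeholder.
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class AccessKeyExceptionMapper
    implements ExceptionMapper<SecurityException> {

  @Override
  public Response toResponse(SecurityException e) {
    String body = "<Error>"
        + "<Code>InvalidAccessKeyId</Code>"
        + "<Message>The AWS Access Key Id you provided does not exist in our records.</Message>"
        + "</Error>";
    // S3 answers an invalid access key with 403, not 500.
    return Response.status(Response.Status.FORBIDDEN)
        .entity(body)
        .type(MediaType.APPLICATION_XML)
        .build();
  }
}
{code}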

The right response would be something like this:

{code}
aws s3api create-bucket --bucket qweqweqwe123123qwesdi

An error occurred (InvalidAccessKeyId) when calling 

[jira] [Created] (HDDS-1355) Only FQDN is accepted for OM rpc address in secure environment

2019-03-29 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1355:
--

 Summary: Only FQDN is accepted for OM rpc address in secure 
environment
 Key: HDDS-1355
 URL: https://issues.apache.org/jira/browse/HDDS-1355
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


While the SCM address can be a plain host name (relative to the current search 
domain), if the OM address is just a hostname and not an FQDN, an NPE is thrown:

{code}
OZONE-SITE.XML_ozone.om.address: "om-0.om"
OZONE-SITE.XML_ozone.scm.client.address: "scm-0.scm"
OZONE-SITE.XML_ozone.scm.names: "scm-0.scm"
{code} 

{code}
2019-03-29 14:37:52 ERROR OzoneManager:865 - Failed to start the OzoneManager.
java.lang.NullPointerException
at 
org.apache.hadoop.ozone.om.OzoneManager.getSCMSignedCert(OzoneManager.java:1372)
at 
org.apache.hadoop.ozone.om.OzoneManager.initializeSecurity(OzoneManager.java:1018)
at org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:971)
at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:928)
at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:859)
{code}

I don't know what the right validation rule is here, but I am pretty sure the 
NPE should be avoided and a meaningful error should be thrown. (And the 
behaviour should be the same for SCM and OM.)
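
A minimal sketch of the kind of fail-fast validation that would replace the NPE (purely illustrative; where exactly the check belongs in getSCMSignedCert is open):

{code}
// Hypothetical sketch: resolve the configured OM address up front and fail
// with a clear message instead of an NPE when it is not a usable FQDN.
import java.net.InetAddress;
import java.net.UnknownHostException;

final class OmAddressValidator {

  static String requireFqdn(String configuredOmAddress) {
    try {
      InetAddress addr = InetAddress.getByName(configuredOmAddress);
      String canonical = addr.getCanonicalHostName();
      if (canonical == null || !canonical.contains(".")) {
        throw new IllegalArgumentException(
            "ozone.om.address '" + configuredOmAddress
                + "' does not resolve to a fully qualified domain name");
      }
      return canonical;
    } catch (UnknownHostException e) {
      throw new IllegalArgumentException(
          "ozone.om.address '" + configuredOmAddress + "' cannot be resolved", e);
    }
  }

  private OmAddressValidator() { }
}
{code}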






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1090/

[Mar 28, 2019 4:52:36 AM] (github) HDDS-1346. Remove hard-coded version 
ozone-0.5.0 from ReadMe of
[Mar 28, 2019 10:37:33 AM] (inigoiri) HDFS-14295. Add Threadpool for 
DataTransfers. Contributed by David
[Mar 28, 2019 3:49:56 PM] (stevel) HADOOP-16186. S3Guard: NPE in 
DynamoDBMetadataStore.lambda$listChildren.
[Mar 28, 2019 3:59:25 PM] (stevel) HADOOP-15999. S3Guard: Better support for 
out-of-band operations.
[Mar 28, 2019 5:01:57 PM] (stevel) HADOOP-16195 MarshalledCredentials toString
[Mar 28, 2019 6:16:01 PM] (gifuma) HDFS-14395. Remove WARN Logging From 
Interrupts. Contributed by David
[Mar 28, 2019 6:48:15 PM] (rakeshr) HDFS-14393. Refactor FsDatasetCache for SCM 
cache implementation.
[Mar 28, 2019 7:00:58 PM] (github) HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs.
[Mar 28, 2019 7:13:28 PM] (shashikant) HDDS-1293. ExcludeList#getProtoBuf throws
[Mar 28, 2019 7:22:44 PM] (bharat) HDDS-1309 . change logging from warn to 
debug in XceiverClient.
[Mar 28, 2019 9:50:34 PM] (bharat) HDDS-1350. Fix checkstyle issue in 
TestDatanodeStateMachine. Contributed




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setEvents(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:[line 159] 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setMetrics(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:[line 142] 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 
   Switch statement found in 
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregate(TimelineMetric,
 TimelineMetric) where default case is missing At 
FlowRunDocument.java:TimelineMetric) where default case is missing At 
FlowRunDocument.java:[lines 121-136] 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregateMetrics(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
FlowRunDocument.java:keySet iterator instead of entrySet iterator At 
FlowRunDocument.java:[line 103] 
   Possible doublecheck on 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader.client
 in new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration)
 At CosmosDBDocumentStoreReader.java:new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration)
 At CosmosDBDocumentStoreReader.java:[lines 73-75] 
   Possible doublecheck on 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter.client
 in new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration)
 At CosmosDBDocumentStoreWriter.java:new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration)
 At CosmosDBDocumentStoreWriter.java:[lines 66-68] 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.ozone.container.common.TestDatanodeStateMachine 
   hadoop.hdds.scm.block.TestBlockManager 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1090/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   

[jira] [Resolved] (HDDS-1298) blockade tests failing as the nodes are not able to communicate with Ozone Manager

2019-03-29 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi resolved HDDS-1298.
--
Resolution: Duplicate

> blockade tests failing as the nodes are not able to communicate with Ozone 
> Manager
> --
>
> Key: HDDS-1298
> URL: https://issues.apache.org/jira/browse/HDDS-1298
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Critical
> Attachments: alllogs.log
>
>
> steps taken:
> 
>  # started a 3-datanode docker cluster.
>  # freon run fails with error: "No such service: ozoneManager"
>  
> {noformat}
> om_1 | STARTUP_MSG: build = https://github.com/apache/hadoop.git -r 
> e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 
> 2019-01-15T17:34Z
> om_1 | STARTUP_MSG: java = 11.0.1
> om_1 | /
> om_1 | 2019-03-18 06:31:41 INFO OzoneManager:51 - registered UNIX signal 
> handlers for [TERM, HUP, INT]
> om_1 | 2019-03-18 06:31:41 WARN ScmUtils:77 - ozone.om.db.dirs is not 
> configured. We recommend adding this setting. Falling back to 
> ozone.metadata.dirs instead.
> om_1 | 2019-03-18 06:31:41 INFO OzoneManager:484 - OM Service ID is not set. 
> Setting it to the default ID: omServiceIdDefault
> om_1 | 2019-03-18 06:31:41 INFO OzoneManager:490 - OM Node ID is not set. 
> Setting it to the OmStorage's OmID: 25501758-f7f6-42d5-8196-52a885af7e23
> om_1 | 2019-03-18 06:31:41 INFO OzoneManager:441 - Found matching OM address 
> with OMServiceId: null, OMNodeId: null, RPC Address: om:9862 and Ratis port: 
> 9872
> om_1 | 2019-03-18 06:31:42 WARN ScmUtils:77 - ozone.om.db.dirs is not 
> configured. We recommend adding this setting. Falling back to 
> ozone.metadata.dirs instead.
> om_1 | 2019-03-18 06:31:42 INFO log:192 - Logging initialized @4061ms
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: userTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:userTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: volumeTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:volumeTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: bucketTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:bucketTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: keyTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:keyTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: deletedTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:deletedTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: openKeyTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:openKeyTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: s3Table
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:s3Table
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: multipartInfoTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:multipartInfoTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: s3SecretTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:152 - Using default column 
> profile:DBProfile.DISK for Table:s3SecretTable
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:101 - using custom profile for 
> table: default
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:158 - Using default column 
> profile:DBProfile.DISK for Table:default
> om_1 | 2019-03-18 06:31:42 INFO DBStoreBuilder:189 - Using default options. 
> DBProfile.DISK
> om_1 | 2019-03-18 06:31:42 INFO CallQueueManager:84 - Using callQueue: class 
> java.util.concurrent.LinkedBlockingQueue, queueCapacity: 2000, scheduler: 
> class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
> om_1 | 2019-03-18 06:31:42 INFO Server:1074 - Starting Socket Reader #1 for 
> port 9862
> om_1 | 2019-03-18 06:31:43 WARN ScmUtils:77 - ozone.om.db.dirs is not 
> configured. We recommend adding this setting. Falling back to 
> ozone.metadata.dirs instead.
> om_1 | 2019-03-18 06:31:43 

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/

[Mar 28, 2019 5:39:45 PM] (cliang) HDFS-14391. Backport HDFS-9659 to branch-2. 
Contributed by Chao Sun.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestDiskChecker 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.fs.contract.hdfs.TestHDFSContractSeek 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.fs.contract.hdfs.TestHDFSContractOpen 
   hadoop.fs.contract.hdfs.TestHDFSContractDelete 
   hadoop.fs.contract.hdfs.TestHDFSContractAppend 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/275/artifact/out/whitespace-tabs.txt
  [1.2M]

   

[jira] [Created] (HDDS-1353) Metrics scm_pipeline_metrics_num_pipeline_creation_failed keeps increasing

2019-03-29 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1353:
-

 Summary: Metrics scm_pipeline_metrics_num_pipeline_creation_failed keeps increasing
 Key: HDDS-1353
 URL: https://issues.apache.org/jira/browse/HDDS-1353
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar


There is a {{BackgroundPipelineCreator}} thread in SCM which runs at a fixed 
interval and tries to create pipelines. It uses {{IOException}} as its exit 
criterion (i.e. no more pipelines can be created): each run exits only when 
pipeline creation fails with an IOException. This means that the 
{{scm_pipeline_metrics_num_pipeline_creation_failed}} value gets incremented in 
every run of BackgroundPipelineCreator.
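
A simplified sketch of that loop (hypothetical names, not the actual SCM code) which shows why every scheduled run bumps the failure counter at least once:

{code}
// Simplified, hypothetical sketch of the pattern described above.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

class BackgroundPipelineCreatorSketch {

  interface PipelineFactory {
    // Throws IOException when no more pipelines can be created.
    void createPipeline() throws IOException;
  }

  private final AtomicLong numPipelineCreationFailed = new AtomicLong();

  void runOnce(PipelineFactory factory) {
    while (true) {
      try {
        factory.createPipeline();
      } catch (IOException e) {
        // The IOException doubles as the loop's exit condition, so every run
        // increments the failure metric even when nothing is actually wrong.
        numPipelineCreationFailed.incrementAndGet();
        return;
      }
    }
  }

  long getNumPipelineCreationFailed() {
    return numPipelineCreationFailed.get();
  }
}
{code}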


