[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815092#comment-16815092
 ] 

Fei Hui commented on HDFS-13596:


Uploaded the v006 patch to fix the checkstyle issues.
TestDataNodeErasureCodingMetrics#testReconstructionBytesPartialGroup2 passed 
locally.

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -----------------------------------------------------
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch
>
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to the log, the NN writes as per the current 
> layout version. In 3.x, erasureCoding bits are added to the editLog 
> transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to the NN shutting down, as the sketch below illustrates.
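> As a simplified, self-contained illustration of the mismatch (hypothetical 
> classes and layout-version constants, not Hadoop's actual edit-log code): the 
> 3.x writer always emits the new erasureCoding field, while the reader trusts 
> the stamped layout version to decide what to parse.
> {code:java}
> import java.io.*;
> 
> class EditLogVersionMismatchSketch {
>   static final int OLD_LAYOUT = -63; // illustrative: pre-upgrade (2.x) version
>   static final int EC_LAYOUT  = -64; // illustrative: 3.x version with EC fields
> 
>   // The writer runs 3.x code, so it always writes the EC field...
>   static void writeOp(DataOutputStream out) throws IOException {
>     out.writeByte(1);      // erasureCodingPolicyId (new in 3.x)
>     out.writeLong(16388);  // inodeId
>   }
> 
>   // ...but during the rolling upgrade the log header still says OLD_LAYOUT.
>   static void readOp(DataInputStream in, int logVersion) throws IOException {
>     if (logVersion <= EC_LAYOUT) { // layout versions are negative; newer is smaller
>       in.readByte();               // EC field is parsed only for new layouts
>     }
>     // With OLD_LAYOUT the EC byte is consumed as part of the inodeId,
>     // yielding garbage that cascades into errors like those shown below.
>     System.out.println("inodeId read back: " + in.readLong());
>   }
> 
>   public static void main(String[] args) throws IOException {
>     ByteArrayOutputStream buf = new ByteArrayOutputStream();
>     writeOp(new DataOutputStream(buf));
>     DataInputStream in =
>         new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
>     readOp(in, OLD_LAYOUT);  // replay with the pre-upgrade version stamp
>   }
> }
> {code}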
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> 

[jira] [Updated] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13596:
---------------------------
Attachment: HDFS-13596.006.patch

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -----------------------------------------------------
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch
>
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to the log, the NN writes as per the current 
> layout version. In 3.x, erasureCoding bits are added to the editLog 
> transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to the NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at 

[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-04-10 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815088#comment-16815088
 ] 

He Xiaoqiao commented on HDFS-13248:


Thanks [~elgoiri], I just sent mail to hdfs-dev and common-dev to invite more 
folks to get involved and give more suggestions and votes. Thanks again.

> RBF: Namenode need to choose block location for the client
> ----------------------------------------------------------
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, HDFS-Router-Data-Locality.odt, RBF Data Locality 
> Design.pdf, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the Router, the NameNode will choose the 
> block locations for the Router, not for the real client. This affects the 
> file's locality.
> I think that on both the NameNode and the Router we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information, as sketched below.
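> A hypothetical sketch of the second option (an extra parameter on addBlock; 
> the names and signatures here are illustrative, not the actual ClientProtocol 
> API):
> {code:java}
> // Illustrative only: the real ClientProtocol#addBlock takes more parameters.
> interface NamenodeAddBlockSketch {
>   // Current shape: the NameNode only sees the caller's address (the Router).
>   LocatedBlockStub addBlock(String src, String clientName);
> 
>   // Proposed shape: the Router forwards the real client's host so the
>   // NameNode can place replicas for locality against that machine.
>   LocatedBlockStub addBlock(String src, String clientName, String clientMachine);
> }
> 
> class LocatedBlockStub { /* placeholder for the real LocatedBlock */ }
> {code}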



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1423) Spark job fails to create ozone rpc client

2019-04-10 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1423:
-----------------------------
Priority: Blocker  (was: Major)

> Spark job fails to create ozone rpc client
> ------------------------------------------
>
> Key: HDDS-1423
> URL: https://issues.apache.org/jira/browse/HDDS-1423
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>
Spark job fails to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1423) Spark job fails to create ozone rpc client

2019-04-10 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815067#comment-16815067
 ] 

Ajay Kumar commented on HDDS-1423:
----------------------------------

{code}2019-04-10 23:11:46 ERROR OMFailoverProxyProvider:211 - Failed to connect 
to OM. Attempted 10 retries and 10 failovers
2019-04-10 23:11:46 ERROR OzoneClientFactory:294 - Couldn't create protocol 
class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:131)
at 
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:95)
at 
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:80)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:35)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.lambda$createAdapter$1(OzoneClientAdapterFactory.java:66)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.createAdapter(OzoneClientAdapterFactory.java:116)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.createAdapter(OzoneClientAdapterFactory.java:62)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:92)
at 
org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:146)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
at 
org.apache.spark.deploy.yarn.Client$$anonfun$5.apply(Client.scala:121)
at 
org.apache.spark.deploy.yarn.Client$$anonfun$5.apply(Client.scala:121)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.yarn.Client.<init>(Client.scala:121)
at 
org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at 
org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at 
org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at 
org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at 
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: DestHost:destPort om:9862 , LocalHost:localPort 
68987e5f884a/172.20.0.10:0. Failed on local exception: 

[jira] [Created] (HDDS-1423) Spark job fails to create ozone rpc client

2019-04-10 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1423:


 Summary: Spark job fails to create ozone rpc client
 Key: HDDS-1423
 URL: https://issues.apache.org/jira/browse/HDDS-1423
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Affects Versions: 0.4.0
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Spark job fails to run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815066#comment-16815066
 ] 

Hadoop QA commented on HDFS-13596:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 774 unchanged - 24 fixed = 777 total (was 798) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13596 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965536/HDFS-13596.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c48141afaebf 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 586826f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26612/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26612/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26612/testReport/ |
| Max. process+thread count | 2791 (vs. ulimit 

[jira] [Created] (HDFS-14421) HDFS block two replicas exist in one DataNode

2019-04-10 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HDFS-14421:
------------------------------

 Summary: HDFS block two replicas exist in one DataNode
 Key: HDFS-14421
 URL: https://issues.apache.org/jira/browse/HDFS-14421
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yuanbo Liu


We're using Hadoop-2.7.0.

There is a file whose replication factor is 2. Both replicas exist on the same 
DataNode. The fsck info is here:

{color:#707070}BP-499819267-xx.xxx.131.201-1452072365222:blk_1400651575_326942161
 len=484045 repl=2 
[DatanodeInfoWithStorage[xx.xxx.80.205:50010,DS-d321be27-cbd4-4edd-81ad-29b3d021ee82,DISK],
 
DatanodeInfoWithStorage[xx.xx.80.205:50010,DS-d321be27-cbd4-4edd-81ad-29b3d021ee82,DISK]].{color}

and this is the exception from xx.xx.80.205

{color:#707070}org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 
Replica not found for 
BP-499819267-xx.xxx.131.201-1452072365222:blk_1400651575_326942161{color}

It's confusing why the NameNode doesn't update the block map after this 
exception. What's the reason that two replicas exist on one DataNode?
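For reference (assuming the standard fsck options), per-block replica listings 
like the one above can be gathered with:
{noformat}
hdfs fsck /path/of/file -files -blocks -locations
{noformat}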

Hope to get anyone's comments. Thanks in advance.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815063#comment-16815063
 ] 

Hadoop QA commented on HDFS-14117:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
38s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m  5s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterTrash |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcSingleNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965540/HDFS-14117-HDFS-13891.012.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f9d8ca4ddf25 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / e508ab9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26613/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 

[jira] [Work logged] (HDDS-1376) Datanode exits while executing client command when scmId is null

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1376?focusedWorklogId=225942&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225942
 ]

ASF GitHub Bot logged work on HDDS-1376:


Author: ASF GitHub Bot
Created on: 11/Apr/19 03:10
Start Date: 11/Apr/19 03:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #724: HDDS-1376. 
Datanode exits while executing client command when scmId is null
URL: https://github.com/apache/hadoop/pull/724#issuecomment-481948102
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 85 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1355 | trunk passed |
   | +1 | compile | 1137 | trunk passed |
   | +1 | checkstyle | 214 | trunk passed |
   | +1 | mvnsite | 128 | trunk passed |
   | +1 | shadedclient | 1138 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 54 | trunk passed |
   | +1 | javadoc | 61 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 68 | the patch passed |
   | +1 | compile | 937 | the patch passed |
   | +1 | javac | 937 | the patch passed |
   | +1 | checkstyle | 217 | the patch passed |
   | +1 | mvnsite | 82 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 747 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 62 | the patch passed |
   | +1 | javadoc | 61 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 74 | container-service in the patch failed. |
   | -1 | unit | 1239 | integration-test in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7717 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-724/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/724 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux d661e9ea1d87 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 586826f |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-724/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-724/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-724/1/testReport/ |
   | Max. process+thread count | 4534 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-724/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

Worklog Id: (was: 225942)
Time Spent: 0.5h  (was: 20m)

> Datanode exits while executing client command when scmId is null
> -----------------------------------------------------------------
>
> Key: HDDS-1376
> URL: https://issues.apache.org/jira/browse/HDDS-1376
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>

[jira] [Work logged] (HDDS-1396) Recon start fails due to changes in Aggregate Schema definition (HDDS-1189).

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1396?focusedWorklogId=225938&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225938
 ]

ASF GitHub Bot logged work on HDDS-1396:


Author: ASF GitHub Bot
Created on: 11/Apr/19 02:56
Start Date: 11/Apr/19 02:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #700: 
HDDS-1396 : Recon start fails due to changes in Aggregate Schema definition.
URL: https://github.com/apache/hadoop/pull/700#discussion_r274242333
 
 

 ##
 File path: hadoop-ozone/ozone-recon-codegen/pom.xml
 ##
 @@ -21,11 +21,10 @@
     <version>0.5.0-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
-  <artifactId>hadoop-ozone-recon-codegen</artifactId>
+  <artifactId>hadoop-ozone-reconcodegen</artifactId>
 
 Review comment:
   Thanks got info from PR description.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

Worklog Id: (was: 225938)
Time Spent: 50m  (was: 40m)

> Recon start fails due to changes in Aggregate Schema definition (HDDS-1189).
> -----------------------------------------------------------------------------
>
> Key: HDDS-1396
> URL: https://issues.apache.org/jira/browse/HDDS-1396
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recon Server start fails due to ClassNotFound exception when looking for 
> org.apache.hadoop.ozone.recon.ReconServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1420) Tracing exception in DataNode HddsDispatcher

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1420?focusedWorklogId=225925&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225925
 ]

ASF GitHub Bot logged work on HDDS-1420:


Author: ASF GitHub Bot
Created on: 11/Apr/19 02:46
Start Date: 11/Apr/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #723: HDDS-1420. 
Tracing exception in DataNode HddsDispatcher. Contributed …
URL: https://github.com/apache/hadoop/pull/723#issuecomment-481943864
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 96 | Maven dependency ordering for branch |
   | +1 | mvninstall | 2170 | trunk passed |
   | +1 | compile | 1219 | trunk passed |
   | +1 | checkstyle | 215 | trunk passed |
   | +1 | mvnsite | 123 | trunk passed |
   | +1 | shadedclient | 1119 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 59 | trunk passed |
   | +1 | javadoc | 61 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 71 | the patch passed |
   | +1 | compile | 1013 | the patch passed |
   | +1 | javac | 1013 | the patch passed |
   | +1 | checkstyle | 223 | the patch passed |
   | +1 | mvnsite | 97 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 814 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 78 | the patch passed |
   | +1 | javadoc | 76 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 112 | container-service in the patch failed. |
   | -1 | unit | 1666 | integration-test in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 9328 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-723/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/723 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b8e1661b7594 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e9c4109 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-723/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-723/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-723/1/testReport/ |
   | Max. process+thread count | 3776 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-723/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking

[jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-04-10 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815025#comment-16815025
 ] 

Íñigo Goiri commented on HDFS-14117:


Thanks for the comments [~ayushtkn].
I added more tests and I found a couple of issues.
There are still some things to figure out.
For example, the way the error is reported is not very clean and it should 
report the proper issue.
This last point relates to mounting {{/user}}.
Let's start with the case where we have {{/mnt}} mounted on {{ns0}} and 
{{ns1}} and then we have files in each.
{{/}} by default is mounted only to {{ns0}}.
If we try to remove a file that is in {{ns1}}, we cannot move the file from 
{{ns1}} to the Trash in {{ns0}}.
For us to be able to move it to the Trash, we need {{/user/user0/.Trash}} to be 
available in {{ns1}}.
For that, we need a mount point for {{/}}, {{/user}}, {{/user/user0}}, or 
{{/user/user0/.Trash}} with {{ns1}} as a destination subcluster.

I'll fix the rest of the comments once I figure how to provide a better 
exception.

Thanks [~crh] for taking a look, let's follow up on the other JIRA.

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ------------------------------------------------------------------------------
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117-HDFS-13891.001.patch, 
> HDFS-14117-HDFS-13891.002.patch, HDFS-14117-HDFS-13891.003.patch, 
> HDFS-14117-HDFS-13891.004.patch, HDFS-14117-HDFS-13891.005.patch, 
> HDFS-14117-HDFS-13891.006.patch, HDFS-14117-HDFS-13891.007.patch, 
> HDFS-14117-HDFS-13891.008.patch, HDFS-14117-HDFS-13891.009.patch, 
> HDFS-14117-HDFS-13891.010.patch, HDFS-14117-HDFS-13891.011.patch, 
> HDFS-14117-HDFS-13891.012.patch, HDFS-14117.001.patch, HDFS-14117.002.patch, 
> HDFS-14117.003.patch, HDFS-14117.004.patch, HDFS-14117.005.patch
>
>
> When we delete files or dirs in HDFS, it moves the deleted files or dirs to 
> trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount the 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete the files or dirs of another 
> subcluster, such as hacluster, it fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3../opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://router/test/hdfs.cmd' to trash at: 
> hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-04-10 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14117:
-------------------------------
Attachment: HDFS-14117-HDFS-13891.012.patch

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ------------------------------------------------------------------------------
>
> Key: HDFS-14117
> URL: https://issues.apache.org/jira/browse/HDFS-14117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14117-HDFS-13891.001.patch, 
> HDFS-14117-HDFS-13891.002.patch, HDFS-14117-HDFS-13891.003.patch, 
> HDFS-14117-HDFS-13891.004.patch, HDFS-14117-HDFS-13891.005.patch, 
> HDFS-14117-HDFS-13891.006.patch, HDFS-14117-HDFS-13891.007.patch, 
> HDFS-14117-HDFS-13891.008.patch, HDFS-14117-HDFS-13891.009.patch, 
> HDFS-14117-HDFS-13891.010.patch, HDFS-14117-HDFS-13891.011.patch, 
> HDFS-14117-HDFS-13891.012.patch, HDFS-14117.001.patch, HDFS-14117.002.patch, 
> HDFS-14117.003.patch, HDFS-14117.004.patch, HDFS-14117.005.patch
>
>
> When we delete files or dirs in HDFS, it moves the deleted files or dirs to 
> trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount the 
> trash dir /user of the subcluster ns1 to the global path /user. Then we can 
> delete files or dirs of ns1, but when we delete the files or dirs of another 
> subcluster, such as hacluster, it fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read 
> only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: 
> -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: 
> -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup   6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3../opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm 
> /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination 
> parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://router/test/hdfs.cmd' to trash at: 
> hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

2019-04-10 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815018#comment-16815018
 ] 

CR Hota commented on HDFS-14117:


[~elgoiri] Thanks for reporting the test failure.

I tried to reproduce it locally and could not. However, upon multiple 
executions of another test, TestRouterHttpDelegationToken, I did reproduce this 
once locally.
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.076 s 
<<< FAILURE! - in 
org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
[ERROR] 
testGetDelegationToken(org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken)
  Time elapsed: 0.115 s  <<< ERROR!
org.apache.hadoop.service.ServiceStateException: 
org.apache.hadoop.security.KerberosAuthException: failure to login: for 
principal: router/localh...@example.com from keytab 
/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
 javax.security.auth.login.LoginException: Integrity check on decrypted field 
failed (31) - PREAUTH_FAILED
at 
org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:173)
at 
org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.hadoop.security.KerberosAuthException: failure to login: 
for principal: router/localh...@example.com from keytab 
/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
 javax.security.auth.login.LoginException: Integrity check on decrypted field 
failed (31) - PREAUTH_FAILED
at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:2008)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1376)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1156)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
at 
org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:159)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
... 27 more

{code}
 

Looks to me like some intermittent issue with the way 

[jira] [Work logged] (HDDS-1421) Avoid unnecessary object allocations in TracingUtil

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1421?focusedWorklogId=225918&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225918
 ]

ASF GitHub Bot logged work on HDDS-1421:


Author: ASF GitHub Bot
Created on: 11/Apr/19 02:10
Start Date: 11/Apr/19 02:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #722: HDDS-1421. Avoid 
unnecessary object allocations in TracingUtil. Contr…
URL: https://github.com/apache/hadoop/pull/722#issuecomment-481937667
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1023 | trunk passed |
   | +1 | compile | 48 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 42 | trunk passed |
   | +1 | shadedclient | 728 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 72 | trunk passed |
   | +1 | javadoc | 45 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 43 | the patch passed |
   | +1 | compile | 35 | the patch passed |
   | +1 | javac | 35 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 35 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 734 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 74 | the patch passed |
   | +1 | javadoc | 37 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 63 | common in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3156 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-722/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/722 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 75a5e346cd64 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 586826f |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-722/1/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-722/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-722/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225918)
Time Spent: 0.5h  (was: 20m)

> Avoid unnecessary object allocations in TracingUtil
> ---
>
> Key: HDDS-1421
> URL: https://issues.apache.org/jira/browse/HDDS-1421
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Avoid unnecessary object allocations in TracingUtil#exportCurrentSpan and 
> #exportSpan.
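
A minimal sketch of the kind of change the summary describes (class and helper
names here are illustrative, not the actual patch): return a shared constant
when there is no active span, so the common no-span path allocates nothing.

{code:java}
import io.opentracing.Span;
import io.opentracing.util.GlobalTracer;

public final class TracingUtilSketch {
  // Shared constant instead of building a new empty string on every call.
  private static final String NO_SPAN = "";

  public static String exportCurrentSpan() {
    Span span = GlobalTracer.get().activeSpan();
    if (span == null) {
      return NO_SPAN; // early return: no builder, no codec invocation
    }
    return exportSpan(span);
  }

  private static String exportSpan(Span span) {
    // Allocated only when a span actually exists.
    StringBuilder builder = new StringBuilder();
    // GlobalTracer.get().inject(span.context(), ..., carrier backed by builder);
    return builder.toString();
  }
}
{code}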



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1421) Avoid unnecessary object allocations in TracingUtil

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1421?focusedWorklogId=225917&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225917
 ]

ASF GitHub Bot logged work on HDDS-1421:


Author: ASF GitHub Bot
Created on: 11/Apr/19 02:10
Start Date: 11/Apr/19 02:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #722: HDDS-1421. 
Avoid unnecessary object allocations in TracingUtil. Contr…
URL: https://github.com/apache/hadoop/pull/722#discussion_r274235883
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/TracingUtil.java
 ##
 @@ -33,6 +33,8 @@
  * Utility class to collect all the tracing helper methods.
  */
 public final class TracingUtil {
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225917)
Time Spent: 20m  (was: 10m)

> Avoid unnecessary object allocations in TracingUtil
> ---
>
> Key: HDDS-1421
> URL: https://issues.apache.org/jira/browse/HDDS-1421
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Avoid unnecessary object allocations in TracingUtil#exportCurrentSpan and 
> #exportSpan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1422) Exception during DataNode shutdown

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1422?focusedWorklogId=225916&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225916
 ]

ASF GitHub Bot logged work on HDDS-1422:


Author: ASF GitHub Bot
Created on: 11/Apr/19 02:08
Start Date: 11/Apr/19 02:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #725: HDDS-1422. 
Exception during DataNode shutdown. Contributed by Arpit A…
URL: https://github.com/apache/hadoop/pull/725#issuecomment-481937427
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 711 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 28 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | -0 | checkstyle | 13 | hadoop-hdds/container-service: The patch generated 
3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 729 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 57 | hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | javadoc | 25 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 77 | container-service in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3072 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds/container-service |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.ozone.container.common.volume.VolumeUsage.scmUsage; locked 
60% of time  Unsynchronized access at VolumeUsage.java:60% of time  
Unsynchronized access at VolumeUsage.java:[line 183] |
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.container.common.volume.TestVolumeSet |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-725/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/725 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 21d98372751e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 586826f |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-725/1/artifact/out/diff-checkstyle-hadoop-hdds_container-service.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-725/1/artifact/out/new-findbugs-hadoop-hdds_container-service.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-725/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-725/1/testReport/ |
   | Max. process+thread count | 432 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-725/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225916)
Time Spent: 0.5h  (was: 20m)

> Exception during DataNode shutdown
> --
>
> Key: HDDS-1422
> URL: https://issues.apache.org/jira/browse/HDDS-1422
> Project: Hadoop 

[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2019-04-10 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815013#comment-16815013
 ] 

star commented on HDFS-10477:
-

[~jojochuang], uploaded a patch for branch-2.8.
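
For context, the quoted log below shows roughly eight to nine seconds of
write-lock hold per DataNode. A hedged sketch of the general mitigation
pattern (batch size and helper names are assumptions, not the literal patch):
invalidate over-replicated blocks in bounded batches and release the
namesystem write lock between batches.

{code:java}
import java.util.Iterator;

import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;

class BatchedInvalidationSketch {
  private static final int BLOCK_BATCH = 1000; // assumed batch size

  void invalidateInBatches(Iterator<Long> overReplicatedBlockIds,
                           FSNamesystem ns) {
    while (overReplicatedBlockIds.hasNext()) {
      ns.writeLock();
      try {
        for (int i = 0; i < BLOCK_BATCH && overReplicatedBlockIds.hasNext(); i++) {
          invalidate(overReplicatedBlockIds.next()); // hypothetical helper
        }
      } finally {
        // Dropping the lock between batches lets other RPCs (and the HA
        // health monitor) run, so stopping decommission on a whole rack
        // cannot monopolize the NameNode for minutes.
        ns.writeUnlock();
      }
    }
  }

  private void invalidate(long blockId) { /* hypothetical */ }
}
{code}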

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.006.patch, 
> HDFS-10477.007.patch, HDFS-10477.branch-2.8.patch, HDFS-10477.branch-2.patch, 
> HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> 

[jira] [Updated] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2019-04-10 Thread star (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

star updated HDFS-10477:

Attachment: HDFS-10477.branch-2.8.patch

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.006.patch, 
> HDFS-10477.007.patch, HDFS-10477.branch-2.8.patch, HDFS-10477.branch-2.patch, 
> HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated 

[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815008#comment-16815008
 ] 

Fei Hui edited comment on HDFS-13596 at 4/11/19 1:56 AM:
-

[~daryn] Thanks for your comments. Uploaded the v005 patch. Could you please 
take a look?
{quote}
The check for EC support should be in FSNamesystem methods, not 
NameNodeRpcServer, since there can be multiple entry points to the namesystem 
like webhdfs.
{quote}
Moved the check to FSNamesystem.
{quote}
DFSUtil.isSupportedErasureCoding probably doesn't belong in DFSUtil since it's 
not something that should be called outside of the NN.
{quote}
Deleted it from DFSUtil.
{quote}
In FSEditLogOp, please call the former method instead of duplicating the logic.
{quote}
I do not see duplicated logic there.
{quote}
Super trivial, might rename new layoutVersion parameter in the write methods to 
logVersion to be consistent with the signatures for the read methods.
{quote}
Changed layoutVersion to logVersion.
{quote}
How about a FSNamesystem.checkErasureCodingSupported(String op) to avoid all 
the redundant check/throw code in the methods?
{quote}
Added checkErasureCodingSupported in FSNamesystem.
{quote}
A test case is needed to prove the edits are correctly read/written.
{quote}
Added a test case: write an op in the lower layout version and read it back in 
the lower version.



was (Author: ferhui):
[~daryn] Thanks for your comments. Uploaded the v005 patch.
{quote}
The check for EC support should be in FSNamesystem methods, not 
NameNodeRpcServer, since there can be multiple entry points to the namesystem 
like webhdfs.
{quote}
Moved the check to FSNamesystem.
{quote}
DFSUtil.isSupportedErasureCoding probably doesn't belong in DFSUtil since it's 
not something that should be called outside of the NN.
{quote}
Deleted it from DFSUtil.
{quote}
In FSEditLogOp, please call the former method instead of duplicating the logic.
{quote}
I do not see duplicated logic there.
{quote}
Super trivial, might rename new layoutVersion parameter in the write methods to 
logVersion to be consistent with the signatures for the read methods.
{quote}
Changed layoutVersion to logVersion.
{quote}
How about a FSNamesystem.checkErasureCodingSupported(String op) to avoid all 
the redundant check/throw code in the methods?
{quote}
Added checkErasureCodingSupported in FSNamesystem.
{quote}
A test case is needed to prove the edits are correctly read/written.
{quote}
Added a test case: write an op in the lower layout version and read it back in 
the lower version.


> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815008#comment-16815008
 ] 

Fei Hui commented on HDFS-13596:


[~daryn] Thanks for your comments. Uploaded the v005 patch.
{quote}
The check for EC support should be in FSNamesystem methods, not 
NameNodeRpcServer, since there can be multiple entry points to the namesystem 
like webhdfs.
{quote}
Moved the check to FSNamesystem.
{quote}
DFSUtil.isSupportedErasureCoding probably doesn't belong in DFSUtil since it's 
not something that should be called outside of the NN.
{quote}
Deleted it from DFSUtil.
{quote}
In FSEditLogOp, please call the former method instead of duplicating the logic.
{quote}
I do not see duplicated logic there.
{quote}
Super trivial, might rename new layoutVersion parameter in the write methods to 
logVersion to be consistent with the signatures for the read methods.
{quote}
Changed layoutVersion to logVersion.
{quote}
How about a FSNamesystem.checkErasureCodingSupported(String op) to avoid all 
the redundant check/throw code in the methods?
{quote}
Added checkErasureCodingSupported in FSNamesystem.
{quote}
A test case is needed to prove the edits are correctly read/written.
{quote}
Added a test case: write an op in the lower layout version and read it back in 
the lower version.
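
A minimal sketch of what that centralized check could look like, based on the 
names discussed above (the committed signature may differ):

{code:java}
// Sketch only: a helper added to FSNamesystem so every EC entry point
// (RPC, WebHDFS, ...) performs the same layout-version check.
void checkErasureCodingSupported(String operationName)
    throws UnsupportedActionException {
  if (!NameNodeLayoutVersion.supports(
      NameNodeLayoutVersion.Feature.ERASURE_CODING,
      getEffectiveLayoutVersion())) {
    throw new UnsupportedActionException(operationName + " not supported.");
  }
}
{code}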


> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where 

[jira] [Updated] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13596:
---
Attachment: HDFS-13596.005.patch

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at 

[jira] [Work logged] (HDDS-1422) Exception during DataNode shutdown

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1422?focusedWorklogId=225907&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225907
 ]

ASF GitHub Bot logged work on HDDS-1422:


Author: ASF GitHub Bot
Created on: 11/Apr/19 01:35
Start Date: 11/Apr/19 01:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #725: HDDS-1422. 
Exception during DataNode shutdown. Contributed by Arpit A…
URL: https://github.com/apache/hadoop/pull/725#issuecomment-481931721
 
 
   Hi @arp7 
   Thanks for the fix.
   To understand the fix: we have removed the setting of usage to null, so that 
even when the DN is shut down, if someone calls volume usage we return the old 
value instead of throwing an exception.
   
   Is my understanding correct or am I missing something here?
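
An illustrative reading of that description (field and method names are 
assumptions, not the actual patch): keep the last computed value instead of 
nulling it on shutdown, so a late caller sees a stale-but-valid number rather 
than an exception.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

class VolumeUsageSketch {
  private final AtomicLong cachedScmUsed = new AtomicLong(0);
  private volatile boolean running = true;

  // Called periodically by the background usage-refresh thread.
  void refresh(long newUsage) {
    if (!running) {
      return;
    }
    cachedScmUsed.set(newUsage);
  }

  void shutdown() {
    running = false; // stop refreshing, but do NOT clear the cached value
  }

  long getScmUsed() {
    // Previously this path threw IOException once the thread stopped;
    // returning the last known value avoids the shutdown log noise.
    return cachedScmUsed.get();
  }
}
{code}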
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225907)
Time Spent: 20m  (was: 10m)

> Exception during DataNode shutdown
> --
>
> Key: HDDS-1422
> URL: https://issues.apache.org/jira/browse/HDDS-1422
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following exception during DN shutdown should be avoided, as it adds 
> noise to the logs and is not a real issue.
> {code}
> 2019-04-10 17:48:27,307 WARN  volume.VolumeSet 
> (VolumeSet.java:getNodeReport(476)) - Failed to get scmUsed and remaining for 
> container storage location 
> /Users/agarwal/src/hadoop/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-f4d89966-146a-4690-8841-36af1993522f/datanode-17/data/containers
> java.io.IOException: Volume Usage thread is not running. This error is 
> usually seen during DataNode shutdown.
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:119)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:472)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:238)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:115)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814999#comment-16814999
 ] 

Hadoop QA commented on HDFS-13972:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 6s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 16 unchanged - 0 fixed = 18 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
54s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965532/HDFS-13972-HDFS-13891.012.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 91a7193e852b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / e508ab9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26611/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26611/testReport/ |
| Max. process+thread count | 997 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1406:
-
Status: Open  (was: Patch Available)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> ForkJoinPool.commonPool(). Use our own ForkJoinPool with parallelism set to 
> the number of processors.
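
A sketch of the described change (pool sizing and the per-node helper are 
assumptions): a parallel stream started from inside a task submitted to a 
dedicated ForkJoinPool runs on that pool instead of the JVM-wide common pool.

{code:java}
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;

class PipelineCreatorSketch {
  private final ForkJoinPool pipelinePool =
      new ForkJoinPool(Runtime.getRuntime().availableProcessors());

  void createPipelines(List<String> datanodes)
      throws InterruptedException, ExecutionException {
    // Tasks forked by a parallel stream inside submit() stay on pipelinePool,
    // so other users of ForkJoinPool.commonPool() are not starved.
    pipelinePool.submit(() ->
        datanodes.parallelStream().forEach(this::createPipeline)
    ).get();
  }

  private void createPipeline(String datanode) { /* hypothetical per-node work */ }
}
{code}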



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12245) Fix INodeId javadoc

2019-04-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814988#comment-16814988
 ] 

Hudson commented on HDFS-12245:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16381 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16381/])
HDFS-12245. Fix INodeId javadoc (aajisaka: rev 
586826fe99038d4fcfa07a8a9a6b19cdf301aaf7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java


> Fix INodeId javadoc
> ---
>
> Key: HDFS-12245
> URL: https://issues.apache.org/jira/browse/HDFS-12245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, namenode
>Reporter: Wei-Chiu Chuang
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12245.001.patch, HDFS-12245.002.patch, 
> HDFS-12245.003.patch, HDFS-12245.004.patch
>
>
> The INodeId javadoc states that ids 1 to 1000 are reserved and the root inode 
> id starts from 1001. That is no longer true after HDFS-4434.
> Also, it's a little weird in INodeId
> {code}
>   public static final long LAST_RESERVED_ID = 2 << 14 - 1;
>   public static final long ROOT_INODE_ID = LAST_RESERVED_ID + 1;
> {code}
> It seems the intent was for LAST_RESERVED_ID to be (2 << 14) - 1 = 32767. But 
> due to Java operator precedence, it evaluates as 2 << (14 - 1) = 16384. Maybe 
> it doesn't matter, not sure.
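
The precedence point is easy to verify (hypothetical demo class, not part of 
the patch):

{code:java}
public class PrecedenceDemo {
  public static void main(String[] args) {
    // '-' binds tighter than '<<' in Java, so 2 << 14 - 1 is 2 << 13.
    System.out.println(2 << 14 - 1);   // 16384: what the field actually holds
    System.out.println((2 << 14) - 1); // 32767: the apparent intent
  }
}
{code}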



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814986#comment-16814986
 ] 

Hadoop QA commented on HDFS-3246:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 20m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} root: The patch generated 0 new + 112 unchanged - 7 
fixed = 112 total (was 119) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
3s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 59s{color} | 
{color:black} {color} |
\\
\\

[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814985#comment-16814985
 ] 

Hadoop QA commented on HDFS-3246:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} root: The patch generated 0 new + 112 unchanged - 7 
fixed = 112 total (was 119) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
15s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}217m 25s{color} | 
{color:black} {color} |
\\
\\

[jira] [Updated] (HDDS-1420) Tracing exception in DataNode HddsDispatcher

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1420:

Status: Patch Available  (was: Open)

> Tracing exception in DataNode HddsDispatcher
> 
>
> Key: HDDS-1420
> URL: https://issues.apache.org/jira/browse/HDDS-1420
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following exception is seen in some unit tests:
> {code}
> 2019-04-10 13:00:27,537 WARN  
> internal.PropagationRegistry$ExceptionCatchingExtractorDecorator 
> (PropagationRegistry.java:extract(60)) - Error when extracting SpanContext 
> from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 90041ce6-81f3-4733-8e2b-6aceaa697b77
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:98)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$5(ContainerStateMachine.java:613)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
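
A hedged sketch of the guard this suggests (helper names are hypothetical; the 
real StringCodec fix may differ): treat a carrier value that is not in 
Jaeger's traceId:spanId:parentId:flags shape as "no propagated span" instead 
of letting extraction throw.

{code:java}
import io.opentracing.SpanContext;

final class SafeExtractSketch {
  SpanContext extractSafely(String value) {
    // A bare UUID such as 90041ce6-... contains no ':' separators, so it
    // cannot be a Jaeger tracer-state string; skip extraction instead of
    // throwing MalformedTracerStateStringException.
    if (value == null || value.split(":").length != 4) {
      return null; // caller starts a fresh trace
    }
    return parseJaegerState(value); // hypothetical delegate to the real codec
  }

  private SpanContext parseJaegerState(String value) {
    throw new UnsupportedOperationException("delegate to the real codec");
  }
}
{code}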



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1422) Exception during DataNode shutdown

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1422:

Status: Patch Available  (was: Open)

> Exception during DataNode shutdown
> --
>
> Key: HDDS-1422
> URL: https://issues.apache.org/jira/browse/HDDS-1422
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following exception during DN shutdown should be avoided, as it adds 
> noise to the logs and is not a real issue.
> {code}
> 2019-04-10 17:48:27,307 WARN  volume.VolumeSet 
> (VolumeSet.java:getNodeReport(476)) - Failed to get scmUsed and remaining for 
> container storage location 
> /Users/agarwal/src/hadoop/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-f4d89966-146a-4690-8841-36af1993522f/datanode-17/data/containers
> java.io.IOException: Volume Usage thread is not running. This error is 
> usually seen during DataNode shutdown.
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:119)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:472)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:238)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:115)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1422) Exception during DataNode shutdown

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1422?focusedWorklogId=225902&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225902
 ]

ASF GitHub Bot logged work on HDDS-1422:


Author: ASF GitHub Bot
Created on: 11/Apr/19 01:11
Start Date: 11/Apr/19 01:11
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #725: HDDS-1422. 
Exception during DataNode shutdown. Contributed by Arpit A…
URL: https://github.com/apache/hadoop/pull/725
 
 
   …garwal.
   
   Change-Id: I981fbd087baca80cc6b4ff58e91e63dcd9726c13
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225902)
Time Spent: 10m
Remaining Estimate: 0h

> Exception during DataNode shutdown
> --
>
> Key: HDDS-1422
> URL: https://issues.apache.org/jira/browse/HDDS-1422
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following exception during DN shutdown should be avoided, as it adds 
> noise to the logs and is not a real issue.
> {code}
> 2019-04-10 17:48:27,307 WARN  volume.VolumeSet 
> (VolumeSet.java:getNodeReport(476)) - Failed to get scmUsed and remaining for 
> container storage location 
> /Users/agarwal/src/hadoop/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-f4d89966-146a-4690-8841-36af1993522f/datanode-17/data/containers
> java.io.IOException: Volume Usage thread is not running. This error is 
> usually seen during DataNode shutdown.
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:119)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:472)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:238)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:115)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1422) Exception during DataNode shutdown

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1422:
-
Labels: pull-request-available  (was: )

> Exception during DataNode shutdown
> --
>
> Key: HDDS-1422
> URL: https://issues.apache.org/jira/browse/HDDS-1422
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>
> The following exception during DN shutdown should be avoided, as it adds 
> noise to the logs and is not a real issue.
> {code}
> 2019-04-10 17:48:27,307 WARN  volume.VolumeSet 
> (VolumeSet.java:getNodeReport(476)) - Failed to get scmUsed and remaining for 
> container storage location 
> /Users/agarwal/src/hadoop/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-f4d89966-146a-4690-8841-36af1993522f/datanode-17/data/containers
> java.io.IOException: Volume Usage thread is not running. This error is 
> usually seen during DataNode shutdown.
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:119)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:472)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:238)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:115)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1422) Exception during DataNode shutdown

2019-04-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1422:
---

 Summary: Exception during DataNode shutdown
 Key: HDDS-1422
 URL: https://issues.apache.org/jira/browse/HDDS-1422
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The following exception during DN shutdown should be avoided, as it adds noise 
to the logs and is not a real issue.
{code}
2019-04-10 17:48:27,307 WARN  volume.VolumeSet 
(VolumeSet.java:getNodeReport(476)) - Failed to get scmUsed and remaining for 
container storage location 
/Users/agarwal/src/hadoop/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-f4d89966-146a-4690-8841-36af1993522f/datanode-17/data/containers
java.io.IOException: Volume Usage thread is not running. This error is usually 
seen during DataNode shutdown.
  at 
org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:119)
  at 
org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:472)
  at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:238)
  at 
org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:115)
  at 
org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}
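Since the usage thread is expected to be gone at shutdown, one low-noise
option, sketched here under that assumption and not necessarily how the patch
does it, is to treat the IOException as "report unavailable" instead of
logging a WARN with a full stack trace:
{code:java}
import java.io.IOException;

/** Sketch only; names and structure are assumptions, not the HDDS-1422 patch. */
final class VolumeUsageSketch {
  interface Volume {
    long getScmUsed() throws IOException; // throws once the usage thread stops
  }

  /** Returns used bytes, or -1 when usage can no longer be queried. */
  static long scmUsedOrUnavailable(Volume volume) {
    try {
      return volume.getScmUsed();
    } catch (IOException e) {
      // Expected during DataNode shutdown; skip the noisy WARN + stack trace.
      return -1;
    }
  }
}
{code}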



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12245) Fix INodeId javadoc

2019-04-10 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12245:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~adam.antal] for the contribution and 
[~jojochuang] for the report.

> Fix INodeId javadoc
> ---
>
> Key: HDFS-12245
> URL: https://issues.apache.org/jira/browse/HDFS-12245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, namenode
>Reporter: Wei-Chiu Chuang
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12245.001.patch, HDFS-12245.002.patch, 
> HDFS-12245.003.patch, HDFS-12245.004.patch
>
>
> The INodeId javadoc states that ids 1 to 1000 are reserved and the root inode 
> id starts from 1001. That is no longer true after HDFS-4434.
> Also, it's a little weird in INodeId
> {code}
>   public static final long LAST_RESERVED_ID = 2 << 14 - 1;
>   public static final long ROOT_INODE_ID = LAST_RESERVED_ID + 1;
> {code}
> It seems the intent was for LAST_RESERVED_ID to be (2 << 14) - 1 = 2^15 - 1 = 
> 32767. But due to Java operator precedence, the expression evaluates as 
> 2 << (14 - 1) = 2^14 = 16384. Maybe it doesn't matter, not sure.
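A two-line check confirms the precedence reading above:
{code:java}
public class ShiftPrecedence {
  public static void main(String[] args) {
    System.out.println(2 << 14 - 1);   // parsed as 2 << (14 - 1): prints 16384
    System.out.println((2 << 14) - 1); // the apparent intent: prints 32767
  }
}
{code}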



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12245) Fix INodeId javadoc

2019-04-10 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12245:
-
Component/s: documentation

> Fix INodeId javadoc
> ---
>
> Key: HDFS-12245
> URL: https://issues.apache.org/jira/browse/HDFS-12245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, namenode
>Reporter: Wei-Chiu Chuang
>Assignee: Adam Antal
>Priority: Major
> Attachments: HDFS-12245.001.patch, HDFS-12245.002.patch, 
> HDFS-12245.003.patch, HDFS-12245.004.patch
>
>
> The INodeId javadoc states that ids 1 to 1000 are reserved and the root inode 
> id starts from 1001. That is no longer true after HDFS-4434.
> Also, it's a little weird in INodeId
> {code}
>   public static final long LAST_RESERVED_ID = 2 << 14 - 1;
>   public static final long ROOT_INODE_ID = LAST_RESERVED_ID + 1;
> {code}
> It seems the intent was for LAST_RESERVED_ID to be (2 << 14) - 1 = 2^15 - 1 = 
> 32767. But due to Java operator precedence, the expression evaluates as 
> 2 << (14 - 1) = 2^14 = 16384. Maybe it doesn't matter, not sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12245) Fix INodeId javadoc

2019-04-10 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12245:
-
Summary: Fix INodeId javadoc  (was: Update INodeId javadoc)

> Fix INodeId javadoc
> ---
>
> Key: HDFS-12245
> URL: https://issues.apache.org/jira/browse/HDFS-12245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: Adam Antal
>Priority: Major
> Attachments: HDFS-12245.001.patch, HDFS-12245.002.patch, 
> HDFS-12245.003.patch, HDFS-12245.004.patch
>
>
> The INodeId javadoc states that ids 1 to 1000 are reserved and the root inode 
> id starts from 1001. That is no longer true after HDFS-4434.
> Also, it's a little weird in INodeId
> {code}
>   public static final long LAST_RESERVED_ID = 2 << 14 - 1;
>   public static final long ROOT_INODE_ID = LAST_RESERVED_ID + 1;
> {code}
> It seems the intent was for LAST_RESERVED_ID to be (2 << 14) - 1 = 2^15 - 1 = 
> 32767. But due to Java operator precedence, the expression evaluates as 
> 2 << (14 - 1) = 2^14 = 16384. Maybe it doesn't matter, not sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1376) Datanode exits while executing client command when scmId is null

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1376?focusedWorklogId=225897&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225897
 ]

ASF GitHub Bot logged work on HDDS-1376:


Author: ASF GitHub Bot
Created on: 11/Apr/19 01:01
Start Date: 11/Apr/19 01:01
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #724: HDDS-1376. 
Datanode exits while executing client command when scmId is null
URL: https://github.com/apache/hadoop/pull/724#discussion_r274226324
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
 ##
 @@ -106,7 +106,8 @@ public VersionEndpointTask(EndpointStateMachine 
rpcEndPoint,
   volumeSet.writeUnlock();
 }
 
-ozoneContainer.getDispatcher().setScmId(scmId);
+// Start the container services after getting the version information
+ozoneContainer.start(scmId);
 
 Review comment:
   How does the fix work? I don't understand this startup sequence well enough 
to figure it out.
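Reading the diff together with the issue description, the intent appears to be 
that container services start only once the SCM has supplied an id, so client 
commands can never observe a null scmId. A rough sketch of that invariant (all 
names here are assumptions, not the actual patch):
{code:java}
import java.util.Objects;

/** Illustrative ordering sketch; not the HDDS-1376 patch itself. */
final class ScmIdGuard {
  private volatile String scmId; // null until the SCM version response arrives

  void onVersionResponse(String idFromScm) {
    // Fail fast here rather than deep inside a client command path.
    this.scmId = Objects.requireNonNull(idFromScm, "scmId cannot be null");
    // ... ozoneContainer.start(scmId) would run only after this point ...
  }

  boolean readyForClientCommands() {
    return scmId != null;
  }
}
{code}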
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225897)
Time Spent: 20m  (was: 10m)

> Datanode exits while executing client command when scmId is null
> 
>
> Key: HDDS-1376
> URL: https://issues.apache.org/jira/browse/HDDS-1376
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ozone Datanode exits with the following error. This happens because the DN 
> hasn't yet received a scmID from the SCM after registration but is already 
> processing a client command.
> {code}
> 2019-04-03 17:02:10,958 ERROR storage.RaftLogWorker 
> (ExitUtils.java:terminate(133)) - Terminating with exit status 1: 
> df6b578e-8d35-44f5-9b21-db7184dcc54e-RaftLogWorker failed.
> java.io.IOException: java.lang.NullPointerException: scmId cannot be null
> at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
> at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
> at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:83)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$StateMachineDataPolicy.getFromFuture(RaftLogWorker.java:76)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:354)
> at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: scmId cannot be null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:110)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:165)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:350)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:224)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:149)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$0(ContainerStateMachine.java:385)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1590)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> 

[jira] [Updated] (HDDS-1376) Datanode exits while executing client command when scmId is null

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1376:

Status: Patch Available  (was: Open)

> Datanode exits while executing client command when scmId is null
> 
>
> Key: HDDS-1376
> URL: https://issues.apache.org/jira/browse/HDDS-1376
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone Datanode exits with the following error. This happens because the DN 
> hasn't yet received a scmID from the SCM after registration but is already 
> processing a client command.
> {code}
> 2019-04-03 17:02:10,958 ERROR storage.RaftLogWorker 
> (ExitUtils.java:terminate(133)) - Terminating with exit status 1: 
> df6b578e-8d35-44f5-9b21-db7184dcc54e-RaftLogWorker failed.
> java.io.IOException: java.lang.NullPointerException: scmId cannot be null
> at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
> at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
> at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:83)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$StateMachineDataPolicy.getFromFuture(RaftLogWorker.java:76)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:354)
> at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: scmId cannot be null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:110)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:165)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:350)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:224)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:149)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$0(ContainerStateMachine.java:385)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1590)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14406) Add per user RPC Processing time

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814977#comment-16814977
 ] 

Hadoop QA commented on HDFS-14406:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 9 new + 374 unchanged - 0 fixed = 383 total (was 374) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 42s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14406 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965530/HDFS-14406.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5dc274ffae59 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8740755 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26610/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26610/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26610/testReport/ |
| Max. 

[jira] [Updated] (HDDS-1376) Datanode exits while executing client command when scmId is null

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1376:
-
Labels: MiniOzoneChaosCluster pull-request-available  (was: 
MiniOzoneChaosCluster)

> Datanode exits while executing client command when scmId is null
> 
>
> Key: HDDS-1376
> URL: https://issues.apache.org/jira/browse/HDDS-1376
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>
> Ozone Datanode exits with the following error. This happens because the DN 
> hasn't yet received a scmID from the SCM after registration but is already 
> processing a client command.
> {code}
> 2019-04-03 17:02:10,958 ERROR storage.RaftLogWorker 
> (ExitUtils.java:terminate(133)) - Terminating with exit status 1: 
> df6b578e-8d35-44f5-9b21-db7184dcc54e-RaftLogWorker failed.
> java.io.IOException: java.lang.NullPointerException: scmId cannot be null
> at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
> at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
> at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:83)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$StateMachineDataPolicy.getFromFuture(RaftLogWorker.java:76)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:354)
> at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: scmId cannot be null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:110)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:165)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:350)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:224)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:149)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$0(ContainerStateMachine.java:385)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1590)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1376) Datanode exits while executing client command when scmId is null

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1376?focusedWorklogId=225895&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225895
 ]

ASF GitHub Bot logged work on HDDS-1376:


Author: ASF GitHub Bot
Created on: 11/Apr/19 00:53
Start Date: 11/Apr/19 00:53
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #724: 
HDDS-1376. Datanode exits while executing client command when scmId is null
URL: https://github.com/apache/hadoop/pull/724
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225895)
Time Spent: 10m
Remaining Estimate: 0h

> Datanode exits while executing client command when scmId is null
> 
>
> Key: HDDS-1376
> URL: https://issues.apache.org/jira/browse/HDDS-1376
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone Datanode exits with the following error. This happens because the DN 
> hasn't yet received a scmID from the SCM after registration but is already 
> processing a client command.
> {code}
> 2019-04-03 17:02:10,958 ERROR storage.RaftLogWorker 
> (ExitUtils.java:terminate(133)) - Terminating with exit status 1: 
> df6b578e-8d35-44f5-9b21-db7184dcc54e-RaftLogWorker failed.
> java.io.IOException: java.lang.NullPointerException: scmId cannot be null
> at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
> at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
> at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:83)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$StateMachineDataPolicy.getFromFuture(RaftLogWorker.java:76)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:354)
> at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: scmId cannot be null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:110)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:165)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:350)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:224)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:149)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$0(ContainerStateMachine.java:385)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1590)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-10 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814975#comment-16814975
 ] 

star commented on HDFS-13596:
-

Sorry for my mistake. Indeed, the length field is not behaving the way I 
expected. The loader just skips the 4 bytes of checksum:
{quote}IOUtils.skipFully(in, 4 + 8); // skip length and txid
op.readFields(in, logVersion);
// skip over the checksum, which we validated above.
IOUtils.skipFully(in, CHECKSUM_LENGTH);{quote}
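For context, the snippet implies each serialized edit-log op is framed as a 
4-byte length field, an 8-byte txid, the op body, and a 4-byte checksum. A 
small sketch of that skip arithmetic (class and constant names are mine, not 
HDFS's):
{code:java}
import java.io.DataInputStream;
import java.io.IOException;

/** Framing sketch based on the quoted loader code; illustrative only. */
final class EditOpFraming {
  static final int LENGTH_BYTES = 4;    // op length field, skipped
  static final int TXID_BYTES = 8;      // transaction id, skipped
  static final int CHECKSUM_BYTES = 4;  // trailing checksum, validated earlier

  static void skipHeaderAndChecksum(DataInputStream in) throws IOException {
    in.skipBytes(LENGTH_BYTES + TXID_BYTES); // IOUtils.skipFully(in, 4 + 8)
    // ... op.readFields(in, logVersion) parses the op body here ...
    in.skipBytes(CHECKSUM_BYTES);            // IOUtils.skipFully(in, CHECKSUM_LENGTH)
  }
}
{code}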
 

 

 

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> 

[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-04-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814961#comment-16814961
 ] 

Anu Engineer commented on HDFS-14234:
-

[~clayb] the patch looks quite good; here are some very minor comments.

1. DatanodeHTTPserver.java: there are some minor checkstyle fixes needed.
2. Are the changes in hadoop-env.sh accidental?
3. For production purposes, should we remove the log4j.properties settings for 
web handlers?
4. I am not sure if this is possible in real life, but from the test case, it 
is possible to trigger a NullPointerException:
{code:java}
httpRequest =
    new DefaultFullHttpRequest(HttpVersion.HTTP_1_1,
        HttpMethod.GET,
        WebHdfsFileSystem.PATH_PREFIX + "/user/myName/fooFile");
{code}
If we send a request without a query portion, it looks like 
{{HostRestrictingAuthorizationFilter.handleInteraction}} will throw a 
java.lang.NullPointerException.
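A minimal sketch of the guard this suggests, assuming the filter can treat a 
missing query string as empty (names here are illustrative, not the patch's 
actual code):
{code:java}
import java.net.URI;
import java.net.URISyntaxException;

/** Illustrative null-query guard; not the HDFS-14234 patch itself. */
final class QueryGuard {
  /** Returns the query portion of a request URI, or "" when absent. */
  static String queryOrEmpty(String requestUri) throws URISyntaxException {
    String q = new URI(requestUri).getQuery(); // null when there is no '?'
    return q == null ? "" : q;
  }

  public static void main(String[] args) throws URISyntaxException {
    // The GET from the test case above has no query portion:
    System.out.println(queryOrEmpty("/user/myName/fooFile").isEmpty()); // true
  }
}
{code}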

> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> lets HDFS act as a dropbox for data - data goes in but cannot be pulled back 
> out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could prevent the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> [{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File].
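As a rough illustration of the "dropbox" idea above (purely hypothetical, not 
the patch's actual rule syntax), a deny-reads/allow-everything-else check 
might look like:
{code:java}
import java.util.Set;

/** Hypothetical illustration of the dropbox policy; not the patch's rule format. */
final class DropboxPolicy {
  // Operations that hand file data back to the client.
  private static final Set<String> READ_OPS = Set.of("OPEN");

  /** Restricted zones may upload and checksum, but never read data out. */
  static boolean permitted(String op, boolean restrictedZone) {
    return !restrictedZone || !READ_OPS.contains(op);
  }

  public static void main(String[] args) {
    System.out.println(permitted("CREATE", true));          // true: uploads allowed
    System.out.println(permitted("GETFILECHECKSUM", true)); // true: checksums allowed
    System.out.println(permitted("OPEN", true));            // false: no data out
  }
}
{code}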



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814957#comment-16814957
 ] 

Hudson commented on HDDS-1417:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16380 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16380/])
HDDS-1417. After successfully importing a container, datanode should (bharat: 
rev e9c4109004ca378806f167ed580b57fd0d191fd4)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/DownloadAndImportReplicator.java


> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from the source datanode to the destination datanode. On 
> the destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete, we import the container. After importing 
> the container we no longer need the tar file in the working directory of the 
> destination datanode, so it has to be deleted.
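A rough sketch of the cleanup this calls for, assuming the import happens in 
one place that can own the working-directory file (illustrative only; the 
committed DownloadAndImportReplicator change may be structured differently):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative cleanup sketch; not the committed HDDS-1417 change. */
final class ImportCleanupSketch {
  static void importAndCleanUp(Path tarball, Runnable importContainer)
      throws IOException {
    try {
      importContainer.run(); // import the downloaded container data
    } finally {
      // The tar.gz in the working directory is not needed afterwards.
      Files.deleteIfExists(tarball);
    }
  }
}
{code}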



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1420) Tracing exception in DataNode HddsDispatcher

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1420?focusedWorklogId=225890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225890
 ]

ASF GitHub Bot logged work on HDDS-1420:


Author: ASF GitHub Bot
Created on: 11/Apr/19 00:08
Start Date: 11/Apr/19 00:08
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #723: HDDS-1420. Tracing 
exception in DataNode HddsDispatcher. Contributed …
URL: https://github.com/apache/hadoop/pull/723
 
 
   …by Arpit Agarwal.
   
   Change-Id: I73394296b7de3b31b4d136fd5ed020a82bc063aa
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225890)
Time Spent: 10m
Remaining Estimate: 0h

> Tracing exception in DataNode HddsDispatcher
> 
>
> Key: HDDS-1420
> URL: https://issues.apache.org/jira/browse/HDDS-1420
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following exception is seen in some unit tests:
> {code}
> 2019-04-10 13:00:27,537 WARN  
> internal.PropagationRegistry$ExceptionCatchingExtractorDecorator 
> (PropagationRegistry.java:extract(60)) - Error when extracting SpanContext 
> from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 90041ce6-81f3-4733-8e2b-6aceaa697b77
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:98)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$5(ContainerStateMachine.java:613)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1420) Tracing exception in DataNode HddsDispatcher

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1420:
-
Labels: pull-request-available  (was: )

> Tracing exception in DataNode HddsDispatcher
> 
>
> Key: HDDS-1420
> URL: https://issues.apache.org/jira/browse/HDDS-1420
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>
> The following exception is seen in some unit tests:
> {code}
> 2019-04-10 13:00:27,537 WARN  
> internal.PropagationRegistry$ExceptionCatchingExtractorDecorator 
> (PropagationRegistry.java:extract(60)) - Error when extracting SpanContext 
> from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 90041ce6-81f3-4733-8e2b-6aceaa697b77
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:98)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$5(ContainerStateMachine.java:613)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1420) Tracing exception in DataNode HddsDispatcher

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1420:
---

Assignee: Arpit Agarwal

> Tracing exception in DataNode HddsDispatcher
> 
>
> Key: HDDS-1420
> URL: https://issues.apache.org/jira/browse/HDDS-1420
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> The following exception is seen in some unit tests:
> {code}
> 2019-04-10 13:00:27,537 WARN  
> internal.PropagationRegistry$ExceptionCatchingExtractorDecorator 
> (PropagationRegistry.java:extract(60)) - Error when extracting SpanContext 
> from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 90041ce6-81f3-4733-8e2b-6aceaa697b77
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:98)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$5(ContainerStateMachine.java:613)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-04-10 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13972:
---
Attachment: HDFS-13972-HDFS-13891.012.patch

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch, 
> HDFS-13972-HDFS-13891.002.patch, HDFS-13972-HDFS-13891.003.patch, 
> HDFS-13972-HDFS-13891.004.patch, HDFS-13972-HDFS-13891.005.patch, 
> HDFS-13972-HDFS-13891.006.patch, HDFS-13972-HDFS-13891.007.patch, 
> HDFS-13972-HDFS-13891.008.patch, HDFS-13972-HDFS-13891.009.patch, 
> HDFS-13972-HDFS-13891.010.patch, HDFS-13972-HDFS-13891.011.patch, 
> HDFS-13972-HDFS-13891.012.patch, TestRouterWebHDFSContractTokens.java
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1417:
-
Fix Version/s: 0.5.0
   0.4.0

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from the source datanode to the destination datanode. On 
> the destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete, we import the container. After importing 
> the container we no longer need the tar file in the working directory of the 
> destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814946#comment-16814946
 ] 

Bharat Viswanadham edited comment on HDDS-1417 at 4/10/19 11:58 PM:


Thank You [~nandakumar131] for fixing this issue and [~arpitagarwal] for the 
review.

I have committed this to trunk and ozone-0.4.


was (Author: bharatviswa):
Thank You [~nandakumar131] for fixing this issue and [~arpitagarwal] for the 
review.

I have committed this to trunk.

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data into a tar.gz file. This tar file is 
> then copied from the source datanode to the destination datanode, where it is 
> placed in a temporary working directory. Once the copy is complete, we import 
> the container. After importing the container, the tar file in the destination 
> datanode's working directory is no longer needed and has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1417:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank You [~nandakumar131] for fixing this issue and [~arpitagarwal] for the 
review.

I have committed this to trunk.

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data into a tar.gz file. This tar file is 
> then copied from the source datanode to the destination datanode, where it is 
> placed in a temporary working directory. Once the copy is complete, we import 
> the container. After importing the container, the tar file in the destination 
> datanode's working directory is no longer needed and has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?focusedWorklogId=225887&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225887
 ]

ASF GitHub Bot logged work on HDDS-1417:


Author: ASF GitHub Bot
Created on: 10/Apr/19 23:55
Start Date: 10/Apr/19 23:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #721: 
HDDS-1417. After successfully importing a container, datanode should delete the 
container tar.gz file from working directory.
URL: https://github.com/apache/hadoop/pull/721
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225887)
Time Spent: 50m  (was: 40m)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data into a tar.gz file. This tar file is 
> then copied from the source datanode to the destination datanode, where it is 
> placed in a temporary working directory. Once the copy is complete, we import 
> the container. After importing the container, the tar file in the destination 
> datanode's working directory is no longer needed and has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?focusedWorklogId=225886&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225886
 ]

ASF GitHub Bot logged work on HDDS-1417:


Author: ASF GitHub Bot
Created on: 10/Apr/19 23:55
Start Date: 10/Apr/19 23:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #721: HDDS-1417. 
After successfully importing a container, datanode should delete the container 
tar.gz file from working directory.
URL: https://github.com/apache/hadoop/pull/721#issuecomment-481914363
 
 
   Test failures are not related to this patch.
   I will commit this shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225886)
Time Spent: 40m  (was: 0.5h)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data into a tar.gz file. This tar file is 
> then copied from the source datanode to the destination datanode, where it is 
> placed in a temporary working directory. Once the copy is complete, we import 
> the container. After importing the container, the tar file in the destination 
> datanode's working directory is no longer needed and has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1421) Avoid unnecessary object allocations in TracingUtil

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1421:

Status: Patch Available  (was: Open)

> Avoid unnecessary object allocations in TracingUtil
> ---
>
> Key: HDDS-1421
> URL: https://issues.apache.org/jira/browse/HDDS-1421
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Avoid unnecessary object allocations in TracingUtil#exportCurrentSpan and 
> #exportSpan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1421) Avoid unnecessary object allocations in TracingUtil

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1421:
-
Labels: pull-request-available  (was: )

> Avoid unnecessary object allocations in TracingUtil
> ---
>
> Key: HDDS-1421
> URL: https://issues.apache.org/jira/browse/HDDS-1421
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>
> Avoid unnecessary object allocations in TracingUtil#exportCurrentSpan and 
> #exportSpan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1421) Avoid unnecessary object allocations in TracingUtil

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1421?focusedWorklogId=225885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225885
 ]

ASF GitHub Bot logged work on HDDS-1421:


Author: ASF GitHub Bot
Created on: 10/Apr/19 23:36
Start Date: 10/Apr/19 23:36
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #722: HDDS-1421. Avoid 
unnecessary object allocations in TracingUtil. Contr…
URL: https://github.com/apache/hadoop/pull/722
 
 
   …ibuted by Arpit Agarwal.
   
   Change-Id: I3fd1b59447005cc62ae217c4def86a9df81944b1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225885)
Time Spent: 10m
Remaining Estimate: 0h

> Avoid unnecessary object allocations in TracingUtil
> ---
>
> Key: HDDS-1421
> URL: https://issues.apache.org/jira/browse/HDDS-1421
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: tracing
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Avoid unnecessary object allocations in TracingUtil#exportCurrentSpan and 
> #exportSpan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1421) Avoid unnecessary object allocations in TracingUtil

2019-04-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1421:
---

 Summary: Avoid unnecessary object allocations in TracingUtil
 Key: HDDS-1421
 URL: https://issues.apache.org/jira/browse/HDDS-1421
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: tracing
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Avoid unnecessary object allocations in TracingUtil#exportCurrentSpan and 
#exportSpan.
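
As an illustration of the kind of change this implies (an assumption about the fix, not the committed patch), the export path can bail out before allocating anything when no span is active:

{code:java}
// Illustration only (assumed shape, not the actual patch): skip the
// allocation-heavy export path entirely when there is no active span,
// returning a shared constant instead of building a new carrier per call.
import io.opentracing.Span;
import io.opentracing.util.GlobalTracer;

public final class TracingUtilSketch {
  private static final String NO_SPAN = "";

  public static String exportCurrentSpan() {
    Span active = GlobalTracer.get().activeSpan();
    if (active == null) {
      return NO_SPAN;          // no builder or carrier object allocated
    }
    return exportSpan(active); // only allocate when there is work to do
  }

  private static String exportSpan(Span span) {
    // Real code would inject the span context into a text-map carrier here.
    return span.context().toString();
  }
}
{code}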



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14406) Add per user RPC Processing time

2019-04-10 Thread Xue Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814938#comment-16814938
 ] 

Xue Liu commented on HDFS-14406:



Thanks for the comments and suggestions! A new patch is available; it makes the 
per-user RPC metrics optional.

I agree that we are currently bottlenecked on lock waiting time, which would 
have an unfair effect on per-user RPC processing time; in that case the 
differences in RPC processing time may not be that big. However, I like to 
think of this metric as a general indicator that we can probably expose to 
users. It might also be good to keep it generic enough that other IPC users, 
such as YARN, can use it.

[~elgoiri] [~daryn] Thanks for the suggestions; I have fixed the checkstyle 
issues and made the metrics optional.
[~xkrogen], the work on cost-based FCQ looks interesting and is something we 
also want in our prod cluster in the future. From your perspective, would more 
detailed metrics, say per-user, per-method RPC processing time, be useful?
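
A rough sketch of what such optional per-user aggregation could look like (illustrative names only; a real implementation would plug into Hadoop's metrics2/JMX machinery, and this is not the attached patch):

{code:java}
// Sketch of per-user RPC processing time aggregation, kept optional via a
// flag. Names are illustrative assumptions, not the code in the patch.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public final class PerUserRpcMetricsSketch {
  private final boolean enabled;                       // keep the feature optional
  private final Map<String, LongAdder> timePerUser = new ConcurrentHashMap<>();
  private final Map<String, LongAdder> callsPerUser = new ConcurrentHashMap<>();

  public PerUserRpcMetricsSketch(boolean enabled) {
    this.enabled = enabled;
  }

  /** Called once per completed RPC with the caller's user name. */
  public void addProcessingTime(String user, long elapsedNanos) {
    if (!enabled) {
      return;
    }
    timePerUser.computeIfAbsent(user, u -> new LongAdder()).add(elapsedNanos);
    callsPerUser.computeIfAbsent(user, u -> new LongAdder()).increment();
  }

  /** Average processing time in nanoseconds, as one might expose via JMX. */
  public long avgProcessingTimeNanos(String user) {
    long calls = callsPerUser.getOrDefault(user, new LongAdder()).sum();
    return calls == 0 ? 0 : timePerUser.get(user).sum() / calls;
  }
}
{code}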

> Add per user RPC Processing time
> 
>
> Key: HDFS-14406
> URL: https://issues.apache.org/jira/browse/HDFS-14406
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Xue Liu
>Assignee: Xue Liu
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-14406.001.patch, HDFS-14406.002.patch
>
>
> For a shared cluster, we would want to separate users' resources, as well as 
> have our metrics reflect the usage, latency, etc., for each user. 
> This JIRA aims to add per-user RPC processing time metrics and expose them via 
> JMX.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1420) Tracing exception in DataNode HddsDispatcher

2019-04-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1420:
---

 Summary: Tracing exception in DataNode HddsDispatcher
 Key: HDDS-1420
 URL: https://issues.apache.org/jira/browse/HDDS-1420
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: tracing, Ozone Datanode
Reporter: Arpit Agarwal


The following exception is seen in some unit tests:
{code}
2019-04-10 13:00:27,537 WARN  
internal.PropagationRegistry$ExceptionCatchingExtractorDecorator 
(PropagationRegistry.java:extract(60)) - Error when extracting SpanContext from 
carrier. Handling gracefully.
io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
String does not match tracer state format: 90041ce6-81f3-4733-8e2b-6aceaa697b77
at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
at 
io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
at 
org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:98)
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$applyTransaction$5(ContainerStateMachine.java:613)
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
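
One defensive direction (an assumption, not necessarily the eventual fix) is to validate the carrier value against Jaeger's {{traceId:spanId:parentId:flags}} wire format and fall back to starting a new root span when it does not match, e.g. for a bare UUID like the one above:

{code:java}
// Sketch only: treat a carrier value that does not look like Jaeger's
// "traceId:spanId:parentId:flags" state as "no parent span" instead of
// letting the codec throw on arbitrary strings.
import java.util.regex.Pattern;

public final class TraceStateGuardSketch {
  // Four colon-separated hex fields, per the Jaeger wire format.
  private static final Pattern JAEGER_STATE =
      Pattern.compile("[0-9a-fA-F]+:[0-9a-fA-F]+:[0-9a-fA-F]+:[0-9a-fA-F]+");

  /** Returns the value if it is parseable as tracer state, otherwise null. */
  public static String sanitize(String carrierValue) {
    if (carrierValue == null || !JAEGER_STATE.matcher(carrierValue).matches()) {
      return null; // caller starts a new root span instead of failing
    }
    return carrierValue;
  }

  public static void main(String[] args) {
    System.out.println(sanitize("90041ce6-81f3-4733-8e2b-6aceaa697b77")); // null
    System.out.println(sanitize("7b:1c:0:1")); // valid-looking state
  }
}
{code}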



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14406) Add per user RPC Processing time

2019-04-10 Thread Xue Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xue Liu updated HDFS-14406:
---
Attachment: HDFS-14406.002.patch

> Add per user RPC Processing time
> 
>
> Key: HDFS-14406
> URL: https://issues.apache.org/jira/browse/HDFS-14406
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Xue Liu
>Assignee: Xue Liu
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-14406.001.patch, HDFS-14406.002.patch
>
>
> For a shared cluster, we would want to separate users' resources, as well as 
> have our metrics reflect the usage, latency, etc., for each user. 
> This JIRA aims to add per-user RPC processing time metrics and expose them via 
> JMX.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14403) Cost-Based RPC FairCallQueue

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814918#comment-16814918
 ] 

Hadoop QA commented on HDFS-14403:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 44s{color} | {color:orange} root: The patch generated 2 new + 971 unchanged 
- 1 fixed = 973 total (was 972) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}245m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965501/HDFS-14403.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux a2753aca4524 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool 

[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-04-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814911#comment-16814911
 ] 

Anu Engineer commented on HDFS-14234:
-

Hi [~clayb],

I have looked at the patch, and I think it would be good to be able to load the 
RestCsrfPreventionFilterHandler to preserve the existing behavior. I will add 
my review comments in a while. 

Thanks

Anu

 

> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> enables using HDFS as a dropbox for data: data goes in but cannot be pulled 
> back out. (Motivation further presented in [StrangeLoop 2018 Of Data 
> Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could prevent the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> [{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2019-04-10 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10477:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

There were conflicts when I attempted to backport this to branch-2.8. I'll 
resolve this Jira for now, but feel free to reopen it to contribute a 
branch-2.8 patch.

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.006.patch, 
> HDFS-10477.007.patch, HDFS-10477.branch-2.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack that has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> 

[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2019-04-10 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814887#comment-16814887
 ] 

Wei-Chiu Chuang commented on HDFS-10477:


Pushed up to branch-2 and branch-2.9. 
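
For the lock-hold problem described in the quoted report below, the usual remedy is to stop holding the namesystem write lock across a whole rack and instead take and release it per node (or per batch of blocks). A hedged sketch of that pattern, with illustrative names and not the committed patch:

{code:java}
// Sketch of the general remedy for long lock hold times: release and
// reacquire the (stand-in) namesystem write lock between datanodes so
// other RPCs can make progress while over-replicated blocks are
// invalidated. Illustrative only; not the HDFS-10477 patch itself.
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public final class StopDecommissionSketch {
  private final ReentrantReadWriteLock namesystemLock =
      new ReentrantReadWriteLock();

  public void stopDecommission(List<String> datanodes,
      BlockInvalidator invalidator) {
    for (String dn : datanodes) {
      namesystemLock.writeLock().lock();   // lock per node, not per rack
      try {
        invalidator.invalidateOverReplicated(dn);
      } finally {
        namesystemLock.writeLock().unlock(); // other ops can run between nodes
      }
    }
  }

  /** Illustrative stand-in for BlockManager's invalidation logic. */
  interface BlockInvalidator {
    void invalidateOverReplicated(String datanode);
  }
}
{code}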

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.006.patch, 
> HDFS-10477.007.patch, HDFS-10477.branch-2.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack that has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 

[jira] [Updated] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2019-04-10 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10477:
---
Fix Version/s: 2.9.3
   2.10.0

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.006.patch, 
> HDFS-10477.007.patch, HDFS-10477.branch-2.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack that has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated 

[jira] [Commented] (HDFS-8409) HDFS client RPC call throws "java.lang.IllegalStateException"

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814873#comment-16814873
 ] 

Hadoop QA commented on HDFS-8409:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  1s{color} | {color:orange} root: The patch generated 2 new + 139 unchanged 
- 0 fixed = 141 total (was 139) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-8409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12735456/HDFS-8409.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 48e13cf7b9c5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e770a6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| 

[jira] [Work logged] (HDDS-1371) Download RocksDB checkpoint from OM Leader to Follower

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1371?focusedWorklogId=225841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225841
 ]

ASF GitHub Bot logged work on HDDS-1371:


Author: ASF GitHub Bot
Created on: 10/Apr/19 20:49
Start Date: 10/Apr/19 20:49
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #703: HDDS-1371. 
Download RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r274153802
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
 ##
 @@ -361,4 +373,149 @@ private static void addFilesToArchive(String source, 
File file,
 }
   }
 
+  /**
+   * If a OM conf is only set with key suffixed with OM Node ID, return the
+   * set value.
+   * @return null if base conf key is set, otherwise the value set for
+   * key suffixed with Node ID.
+   */
+  public static String getConfSuffixedWithOMNodeId(Configuration conf,
+  String confKey, String omNodeId) {
+String confValue = conf.getTrimmed(confKey);
+if (StringUtils.isNotEmpty(confValue)) {
+  return null;
+}
+String suffixedConfKey = OmUtils.addKeySuffixes(
+confKey, omNodeId);
+confValue = conf.getTrimmed(suffixedConfKey);
+if (StringUtils.isNotEmpty(confValue)) {
+  return confValue;
+}
+return null;
+  }
+
+  public static String getHttpAddressForOMPeerNode(Configuration conf,
 
 Review comment:
   Can you add a javadoc here that describes the format of the returned 
address? Looks like it is just address:port.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225841)
Time Spent: 1h 40m  (was: 1.5h)

> Download RocksDB checkpoint from OM Leader to Follower
> --
>
> Key: HDDS-1371
> URL: https://issues.apache.org/jira/browse/HDDS-1371
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If a follower OM is lagging far behind the leader OM, or after a restart or 
> bootstrapping, the follower OM might need a RocksDB checkpoint from the 
> leader to catch up, because the leader might have purged its logs after 
> taking a snapshot.
>  This Jira aims to add support for downloading a RocksDB checkpoint from the 
> leader OM to a follower OM through an HTTP servlet. We reuse the DBCheckpoint 
> servlet used by the Recon server.
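
A minimal sketch of the follower side of such a transfer, assuming a hypothetical {{/dbCheckpoint}} endpoint on the leader's HTTP server (the shape of the mechanism, not the actual patch):

{code:java}
// Sketch only: stream the RocksDB checkpoint tarball from the leader over
// HTTP to local disk; the follower can then unpack it and re-open its DB
// from the checkpoint. Endpoint name and plain-HTTP URL are assumptions.
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class CheckpointDownloadSketch {

  public static Path download(String leaderHttpAddress, Path targetTarball)
      throws IOException {
    URL url = new URL("http://" + leaderHttpAddress + "/dbCheckpoint");
    try (InputStream in = url.openStream()) {
      Files.copy(in, targetTarball, StandardCopyOption.REPLACE_EXISTING);
    }
    return targetTarball;
  }
}
{code}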



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1371) Download RocksDB checkpoint from OM Leader to Follower

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1371?focusedWorklogId=225839&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225839
 ]

ASF GitHub Bot logged work on HDDS-1371:


Author: ASF GitHub Bot
Created on: 10/Apr/19 20:45
Start Date: 10/Apr/19 20:45
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #703: HDDS-1371. 
Download RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r274151625
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
 ##
 @@ -108,4 +129,19 @@ public int getRpcPort() {
   public String getRpcAddressString() {
 return NetUtils.getHostPortString(rpcAddress);
   }
+
+  public String getOMDBCheckpointEnpointUrl(boolean isSecurityEnabled) {
 
 Review comment:
   Should use getHttpPolicy here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225839)
Time Spent: 1.5h  (was: 1h 20m)

> Download RocksDB checkpoint from OM Leader to Follower
> --
>
> Key: HDDS-1371
> URL: https://issues.apache.org/jira/browse/HDDS-1371
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If a follower OM is lagging far behind the leader OM, or after a restart or 
> bootstrapping, the follower OM might need a RocksDB checkpoint from the 
> leader to catch up, because the leader might have purged its logs after 
> taking a snapshot.
>  This Jira aims to add support for downloading a RocksDB checkpoint from the 
> leader OM to a follower OM through an HTTP servlet. We reuse the DBCheckpoint 
> servlet used by the Recon server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1371) Download RocksDB checkpoint from OM Leader to Follower

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1371:

Status: Patch Available  (was: Open)

> Download RocksDB checkpoint from OM Leader to Follower
> --
>
> Key: HDDS-1371
> URL: https://issues.apache.org/jira/browse/HDDS-1371
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> If a follower OM is lagging far behind the leader OM, or after a restart or 
> bootstrapping, the follower OM might need a RocksDB checkpoint from the 
> leader to catch up, because the leader might have purged its logs after 
> taking a snapshot.
>  This Jira aims to add support for downloading a RocksDB checkpoint from the 
> leader OM to a follower OM through an HTTP servlet. We reuse the DBCheckpoint 
> servlet used by the Recon server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?focusedWorklogId=225825=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225825
 ]

ASF GitHub Bot logged work on HDDS-1417:


Author: ASF GitHub Bot
Created on: 10/Apr/19 20:17
Start Date: 10/Apr/19 20:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #721: HDDS-1417. After 
successfully importing a container, datanode should delete the container tar.gz 
file from working directory.
URL: https://github.com/apache/hadoop/pull/721#issuecomment-481845801
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1007 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 16 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 649 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 750 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 53 | the patch passed |
   | +1 | javadoc | 27 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 67 | container-service in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 2956 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/721 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 9d9b42961e3b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 813cee1 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/2/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/2/testReport/ |
   | Max. process+thread count | 465 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225825)
Time Spent: 0.5h  (was: 20m)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a 

[jira] [Assigned] (HDDS-1376) Datanode exits while executing client command when scmId is null

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1376:
---

Assignee: Hanisha Koneru

> Datanode exits while executing client command when scmId is null
> 
>
> Key: HDDS-1376
> URL: https://issues.apache.org/jira/browse/HDDS-1376
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> The Ozone Datanode exits with the following error. This happens because the 
> DN has not yet received a scmId from the SCM after registration but is 
> already processing a client command.
> {code}
> 2019-04-03 17:02:10,958 ERROR storage.RaftLogWorker 
> (ExitUtils.java:terminate(133)) - Terminating with exit status 1: 
> df6b578e-8d35-44f5-9b21-db7184dcc54e-RaftLogWorker failed.
> java.io.IOException: java.lang.NullPointerException: scmId cannot be null
> at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
> at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
> at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:83)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$StateMachineDataPolicy.getFromFuture(RaftLogWorker.java:76)
> at 
> org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:354)
> at 
> org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:219)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: scmId cannot be null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:110)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:165)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:350)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:224)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:149)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:347)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:354)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$0(ContainerStateMachine.java:385)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1590)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> {code}
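
For illustration, a minimal sketch of the kind of guard that could prevent this 
crash: reject create-container commands until the scmId is known, so the failure 
stays in the request path instead of terminating the RaftLogWorker thread. All 
class and method names below are hypothetical, not taken from the actual Ozone code.

{code:java}
// Hypothetical sketch: fail the request gracefully instead of letting a
// NullPointerException ("scmId cannot be null") propagate into RaftLogWorker.
public final class CreateContainerGuard {

  /** Result of attempting to handle a create-container command. */
  enum Result { SUCCESS, RETRY_LATER }

  private volatile String scmId; // set once SCM registration completes

  void onScmRegistration(String scmIdFromScm) {
    this.scmId = scmIdFromScm;
  }

  Result handleCreateContainer(long containerId) {
    String id = scmId;
    if (id == null) {
      // DN has not completed registration with SCM yet; return a retriable
      // error rather than crashing the Raft log worker thread.
      return Result.RETRY_LATER;
    }
    createContainerOnDisk(containerId, id);
    return Result.SUCCESS;
  }

  private void createContainerOnDisk(long containerId, String scmId) {
    // ... KeyValueContainer.create(...) would be invoked here ...
  }
}
{code}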



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1374) ContainerStateMap cannot find container while allocating blocks.

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1374:
---

Assignee: Bharat Viswanadham

> ContainerStateMap cannot find container while allocating blocks.
> 
>
> Key: HDDS-1374
> URL: https://issues.apache.org/jira/browse/HDDS-1374
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> ContainerStateMap cannot find container while allocating blocks.
> {code}
> org.apache.hadoop.hdds.scm.container.ContainerNotFoundException: #14
> at 
> org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.checkIfContainerExist(ContainerStateMap.java:542)
> at 
> org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.getContainerInfo(ContainerStateMap.java:189)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerStateManager.getContainer(ContainerStateManager.java:483)
> at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.getContainer(SCMContainerManager.java:195)
> at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.getContainersForOwner(SCMContainerManager.java:466)
> at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.getMatchingContainer(SCMContainerManager.java:387)
> at 
> org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:201)
> at 
> org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:172)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:82)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:7533)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?focusedWorklogId=225812&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225812
 ]

ASF GitHub Bot logged work on HDDS-1417:


Author: ASF GitHub Bot
Created on: 10/Apr/19 19:59
Start Date: 10/Apr/19 19:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #721: HDDS-1417. After 
successfully importing a container, datanode should delete the container tar.gz 
file from working directory.
URL: https://github.com/apache/hadoop/pull/721#issuecomment-481840050
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1050 | trunk passed |
   | +1 | compile | 44 | trunk passed |
   | +1 | checkstyle | 18 | trunk passed |
   | +1 | mvnsite | 32 | trunk passed |
   | +1 | shadedclient | 653 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 49 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | checkstyle | 14 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 51 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 66 | container-service in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 2920 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/721 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 45099fddb40f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 813cee1 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/1/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-721/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225812)
Time Spent: 20m  (was: 10m)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file.

[jira] [Commented] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814832#comment-16814832
 ] 

Hudson commented on HDDS-1418:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16378 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16378/])
HDDS-1418. Move bang line to the start of the start-chaos.sh script. (github: 
rev feaab241e530fef9aaf8adab0dadc510fcb8701a)
* (edit) hadoop-ozone/integration-test/src/test/bin/start-chaos.sh


> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The start-chaos.sh script has a bang line but it is not the first line in the 
> script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1419) Fix shellcheck errors in start-chaos.sh

2019-04-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1419:
---

 Summary: Fix shellcheck errors in start-chaos.sh
 Key: HDDS-1419
 URL: https://issues.apache.org/jira/browse/HDDS-1419
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Arpit Agarwal


Fix the following shellcheck errors in start-chaos.sh:
{code}
hadoop-ozone/integration-test/src/test/bin/start-chaos.sh:18:6: note: Use $(..) 
instead of legacy `..`. [SC2006]
hadoop-ozone/integration-test/src/test/bin/start-chaos.sh:27:19: note: Double 
quote to prevent globbing and word splitting. [SC2086]
hadoop-ozone/integration-test/src/test/bin/start-chaos.sh:28:20: note: Double 
quote to prevent globbing and word splitting. [SC2086]
hadoop-ozone/integration-test/src/test/bin/start-chaos.sh:31:33: note: Double 
quote to prevent globbing and word splitting. [SC2086]
hadoop-ozone/integration-test/src/test/bin/start-chaos.sh:35:23: note: Double 
quote to prevent globbing and word splitting. [SC2086]
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1418?focusedWorklogId=225811&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225811
 ]

ASF GitHub Bot logged work on HDDS-1418:


Author: ASF GitHub Bot
Created on: 10/Apr/19 19:56
Start Date: 10/Apr/19 19:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #720: HDDS-1418. Move 
bang line to the start of the start-chaos.sh script.
URL: https://github.com/apache/hadoop/pull/720#issuecomment-481838904
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1068 | trunk passed |
   | +1 | mvnsite | 72 | trunk passed |
   | +1 | shadedclient | 689 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 39 | the patch passed |
   | +1 | mvnsite | 30 | the patch passed |
   | -1 | shellcheck | 1 | The patch generated 5 new + 0 unchanged - 0 fixed = 
5 total (was 0) |
   | +1 | shelldocs | 22 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 728 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 30 | integration-test in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2837 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-720/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/720 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  |
   | uname | Linux d9d600a5face 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 813cee1 |
   | maven | version: Apache Maven 3.3.9 |
   | shellcheck | v0.4.6 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-720/1/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-720/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-720/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225811)
Time Spent: 0.5h  (was: 20m)

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The start-chaos.sh script has a bang line but it is not the first line in the 
> script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1418?focusedWorklogId=225810&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225810
 ]

ASF GitHub Bot logged work on HDDS-1418:


Author: ASF GitHub Bot
Created on: 10/Apr/19 19:44
Start Date: 10/Apr/19 19:44
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #720: HDDS-1418. Move 
bang line to the start of the start-chaos.sh script.
URL: https://github.com/apache/hadoop/pull/720
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225810)
Time Spent: 20m  (was: 10m)

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The start-chaos.sh script has a bang line but it is not the first line in the 
> script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814826#comment-16814826
 ] 

Arpit Agarwal commented on HDDS-1418:
-

Merged via GitHub.

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The start-chaos.sh script has a bang line but it is not the first line in the 
> script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1418.
-
   Resolution: Fixed
Fix Version/s: 0.5.0

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The start-chaos.sh script has a bang line but it is not the first line in the 
> script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1363) ozone.metadata.dirs doesn't pick multiple dirs

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1363?focusedWorklogId=225792&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225792
 ]

ASF GitHub Bot logged work on HDDS-1363:


Author: ASF GitHub Bot
Created on: 10/Apr/19 19:24
Start Date: 10/Apr/19 19:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #691: [HDDS-1363] 
ozone.metadata.dirs doesn't pick multiple dirs
URL: https://github.com/apache/hadoop/pull/691#issuecomment-481828510
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 18 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1089 | trunk passed |
   | +1 | compile | 988 | trunk passed |
   | +1 | checkstyle | 210 | trunk passed |
   | +1 | mvnsite | 273 | trunk passed |
   | +1 | shadedclient | 1252 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 335 | trunk passed |
   | +1 | javadoc | 209 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 221 | the patch passed |
   | +1 | compile | 938 | the patch passed |
   | +1 | javac | 938 | the patch passed |
   | +1 | checkstyle | 213 | the patch passed |
   | +1 | mvnsite | 247 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 385 | the patch passed |
   | +1 | javadoc | 208 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 90 | common in the patch passed. |
   | +1 | unit | 70 | container-service in the patch passed. |
   | +1 | unit | 41 | framework in the patch passed. |
   | +1 | unit | 99 | server-scm in the patch passed. |
   | +1 | unit | 43 | common in the patch passed. |
   | +1 | unit | 48 | ozone-recon in the patch passed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7593 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-691/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/691 |
   | JIRA Issue | HDDS-1363 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 794e2698972e 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / dfb518b |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-691/3/testReport/ |
   | Max. process+thread count | 429 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/framework hadoop-hdds/server-scm hadoop-ozone/common 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-691/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225792)
Time Spent: 1h  (was: 50m)

> ozone.metadata.dirs doesn't pick multiple dirs
> --
>
> Key: HDDS-1363
> URL: https://issues.apache.org/jira/browse/HDDS-1363
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Sandeep Nemuri
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{ozone.metadata.dirs}} doesn't pick comma(,)-separated paths.
>  It only picks one path, despite the plural property name 
> _ozone.metadata.dir{color:#FF}s{color}_
> {code:java}
> <property>
>   <name>ozone.metadata.dirs</name>
>   
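
As a reference point, a small self-contained sketch of the expected plural 
semantics using Hadoop's {{Configuration#getTrimmedStrings}}, which splits 
comma-separated values. The property value and paths below are made-up examples, 
not taken from the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import java.io.File;

public class MetadataDirsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Example value only; real deployments would set this in ozone-site.xml.
    conf.set("ozone.metadata.dirs", "/data/disk1/meta,/data/disk2/meta");

    // getTrimmedStrings splits on commas and trims whitespace, so every
    // configured directory is picked up, not just the first one.
    for (String dir : conf.getTrimmedStrings("ozone.metadata.dirs")) {
      System.out.println("metadata dir: " + new File(dir));
    }
  }
}
{code}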

[jira] [Commented] (HDDS-1363) ozone.metadata.dirs doesn't pick multiple dirs

2019-04-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814814#comment-16814814
 ] 

Hadoop QA commented on HDDS-1363:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} ozone-recon in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-691/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/691 |
| JIRA Issue | HDDS-1363 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |

[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1417:
--
Status: Patch Available  (was: Open)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1417:
-
Labels: pull-request-available  (was: )

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?focusedWorklogId=225789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225789
 ]

ASF GitHub Bot logged work on HDDS-1417:


Author: ASF GitHub Bot
Created on: 10/Apr/19 19:09
Start Date: 10/Apr/19 19:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #721: 
HDDS-1417. After successfully importing a container, datanode should delete the 
container tar.gz file from working directory.
URL: https://github.com/apache/hadoop/pull/721
 
 
   Whenever we want to replicate or copy a container from one datanode to 
another, we compress the container data and create a tar.gz file. This tar file 
is then copied from source datanode to destination datanode. In destination, we 
use a temporary working directory where this tar file is copied. Once the 
copying is complete we import the container. After importing the container we 
no longer need the tar file in the working directory of the destination 
datanode, so it has to be deleted.
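
As a rough sketch of the flow being described here — download the archive into 
the working directory, import it, then unconditionally clean up — using only 
JDK APIs; the class, method, and file names are hypothetical, not the actual 
replication code.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ContainerImportSketch {

  /**
   * Hypothetical import flow: copy the container tar.gz into a working
   * directory, import it, and always delete the archive afterwards so it
   * does not accumulate on the destination datanode.
   */
  void replicate(long containerId, Path workingDir) throws IOException {
    Path tarball = workingDir.resolve("container-" + containerId + ".tar.gz");
    try {
      downloadFromSource(containerId, tarball);
      importContainer(containerId, tarball);
    } finally {
      // Cleanup happens whether or not the import succeeded.
      Files.deleteIfExists(tarball);
    }
  }

  private void downloadFromSource(long containerId, Path dest)
      throws IOException {
    // ... copy the compressed container from the source datanode ...
  }

  private void importContainer(long containerId, Path tarball)
      throws IOException {
    // ... unpack and register the container on this datanode ...
  }
}
{code}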
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225789)
Time Spent: 10m
Remaining Estimate: 0h

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1418:

Description: The start-chaos.sh script has a bang line but it is not the 
first line in the script.  (was: The start-chaos.sh has a bang line but it is 
not the first line in the script.)

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The start-chaos.sh script has a bang line but it is not the first line in the 
> script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1418:
-
Labels: pull-request-available  (was: )

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>
> The start-chaos.sh has a bang line but it is not the first line in the script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1418?focusedWorklogId=225788&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-225788
 ]

ASF GitHub Bot logged work on HDDS-1418:


Author: ASF GitHub Bot
Created on: 10/Apr/19 19:07
Start Date: 10/Apr/19 19:07
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #720: HDDS-1418. Move 
bang line to the start of the start-chaos.sh script.
URL: https://github.com/apache/hadoop/pull/720
 
 
   Change-Id: I4fcf39d61a7d4c4ca79cb56a6958db0f691fe971
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 225788)
Time Spent: 10m
Remaining Estimate: 0h

> Move bang line to the start of the start-chaos.sh script
> 
>
> Key: HDDS-1418
> URL: https://issues.apache.org/jira/browse/HDDS-1418
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The start-chaos.sh has a bang line but it is not the first line in the script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1418) Move bang line to the start of the start-chaos.sh script

2019-04-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1418:
---

 Summary: Move bang line to the start of the start-chaos.sh script
 Key: HDDS-1418
 URL: https://issues.apache.org/jira/browse/HDDS-1418
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The start-chaos.sh has a bang line but it is not the first line in the script.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14403) Cost-Based RPC FairCallQueue

2019-04-10 Thread Christopher Gregorian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Gregorian updated HDFS-14403:
-
Attachment: HDFS-14403.003.patch

> Cost-Based RPC FairCallQueue
> 
>
> Key: HDFS-14403
> URL: https://issues.apache.org/jira/browse/HDFS-14403
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc, namenode
>Reporter: Erik Krogen
>Assignee: Christopher Gregorian
>Priority: Major
>  Labels: qos, rpc
> Attachments: CostBasedFairCallQueueDesign_v0.pdf, 
> HDFS-14403.001.patch, HDFS-14403.002.patch, HDFS-14403.003.patch
>
>
> HADOOP-15016 initially described extensions to the Hadoop FairCallQueue 
> encompassing both cost-based analysis of incoming RPCs and support 
> for reservations of RPC capacity for system/platform users. This JIRA intends 
> to track the former, as HADOOP-15016 was repurposed to more specifically 
> focus on the reservation portion of the work.
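
To make the cost-based idea concrete, a toy sketch follows. It is not the 
interface proposed in the design doc or the attached patches; every name in it 
is illustrative. The point is only that each RPC contributes its estimated cost, 
rather than a flat count of 1, to the caller's share.

{code:java}
// Toy model of cost-based call prioritization: weight each RPC by an
// estimated cost (e.g. measured processing time), so heavy calls push a
// user toward lower-priority queues faster than cheap ones.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class CostBasedScheduler {
  private final Map<String, LongAdder> costByUser = new ConcurrentHashMap<>();
  private final long[] priorityThresholds; // cumulative cost -> priority level

  public CostBasedScheduler(long[] priorityThresholds) {
    this.priorityThresholds = priorityThresholds;
  }

  /** Record the measured cost (e.g. processing nanos) of a completed call. */
  public void addCost(String user, long cost) {
    costByUser.computeIfAbsent(user, u -> new LongAdder()).add(cost);
  }

  /** Pick a priority level: higher accumulated cost means lower priority. */
  public int getPriority(String user) {
    long total = costByUser.getOrDefault(user, new LongAdder()).sum();
    for (int level = 0; level < priorityThresholds.length; level++) {
      if (total < priorityThresholds[level]) {
        return level;
      }
    }
    return priorityThresholds.length; // lowest priority
  }
}
{code}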



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1294) ExcludeList should be an RPC Client config so that multiple streams can avoid the same error.

2019-04-10 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814756#comment-16814756
 ] 

Jitendra Nath Pandey commented on HDDS-1294:


# 
{quote}DistributedStorageHandler is a long lived object. The exclude list from 
this will never get cleaned up.
{quote}
The storage handler is closed only when {{OzoneHddsDatanodeService}} is 
stopped, so the problem is that, slowly, all datanodes in the cluster may get 
added to the exclude list, after which this datanode will not be able to 
serve any request. The exclusion should be scoped to a single session or 
should decay out over time.

 # It is odd that the get methods are synchronized on the list objects, while 
the add methods are synchronized on the class instance. It is therefore 
possible for a get to execute while the list is being modified, causing a 
ConcurrentModificationException. It is fine to synchronize the get methods on 
the class instance as well.
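
A compact sketch of the locking mismatch described above, with a consistent 
alternative. This is simplified illustration code, not the actual ExcludeList.

{code:java}
import java.util.ArrayList;
import java.util.List;

class ExcludeListSketch {
  private final List<String> datanodes = new ArrayList<>();

  // Problematic mix: add locks the instance ("this") ...
  synchronized void addDatanode(String dn) {
    datanodes.add(dn);
  }

  // ... while get locks the list object, so the two monitors don't exclude
  // each other. The copy's iteration can still race with addDatanode and
  // throw ConcurrentModificationException.
  List<String> getDatanodesRacy() {
    synchronized (datanodes) {
      return new ArrayList<>(datanodes);
    }
  }

  // Consistent alternative: synchronize the getter on the instance too,
  // so reads and writes share the same monitor.
  synchronized List<String> getDatanodes() {
    return new ArrayList<>(datanodes);
  }
}
{code}

With {{getDatanodes}} and {{addDatanode}} both on the {{this}} monitor, reads 
and writes are mutually exclusive, which is the fix the comment suggests.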

> ExcludeList should be an RPC Client config so that multiple streams can avoid 
> the same error.
> ---
>
> Key: HDDS-1294
> URL: https://issues.apache.org/jira/browse/HDDS-1294
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: HDDS-1294.000.patch, HDDS-1294.001.patch, 
> HDDS-1294.002.patch
>
>
> ExcludeList right now is a per-BlockOutputStream value; this can result in 
> multiple keys created from the same client running into the same exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1294) ExcludeList should be an RPC Client config so that multiple streams can avoid the same error.

2019-04-10 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814756#comment-16814756
 ] 

Jitendra Nath Pandey edited comment on HDDS-1294 at 4/10/19 6:31 PM:
-

# 
{quote}DistributedStorageHandler is a long lived object. The exclude list from 
this will never get cleaned up.
{quote}
The storage handler is closed only when {{OzoneHddsDatanodeService}} is 
stopped, so the problem is that, slowly, all datanodes in the cluster may get 
added to the exclude list, after which this datanode will not be able to 
serve any request. The exclusion should be scoped to a single session or 
should decay out over time.
 # It is odd that the get methods are synchronized on the list objects, while 
the add methods are synchronized on the class instance. It is therefore 
possible for a get to execute while the list is being modified, causing a 
ConcurrentModificationException. It is fine to synchronize the get methods on 
the class instance as well.


was (Author: jnp):
# 
{quote}DistributedStorageHandler is a long lived object. The exclude list from 
this will never get cleaned up.
{quote}
The storage handler is closed only when {{OzoneHddsDatanodeService}} is 
stopped, therefore the problem is slowly all datanodes in the cluster may get 
added in the exclude list, and after that this datanode will not be able to 
serve any request. The exclusion should be in the context of a single session 
or should decay out.

 # It is odd that get methods are synchronized on the list objects, while add 
methods are synchronized on the class instance. Therefore, it is possible that 
a get will be executed while the list is being modified causing 
concurrent-modification exception. It is ok to synchronize the get methods on 
the class instance as well.

> ExcludeList should be an RPC Client config so that multiple streams can avoid 
> the same error.
> ---
>
> Key: HDDS-1294
> URL: https://issues.apache.org/jira/browse/HDDS-1294
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: HDDS-1294.000.patch, HDDS-1294.001.patch, 
> HDDS-1294.002.patch
>
>
> ExcludeList right now is a per-BlockOutputStream value; this can result in 
> multiple keys created from the same client running into the same exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HDDS-1417:
-

Assignee: Nanda kumar  (was: Sandeep Nemuri)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Nanda kumar
>Priority: Blocker
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1417:
--
Reporter: Sandeep Nemuri  (was: Nanda kumar)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-1417:
-
Priority: Blocker  (was: Major)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Blocker
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HDDS-1417:
-

Assignee: Sandeep Nemuri  (was: Nanda kumar)

> After successfully importing a container, datanode should delete the 
> container tar.gz file from working directory
> -
>
> Key: HDDS-1417
> URL: https://issues.apache.org/jira/browse/HDDS-1417
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>
> Whenever we want to replicate or copy a container from one datanode to 
> another, we compress the container data and create a tar.gz file. This tar 
> file is then copied from source datanode to destination datanode. In 
> destination, we use a temporary working directory where this tar file is 
> copied. Once the copying is complete we import the container. After importing 
> the container we no longer need the tar file in the working directory of 
> the destination datanode, so it has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14420) Fix typo in KeyShell console

2019-04-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814747#comment-16814747
 ] 

Hudson commented on HDFS-14420:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16377 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16377/])
HDFS-14420. Fix typo in KeyShell console. Contributed by Hu Xiaodong. (gifuma: 
rev 813cee1a18b2df05dff90e4a2183546bc05cd712)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java


> Fix typo in KeyShell console
> 
>
> Key: HDFS-14420
> URL: https://issues.apache.org/jira/browse/HDFS-14420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.2
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14420.001.patch, errorPrint.PNG
>
>
> !errorPrint.PNG!
>  
> spelling mistake in the word "attribute"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1417) After successfully importing a container, datanode should delete the container tar.gz file from working directory

2019-04-10 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1417:
-

 Summary: After successfully importing a container, datanode should 
delete the container tar.gz file from working directory
 Key: HDDS-1417
 URL: https://issues.apache.org/jira/browse/HDDS-1417
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Nanda kumar
Assignee: Nanda kumar


Whenever we want to replicate or copy a container from one datanode to another, 
we compress the container data and create a tar.gz file. This tar file is then 
copied from source datanode to destination datanode. In destination, we use a 
temporary working directory where this tar file is copied. Once the copying is 
complete we import the container. After importing the container we no longer 
need the tar file in the working directory of the destination datanode, so it 
has to be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14420) Fix typo in KeyShell console

2019-04-10 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-14420:

Status: In Progress  (was: Patch Available)

> Fix typo in KeyShell console
> 
>
> Key: HDFS-14420
> URL: https://issues.apache.org/jira/browse/HDFS-14420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.2
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14420.001.patch, errorPrint.PNG
>
>
> !errorPrint.PNG!
>  
> spelling mistake in the word "attribute"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14420) Fix typo in KeyShell console

2019-04-10 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814746#comment-16814746
 ] 

Giovanni Matteo Fumarola commented on HDFS-14420:
-

Thanks [~xiaodong.hu].
Committed to trunk.

> Fix typo in KeyShell console
> 
>
> Key: HDFS-14420
> URL: https://issues.apache.org/jira/browse/HDFS-14420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.2
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14420.001.patch, errorPrint.PNG
>
>
> !errorPrint.PNG!
>  
> spelling mistake in the word "attribute"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


