[jira] [Updated] (HDFS-13004) TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB or 25MB

2018-01-10 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13004:
-
Summary: TestLeaseRecoveryStriped#testLeaseRecovery is failing when 
safeLength is 0MB or 25MB  (was: TestLeaseRecoveryStriped#testLeaseRecovery is 
failing)

> TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB 
> or 25MB
> 
>
> Key: HDFS-13004
> URL: https://issues.apache.org/jira/browse/HDFS-13004
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 3.0.1
>
> Attachments: HDFS-13004.01.patch, HDFS-13004.02.patch
>
>
> {code}
> Error:
> failed testCase at i=1, 
> blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths=
> {4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304},safeLength=25165824]
> java.lang.AssertionError: File length should be the same expected:<25165824> 
> but was:<18874368>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79)
> at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362)
> at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198)
> at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> Stack:
> java.lang.AssertionError: 
> failed testCase at i=1, 
> blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths={4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304}
> ,safeLength=25165824]
> java.lang.AssertionError: File length should be the same expected:<25165824> 
> but was:<18874368>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79)
> at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362)
> at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198)
> at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182)
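
For context on the two values in the assertion: a back-of-the-envelope sketch of the safe-length computation, assuming the test's RS-6-3 policy with a 1 MB striping cell (this mirrors the idea behind the safe-length logic, not Hadoop's exact code). The 6th-largest internal block bounds the number of complete stripes that are recoverable, which yields the expected 25165824; the observed 18874368 is three full stripes, one short of the expected four.

{code}
import java.util.Arrays;

public class SafeLengthSketch {
  public static void main(String[] args) {
    long[] lens = {4194304, 4194304, 4194304, 1048576, 4194304,
                   4194304, 2097152, 1048576, 4194304};
    int dataBlkNum = 6;            // RS(6,3): 6 data + 3 parity blocks
    long cellSize = 1024 * 1024;   // 1 MB striping cell
    long[] sorted = lens.clone();
    Arrays.sort(sorted);           // ascending
    long sixthLargest = sorted[sorted.length - dataBlkNum];    // 4194304
    long fullStripes = sixthLargest / cellSize;                // 4
    System.out.println(fullStripes * cellSize * dataBlkNum);   // 25165824
  }
}
{code}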

[jira] [Commented] (HDFS-12972) RBF: Display mount table quota info in Web UI and admin command

2018-01-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321776#comment-16321776
 ] 

Yiqun Lin commented on HDFS-12972:
--

BTW, I found the usage info printed in RouterAdmin is too simple. I'd like to 
document more in HDFS-12973.

> RBF: Display mount table quota info in Web UI and admin command
> ---
>
> Key: HDFS-12972
> URL: https://issues.apache.org/jira/browse/HDFS-12972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12972.001.patch
>
>







[jira] [Commented] (HDFS-12051) Intern INodeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2018-01-10 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321758#comment-16321758
 ] 

Misha Dmitriev commented on HDFS-12051:
---

[~szetszwo] I have already provided you with the numbers that you asked for. If 
you would like anything else, please specify what you need.

If you are unhappy with the summary not exactly matching the changes, please 
keep this conversation constructive and suggest what you think would be a 
possible alternative summary.

> Intern INodeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, 
> HDFS-12051.06.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> 
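
To make the interning idea concrete, a hedged standalone sketch (an illustration of the pattern, not the attached patch): equal byte[] name arrays are canonicalized through a pool, so duplicates collapse to a single shared instance.

{code}
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

final class ByteArrayInterner {
  private static final ConcurrentHashMap<Key, byte[]> POOL = new ConcurrentHashMap<>();

  static byte[] intern(byte[] name) {
    if (name == null) return null;
    return POOL.computeIfAbsent(new Key(name), k -> name);
  }

  // Wrapper key: a raw byte[] only has identity equals/hashCode.
  private static final class Key {
    private final byte[] bytes;
    Key(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
      return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
  }
}
{code}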

[jira] [Updated] (HDFS-12972) RBF: Display mount table quota info in Web UI and admin command

2018-01-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12972:
-
Attachment: HDFS-12972.001.patch

Initial patch attached. It makes one additional change: supporting size-unit 
suffixes when setting the storage space quota.
Sample output:
{noformat}
Source                 Destinations           Owner        Group        Mode       Quota/Usage
/test-QuotaMounttable  ns0->/QuotaMounttable  yiqun01.lin  yiqun01.lin  rwxr-xr-x  [NsQuota: 50/0, SsQuota: 100 B/0 B]

Source                 Destinations           Owner        Group        Mode       Quota/Usage
/test-QuotaMounttable  ns0->/QuotaMounttable  yiqun01.lin  yiqun01.lin  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
{noformat}
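
To illustrate the size-unit suffix handling, a hypothetical parsing sketch (not the attached patch; in practice Hadoop's StringUtils.TraditionalBinaryPrefix can do this conversion):

{code}
public class QuotaParseSketch {
  static long parseSpaceQuota(String value) {
    String v = value.trim().toLowerCase();
    char last = v.charAt(v.length() - 1);
    long multiplier;
    switch (last) {
      case 'k': multiplier = 1L << 10; break;
      case 'm': multiplier = 1L << 20; break;
      case 'g': multiplier = 1L << 30; break;
      case 't': multiplier = 1L << 40; break;
      default:  return Long.parseLong(v);  // plain byte count, e.g. "100"
    }
    return Long.parseLong(v.substring(0, v.length() - 1)) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseSpaceQuota("100"));   // 100
    System.out.println(parseSpaceQuota("100m"));  // 104857600
  }
}
{code}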

> RBF: Display mount table quota info in Web UI and admin command
> ---
>
> Key: HDFS-12972
> URL: https://issues.apache.org/jira/browse/HDFS-12972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12972.001.patch
>
>







[jira] [Updated] (HDFS-12972) RBF: Display mount table quota info in Web UI and admin command

2018-01-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12972:
-
Status: Patch Available  (was: Open)

> RBF: Display mount table quota info in Web UI and admin command
> ---
>
> Key: HDFS-12972
> URL: https://issues.apache.org/jira/browse/HDFS-12972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12972.001.patch
>
>







[jira] [Assigned] (HDFS-12429) Upgrade JUnit from 4 to 5 in hadoop-hdfs fs

2018-01-10 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-12429:
---

Assignee: Takanobu Asanuma

> Upgrade JUnit from 4 to 5 in hadoop-hdfs fs
> ---
>
> Key: HDFS-12429
> URL: https://issues.apache.org/jira/browse/HDFS-12429
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Takanobu Asanuma
>
> Upgrade JUnit from 4 to 5 in hadoop-hdfs fs 
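
Since the issue body is terse, a hedged sketch of what the migration typically involves (illustrative only; the mappings are the standard JUnit 4 to 5 ones):

{code}
// org.junit.Test     -> org.junit.jupiter.api.Test
// org.junit.Assert.* -> org.junit.jupiter.api.Assertions.*
// @Before / @After   -> @BeforeEach / @AfterEach
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ExampleTest {
  @Test
  void messageMovesLast() {
    // JUnit 5 moves the failure message to the last argument.
    assertEquals(4, 2 + 2, "arithmetic should hold");
  }
}
{code}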






[jira] [Commented] (HDFS-12051) Intern INodeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2018-01-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321743#comment-16321743
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12051:


No.

Do you think that it is appropriate to claim that this is a SnapshotCopy change 
in the summary and the description when the patch actually changes NameNode 
internals silently, [~yzhangal]?

BTW, I am still waiting for some numbers to support his arguments.  Thanks.

> Intern INodeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, 
> HDFS-12051.06.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> 

[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321694#comment-16321694
 ] 

genericqa commented on HDFS-12919:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905591/HDFS-12919.018.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c851f6a08409 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit 

[jira] [Commented] (HDFS-12940) Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321548#comment-16321548
 ] 

genericqa commented on HDFS-12940:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}211m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12940 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905558/HDFS-12940-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e79ce0578e3d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / f436977 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| 

[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-01-10 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Attachment: HDFS-12919.018.patch

> RBF: Support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Critical
>  Labels: RBF
> Attachments: HDFS-12919.000.patch, HDFS-12919.001.patch, 
> HDFS-12919.002.patch, HDFS-12919.003.patch, HDFS-12919.004.patch, 
> HDFS-12919.005.patch, HDFS-12919.006.patch, HDFS-12919.007.patch, 
> HDFS-12919.008.patch, HDFS-12919.009.patch, HDFS-12919.010.patch, 
> HDFS-12919.011.patch, HDFS-12919.012.patch, HDFS-12919.013.patch, 
> HDFS-12919.013.patch, HDFS-12919.014.patch, HDFS-12919.015.patch, 
> HDFS-12919.016.patch, HDFS-12919.017.patch, HDFS-12919.018.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}
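
A purely illustrative sketch of what supporting the call in the {{Router}} entails; every helper name below is a hypothetical stand-in, not the actual RouterRpcServer code:

{code}
import java.io.IOException;

interface NamespaceClient {  // stand-in for the per-nameservice ClientProtocol
  void setErasureCodingPolicy(String dest, String ecPolicyName) throws IOException;
}

class RouterEcSketch {
  void setErasureCodingPolicy(String src, String ecPolicyName) throws IOException {
    checkWriteAllowed();                    // must no longer throw for EC ops
    String[] loc = resolveMountPoint(src);  // {nameserviceId, destPath}
    clientFor(loc[0]).setErasureCodingPolicy(loc[1], ecPolicyName);
  }

  // Hypothetical stubs standing in for the Router's mount-table machinery.
  void checkWriteAllowed() throws IOException { }
  String[] resolveMountPoint(String src) { return new String[] {"ns0", src}; }
  NamespaceClient clientFor(String nsId) {
    return (dest, policy) -> { /* RPC to the namenode of nsId */ };
  }
}
{code}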






[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321528#comment-16321528
 ] 

genericqa commented on HDFS-12919:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}219m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestParallelUnixDomainRead |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | hadoop.hdfs.TestSetrepDecreasing |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
|   | 

[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321515#comment-16321515
 ] 

genericqa commented on HDFS-12794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
0 unchanged - 1 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | 

[jira] [Commented] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2018-01-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321456#comment-16321456
 ] 

Rushabh S Shah commented on HDFS-13009:
---

[~xiaochen], [~andrew.wang]: please let me know what you guys think.

> Creation of Encryption zone should succeed even if directory is not empty.
> --
>
> Key: HDFS-13009
> URL: https://issues.apache.org/jira/browse/HDFS-13009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> Currently we have a restriction that creation of an encryption zone can be done 
> only on an empty directory.
> This jira is to remove that restriction.
> Motivation:
> New customers who want to start using encryption zones can make an existing 
> directory encrypted.
> They will be able to read the old data as it is, and the newly written data 
> will be decrypted on read.
> Internally we have many customers asking for this feature.
> Currently they have to ask for more space quota and encrypt the old data.
> This will make their life much easier.






[jira] [Commented] (HDFS-13007) Ozone: Fix TestKeySpaceManager#testKSMInitializationFailure

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321435#comment-16321435
 ] 

genericqa commented on HDFS-13007:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-13007 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905544/HDFS-13007-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4221a608cbd0 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / f436977 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22640/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22640/testReport/ |
| Max. process+thread count | 4168 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-01-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321408#comment-16321408
 ] 

Xiaoyu Yao commented on HDFS-12794:
---

Thanks for the update, [~shashikant]. Patch v9 LGTM, +1 pending Jenkins.

> Ozone: Parallelize ChunkOutputStream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch
>
>
> The ChunkOutputStream writes are synchronous in nature. Once one chunk of data 
> gets written, the next chunk write is blocked until the previous chunk is 
> written to the container.
> The ChunkOutputStream writes should be made async, and close() on the 
> OutputStream should ensure flushing of all dirty buffers to the container.
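
A hedged sketch of the async-write idea described above ({{writeChunkToContainer}} is a hypothetical stand-in for the synchronous container write; this is not the attached patch): each chunk write is submitted to an executor, and close() blocks until every outstanding write has been flushed.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

class AsyncChunkStreamSketch implements AutoCloseable {
  private final ExecutorService chunkWriter = Executors.newFixedThreadPool(4);
  private final List<Future<?>> pendingWrites = new ArrayList<>();

  void writeChunkAsync(ByteBuffer chunk) {
    ByteBuffer snapshot = chunk.duplicate();  // detach from the caller's buffer
    pendingWrites.add(chunkWriter.submit(() -> writeChunkToContainer(snapshot)));
  }

  void writeChunkToContainer(ByteBuffer chunk) { /* synchronous container RPC */ }

  @Override
  public void close() throws IOException {
    try {
      for (Future<?> f : pendingWrites) {
        f.get();  // flush: wait for every outstanding chunk
      }
    } catch (InterruptedException | ExecutionException e) {
      throw new IOException("chunk write failed", e);
    } finally {
      chunkWriter.shutdown();
    }
  }
}
{code}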






[jira] [Commented] (HDFS-12996) DataNode Replica Trash

2018-01-10 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321337#comment-16321337
 ] 

Hanisha Koneru commented on HDFS-12996:
---

Thanks for the review, [~shahrs87].

bq. Suppose user1 and user2 deleted some of their directories (let's say dir1 
and dir2, respectively). If user1 wants to recover its directory, then we will 
recover dir2 as well?

Yes. In the current design, recovery is done by rolling back to an earlier 
image. We could separately build a more fine-grained recovery mechanism on top 
of the replica trash.

bq. Many of our clients (let's say user1) use /tmp/ to store their 
intermediate task output (to work around quota problems). After a job 
completes, they delete this space and use the same location to store the next 
job's output. In the meantime, if some other user (let's say user2) wants to 
recover their mistakenly deleted directory, then we will go back in time for 
user1, which might corrupt user1's output directory.

True. This again would be a trade-off between recovering the deleted data and 
undoing operations performed after the delete operation. Only an administrator 
can make this call.

The goal of this feature is to provide a safeguard for recovering from 
catastrophic mistakes, where it is acceptable to lose a few recent changes in 
order to recover deleted data.

bq. Also the design looks very similar to Checkpointing/Snapshots.

I didn't get what you mean by checkpointing in this context. If you take 
frequent rolling snapshots, e.g. hourly snapshots of the root directory from a 
cron job, then you don't need this feature and can recover deleted files from a 
recent snapshot. However, very few clusters are set up for this.


> DataNode Replica Trash
> --
>
> Key: HDFS-12996
> URL: https://issues.apache.org/jira/browse/HDFS-12996
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: DataNode_Replica_Trash_Design_Doc.pdf
>
>
> DataNode Replica Trash will allow administrators to recover from a recent 
> delete request that resulted in catastrophic loss of user data. This is 
> achieved by placing all invalidated blocks in a replica trash on the datanode 
> before completely purging them from the system. The design doc is attached 
> here.
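
A minimal sketch of the mechanism, assuming a per-volume trash directory (hypothetical paths and names, not the design doc's exact layout): an invalidated block file is renamed into the trash instead of being deleted.

{code}
import java.io.IOException;
import java.nio.file.*;

class ReplicaTrashSketch {
  static void moveToReplicaTrash(Path blockFile, Path volumeRoot) throws IOException {
    Path trashDir = volumeRoot.resolve("replica-trash");
    Files.createDirectories(trashDir);
    // Same volume, so this is a cheap rename rather than a data copy.
    Files.move(blockFile, trashDir.resolve(blockFile.getFileName()),
        StandardCopyOption.ATOMIC_MOVE);
  }
}
{code}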






[jira] [Created] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2018-01-10 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-13009:
-

 Summary: Creation of Encryption zone should succeed even if 
directory is not empty.
 Key: HDFS-13009
 URL: https://issues.apache.org/jira/browse/HDFS-13009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: encryption
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah


Currently we have a restriction that creation of an encryption zone can be done 
only on an empty directory.
This jira is to remove that restriction.
Motivation:
New customers who want to start using encryption zones can make an existing 
directory encrypted.
They will be able to read the old data as it is, and the newly written data 
will be decrypted on read.
Internally we have many customers asking for this feature.
Currently they have to ask for more space quota and encrypt the old data.
This will make their life much easier.






[jira] [Created] (HDFS-13008) Ozone: Add DN container open/close state to container report

2018-01-10 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-13008:
-

 Summary: Ozone: Add DN container open/close state to container 
report
 Key: HDFS-13008
 URL: https://issues.apache.org/jira/browse/HDFS-13008
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDFS-12799 added support to allow SCM to send closeContainerCommand to DNs. This 
ticket is opened to add the DN container close state to the container report, so 
that the SCM container state manager can update the state from closing to closed 
once the DN-side container is fully closed.
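
A minimal sketch of the closing-to-closed transition described above (hypothetical names, not the SCM code):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ContainerStateSketch {
  enum State { OPEN, CLOSING, CLOSED }

  private final Map<Long, State> scmState = new ConcurrentHashMap<>();

  // SCM sets CLOSING when it sends closeContainerCommand; a later DN container
  // report carrying CLOSED completes the transition.
  void onContainerReport(long containerId, State dnReportedState) {
    if (dnReportedState == State.CLOSED
        && scmState.get(containerId) == State.CLOSING) {
      scmState.put(containerId, State.CLOSED);
    }
  }
}
{code}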






[jira] [Commented] (HDFS-12940) Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally

2018-01-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321255#comment-16321255
 ] 

Xiaoyu Yao commented on HDFS-12940:
---

Patch v2 contains the fix for HDFS-13007. We may fix these two unit test 
failures together since they are both in TestKeySpaceManager.

> Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally
> -
>
> Key: HDFS-12940
> URL: https://issues.apache.org/jira/browse/HDFS-12940
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12940-HDFS-7240.000.patch, 
> HDFS-12940-HDFS-7240.001.patch, HDFS-12940-HDFS-7240.002.patch
>
>
> {{TestKeySpaceManager#testExpiredOpenKey}} is flaky.
> In {{testExpiredOpenKey}} we open four keys for writing and wait for them to 
> expire (without committing). Verification/assertion is done by querying 
> {{MiniOzoneCluster}} and matching the count. Since the {{cluster}} instance of 
> {{MiniOzoneCluster}} is shared between test-cases in {{TestKeySpaceManager}}, 
> we should not rely on the count. The verification should only happen by 
> matching the keyNames, not the count.
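
A minimal sketch of name-based verification (hypothetical accessor names; illustrative only): the assertion checks the specific keys opened by this test case, so keys created by other test cases sharing the cluster cannot perturb it.

{code}
import java.util.Set;
import org.junit.Assert;

class ExpiredKeyAssertSketch {
  static void assertAllExpired(Set<String> expiredKeyNames,
                               Iterable<String> openedKeyNames) {
    for (String keyName : openedKeyNames) {
      Assert.assertTrue("key should have expired: " + keyName,
          expiredKeyNames.contains(keyName));
    }
  }
}
{code}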






[jira] [Updated] (HDFS-12940) Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally

2018-01-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12940:
--
Attachment: HDFS-12940-HDFS-7240.002.patch

> Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally
> -
>
> Key: HDFS-12940
> URL: https://issues.apache.org/jira/browse/HDFS-12940
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12940-HDFS-7240.000.patch, 
> HDFS-12940-HDFS-7240.001.patch, HDFS-12940-HDFS-7240.002.patch
>
>
> {{TestKeySpaceManager#testExpiredOpenKey}} is flaky.
> In {{testExpiredOpenKey}} we open four keys for writing and wait for them to 
> expire (without committing). Verification/assertion is done by querying 
> {{MiniOzoneCluster}} and matching the count. Since the {{cluster}} instance of 
> {{MiniOzoneCluster}} is shared between test-cases in {{TestKeySpaceManager}}, 
> we should not rely on the count. The verification should only happen by 
> matching the keyNames, not the count.






[jira] [Commented] (HDFS-12940) Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally

2018-01-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321245#comment-16321245
 ] 

Xiaoyu Yao commented on HDFS-12940:
---

[~nandakumar131], I tried your patch with repeated JUnit runs and it still 
failed randomly. Attaching a simpler fix based on yours that should fix the 
problem without a separate test. 

{code}

java.lang.AssertionError: 
Expected :0
Actual   :2
 
{code}
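
For illustration, a minimal sketch of the keyName-based verification suggested 
in the description below; the helper names ({{getExpiredOpenKeys}}, 
{{myOpenKeys}}) are hypothetical, not the actual patch:

{code}
// Hypothetical sketch: assert on the names of the keys this test opened,
// not on a count that other tests sharing the MiniOzoneCluster can skew.
Set<String> expired = new HashSet<>(keySpaceManager.getExpiredOpenKeys());
for (String openKey : myOpenKeys) {
  Assert.assertTrue("Key should have expired: " + openKey,
      expired.contains(openKey));
}
{code}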

> Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally
> -
>
> Key: HDFS-12940
> URL: https://issues.apache.org/jira/browse/HDFS-12940
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12940-HDFS-7240.000.patch, 
> HDFS-12940-HDFS-7240.001.patch
>
>
> {{TestKeySpaceManager#testExpiredOpenKey}} is flaky.
> In {{testExpiredOpenKey}} we open four keys for writing and wait for them 
> to expire (without committing). Verification is done by querying 
> {{MiniOzoneCluster}} and matching the count. Since the {{cluster}} instance 
> of {{MiniOzoneCluster}} is shared between test-cases in 
> {{TestKeySpaceManager}}, we should not rely on the count. The verification 
> should match the keyNames rather than the count.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12984) BlockPoolSlice can leak in a mini dfs cluster

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321213#comment-16321213
 ] 

genericqa commented on HDFS-12984:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 16 unchanged - 1 fixed = 16 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12984 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905169/HDFS-12984.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7410a0c9d868 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22638/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22638/testReport/ |
| Max. process+thread 

[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-01-10 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Attachment: HDFS-12919.017.patch

> RBF: Support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Critical
>  Labels: RBF
> Attachments: HDFS-12919.000.patch, HDFS-12919.001.patch, 
> HDFS-12919.002.patch, HDFS-12919.003.patch, HDFS-12919.004.patch, 
> HDFS-12919.005.patch, HDFS-12919.006.patch, HDFS-12919.007.patch, 
> HDFS-12919.008.patch, HDFS-12919.009.patch, HDFS-12919.010.patch, 
> HDFS-12919.011.patch, HDFS-12919.012.patch, HDFS-12919.013.patch, 
> HDFS-12919.013.patch, HDFS-12919.014.patch, HDFS-12919.015.patch, 
> HDFS-12919.016.patch, HDFS-12919.017.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}
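
For illustration, a hedged sketch of the delegation the Router would need; the 
RBF {{RemoteMethod}}/{{invokeSequential}} plumbing is used here on the 
assumption that it applies, so treat this as a sketch rather than the attached 
patch:

{code}
// Hypothetical sketch: forward setErasureCodingPolicy to the namenode(s)
// of the subcluster(s) that own the path instead of throwing.
public void setErasureCodingPolicy(String src, String ecPolicyName)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  List<RemoteLocation> locations = getLocationsForPath(src, true);
  RemoteMethod method = new RemoteMethod("setErasureCodingPolicy",
      new Class<?>[] {String.class, String.class},
      new RemoteParam(), ecPolicyName);
  rpcClient.invokeSequential(locations, method, null, null);
}
{code}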



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13007) Ozone: Fix TestKeySpaceManager#testKSMInitializationFailure

2018-01-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13007:
--
Attachment: HDFS-13007-HDFS-7240.001.patch

> Ozone: Fix TestKeySpaceManager#testKSMInitializationFailure
> ---
>
> Key: HDFS-13007
> URL: https://issues.apache.org/jira/browse/HDFS-13007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-13007-HDFS-7240.001.patch
>
>
> The test attempts to create KSM with a config that does not specify 
> OZONE_SCM_CLIENT_ADDRESS_KEY, and it always failed with a different exception 
> than the one expected. The fix is to set ozone.scm.client.address before 
> instantiating KSM.
> {code}
> config.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY, "127.0.0.1:0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13007) Ozone: Fix TestKeySpaceManager#testKSMInitializationFailure

2018-01-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13007:
--
Status: Patch Available  (was: Open)

> Ozone: Fix TestKeySpaceManager#testKSMInitializationFailure
> ---
>
> Key: HDFS-13007
> URL: https://issues.apache.org/jira/browse/HDFS-13007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-13007-HDFS-7240.001.patch
>
>
> The test attempts to create KSM with a config that does not specify 
> OZONE_SCM_CLIENT_ADDRESS_KEY, and it always failed with a different exception 
> than the one expected. The fix is to set ozone.scm.client.address before 
> instantiating KSM.
> {code}
> config.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY, "127.0.0.1:0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13007) Ozone: Fix TestKeySpaceManager#testKSMInitializationFailure

2018-01-10 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-13007:
-

 Summary: Ozone: Fix 
TestKeySpaceManager#testKSMInitializationFailure
 Key: HDFS-13007
 URL: https://issues.apache.org/jira/browse/HDFS-13007
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The test attempts to create KSM with a config that does not specify 
OZONE_SCM_CLIENT_ADDRESS_KEY, and it always failed with a different exception 
than the one expected. 

The fix is to set ozone.scm.client.address before instantiating KSM.

{code}
config.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY, "127.0.0.1:0");
{code}
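
A minimal sketch of that setup, assuming a standard {{OzoneConfiguration}} in 
the test; only the quoted config key is from the issue, the rest is 
illustrative:

{code}
OzoneConfiguration config = new OzoneConfiguration();
// Bind the SCM client endpoint to an ephemeral local port so that KSM
// initialization does not trip over a missing address.
config.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY, "127.0.0.1:0");
// ... instantiate KSM with this config, then exercise the intended
// initialization-failure path ...
{code}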



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12794) Ozone: Parallelize ChunkOutputSream Writes to container

2018-01-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12794:
---
Attachment: HDFS-12794-HDFS-7240.009.patch

Thanks [~xyao], for the review comments.

Ratis tests are failing because the Ratis version needs an update; that is 
addressed with HDFS-12986, which has changes related to XceiverClient-Ratis as 
well. For the TestContainerPersistence tests, I filed a new JIRA, HDFS-13006, 
because the tests are failing without the patch as well.

SCMCli tests and container state tests are failing, and HDFS-12968 is filed 
for the same. A few other cases timed out although they pass on my local box.

A few tests, such as TestKsmBlockVersioning#testReadLatestVersion, were 
failing because of:
{code}
Preconditions
  .checkState(streamBufferSize > 0 && streamBufferSize > chunkSize);
{code} 

in ChunkGroupOutputStream.java; these are fixed by setting streamBufferSize in 
MB correctly.

Patch v9 addresses the rest of your review comments. Please have a look.

> Ozone: Parallelize ChunkOutputSream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch
>
>
> The ChunkOutputStream writes are synchronous in nature. Once one chunk of 
> data gets written, the next chunk write is blocked until the previous chunk 
> is written to the container.
> The ChunkOutputStream writes should be made asynchronous, and close() on the 
> OutputStream should ensure flushing of all dirty buffers to the container.
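
For illustration, a hedged sketch of the asynchronous pattern described above; 
{{writeToContainer}} and {{flushDirtyBuffers}} are hypothetical helpers, not 
the patch's API:

{code}
// Hypothetical sketch: queue chunk writes asynchronously; close() waits
// for all outstanding writes before flushing dirty buffers.
private final List<CompletableFuture<Void>> pending = new ArrayList<>();

void writeChunkAsync(ByteBuffer chunk) {
  pending.add(CompletableFuture.runAsync(() -> writeToContainer(chunk)));
}

@Override
public void close() throws IOException {
  CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
  flushDirtyBuffers();
}
{code}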



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13006) Ozone: TestContainerPersistence#testMultipleWriteSingleRead and TestContainerPersistence#testOverWrite fail consistently

2018-01-10 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13006:
--

 Summary: Ozone: 
TestContainerPersistence#testMultipleWriteSingleRead and 
TestContainerPersistence#testOverWrite fail consistently
 Key: HDFS-13006
 URL: https://issues.apache.org/jira/browse/HDFS-13006
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: HDFS-7240
Affects Versions: HDFS-7240
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: HDFS-7240


testMultipleWriteSingleRead:
{code}
org.junit.ComparisonFailure: 
Expected :ba7110cc8c721d04fa60639cd065d5cb5d78ffe05b30c8ab05684f63b7ecbb81
Actual   :496271a1c82a712c4716b12c96017c97c46d30d825588bc0605d54200dab5c87
 
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testMultipleWriteSingleRead(TestContainerPersistence.java:586)
{code}

testOverWrite :
{code}
 java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testOverWrite(TestContainerPersistence.java:534)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2018-01-10 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321092#comment-16321092
 ] 

Yongjun Zhang commented on HDFS-12051:
--

Hi [~szetszwo],

Thanks for chiming in; good comments. Do you think 
[~mi...@cloudera.com] addressed all your concerns?



> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, 
> HDFS-12051.06.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages 
> several million HDFS files/directories, the NN needs a lot of memory. 
> Analyzing one heap dump with jxray (www.jxray.com), we observed that 
> duplicate byte[] arrays result in 6.5% memory overhead, and most of these 
> arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> 
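
For illustration, a hedged sketch of the interning idea described above: a 
canonical map keyed on array contents, so that duplicate byte[] name arrays 
share one instance. The names here are illustrative, not the patch's API:

{code}
// Hypothetical sketch: canonicalize identical byte[] name arrays so that
// duplicates share a single instance instead of each holding its own copy.
private static final ConcurrentHashMap<ByteBuffer, byte[]> INTERNED =
    new ConcurrentHashMap<>();

static byte[] intern(byte[] name) {
  if (name == null) {
    return null;
  }
  ByteBuffer key = ByteBuffer.wrap(name);  // equals/hashCode use contents
  byte[] prior = INTERNED.putIfAbsent(key, name);
  return prior != null ? prior : name;
}
{code}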

[jira] [Commented] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321016#comment-16321016
 ] 

genericqa commented on HDFS-12070:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12070 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905500/lease.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b64fafa48951 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1a09da7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDFS-12966) Ozone: owner name should be set properly when the container allocation happens

2018-01-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12966:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~shashikant] for the update. Latest patch LGTM, +1. I've committed the 
patch to the feature branch. 

> Ozone: owner name should be set properly when the container allocation happens
> --
>
> Key: HDFS-12966
> URL: https://issues.apache.org/jira/browse/HDFS-12966
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12966-HDFS-7240.001.patch, 
> HDFS-12966-HDFS-7240.002.patch, HDFS-12966-HDFS-7240.003.patch, 
> HDFS-12966-HDFS-7240.004.patch
>
>
> Currently, while container allocation happens, the owner name is hardcoded 
> as "OZONE".
> It should be set to the KSM instance id / CBlock Manager instance id from 
> which the container creation call originates. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12871) Ozone: Service Discovery: Adding REST server details in ServiceList

2018-01-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12871:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the contribution. I've committed the patch to the 
feature branch. 

> Ozone: Service Discovery: Adding REST server details in ServiceList
> ---
>
> Key: HDFS-12871
> URL: https://issues.apache.org/jira/browse/HDFS-12871
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12871-HDFS-7240.000.patch, 
> HDFS-12871-HDFS-7240.001.patch, HDFS-12871-HDFS-7240.002.patch
>
>
> The datanode (REST server) details have to be added as part of the 
> ServiceDiscovery {{getServiceList}} call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12871) Ozone: Service Discovery: Adding REST server details in ServiceList

2018-01-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320941#comment-16320941
 ] 

Xiaoyu Yao commented on HDFS-12871:
---

[~nandakumar131], you are right. I take back my previous comment. 
+1 for the v002 patch; I will commit it shortly. 

> Ozone: Service Discovery: Adding REST server details in ServiceList
> ---
>
> Key: HDFS-12871
> URL: https://issues.apache.org/jira/browse/HDFS-12871
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12871-HDFS-7240.000.patch, 
> HDFS-12871-HDFS-7240.001.patch, HDFS-12871-HDFS-7240.002.patch
>
>
> The datanode (REST server) details have to be added as part of the 
> ServiceDiscovery {{getServiceList}} call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12984) BlockPoolSlice can leak in a mini dfs cluster

2018-01-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320891#comment-16320891
 ] 

Arpit Agarwal commented on HDFS-12984:
--

+1 pending Jenkins.

> BlockPoolSlice can leak in a mini dfs cluster
> -
>
> Key: HDFS-12984
> URL: https://issues.apache.org/jira/browse/HDFS-12984
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.5
>Reporter: Robert Joseph Evans
>Assignee: Ajay Kumar
> Attachments: HDFS-12984.001.patch, Screen Shot 2018-01-05 at 4.38.06 
> PM.png, Screen Shot 2018-01-05 at 5.26.54 PM.png, Screen Shot 2018-01-05 at 
> 5.31.52 PM.png
>
>
> When running some unit tests for Storm, we found that we would occasionally 
> get out-of-memory errors in the HDFS integration tests.
> When I got a heap dump, I found that the ShutdownHookManager was full of 
> BlockPoolSlice$1 instances, which hold a reference to the BlockPoolSlice, 
> which in turn holds a reference to the DataNode, etc.
> It looks like when shutdown is called on the BlockPoolSlice, there is no way 
> to remove the shutdown hook because no reference to it is saved.
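
For illustration, a hedged sketch of the usual remedy: keep a reference to the 
hook so it can be unregistered later. Hadoop's ShutdownHookManager does expose 
addShutdownHook/removeShutdownHook; the surrounding names are illustrative:

{code}
// Hypothetical sketch: save the hook reference at registration time so
// shutdown() can remove it instead of leaking it in ShutdownHookManager.
private final Runnable shutdownHook = this::saveDfsUsed;  // illustrative body

void registerShutdownHook() {
  ShutdownHookManager.get().addShutdownHook(shutdownHook,
      SHUTDOWN_HOOK_PRIORITY);  // illustrative priority constant
}

void shutdown() {
  ShutdownHookManager.get().removeShutdownHook(shutdownHook);
}
{code}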



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12984) BlockPoolSlice can leak in a mini dfs cluster

2018-01-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12984:
-
Status: Patch Available  (was: Open)

> BlockPoolSlice can leak in a mini dfs cluster
> -
>
> Key: HDFS-12984
> URL: https://issues.apache.org/jira/browse/HDFS-12984
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.5
>Reporter: Robert Joseph Evans
>Assignee: Ajay Kumar
> Attachments: HDFS-12984.001.patch, Screen Shot 2018-01-05 at 4.38.06 
> PM.png, Screen Shot 2018-01-05 at 5.26.54 PM.png, Screen Shot 2018-01-05 at 
> 5.31.52 PM.png
>
>
> When running some unit tests for Storm, we found that we would occasionally 
> get out-of-memory errors in the HDFS integration tests.
> When I got a heap dump, I found that the ShutdownHookManager was full of 
> BlockPoolSlice$1 instances, which hold a reference to the BlockPoolSlice, 
> which in turn holds a reference to the DataNode, etc.
> It looks like when shutdown is called on the BlockPoolSlice, there is no way 
> to remove the shutdown hook because no reference to it is saved.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputSream Writes to container

2018-01-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320887#comment-16320887
 ] 

Xiaoyu Yao commented on HDFS-12794:
---

Thanks [~shashikant] for the update. The latest patch LGTM; just a few minor 
issues below. Can you also manually run the failed Jenkins tests and confirm 
whether they are related to this patch? If not, please file JIRAs to fix them 
separately. 

OzoneConfigKeys.java
Line 110: can you define a default const for stream buffer size?

ChunkGroupOutputStream.java
Line 124: NIT: the java doc needs to be updated block size-> stream buffer size
Line 93/458:  can you use the const from OzoneConfigKeys for max stream buffer 
size?
Line 428: NIT: blockSize->streamBufferSize?


> Ozone: Parallelize ChunkOutputSream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch
>
>
> The ChunkOutputStream writes are synchronous in nature. Once one chunk of 
> data gets written, the next chunk write is blocked until the previous chunk 
> is written to the container.
> The ChunkOutputStream writes should be made asynchronous, and close() on the 
> OutputStream should ensure flushing of all dirty buffers to the container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12871) Ozone: Service Discovery: Adding REST server details in ServiceList

2018-01-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320871#comment-16320871
 ] 

Nanda kumar commented on HDFS-12871:


[~xyao], the {{build()}} calls in lines 878/884 are for {{ServicePort}} (HTTP 
and HTTPS), and the one in line 886 is for {{ServiceInfo}} 
(dnServiceInfoBuilder). We need all three build calls.

> Ozone: Service Discovery: Adding REST server details in ServiceList
> ---
>
> Key: HDFS-12871
> URL: https://issues.apache.org/jira/browse/HDFS-12871
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12871-HDFS-7240.000.patch, 
> HDFS-12871-HDFS-7240.001.patch, HDFS-12871-HDFS-7240.002.patch
>
>
> The datanode (REST server) details have to be added as part of the 
> ServiceDiscovery {{getServiceList}} call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13004) TestLeaseRecoveryStriped#testLeaseRecovery is failing

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320860#comment-16320860
 ] 

genericqa commented on HDFS-13004:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13004 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905493/HDFS-13004.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 73965a9c84c3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1a09da7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22635/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22635/testReport/ |
| Max. process+thread count | 2976 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22635/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


[jira] [Commented] (HDFS-12871) Ozone: Service Discovery: Adding REST server details in ServiceList

2018-01-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320859#comment-16320859
 ] 

Xiaoyu Yao commented on HDFS-12871:
---

Thanks [~nandakumar131] for the update. One more comment on the latest patch; 
+1 after that is fixed.

KeySpaceManager.java
Line 878/884: Can you remove the redundant ".build());" as we only need to 
build it once at line 886? 

> Ozone: Service Discovery: Adding REST server details in ServiceList
> ---
>
> Key: HDFS-12871
> URL: https://issues.apache.org/jira/browse/HDFS-12871
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12871-HDFS-7240.000.patch, 
> HDFS-12871-HDFS-7240.001.patch, HDFS-12871-HDFS-7240.002.patch
>
>
> The datanode (REST server) details have to be added as part of the 
> ServiceDiscovery {{getServiceList}} call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320773#comment-16320773
 ] 

genericqa commented on HDFS-12919:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 48s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
39s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905501/HDFS-12919.016.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 25c8e0da5b3d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1a09da7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22637/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| 

[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-01-10 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Attachment: HDFS-12919.016.patch

> RBF: Support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Critical
>  Labels: RBF
> Attachments: HDFS-12919.000.patch, HDFS-12919.001.patch, 
> HDFS-12919.002.patch, HDFS-12919.003.patch, HDFS-12919.004.patch, 
> HDFS-12919.005.patch, HDFS-12919.006.patch, HDFS-12919.007.patch, 
> HDFS-12919.008.patch, HDFS-12919.009.patch, HDFS-12919.010.patch, 
> HDFS-12919.011.patch, HDFS-12919.012.patch, HDFS-12919.013.patch, 
> HDFS-12919.013.patch, HDFS-12919.014.patch, HDFS-12919.015.patch, 
> HDFS-12919.016.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2018-01-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12070:
--
Status: Patch Available  (was: Open)

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
> Attachments: lease.patch
>
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node with a list of candidate nodes for recovery, 
> which involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, a candidate must be alive and know 
> about the block) to create a sync list.  Stage 2 issues updates to the sync 
> list – _but fails if any node fails_, unlike the first stage.  The NN should 
> be informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated from the bad node.
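
For illustration, a hedged sketch of the stage-2 behavior the description 
argues for; the names are loosely modeled on the DataNode recovery path and 
are assumptions, not the attached patch:

{code}
// Hypothetical sketch: do not abort stage 2 on the first failed node;
// commit the recovery with the replicas that did succeed.
List<BlockRecord> succeeded = new ArrayList<>();
for (BlockRecord r : syncList) {
  try {
    r.updateReplicaUnderRecovery(block, recoveryId, newLength);
    succeeded.add(r);
  } catch (IOException e) {
    LOG.warn("Recovery failed on " + r + "; excluding it from the sync list", e);
  }
}
if (succeeded.isEmpty()) {
  throw new IOException("Block recovery failed on all nodes in " + syncList);
}
// Inform the NN using only the replicas that succeeded.
commitBlockSynchronization(succeeded);
{code}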



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2018-01-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12070:
--
Attachment: lease.patch

Attaching a patch without any unit test to run through the qa bot. 

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
> Attachments: lease.patch
>
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node with a list of candidate nodes for recovery, 
> which involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, a candidate must be alive and know 
> about the block) to create a sync list.  Stage 2 issues updates to the sync 
> list – _but fails if any node fails_, unlike the first stage.  The NN should 
> be informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery then succeeds, the lease is released, 
> under-replication is fixed, and the block is invalidated from the bad node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13004) TestLeaseRecoveryStriped#testLeaseRecovery is failing

2018-01-10 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13004:
-
Attachment: HDFS-13004.02.patch

> TestLeaseRecoveryStriped#testLeaseRecovery is failing
> -
>
> Key: HDFS-13004
> URL: https://issues.apache.org/jira/browse/HDFS-13004
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 3.0.1
>
> Attachments: HDFS-13004.01.patch, HDFS-13004.02.patch
>
>
> {code}
> Error:
> failed testCase at i=1, 
> blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths=
> {4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304},safeLength=25165824]
> java.lang.AssertionError: File length should be the same expected:<25165824> but was:<18874368>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79)
> at org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362)
> at org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198)
> at org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
> at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> Stack:
> java.lang.AssertionError: failed testCase at i=1, blockLengths=org.apache.hadoop.hdfs.TestLeaseRecoveryStriped$BlockLengths@5a4c638d[blockLengths={4194304,4194304,4194304,1048576,4194304,4194304,2097152,1048576,4194304},safeLength=25165824]
> java.lang.AssertionError: File length should be the same expected:<25165824> but was:<18874368>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.apache.hadoop.hdfs.StripedFileTestUtil.verifyLength(StripedFileTestUtil.java:79)
> at org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:362)
> at org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:198)
> at org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:182)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 

[jira] [Commented] (HDFS-12871) Ozone: Service Discovery: Adding REST server details in ServiceList

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320268#comment-16320268
 ] 

genericqa commented on HDFS-12871:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  6m 18s{color} | {color:red} Docker failed to build yetus/hadoop:d11161b. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12871 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12905450/HDFS-12871-HDFS-7240.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22634/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Service Discovery: Adding REST server details in ServiceList
> ---
>
> Key: HDFS-12871
> URL: https://issues.apache.org/jira/browse/HDFS-12871
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12871-HDFS-7240.000.patch, 
> HDFS-12871-HDFS-7240.001.patch, HDFS-12871-HDFS-7240.002.patch
>
>
> The datanode (REST server) details have to be added as part of the 
> ServiceDiscovery {{getServiceList}} call.
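
For illustration, the shape of the change might look like the following 
self-contained sketch, where the KSM-side {{getServiceList}} adds one entry 
per datanode REST server. All type names here ({{ServiceInfo}}, 
{{DatanodeDetails}}) are stand-ins invented for this example, not the API 
from the patch:

{code}
import java.util.ArrayList;
import java.util.List;

class ServiceListSketch {

  // Hypothetical stand-in for the real service descriptor.
  static final class ServiceInfo {
    final String nodeType;  // e.g. "KSM", "SCM", "DATANODE"
    final String hostname;
    final int restPort;

    ServiceInfo(String nodeType, String hostname, int restPort) {
      this.nodeType = nodeType;
      this.hostname = hostname;
      this.restPort = restPort;
    }
  }

  // Hypothetical datanode view; in reality this would come from SCM.
  static final class DatanodeDetails {
    final String hostname;
    final int restPort;

    DatanodeDetails(String hostname, int restPort) {
      this.hostname = hostname;
      this.restPort = restPort;
    }
  }

  // getServiceList() would include one entry per datanode REST server.
  static List<ServiceInfo> getServiceList(List<DatanodeDetails> datanodes) {
    List<ServiceInfo> services = new ArrayList<>();
    for (DatanodeDetails dn : datanodes) {
      services.add(new ServiceInfo("DATANODE", dn.hostname, dn.restPort));
    }
    return services;
  }
}
{code}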



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12868) Ozone: Service Discovery API

2018-01-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12868:
---
Description: 
Currently, if a client wants to connect to an Ozone cluster, multiple 
properties need to be configured on the client.

For an RPC based connection we need
{{ozone.ksm.address}}
{{ozone.scm.client.address}}
and the ports, if something other than the default is configured.

For a REST based connection we need
{{ozone.rest.servers}}
and the port, if something other than the default is configured.

With the introduction of the Service Discovery API the client should be able 
to discover all the configurations needed for the connection. Service 
discovery calls will be handled by KSM; on the client side, we only need to 
configure {{ozone.ksm.address}}. The client should first connect to KSM and 
get all the required configurations.


  was:
qCurrently if a client wants to connect to Ozone cluster we need multiple 
properties to be configured in the client.

For RPC based connection we need
{{ozone.ksm.address}}
{{ozone.scm.client.address}}
and the ports if something other than default is configured.

For REST based connection
{{ozone.rest.servers}}
and port if something other than default is configured.

With the introduction of Service Discovery API the client should be able to 
discover all the configurations needed for the connection. Service discovery 
calls will be handled by KSM, at the client side, we only need to configure 
{{ozone.ksm.address}}. The client should first connect to KSM and get all the 
required configurations.



> Ozone: Service Discovery API
> 
>
> Key: HDFS-12868
> URL: https://issues.apache.org/jira/browse/HDFS-12868
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> Currently, if a client wants to connect to an Ozone cluster, multiple 
> properties need to be configured on the client.
> For an RPC based connection we need
> {{ozone.ksm.address}}
> {{ozone.scm.client.address}}
> and the ports, if something other than the default is configured.
> For a REST based connection we need
> {{ozone.rest.servers}}
> and the port, if something other than the default is configured.
> With the introduction of the Service Discovery API the client should be able 
> to discover all the configurations needed for the connection. Service 
> discovery calls will be handled by KSM; on the client side, we only need to 
> configure {{ozone.ksm.address}}. The client should first connect to KSM and 
> get all the required configurations.
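
A minimal sketch of that client-side flow, assuming a hypothetical 
{{getServiceList}} call that returns the discovered configuration key/value 
pairs (none of the type or method names below are taken from the patches):

{code}
import java.util.Map;

class OzoneClientBootstrap {

  // Hypothetical discovery interface; the real call is served by KSM.
  interface KsmClient {
    Map<String, String> getServiceList();
  }

  static void connect(KsmClient ksm) {
    // Only ozone.ksm.address is configured locally; everything else
    // is discovered from KSM on first contact.
    Map<String, String> discovered = ksm.getServiceList();
    String scmAddress = discovered.get("ozone.scm.client.address");
    String restServers = discovered.get("ozone.rest.servers");
    // ... open the RPC/REST connections using the discovered values ...
    System.out.println("SCM: " + scmAddress + ", REST: " + restServers);
  }
}
{code}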



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12871) Ozone: Service Discovery: Adding REST server details in ServiceList

2018-01-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12871:
---
Attachment: HDFS-12871-HDFS-7240.002.patch

Thanks [~xyao] for the review; addressed the comments in patch v002.

> Ozone: Service Discovery: Adding REST server details in ServiceList
> ---
>
> Key: HDFS-12871
> URL: https://issues.apache.org/jira/browse/HDFS-12871
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12871-HDFS-7240.000.patch, 
> HDFS-12871-HDFS-7240.001.patch, HDFS-12871-HDFS-7240.002.patch
>
>
> The datanode (REST server) details have to be added as part of the 
> ServiceDiscovery {{getServiceList}} call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2018-01-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320229#comment-16320229
 ] 

Vinayakumar B commented on HDFS-11409:
--

Cherry picked to branch-2.8.

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.4
>
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has the synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, while they do nothing more than setting and 
> getting the variable {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and only make {{location}} volatile, so that threads 
> will not be blocked on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.
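
For reference, the pattern of the change is roughly the following 
(a simplified sketch of the before/after shape, not the exact patch):

{code}
class Before {
  private String location;

  // Threads serialize on the object monitor just to read/write one field.
  synchronized String getNetworkLocation() { return location; }
  synchronized void setNetworkLocation(String loc) { this.location = loc; }
}

class After {
  // volatile gives cross-thread visibility; since location is never
  // updated based on its current value, no atomicity is required and
  // readers/writers no longer block each other.
  private volatile String location;

  String getNetworkLocation() { return location; }
  void setNetworkLocation(String loc) { this.location = loc; }
}
{code}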



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2018-01-10 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11409:
-
Fix Version/s: 2.8.4

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.4
>
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has the synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, while they do nothing more than setting and 
> getting the variable {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and only make {{location}} volatile, so that threads 
> will not be blocked on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12868) Ozone: Service Discovery API

2018-01-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12868 started by Nanda kumar.
--
> Ozone: Service Discovery API
> 
>
> Key: HDFS-12868
> URL: https://issues.apache.org/jira/browse/HDFS-12868
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> Currently, if a client wants to connect to an Ozone cluster, multiple 
> properties need to be configured on the client.
> For an RPC based connection we need
> {{ozone.ksm.address}}
> {{ozone.scm.client.address}}
> and the ports, if something other than the default is configured.
> For a REST based connection we need
> {{ozone.rest.servers}}
> and the port, if something other than the default is configured.
> With the introduction of the Service Discovery API the client should be able 
> to discover all the configurations needed for the connection. Service 
> discovery calls will be handled by KSM; on the client side, we only need to 
> configure {{ozone.ksm.address}}. The client should first connect to KSM and 
> get all the required configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12733) Option to disable namenode local edits

2018-01-10 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12733:

Component/s: performance
 namenode

> Option to disable namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch
>
>
> As of now, edits are written to both local and shared locations, which is 
> redundant, and the local edits are never used in an HA setup.
> Disabling local edits gives a small performance improvement.
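
A sketch of how such a switch might be consumed on the NameNode side; the 
property name below is purely illustrative and not taken from the patches:

{code}
import org.apache.hadoop.conf.Configuration;

class LocalEditsToggleSketch {
  // Hypothetical key -- the real name is defined by the patch.
  static final String LOCAL_EDITS_ENABLED_KEY =
      "dfs.namenode.local.edits.enabled";

  static boolean localEditsEnabled(Configuration conf) {
    // Default to the current behavior (local edits on) for compatibility.
    return conf.getBoolean(LOCAL_EDITS_ENABLED_KEY, true);
  }
}
{code}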



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2018-01-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320197#comment-16320197
 ] 

Brahma Reddy Battula commented on HDFS-8693:


+1 on the latest patch. [~vinayrpet] and [~kihwal], do you have any comments 
on the latest patch?

> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Attachments: HDFS-8693.02.patch, HDFS-8693.03.patch, HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support 
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh name nodes on data nodes after I replaced one name node with a new 
> one so that I don't need to restart the data nodes. However, I got the 
> following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> {code}
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException(
>         "HA does not currently support adding a new standby to a running DN. "
>         + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>   }
> }
> {code}
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, picking up the new name node on a replacement is critical 
> for auto-provisioning a Hadoop cluster with HDFS HA support; without this, 
> the HA feature cannot really be used. I also observed that the new standby 
> name node on the replacement instance can get stuck in safe mode because no 
> data nodes check in with it. Even with a rolling restart, it may take quite 
> some time to restart all data nodes on a big cluster, for example one with 
> 4000 data nodes, not to mention that restarting DNs is far too intrusive 
> and not a preferable operation in production. It also increases the chance 
> of a double failure because the standby name node is not really ready for a 
> failover if the current active name node fails.
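
To make the guard in the snippet above concrete: {{Sets.symmetricDifference}} 
is non-empty whenever the old and new NN address sets differ in either 
direction, so any added or removed NN takes the exception path. A small 
standalone illustration using Guava, as the snippet does (the addresses are 
made up for the example):

{code}
import com.google.common.collect.Sets;
import java.util.Set;

class SymmetricDifferenceDemo {
  public static void main(String[] args) {
    Set<String> oldAddrs = Sets.newHashSet("nn1:8020", "nn2:8020");
    Set<String> newAddrs = Sets.newHashSet("nn1:8020", "nn3:8020");

    // Elements in exactly one of the two sets: [nn2:8020, nn3:8020].
    // Non-empty, so refreshNNList would throw the IOException above.
    System.out.println(Sets.symmetricDifference(oldAddrs, newAddrs));
  }
}
{code}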



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11409) DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile instead of synchronized

2018-01-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320192#comment-16320192
 ] 

Vinayakumar B commented on HDFS-11409:
--

bq. it's good candidate for branch-2.8 ?
Yes. Since it's a straightforward change, it can be cherry-picked to 
branch-2.8.

> DatanodeInfo getNetworkLocation and setNetworkLocation should use volatile 
> instead of synchronized
> -
>
> Key: HDFS-11409
> URL: https://issues.apache.org/jira/browse/HDFS-11409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11409.001.patch
>
>
> {{DatanodeInfo}} has the synchronized methods {{getNetworkLocation}} and 
> {{setNetworkLocation}}, while they do nothing more than setting and 
> getting the variable {{location}}.
> Since {{location}} is not modified based on its current value and is 
> independent of any other variables, this JIRA proposes to remove the 
> synchronized methods and only make {{location}} volatile, so that threads 
> will not be blocked on get/setNetworkLocation.
> Thanks [~szetszwo] for the offline discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12934) RBF: Federation supports global quota

2018-01-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319861#comment-16319861
 ] 

Yiqun Lin edited comment on HDFS-12934 at 1/10/18 11:58 AM:


Attaching a new patch to fix the compile error on Java 7 for branch-2.


was (Author: linyiqun):
Attach new patch to fix compile error in branch-2.

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HDFS-12934-branch-2.001.patch, 
> HDFS-12934-branch-2.002.patch, HDFS-12934.001.patch, HDFS-12934.002.patch, 
> HDFS-12934.003.patch, HDFS-12934.004.patch, HDFS-12934.005.patch, 
> HDFS-12934.006.patch, HDFS-12934.007.patch, HDFS-12934.008.patch, RBF support 
>  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each 
> folder; the quota is applied to each subcluster under the specified 
> folder via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set the global quota across each subcluster. We don't allow any 
> subcluster to exceed the maximum quota value.
> # We need to construct one  cache map for storing the sum  
> quota usage of these subclusters under the federation folder. Every time we 
> want to do a WRITE operation under a specified folder, we will get its 
> quota usage from the cache and verify its quota. If the quota is exceeded, 
> throw an exception; otherwise update its quota usage in the cache when the 
> operation finishes.
> The quota will be set on the mount table as a new field. The 
> set/unset commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns  -ss  
>  hdfs dfsrouteradmin -clrQuota  
> {noformat}
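
A minimal sketch of the cache-then-verify flow described above; the class 
and method names are invented for illustration and do not come from the 
patches:

{code}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

class GlobalQuotaSketch {
  static final class QuotaEntry {
    final long quota;  // configured global quota for the mount point
    long used;         // cached aggregate usage across subclusters

    QuotaEntry(long quota, long used) {
      this.quota = quota;
      this.used = used;
    }
  }

  private final ConcurrentHashMap<String, QuotaEntry> cache =
      new ConcurrentHashMap<>();

  // Called before a WRITE under mountPoint; delta is the requested growth.
  void verifyAndUpdate(String mountPoint, long delta) throws IOException {
    QuotaEntry entry = cache.get(mountPoint);
    if (entry == null) {
      return;  // no global quota configured for this mount point
    }
    synchronized (entry) {
      if (entry.used + delta > entry.quota) {
        throw new IOException("Global quota exceeded for " + mountPoint);
      }
      entry.used += delta;  // update the cached usage once the op proceeds
    }
  }
}
{code}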



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12934) RBF: Federation supports global quota

2018-01-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12934:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.10.0
  3.1.0
Target Version/s: 3.1.0  (was: 2.9.1, 3.0.1)
  Status: Resolved  (was: Patch Available)

The failed UTs are not related. I have committed this to trunk and branch-2. 
Since there may still be some further testing or improvement work to do, 
I didn't commit this to the minor version branches. 
Thanks for the review, [~elgoiri]!

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HDFS-12934-branch-2.001.patch, 
> HDFS-12934-branch-2.002.patch, HDFS-12934.001.patch, HDFS-12934.002.patch, 
> HDFS-12934.003.patch, HDFS-12934.004.patch, HDFS-12934.005.patch, 
> HDFS-12934.006.patch, HDFS-12934.007.patch, HDFS-12934.008.patch, RBF support 
>  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each 
> folder; the quota is applied to each subcluster under the specified 
> folder via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set the global quota across each subcluster. We don't allow any 
> subcluster to exceed the maximum quota value.
> # We need to construct one  cache map for storing the sum  
> quota usage of these subclusters under the federation folder. Every time we 
> want to do a WRITE operation under a specified folder, we will get its 
> quota usage from the cache and verify its quota. If the quota is exceeded, 
> throw an exception; otherwise update its quota usage in the cache when the 
> operation finishes.
> The quota will be set on the mount table as a new field. The 
> set/unset commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns  -ss  
>  hdfs dfsrouteradmin -clrQuota  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13000) Ozone: OzoneFileSystem: Implement seek functionality for rest client

2018-01-10 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13000:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-7240

> Ozone: OzoneFileSystem: Implement seek functionality for rest client
> 
>
> Key: HDFS-13000
> URL: https://issues.apache.org/jira/browse/HDFS-13000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>
> This jira aims to add seek functionality to the REST client input stream.
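
One common way to get seek support over a REST byte stream is to track the 
current position and re-open the stream at the new offset (for example via 
an HTTP Range request). A simplified sketch of that pattern follows; 
{{StreamOpener}} is an invented stand-in for the REST client, not the 
actual API:

{code}
import java.io.IOException;
import java.io.InputStream;

class SeekableRestInputStream extends InputStream {

  // Hypothetical hook that opens the remote object at a byte offset.
  interface StreamOpener {
    InputStream openAt(long offset) throws IOException;
  }

  private final StreamOpener opener;
  private InputStream in;
  private long pos;

  SeekableRestInputStream(StreamOpener opener) throws IOException {
    this.opener = opener;
    this.in = opener.openAt(0);
  }

  public void seek(long newPos) throws IOException {
    if (newPos == pos) {
      return;                    // already at the target offset
    }
    in.close();                  // drop the current connection
    in = opener.openAt(newPos);  // re-open at the requested offset
    pos = newPos;
  }

  public long getPos() {
    return pos;
  }

  @Override
  public int read() throws IOException {
    int b = in.read();
    if (b >= 0) {
      pos++;
    }
    return b;
  }

  @Override
  public void close() throws IOException {
    in.close();
  }
}
{code}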



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12934) RBF: Federation supports global quota

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319994#comment-16319994
 ] 

genericqa commented on HDFS-12934:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 10s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 391 unchanged - 0 fixed = 393 total (was 391) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m  3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  5s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 37s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:22 |
| Failed junit tests | hadoop.hdfs.TestListFilesInDFS |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestDatanodeRegistration |
|   | org.apache.hadoop.hdfs.TestBlocksScheduledCounter |
|   | org.apache.hadoop.hdfs.TestDFSClientFailover |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsTokens |
|   | org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade |
|   | org.apache.hadoop.hdfs.TestFileAppendRestart |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
|   | org.apache.hadoop.hdfs.TestDFSOutputStream |
|   | org.apache.hadoop.hdfs.TestDatanodeReport |
|   | org.apache.hadoop.hdfs.web.TestWebHDFS |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSXAttr |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | org.apache.hadoop.hdfs.TestMiniDFSCluster |
|   | org.apache.hadoop.hdfs.TestDistributedFileSystem |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSForHA |
|   | org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication |
|   | org.apache.hadoop.hdfs.TestDFSShell |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSAcl |
\\
\\
|| Subsystem || 

[jira] [Commented] (HDFS-11194) Maintain aggregated peer performance metrics on NameNode

2018-01-10 Thread Leo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319891#comment-16319891
 ] 

Leo Chen commented on HDFS-11194:
-

Hi Arpit, what's the metrics name in the NameNode JMX info that a user can search for?

> Maintain aggregated peer performance metrics on NameNode
> 
>
> Key: HDFS-11194
> URL: https://issues.apache.org/jira/browse/HDFS-11194
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Xiaobing Zhou
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11194-03-04.delta, HDFS-11194.01.patch, 
> HDFS-11194.02.patch, HDFS-11194.03.patch, HDFS-11194.04.patch, 
> HDFS-11194.05.patch, HDFS-11194.06.patch
>
>
> The metrics collected in HDFS-10917 should be reported to and aggregated on 
> the NameNode as part of heartbeat messages. This will make it easy to expose 
> them through JMX to users who are interested in them.
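
For browsing what the NameNode actually exposes, its /jmx servlet returns 
the registered beans as JSON; a small self-contained sketch follows. The 
host/port and the {{qry}} filter are illustrative, and the exact bean name 
for the aggregated peer metrics is not asserted here:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class JmxQuerySketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical host; 50070 is the default 2.x NameNode HTTP port.
    URL url = new URL("http://namenode.example.com:50070/jmx?qry=Hadoop:*");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = r.readLine()) != null) {
        System.out.println(line);  // JSON document listing the beans
      }
    }
  }
}
{code}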



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12934) RBF: Federation supports global quota

2018-01-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12934:
-
Attachment: HDFS-12934-branch-2.002.patch

Attaching a new patch to fix the compile error in branch-2.

> RBF: Federation supports global quota
> -
>
> Key: HDFS-12934
> URL: https://issues.apache.org/jira/browse/HDFS-12934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12934-branch-2.001.patch, 
> HDFS-12934-branch-2.002.patch, HDFS-12934.001.patch, HDFS-12934.002.patch, 
> HDFS-12934.003.patch, HDFS-12934.004.patch, HDFS-12934.005.patch, 
> HDFS-12934.006.patch, HDFS-12934.007.patch, HDFS-12934.008.patch, RBF support 
>  global quota.pdf
>
>
> Currently federation doesn't support setting a global quota for each 
> folder; the quota is applied to each subcluster under the specified 
> folder via RPC calls.
> It would be very useful for users if federation supported setting a global 
> quota and exposed a command for it.
> In a federated environment, a folder can be spread across multiple 
> subclusters. For this reason, we plan to solve this in the following way:
> # Set the global quota across each subcluster. We don't allow any 
> subcluster to exceed the maximum quota value.
> # We need to construct one  cache map for storing the sum  
> quota usage of these subclusters under the federation folder. Every time we 
> want to do a WRITE operation under a specified folder, we will get its 
> quota usage from the cache and verify its quota. If the quota is exceeded, 
> throw an exception; otherwise update its quota usage in the cache when the 
> operation finishes.
> The quota will be set on the mount table as a new field. The 
> set/unset commands will look like:
> {noformat}
>  hdfs dfsrouteradmin -setQuota -ns  -ss  
>  hdfs dfsrouteradmin -clrQuota  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org