[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Status: Patch Available  (was: Open)

> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> {{headerLen=length(HEADER)}}
> {{payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}
> So {{totalLen = payloadLen + headerLen + length(HLEN)}}
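For reference, a minimal sketch of the arithmetic above, following the HDFS data-transfer packet layout (PLEN is a 4-byte int, HLEN a 2-byte short); the method is illustrative, not the actual PacketReceiver code:
{code}
// PLEN (payloadLen) = length(PLEN) + length(CHECKSUMS) + length(DATA)
// HLEN (headerLen)  = length(HEADER)
static int totalPacketLength(int payloadLen, short headerLen) {
  // was: payloadLen + headerLen, which misses the 2 bytes of the HLEN field
  return payloadLen + headerLen + Short.BYTES; // length(HLEN) == 2
}
{code}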






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
{{headerLen=length(HEADER)}}
{{payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

So {{totalLen = payloadLen + headerLen + length(HLEN)}}

  was:
{{headerLen=length(HEADER)}}
{{payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

{{totalLen = payloadLen + headerLen}}


> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> {{headerLen=length(HEADER)}}
> {{payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}
> So {{totalLen = payloadLen + headerLen + length(HLEN)}}






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
{{headerLen=length(HEADER)}}
{{payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

{{totalLen = payloadLen + headerLen}}

  was:
{{headerLen=length(HEADER)
payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

{{totalLen = payloadLen + headerLen}}


> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> {{headerLen=length(HEADER)}}
> {{payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}
> {{totalLen = payloadLen + headerLen}}






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
{{headerLen=length(HEADER)
payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

{{totalLen = payloadLen + headerLen}}

  was:
{{* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

{{totalLen = payloadLen + headerLen}}


> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> {{headerLen=length(HEADER)
> payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}
> {{totalLen = payloadLen + headerLen}}






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)

{{totalLen = payloadLen + headerLen}}

  was:
* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)

bq. totalLen = payloadLen + headerLen{{monospaced text}}


> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> * headerLen=length(HEADER)
> * payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)
> {{totalLen = payloadLen + headerLen}}






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
{{* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}

{{totalLen = payloadLen + headerLen}}

  was:
* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)

{{totalLen = payloadLen + headerLen}}


> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> {{* headerLen=length(HEADER)
> * payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)}}
> {{totalLen = payloadLen + headerLen}}






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)

bq. totalLen = payloadLen + headerLen{{monospaced text}}

  was:
* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)

bq. totalLen = payloadLen + headerLen


> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> * headerLen=length(HEADER)
> * payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)
> bq. totalLen = payloadLen + headerLen{{monospaced text}}






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Description: 
* headerLen=length(HEADER)
* payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)

bq. totalLen = payloadLen + headerLen

> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>
> * headerLen=length(HEADER)
> * payloadLen=length(PLEN) + length(CHECKSUMS) + length(DATA)
> bq. totalLen = payloadLen + headerLen






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Attachment: HDFS-12393.001.patch

> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
> Attachments: HDFS-12393.001.patch
>
>







[jira] [Updated] (HDFS-12392) Write striped file failure

2017-09-04 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-12392:
-
Status: Patch Available  (was: Open)

> Write striped file failure
> --
>
> Key: HDFS-12392
> URL: https://issues.apache.org/jira/browse/HDFS-12392
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12392.001.patch
>
>
> Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
> larger than the caller expected.
> Exception stack:
> org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of 
> length 4096
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Comment Edited] (HDFS-12392) Write striped file failure

2017-09-04 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153129#comment-16153129
 ] 

SammiChen edited comment on HDFS-12392 at 9/5/17 5:45 AM:
--

Initial patch

Hi [~drankye], would you please help to review the patch? 


was (Author: sammi):
Initial patch

> Write striped file failure
> --
>
> Key: HDFS-12392
> URL: https://issues.apache.org/jira/browse/HDFS-12392
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12392.001.patch
>
>
> Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
> larger than the caller expected.
> Exception stack:
> org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of 
> length 4096
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Updated] (HDFS-12392) Write striped file failure

2017-09-04 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-12392:
-
Attachment: HDFS-12392.001.patch

Initial patch

> Write striped file failure
> --
>
> Key: HDFS-12392
> URL: https://issues.apache.org/jira/browse/HDFS-12392
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12392.001.patch
>
>
> Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
> larger than the caller expected.
> Exception stack:
> org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of 
> length 4096
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Component/s: hdfs-client

> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
>







[jira] [Updated] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

legend updated HDFS-12393:
--
Flags: Patch

> Fix incorrect package length for doRead in PacketReceiver
> -
>
> Key: HDFS-12393
> URL: https://issues.apache.org/jira/browse/HDFS-12393
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
>







[jira] [Created] (HDFS-12393) Fix incorrect package length for doRead in PacketReceiver

2017-09-04 Thread legend (JIRA)
legend created HDFS-12393:
-

 Summary: Fix incorrect package length for doRead in PacketReceiver
 Key: HDFS-12393
 URL: https://issues.apache.org/jira/browse/HDFS-12393
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha4
Reporter: legend









[jira] [Assigned] (HDFS-12392) Write striped file failure

2017-09-04 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen reassigned HDFS-12392:


Assignee: SammiChen

> Write striped file failure
> --
>
> Key: HDFS-12392
> URL: https://issues.apache.org/jira/browse/HDFS-12392
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
>
> Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
> larger than the caller expected.
> Exception stack:
> org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of 
> length 4096
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Updated] (HDFS-12388) A bad error message in DFSStripedOutputStream

2017-09-04 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12388:

Attachment: HDFS-12388.001.patch

> A bad error message in DFSStripedOutputStream
> -
>
> Key: HDFS-12388
> URL: https://issues.apache.org/jira/browse/HDFS-12388
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kai Zheng
>Assignee: Huafeng Wang
> Attachments: HDFS-12388.001.patch
>
>
> Noticed a failure reported by Jenkins in HDFS-11882. The reported error 
> message wasn't correct; it should change from {{the number of failed blocks 
> = 4 > the number of data blocks = 3}} to {{the number of failed blocks = 4 > 
> the number of parity blocks = 3}}.
> {noformat}
> Regression
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030.testBlockTokenExpired
> Failing for the past 1 build (Since Failed#20973 )
> Took 6.4 sec.
> Error Message
> Failed at i=6294527
> Stacktrace
> java.io.IOException: Failed at i=6294527
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:559)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:534)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired(TestDFSStripedOutputStreamWithFailure.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.IOException: Failed: the number of failed blocks = 4 > the 
> number of data blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:392)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:410)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1262)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:627)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:563)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:557)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:534)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired(TestDFSStripedOutputStreamWithFailure.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
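For clarity, the corrected message would compare against the parity count; a sketch with illustrative names (not the exact {{DFSStripedOutputStream}} members):
{code}
private static void checkStreamers(int failed, int numParityBlocks)
    throws IOException {
  if (failed > numParityBlocks) {
    throw new IOException("Failed: the number of failed blocks = " + failed
        + " > the number of parity blocks = " + numParityBlocks);
  }
}
{code}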




[jira] [Updated] (HDFS-12388) A bad error message in DFSStripedOutputStream

2017-09-04 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12388:

Status: Patch Available  (was: Open)

> A bad error message in DFSStripedOutputStream
> -
>
> Key: HDFS-12388
> URL: https://issues.apache.org/jira/browse/HDFS-12388
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kai Zheng
>Assignee: Huafeng Wang
> Attachments: HDFS-12388.001.patch
>
>
> Noticed a failure reported by Jenkins in HDFS-11882. The reported error 
> message wasn't correct; it should change from {{the number of failed blocks 
> = 4 > the number of data blocks = 3}} to {{the number of failed blocks = 4 > 
> the number of parity blocks = 3}}.
> {noformat}
> Regression
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030.testBlockTokenExpired
> Failing for the past 1 build (Since Failed#20973 )
> Took 6.4 sec.
> Error Message
> Failed at i=6294527
> Stacktrace
> java.io.IOException: Failed at i=6294527
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:559)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:534)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired(TestDFSStripedOutputStreamWithFailure.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.IOException: Failed: the number of failed blocks = 4 > the 
> number of data blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:392)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:410)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1262)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:627)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:563)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:557)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:534)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired(TestDFSStripedOutputStreamWithFailure.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}




[jira] [Assigned] (HDFS-12388) A bad error message in DFSStripedOutputStream

2017-09-04 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang reassigned HDFS-12388:
---

Assignee: Huafeng Wang

> A bad error message in DFSStripedOutputStream
> -
>
> Key: HDFS-12388
> URL: https://issues.apache.org/jira/browse/HDFS-12388
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kai Zheng
>Assignee: Huafeng Wang
>
> Noticed a failure reported by Jenkins in HDFS-11882. The reported error 
> message wasn't correct; it should change from {{the number of failed blocks 
> = 4 > the number of data blocks = 3}} to {{the number of failed blocks = 4 > 
> the number of parity blocks = 3}}.
> {noformat}
> Regression
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030.testBlockTokenExpired
> Failing for the past 1 build (Since Failed#20973 )
> Took 6.4 sec.
> Error Message
> Failed at i=6294527
> Stacktrace
> java.io.IOException: Failed at i=6294527
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:559)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:534)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired(TestDFSStripedOutputStreamWithFailure.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.IOException: Failed: the number of failed blocks = 4 > the 
> number of data blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:392)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:410)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1262)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:627)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:563)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:557)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:534)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired(TestDFSStripedOutputStreamWithFailure.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}




[jira] [Updated] (HDFS-12222) Add EC information to BlockLocation

2017-09-04 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12222:

Attachment: HDFS-12222.003.patch

> Add EC information to BlockLocation
> ---
>
> Key: HDFS-12222
> URL: https://issues.apache.org/jira/browse/HDFS-12222
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12222.001.patch, HDFS-12222.002.patch, 
> HDFS-12222.003.patch
>
>
> HDFS applications query block location information to compute splits. One 
> example of this is FileInputFormat:
> https://github.com/apache/hadoop/blob/d4015f8628dd973c7433639451a9acc3e741d2a2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L346
> You see bits of code that calculate offsets as follows:
> {noformat}
> long bytesInThisBlock = blkLocations[startIndex].getOffset() + 
>   blkLocations[startIndex].getLength() - offset;
> {noformat}
> EC confuses this since the block locations include parity block locations as 
> well, which are not part of the logical file length. This messes up the 
> offset calculation and thus topology/caching information too.
> Applications can figure out what's a parity block by reading the EC policy 
> and then parsing the schema, but it'd be a lot better if we exposed this more 
> generically in BlockLocation instead.
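A sketch of the kind of generic flag the last paragraph asks for (hypothetical shape, not the committed API):
{code}
public class BlockLocation {
  private boolean striped; // true if this location belongs to an EC block group

  /** Lets split calculators detect EC locations without parsing the policy. */
  public boolean isStriped() {
    return striped;
  }
}
{code}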






[jira] [Updated] (HDFS-12222) Add EC information to BlockLocation

2017-09-04 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12222:

Status: Patch Available  (was: Open)

> Add EC information to BlockLocation
> ---
>
> Key: HDFS-12222
> URL: https://issues.apache.org/jira/browse/HDFS-12222
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12222.001.patch, HDFS-12222.002.patch, 
> HDFS-12222.003.patch
>
>
> HDFS applications query block location information to compute splits. One 
> example of this is FileInputFormat:
> https://github.com/apache/hadoop/blob/d4015f8628dd973c7433639451a9acc3e741d2a2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L346
> You see bits of code that calculate offsets as follows:
> {noformat}
> long bytesInThisBlock = blkLocations[startIndex].getOffset() + 
>   blkLocations[startIndex].getLength() - offset;
> {noformat}
> EC confuses this since the block locations include parity block locations as 
> well, which are not part of the logical file length. This messes up the 
> offset calculation and thus topology/caching information too.
> Applications can figure out what's a parity block by reading the EC policy 
> and then parsing the schema, but it'd be a lot better if we exposed this more 
> generically in BlockLocation instead.






[jira] [Updated] (HDFS-12392) Write striped file failure

2017-09-04 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-12392:
-
Summary: Write striped file failure  (was: Randomly write striped file 
failure)

> Write striped file failure
> --
>
> Key: HDFS-12392
> URL: https://issues.apache.org/jira/browse/HDFS-12392
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: SammiChen
>  Labels: hdfs-ec-3.0-must-do
>
> Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
> larger than the caller expected.
> Exception stack:
> org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of 
> length 4096
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Updated] (HDFS-12392) Randomly write striped file failure

2017-09-04 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-12392:
-
Summary: Randomly write striped file failure  (was: Randomly read striped 
file failure)

> Randomly write striped file failure
> ---
>
> Key: HDFS-12392
> URL: https://issues.apache.org/jira/browse/HDFS-12392
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: SammiChen
>  Labels: hdfs-ec-3.0-must-do
>
> Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
> larger than the caller expected.
> Exception stack:
> org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of 
> length 4096
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Created] (HDFS-12392) Randomly read striped file failure

2017-09-04 Thread SammiChen (JIRA)
SammiChen created HDFS-12392:


 Summary: Randomly read striped file failure
 Key: HDFS-12392
 URL: https://issues.apache.org/jira/browse/HDFS-12392
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha3
Reporter: SammiChen


Root cause: The buffer size returned by ElasticByteBufferPool.getBuffer() is 
larger than the caller expected.


Exception stack:
org.apache.hadoop.HadoopIllegalArgumentException: Invalid buffer, not of length 
4096

at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.checkBuffers(ByteBufferEncodingState.java:99)
at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferEncodingState.<init>(ByteBufferEncodingState.java:46)
at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder.encode(RawErasureEncoder.java:67)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.encode(DFSStripedOutputStream.java:368)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeParityCells(DFSStripedOutputStream.java:942)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:834)






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-04 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153051#comment-16153051
 ] 

Kai Zheng commented on HDFS-7859:
-

7. "HDFS-8140" should be HDFS-11467 instead, please update the comments.
{code}
+@Override
+protected void toXml(ContentHandler contentHandler) throws SAXException {
+  // TODO: HDFS-8140 Support for offline EditsVistor over an OEV XML file
+}
+
+@Override
+void fromXml(Stanza st) throws InvalidXmlException {
+  // TODO: HDFS-8140 Support for offline EditsVistor over an OEV XML file
+}
{code}

8. You have tests like {{testChangeErasureCodingCodec}}, 
{{AddNewErasureCodingCodec}}, etc., but I don't think we need them, since 
codecs/coders/algorithms are part of the runtime binary packages and are meant 
to be loaded during startup. Let's avoid the complexity here and focus on 
persisting the EC policies.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-04 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153041#comment-16153041
 ] 

Kai Zheng commented on HDFS-7859:
-

6. Could we have a separate issue to refactor the existing code, renaming 
addErasureCodePolicy to addErasureCodingPolicy and so on?
{code}
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
@@ -214,7 +214,10 @@ static FileStatus unsetErasureCodingPolicy(final 
FSNamesystem fsn,
   static ErasureCodingPolicy addErasureCodePolicy(final FSNamesystem fsn,
   ErasureCodingPolicy policy) throws IllegalECPolicyException {
...
...
{code}

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.






[jira] [Commented] (HDFS-11467) Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools

2017-09-04 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153033#comment-16153033
 ] 

SammiChen commented on HDFS-11467:
--

Hi [~andrew.wang], given the beta1 timeline, I would prefer to handle this as 
a follow-on here. I've assigned it to myself.

> Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools
> --
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
>
> As discussed in HDFS-7859, after the ErasureCodingPolicyManager section is 
> added to the fsimage, we would like to also support converting this section 
> to and from XML using the OIV tool.
> Likewise, HDFS-7859 adds new edit log ops, so the OEV tool should support 
> them as well.






[jira] [Assigned] (HDFS-11467) Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools

2017-09-04 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen reassigned HDFS-11467:


Assignee: SammiChen

> Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools
> --
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
>
> As discussed in HDFS-7859, after the ErasureCodingPolicyManager section is 
> added to the fsimage, we would like to also support converting this section 
> to and from XML using the OIV tool.
> Likewise, HDFS-7859 adds new edit log ops, so the OEV tool should support 
> them as well.






[jira] [Updated] (HDFS-12390) Supporting refresh DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Summary: Supporting refresh DNS to switch mapping  (was: Supporting DNS to 
switch mapping)

> Supporting refresh DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Attachments: HDFS-12390.001.patch, HDFS-12390.002.patch, 
> HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NameNode CPU to 100%. ScriptBasedMapping runs a 
> subprocess to get the rack info of a DataNode/client, which we consider too 
> heavy. We planned to use TableMapping instead, but TableMapping does not 
> support refresh and cannot reload the rack info of newly added DataNodes.
> So we implemented refreshDNSToSwitch in dfsadmin.
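A minimal sketch of the refresh hook in question (hypothetical subclass; {{reloadCachedMappings()}} is the existing {{DNSToSwitchMapping}} method, and the patch's actual wiring may differ):
{code}
import org.apache.hadoop.net.TableMapping;

// Hypothetical: re-read the topology table on an admin trigger so racks of
// newly added DataNodes are picked up without restarting the NameNode.
public class RefreshableTableMapping extends TableMapping {
  public synchronized void refresh() {
    reloadCachedMappings();
  }
}
{code}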






[jira] [Updated] (HDFS-12390) Support to refresh DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Summary: Support to refresh DNS to switch mapping  (was: Supporting refresh 
DNS to switch mapping)

> Support to refresh DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Attachments: HDFS-12390.001.patch, HDFS-12390.002.patch, 
> HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NameNode CPU to 100%. ScriptBasedMapping runs a 
> subprocess to get the rack info of a DataNode/client, which we consider too 
> heavy. We planned to use TableMapping instead, but TableMapping does not 
> support refresh and cannot reload the rack info of newly added DataNodes.
> So we implemented refreshDNSToSwitch in dfsadmin.






[jira] [Updated] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Attachment: HDFS-12390.002.patch

fix whitespace

> Supporting DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Attachments: HDFS-12390.001.patch, HDFS-12390.002.patch, 
> HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NameNode CPU to 100%. ScriptBasedMapping runs a 
> subprocess to get the rack info of a DataNode/client, which we consider too 
> heavy. We planned to use TableMapping instead, but TableMapping does not 
> support refresh and cannot reload the rack info of newly added DataNodes.
> So we implemented refreshDNSToSwitch in dfsadmin.






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-04 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153001#comment-16153001
 ] 

Kai Zheng commented on HDFS-7859:
-

Continued.

1. Better to fix the existing text while we're at it: "does not exists" => 
"doesn't exist"
{code}
-  throw new IllegalArgumentException("The policy name " +
+  throw new HadoopIllegalArgumentException("The policy name " +
   name + " does not exists");
{code}

2. PROHIBITED => DISABLED
{code}
+if (!CodecUtil.hasCodec(policy.getCodecName()) ||
+policy.getCellSize() > maxCellSize) {
+  // If policy is not supported in current system, set the policy state to
+  // PROHIBITED;
+  policy.setState(ErasureCodingPolicyState.DISABLED);
+}
{code}

3. What did you mean by policies loaded from the HDFS configuration file? 
There isn't any such file to configure and load EC policies. A user may 
provide one on the client side, but it's forgotten after use.
{code}
+String policyName = policy.getName();
+for (ErasureCodingPolicy p : getPolicies()) {
+  if (p.getName().equals(policyName) ||
+  (p.getSchema().equals(policy.getSchema()) &&
+  p.getCellSize() == policy.getCellSize())) {
+// If the same policy loaded from fsImage override policy loaded based
+// on HDFS configuration file
+LOG.info("The erasure coding policy name " + policy + " loaded from " +
+"fsImage override the one loaded according to HDFS " +
+"configuration file");
+  }
{code}

4. How about ErasureCodingPolicyManagerSection => ErasureCodingSection?

5. Could we get rid of {{allPolicies}}, or at least avoid repeatedly recreating 
the array from the map inside the for loop? (A sketch follows the snippet 
below.)
{code}
+  public synchronized void reloadPolicy(ErasureCodingPolicy policy) {
...
+allPolicies = policiesByName.values().toArray(new ErasureCodingPolicy[0]);
...
+  }

+  public synchronized void loadState(PersistState state) {
...
+for (ErasureCodingPolicy p : state.getPolicies()) {
+  reloadPolicy(p);
+}
+  }
{code}
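
For illustration, one way to address point 5 could look like this. It's a 
sketch only: {{policiesByName}}, {{allPolicies}} and the method signatures are 
taken from the quoted patch, while the split into a map-only helper is an 
assumption, not the actual code.
{code}
// Update the map per policy; rebuild the array once, after the loop.
private void addPolicyInternal(ErasureCodingPolicy policy) {
  policiesByName.put(policy.getName(), policy);
}

public synchronized void loadState(PersistState state) {
  for (ErasureCodingPolicy p : state.getPolicies()) {
    addPolicyInternal(p);   // no array churn inside the loop
  }
  // one array rebuild for the whole batch
  allPolicies = policiesByName.values().toArray(new ErasureCodingPolicy[0]);
}
{code}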

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152799#comment-16152799
 ] 

Hadoop QA commented on HDFS-12340:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 13m  5s{color} | 
{color:red} root generated 16 new + 7 unchanged - 0 fixed = 23 total (was 7) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12340 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885268/HDFS-12340-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  |
| uname | Linux 1ae43e0e7911 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / ac5f01c |
| Default Java | 1.8.0_144 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20992/artifact/patchprocess/diff-compile-cc-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20992/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20992/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>   

[jira] [Commented] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks

2017-09-04 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152768#comment-16152768
 ] 

Wellington Chevreuil commented on HDFS-12182:
-

Hi [~andrew.wang], I believe the failures on the last branch-2 patch are 
unrelated, but it's worth another review before committing to branch-2.

> BlockManager.metaSave does not distinguish between "under replicated" and 
> "missing" blocks
> --
>
> Key: HDFS-12182
> URL: https://issues.apache.org/jira/browse/HDFS-12182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12182.001.patch, HDFS-12182.002.patch, 
> HDFS-12182.003.patch, HDFS-12182.004.patch, HDFS-12182-branch-2.001.patch, 
> HDFS-12182-branch-2.002.patch
>
>
> Currently, the *BlockManager.metaSave* method (which is called by the 
> "-metasave" dfs CLI command) reports both "under replicated" and "missing" 
> blocks under the same metric, *Metasave: Blocks waiting for reconstruction:*, 
> as shown in the code snippet below:
> {noformat}
>synchronized (neededReconstruction) {
>   out.println("Metasave: Blocks waiting for reconstruction: "
>   + neededReconstruction.size());
>   for (Block block : neededReconstruction) {
> dumpBlockMeta(block, out);
>   }
> }
> {noformat}
> *neededReconstruction* is an instance of *LowRedundancyBlocks*, which 
> currently wraps 5 priority queues. 4 of these queues store different 
> under-replicated scenarios, while the 5th one is dedicated to corrupt/missing 
> blocks. 
> Thus, the metasave report may suggest that some corrupt blocks are merely 
> under-replicated. This can be misleading for admins and operators trying to 
> track block missing/corruption issues, and/or other issues related to 
> *BlockManager* metrics.
> I would like to propose a patch with trivial changes that reports corrupt 
> blocks separately.
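
For illustration, the proposed split could look roughly like this. This is a 
sketch only; {{getCorruptBlocks()}} is a hypothetical accessor for the fifth 
(corrupt/missing) priority queue, not the actual patch.
{code}
synchronized (neededReconstruction) {
  List<Block> corrupt = neededReconstruction.getCorruptBlocks(); // hypothetical
  out.println("Metasave: Blocks waiting for reconstruction: "
      + (neededReconstruction.size() - corrupt.size()));
  for (Block block : neededReconstruction) {
    if (!corrupt.contains(block)) {   // skip corrupt blocks here
      dumpBlockMeta(block, out);
    }
  }
  // report corrupt/missing blocks under their own heading
  out.println("Metasave: Blocks currently missing: " + corrupt.size());
  for (Block block : corrupt) {
    dumpBlockMeta(block, out);
  }
}
{code}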



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152779#comment-16152779
 ] 

Hadoop QA commented on HDFS-12390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.8.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
33s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} branch-2.8.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} branch-2.8.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.8.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} branch-2.8.2 passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project: The patch generated 11 new 
+ 747 unchanged - 0 fixed = 758 total (was 747) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_151. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}427m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_151. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  5m 
59s{color} | 

[jira] [Commented] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-09-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152761#comment-16152761
 ] 

Xiao Chen commented on HDFS-12359:
--

Failed tests look unrelated. [~jojochuang] could you please review again when 
you have cycles? Thanks a lot.

> Re-encryption should operate with minimum KMS ACL requirements.
> ---
>
> Key: HDFS-12359
> URL: https://issues.apache.org/jira/browse/HDFS-12359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12359.01.patch, HDFS-12359.02.patch, 
> HDFS-12359.03.patch
>
>
> This was caught from KMS acl testing.
> HDFS-10899 gets the current key versions from KMS directly, which requires 
> {{READ}} acls.
> It also calls invalidateCache, which requires {{MANAGEMENT}} acls.
> We should fix re-encryption to not require additional ACLs than original 
> encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-04 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152736#comment-16152736
 ] 

Shashikant Banerjee commented on HDFS-12340:


Addressed in the new patch:

1) Fixed some coding style issues
2) Fixed the whitespace issues


> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This Jira introduces an implementation of the Ozone client in C/C++ using 
> the curl library.
> All of these calls will use the HTTP protocol and require libcurl. The 
> libcurl API is documented here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-04 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12340:
---
Attachment: HDFS-12340-HDFS-7240.002.patch

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This Jira introduces an implementation of the Ozone client in C/C++ using 
> the curl library.
> All of these calls will use the HTTP protocol and require libcurl. The 
> libcurl API is documented here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir

2017-09-04 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152686#comment-16152686
 ] 

Rakesh R commented on HDFS-12291:
-

Awesome work [~surendrasingh]. I have a few comments, please take care of them. 
Thanks!
# Rename a few items:
{code}
{{pendingWork}} => {{pendingWorkCount}}
{{fullScanned}} => {{fullyScanned}}
{{queueCapacity}} => {{remainingCapacity}}
{code}
# Typo: please change {{re-encryption}}; this SPS log message should not 
mention re-encryption:
{code}
BlockStorageMovementNeeded#processFileInode()
LOG.trace("Processing {} for re-encryption", inode.getFullPathName());
{code}
# Please add {{@InterfaceAudience.Private}} to {{FSTreeTraverser.java}}
# Please change the log level to debug, as this is executed frequently:
{code}
  LOG.info("StorageMovementNeeded queue remaining capacity is zero,"
  + " waiting for some free slots.");
{code}
# To be on the safe side, please use the condition {{pendingWork <= 0}}:
{code}
public synchronized boolean isDirWorkDone() {
  return (pendingWork == 0 && fullScanned);
}
{code}
# Unused method, please remove.
{code}
/**
 * Return pending work count for directory.
 */
public synchronized int getPendingWork() {
  return pendingWork;
}
{code}
# bq. I think current rate of consumption is low, SPS will take one by one and 
wait for 3sec. Instead, we should take more elements from queue
Good catch [~umamaheswararao], I agree we should add logic to increase the rate 
of consumption of SPS tasks. Presently, the SPS thread waits 3 secs between 
each task submission. For example, if the remaining capacity is 1000, SPS will 
presently take 3 * 1000 secs to schedule 1000 movement tasks. One improvement 
to {{#traverseDirInt()}} would be to slice {{remainingCapacity}} into smaller 
internal batches of <=50 or <=100 each and call #submitCurrentBatch per batch; 
the batch submission can then be repeated until remainingCapacity items have 
been submitted to {{storageMovementNeeded}}. A rough sketch follows.
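
A minimal sketch of the batching idea; {{submitCurrentBatch()}} and the batch 
size of 100 are illustrative assumptions, while {{remainingCapacity}} and 
{{storageMovementNeeded}} follow the discussion above.
{code}
int remaining = remainingCapacity;
final int batchSize = 100;  // illustrative slice size
while (remaining > 0) {
  int current = Math.min(batchSize, remaining);
  // traverse and collect up to 'current' inodes, then hand them over
  submitCurrentBatch(current);  // appends the batch to storageMovementNeeded
  remaining -= current;
}
{code}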

> [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy 
> of all the files under the given dir
> -
>
> Key: HDFS-12291
> URL: https://issues.apache.org/jira/browse/HDFS-12291
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12291-HDFS-10285-01.patch, 
> HDFS-12291-HDFS-10285-02.patch
>
>
> For the given source path directory, SPS presently considers only the files 
> immediately under the directory (only one level of scanning) when satisfying 
> the policy. It WON’T do recursive directory scanning and schedule SPS 
> tasks to satisfy the storage policy of all the files down to the leaf nodes. 
> The idea of this jira is to discuss & implement an efficient recursive 
> directory iteration mechanism that satisfies the storage policy for all the 
> files under the given directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152682#comment-16152682
 ] 

Hadoop QA commented on HDFS-7859:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 11s{color} | {color:orange} root: The patch generated 30 new + 766 unchanged 
- 1 fixed = 796 total (was 767) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | 

[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-04 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152584#comment-16152584
 ] 

Kai Zheng commented on HDFS-7859:
-

Thanks [~Sammi] for the update!

Some comments so far:
1. Did you notice any failure or error with the current behavior of 
{{ElasticByteBufferPool->getBuffer(boolean direct, int length)}}? It looks 
reasonable to me that it can return a ByteBuffer of larger capacity than 
required; it can be the caller's responsibility to use it well. Anyway, it's 
not relevant to this issue, so would you please handle it separately? Thanks. 
(A caller-side sketch follows the diff below.)
{code}
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -96,7 +96,8 @@ public synchronized ByteBuffer getBuffer(boolean direct, int 
length) {
   ByteBuffer.allocate(length);
 }
 tree.remove(entry.getKey());
-return entry.getValue();
+// The reused ByteBuffer may have more capacity than required(length)
+return (ByteBuffer) entry.getValue().limit(length);
   }
{code}
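
For illustration of point 1, a caller-side sketch: a recycled buffer may have a 
larger capacity than requested, so the caller can bound its own view. 
{{bufferPool}} and {{cellSize}} are illustrative names.
{code}
ByteBuffer buf = bufferPool.getBuffer(true, cellSize); // may be over-sized
buf.clear();          // position = 0, limit = capacity
buf.limit(cellSize);  // expose only the requested window
{code}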

2. Similarly, please also do the large portion of changes in 
{{CodecRegistry/CodecUtil}} separately. They are really not very relevant to 
this issue.

3. Again, please do the {{DFSStripedOutputStream}} change elsewhere (ideally 
with some test); I wish this patch could focus on the NN-side changes where 
possible.
{code}
 private void clear() {
   for (int i = 0; i< numAllBlocks; i++) {
 buffers[i].clear();
+buffers[i] = (ByteBuffer) buffers[i].limit(cellSize);
   }
 }
{code}

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152577#comment-16152577
 ] 

Hadoop QA commented on HDFS-12390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 10 new 
+ 612 unchanged - 0 fixed = 622 total (was 612) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | 

[jira] [Commented] (HDFS-12391) Ozone: TestKSMSQLCli is not working as expected

2017-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152554#comment-16152554
 ] 

Hadoop QA commented on HDFS-12391:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.tools.TestStoragePolicyCommands |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885210/HDFS-12391-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5c7bd36d149c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2017-09-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152542#comment-16152542
 ] 

Steve Loughran commented on HDFS-7878:
--

Coming together nicely

h4. RawPathHandle 

* L108: is there any limit on the buffer size here? I'm just worried about a 
malicious DoS against client/server RAM. (A possible guard is sketched below.)
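
A minimal sketch of the kind of size guard meant here; {{MAX_HANDLE_BYTES}} and 
the constructor shape are illustrative assumptions, not the actual patch.
{code}
// Reject absurdly large serialized handles before retaining the bytes.
private static final int MAX_HANDLE_BYTES = 1 << 20; // 1 MiB cap, arbitrary

public RawPathHandle(ByteBuffer wire) {
  if (wire != null && wire.remaining() > MAX_HANDLE_BYTES) {
    throw new IllegalArgumentException(
        "PathHandle too large: " + wire.remaining() + " bytes");
  }
  // ... copy/retain the validated bytes ...
}
{code}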

h4. filesystem.md

L104: review the spelling; maybe make it a MUST throw, it's a new feature 
after all.

L179: back-quote all the types in the paragraph.


L720: I've never come across the word "prenominate" before; it looks like the 
US version of "aforementioned". We need something globally understood, e.g. 
"declared previously".


L729: 

{code}
result = FSDataInputStream(0, FS.Files'[p'])
{code}

h4. AbstractContractOpenTest

L191: We should have a {{ContractTestUtils}} call for rename(); if not, it's 
time to add one. 
L192: use {{ContractTestUtils.assertPathDoesNotExist}}

The HDFS code is for the HDFS team to review. Do make sure it can handle the 
condition of a new API call without PathHandle bytes, with a test creating an 
{{HdfsPathHandle}} for this.



> API - expose an unique file identifier
> --
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that one is resolved as a duplicate, the ID is actually not 
> exposed by the JIRA it supposedly duplicates.
> The INode ID for the file should be easy to expose; alternatively, the ID 
> could be derived from block IDs to account for appends...
> This is useful e.g. as a per-file cache key, to make sure a cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-04 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-7859:

Attachment: HDFS-7859.012.patch

Rebase patch against trunk code

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152435#comment-16152435
 ] 

Hadoop QA commented on HDFS-12235:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 12 
unchanged - 2 fixed = 12 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.ozone.web.client.TestKeys |
|   | 

[jira] [Updated] (HDFS-12391) Ozone: TestKSMSQLCli is not working as expected

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12391:
---
Status: Patch Available  (was: Open)

> Ozone: TestKSMSQLCli is not working as expected
> ---
>
> Key: HDFS-12391
> URL: https://issues.apache.org/jira/browse/HDFS-12391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12391-HDFS-7240.001.patch
>
>
> I found this issue while investigating the {{TestKSMSQLCli}} failure in [this 
> jenkins 
> report|https://builds.apache.org/job/PreCommit-HDFS-Build/20984/testReport/]. 
> The test is supposed to use a parameterized class to test both the 
> {{LevelDB}} and {{RocksDB}} implementations of the metadata store; however, 
> it only tests the default {{RocksDB}} case twice.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12389) Ozone: oz commandline list calls should return valid JSON format output

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12389:
---
Status: Patch Available  (was: Open)

> Ozone: oz commandline list calls should return valid JSON format output
> ---
>
> Key: HDFS-12389
> URL: https://issues.apache.org/jira/browse/HDFS-12389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12389-HDFS-7240.001.patch
>
>
> At present the outputs of {{listVolume}}, {{listBucket}} and {{listKey}} are 
> hard to parse; for example, the following call
> {code}
> ./bin/hdfs oz -listVolume http://localhost:9864 -user wwei
> {code}
> lists all volumes in my cluster and it returns
> {noformat}
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>  {  
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>   ...
> {noformat}
> This is not valid JSON output, hence it is hard to parse in client scripts 
> for further interactions. I propose to reformat them as valid JSON data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Attachment: HDFS-12390.001.patch

> Supporting DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Attachments: HDFS-12390.001.patch, HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
> sub-process to get the rack info of a DN/client, so we consider it a 
> little heavy. We planned to use TableMapping, but TableMapping does not 
> support refresh and cannot reload the rack info of newly added DataNodes.
> So we implement refreshDNSToSwitch in dfsadmin.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12389) Ozone: oz commandline list calls should return valid JSON format output

2017-09-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152428#comment-16152428
 ] 

Weiwei Yang commented on HDFS-12389:


Patch attached; please see the unit test in {{TestOzoneShell}} and the 
functional test in [^json_output_test.log]. Please kindly review, thanks.
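
For reference, a valid reformatting would emit one top-level JSON array, 
roughly like this (an illustrative shape based on the fields shown in the 
description, not necessarily the patch's exact output):
{noformat}
[ {
  "version" : 0,
  "md5hash" : null,
  "size" : 10240,
  "keyName" : "key-0-22381"
}, {
  "version" : 0,
  "md5hash" : null,
  "size" : 10240,
  "keyName" : "key-0-22382"
} ]
{noformat}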

> Ozone: oz commandline list calls should return valid JSON format output
> ---
>
> Key: HDFS-12389
> URL: https://issues.apache.org/jira/browse/HDFS-12389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12389-HDFS-7240.001.patch, json_output_test.log
>
>
> At present the outputs of {{listVolume}}, {{listBucket}} and {{listKey}} are 
> hard to parse; for example, the following call
> {code}
> ./bin/hdfs oz -listVolume http://localhost:9864 -user wwei
> {code}
> lists all volumes in my cluster and it returns
> {noformat}
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>  {  
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>   ...
> {noformat}
> This is not valid JSON output, hence it is hard to parse in client scripts 
> for further interactions. I propose to reformat them as valid JSON data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12389) Ozone: oz commandline list calls should return valid JSON format output

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12389:
---
Attachment: json_output_test.log

> Ozone: oz commandline list calls should return valid JSON format output
> ---
>
> Key: HDFS-12389
> URL: https://issues.apache.org/jira/browse/HDFS-12389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12389-HDFS-7240.001.patch, json_output_test.log
>
>
> At present the outputs of {{listVolume}}, {{listBucket}} and {{listKey}} are 
> hard to parse; for example, the following call
> {code}
> ./bin/hdfs oz -listVolume http://localhost:9864 -user wwei
> {code}
> lists all volumes in my cluster and it returns
> {noformat}
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>  {  
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>   ...
> {noformat}
> This is not valid JSON output, hence it is hard to parse in client scripts 
> for further interactions. I propose to reformat them as valid JSON data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12389) Ozone: oz commandline list calls should return valid JSON format output

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12389:
---
Attachment: HDFS-12389-HDFS-7240.001.patch

> Ozone: oz commandline list calls should return valid JSON format output
> ---
>
> Key: HDFS-12389
> URL: https://issues.apache.org/jira/browse/HDFS-12389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12389-HDFS-7240.001.patch
>
>
> At present the outputs of {{listVolume}}, {{listBucket}} and {{listKey}} are 
> hard to parse; for example, the following call
> {code}
> ./bin/hdfs oz -listVolume http://localhost:9864 -user wwei
> {code}
> lists all volumes in my cluster and it returns
> {noformat}
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>  {  
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "modifiedOn" : "Mon, 04 Sep 2017 03:25:22 GMT",
> "size" : 10240,
> "keyName" : "key-0-22381",
> "dataFileName" : null
>   }
>   ...
> {noformat}
> this is not valid JSON output, hence it is hard to parse in client scripts 
> for further interaction. Propose reformatting them to valid JSON data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12310) [SPS]: Provide an option to track the status of in progress requests

2017-09-04 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152380#comment-16152380
 ] 

Surendra Singh Lilhore commented on HDFS-12310:
---

Thanks [~umamaheswararao]
Proposal LGTM...

> [SPS]: Provide an option to track the status of in progress requests
> 
>
> Key: HDFS-12310
> URL: https://issues.apache.org/jira/browse/HDFS-12310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>
> As per [~andrew.wang]'s review comments in HDFS-10285, this is the JIRA for 
> tracking the options for how we track the progress of SPS requests.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Fix Version/s: (was: 2.8.2)

> Supporting DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Attachments: HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
> sub-process to get the rack info of a DN/Client, so we think it is a little 
> heavy. We planned to use TableMapping, but TableMapping does not support 
> refresh and cannot reload the rack info of newly added DataNodes.
> So we implement refreshDNSToSwitch in dfsadmin.
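For context, a rough sketch of the refresh idea: a TableMapping-style 
host-to-rack table that can be reloaded on demand to pick up newly added 
DataNodes. This is illustrative only; it is not Hadoop's actual 
{{TableMapping}} nor the patch's {{refreshDNSToSwitch}} implementation.

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Illustrative refreshable host->rack resolver. reload() would be wired to a
// dfsadmin-style refresh command so new DataNodes resolve without a restart.
public class RefreshableRackTable {
  private final Path tableFile;
  private volatile Map<String, String> hostToRack = new HashMap<>();

  public RefreshableRackTable(String file) throws IOException {
    this.tableFile = Paths.get(file);
    reload();
  }

  // Re-read the whole table; swapping the volatile reference makes the
  // refresh atomic for concurrent readers.
  public void reload() throws IOException {
    Map<String, String> fresh = new HashMap<>();
    try (BufferedReader r = Files.newBufferedReader(tableFile)) {
      String line;
      while ((line = r.readLine()) != null) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length == 2) {
          fresh.put(parts[0], parts[1]); // host -> rack, e.g. dn1 /rack1
        }
      }
    }
    hostToRack = fresh;
  }

  public String resolve(String host) {
    return hostToRack.getOrDefault(host, "/default-rack");
  }
}
{code}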



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume

2017-09-04 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152364#comment-16152364
 ] 

Mukul Kumar Singh commented on HDFS-12100:
--

Thanks for the patch [~ljain], the patch looks really good. Please find my 
comments below.

1) DistributedOzoneHandler: Please initialize the default quota as part of the 
constructor for DistributedOzoneHandler, and use the default values elsewhere.
2) KeyManagerImpl.java:156, {{volumeBuilder.getSizeInBytes()}} - the volume 
size has already been fetched, we can use it here.
3) KeyManagerImpl.java:159 & 237 - these ops should be added as part of the DB 
batch op and committed together (see the sketch below).
4) Please add an OzoneQuota unit of PB as well :)
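For comment 3, a rough illustration of the batched-write idea. The interfaces 
below are hypothetical stand-ins, not the actual Ozone metadata-store API; the 
point is only that the two ops land atomically.

{code}
import java.io.IOException;

public class BatchedKeyUpdate {
  // Hypothetical key-value batch API, standing in for the real store.
  interface KvBatch {
    void put(byte[] key, byte[] value);
    void delete(byte[] key);
  }
  interface KvStore {
    KvBatch newBatch();
    void commit(KvBatch batch) throws IOException; // atomic commit
  }

  // Stage both writes (e.g. the ones at KeyManagerImpl.java:159 and 237) in
  // one batch so they are applied together or not at all.
  static void updateAtomically(KvStore store, byte[] k1, byte[] v1,
                               byte[] k2, byte[] v2) throws IOException {
    KvBatch batch = store.newBatch();
    batch.put(k1, v1);
    batch.put(k2, v2);
    store.commit(batch);
  }
}
{code}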


> Ozone: KSM: Allocate key should honour volume quota if quota is set on the 
> volume
> -
>
> Key: HDFS-12100
> URL: https://issues.apache.org/jira/browse/HDFS-12100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
> Fix For: HDFS-7240
>
> Attachments: HDFS-12100-HDFS-7240.001.patch, 
> HDFS-12100-HDFS-7240.002.patch
>
>
> KeyManagerImpl#allocateKey currently does not check the volume quota before 
> allocating a key; this can cause the volume quota to be overrun.
> The volume quota needs to be checked before allocating the key in the SCM.
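A hedged sketch of the missing check: reject the allocation up front when the 
requested size would push usage past the volume quota. All names are 
illustrative stand-ins for the real KSM types.

{code}
import java.io.IOException;

public class VolumeQuotaGuard {
  // Throws if used + requested would exceed the quota; a quota of zero or
  // less is treated as "no quota set" in this sketch.
  static void checkQuota(long quotaInBytes, long usedInBytes,
                         long requestedBytes) throws IOException {
    if (quotaInBytes > 0 && usedInBytes + requestedBytes > quotaInBytes) {
      throw new IOException("Volume quota exceeded: used=" + usedInBytes
          + " requested=" + requestedBytes + " quota=" + quotaInBytes);
    }
  }
}
{code}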



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12391) Ozone: TestKSMSQLCli is not working as expected

2017-09-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152249#comment-16152249
 ] 

Weiwei Yang commented on HDFS-12391:


Two problems are fixed in this patch:

# {{SQLCLI#convertKSMDB}} did not call {{setConf}} to get the desired metadata 
store based on the value of {{OzoneConfigKeys.OZONE_METADATA_STORE_IMPL}};
# {{TestKSMSQLCli#setup}} should use the {{@Before}} annotation so every case 
(LevelDB or RocksDB) starts a new mini cluster; reusing an existing cluster 
throws an exception when initiating the DB instance (see the sketch below).
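For reference, the {{@Before}} point in fix 2 follows from how JUnit 4 runs 
parameterized classes: the class is instantiated once per parameter, and 
{{@Before}} runs before every test, so each store implementation gets a fresh 
cluster. A simplified sketch (not the actual test):

{code}
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParameterizedStoreTest {
  private final String storeImpl;
  private String cluster; // stand-in for the mini cluster handle

  public ParameterizedStoreTest(String storeImpl) {
    this.storeImpl = storeImpl;
  }

  @Parameters
  public static Collection<Object[]> data() {
    // The runner builds one instance of this class per row.
    return Arrays.asList(new Object[][] {{"LevelDB"}, {"RocksDB"}});
  }

  @Before // runs before each test for each parameter -- a fresh "cluster"
  public void setup() {
    cluster = "cluster-with-" + storeImpl;
  }

  @Test
  public void testStore() {
    assertTrue(cluster.endsWith(storeImpl));
  }
}
{code}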

> Ozone: TestKSMSQLCli is not working as expected
> ---
>
> Key: HDFS-12391
> URL: https://issues.apache.org/jira/browse/HDFS-12391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12391-HDFS-7240.001.patch
>
>
> I found this issue while investigating the {{TestKSMSQLCli}} failure in [this 
> jenkins 
> report|https://builds.apache.org/job/PreCommit-HDFS-Build/20984/testReport/]. 
> The test is supposed to use a parameterized class to test both the {{LevelDB}} 
> and {{RocksDB}} implementations of the metadata store; however, it only tests 
> the default {{RocksDB}} case twice.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12391) Ozone: TestKSMSQLCli is not working as expected

2017-09-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152250#comment-16152250
 ] 

Weiwei Yang commented on HDFS-12391:


Ping [~vagarychen], please help to review, thanks.

> Ozone: TestKSMSQLCli is not working as expected
> ---
>
> Key: HDFS-12391
> URL: https://issues.apache.org/jira/browse/HDFS-12391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12391-HDFS-7240.001.patch
>
>
> I found this issue while investigating the {{TestKSMSQLCli}} failure in [this 
> jenkins 
> report|https://builds.apache.org/job/PreCommit-HDFS-Build/20984/testReport/]. 
> The test is supposed to use a parameterized class to test both the {{LevelDB}} 
> and {{RocksDB}} implementations of the metadata store; however, it only tests 
> the default {{RocksDB}} case twice.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12391) Ozone: TestKSMSQLCli is not working as expected

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12391:
---
Attachment: HDFS-12391-HDFS-7240.001.patch

> Ozone: TestKSMSQLCli is not working as expected
> ---
>
> Key: HDFS-12391
> URL: https://issues.apache.org/jira/browse/HDFS-12391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12391-HDFS-7240.001.patch
>
>
> I found this issue while investigating the {{TestKSMSQLCli}} failure in [this 
> jenkins 
> report|https://builds.apache.org/job/PreCommit-HDFS-Build/20984/testReport/]. 
> The test is supposed to use a parameterized class to test both the {{LevelDB}} 
> and {{RocksDB}} implementations of the metadata store; however, it only tests 
> the default {{RocksDB}} case twice.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12391) Ozone: TestKSMSQLCli is not working as expected

2017-09-04 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12391:
--

 Summary: Ozone: TestKSMSQLCli is not working as expected
 Key: HDFS-12391
 URL: https://issues.apache.org/jira/browse/HDFS-12391
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone, test
Affects Versions: HDFS-7240
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Minor


I found this issue while investigating the {{TestKSMSQLCli}} failure in [this 
jenkins 
report|https://builds.apache.org/job/PreCommit-HDFS-Build/20984/testReport/]. 
The test is supposed to use a parameterized class to test both the {{LevelDB}} 
and {{RocksDB}} implementations of the metadata store; however, it only tests 
the default {{RocksDB}} case twice.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12367) Ozone: Too many open files error while running corona

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-12367.

Resolution: Duplicate

I think this issue no longer happens to me; closing it as a dup of HDFS-12382 
as it looks to be fixed there, thanks [~nandakumar131]. [~msingh], feel free 
to create another lower-severity JIRA to track the resource leaks you found at 
the code level. I will close this one as it is no longer a blocker for tests.

> Ozone: Too many open files error while running corona
> -
>
> Key: HDFS-12367
> URL: https://issues.apache.org/jira/browse/HDFS-12367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Reporter: Weiwei Yang
>Assignee: Mukul Kumar Singh
>
> The "too many open files" error keeps happening to me while using corona. I 
> have simply set up a single-node cluster and run corona to generate 1000 
> keys, but I keep getting the following error:
> {noformat}
> ./bin/hdfs corona -numOfThreads 1 -numOfVolumes 1 -numOfBuckets 1 -numOfKeys 
> 1000
> 17/08/28 00:47:42 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/08/28 00:47:42 INFO tools.Corona: Number of Threads: 1
> 17/08/28 00:47:42 INFO tools.Corona: Mode: offline
> 17/08/28 00:47:42 INFO tools.Corona: Number of Volumes: 1.
> 17/08/28 00:47:42 INFO tools.Corona: Number of Buckets per Volume: 1.
> 17/08/28 00:47:42 INFO tools.Corona: Number of Keys per Bucket: 1000.
> 17/08/28 00:47:42 INFO rpc.OzoneRpcClient: Creating Volume: vol-0-05000, with 
> wwei as owner and quota set to 1152921504606846976 bytes.
> 17/08/28 00:47:42 INFO tools.Corona: Starting progress bar Thread.
> ...
> ERROR tools.Corona: Exception while adding key: key-251-19293 in bucket: 
> bucket-0-34960 of volume: vol-0-05000.
> java.io.IOException: Exception getting XceiverClient.
>   at 
> org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:156)
>   at 
> org.apache.hadoop.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:122)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.getFromKsmKeyInfo(ChunkGroupOutputStream.java:289)
>   at 
> org.apache.hadoop.ozone.client.rpc.OzoneRpcClient.createKey(OzoneRpcClient.java:487)
>   at 
> org.apache.hadoop.ozone.tools.Corona$OfflineProcessor.run(Corona.java:352)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.IllegalStateException: failed to create a child event loop
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>   at 
> org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:144)
>   ... 9 more
> Caused by: java.lang.IllegalStateException: failed to create a child event 
> loop
>   at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:68)
>   at 
> io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
>   at org.apache.hadoop.scm.XceiverClient.connect(XceiverClient.java:76)
>   at 
> org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:151)
>   at 
> org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:145)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>   ... 12 more
> Caused by: io.netty.channel.ChannelException: failed to open a new selector
>   at 

[jira] [Assigned] (HDFS-12367) Ozone: Too many open files error while running corona

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-12367:
--

Assignee: Nandakumar  (was: Mukul Kumar Singh)

> Ozone: Too many open files error while running corona
> -
>
> Key: HDFS-12367
> URL: https://issues.apache.org/jira/browse/HDFS-12367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, tools
>Reporter: Weiwei Yang
>Assignee: Nandakumar
>
> The "too many open files" error keeps happening to me while using corona. I 
> have simply set up a single-node cluster and run corona to generate 1000 
> keys, but I keep getting the following error:
> {noformat}
> ./bin/hdfs corona -numOfThreads 1 -numOfVolumes 1 -numOfBuckets 1 -numOfKeys 
> 1000
> 17/08/28 00:47:42 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/08/28 00:47:42 INFO tools.Corona: Number of Threads: 1
> 17/08/28 00:47:42 INFO tools.Corona: Mode: offline
> 17/08/28 00:47:42 INFO tools.Corona: Number of Volumes: 1.
> 17/08/28 00:47:42 INFO tools.Corona: Number of Buckets per Volume: 1.
> 17/08/28 00:47:42 INFO tools.Corona: Number of Keys per Bucket: 1000.
> 17/08/28 00:47:42 INFO rpc.OzoneRpcClient: Creating Volume: vol-0-05000, with 
> wwei as owner and quota set to 1152921504606846976 bytes.
> 17/08/28 00:47:42 INFO tools.Corona: Starting progress bar Thread.
> ...
> ERROR tools.Corona: Exception while adding key: key-251-19293 in bucket: 
> bucket-0-34960 of volume: vol-0-05000.
> java.io.IOException: Exception getting XceiverClient.
>   at 
> org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:156)
>   at 
> org.apache.hadoop.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:122)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.getFromKsmKeyInfo(ChunkGroupOutputStream.java:289)
>   at 
> org.apache.hadoop.ozone.client.rpc.OzoneRpcClient.createKey(OzoneRpcClient.java:487)
>   at 
> org.apache.hadoop.ozone.tools.Corona$OfflineProcessor.run(Corona.java:352)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.IllegalStateException: failed to create a child event loop
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>   at 
> org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:144)
>   ... 9 more
> Caused by: java.lang.IllegalStateException: failed to create a child event 
> loop
>   at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:68)
>   at 
> io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
>   at org.apache.hadoop.scm.XceiverClient.connect(XceiverClient.java:76)
>   at 
> org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:151)
>   at 
> org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:145)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>   ... 12 more
> Caused by: io.netty.channel.ChannelException: failed to open a new selector
>   at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:128)
>   at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
>   at 
> io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
>   at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
> 

[jira] [Updated] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-04 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12235:
---
Attachment: HDFS-12235-HDFS-7240.007.patch

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch
>
>
> KSM and SCM interaction for the delete-key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send 
> block-deletion commands to SCM; once SCM is fully aware of the message, KSM 
> removes the key completely from the namespace. See more in the design doc 
> under HDFS-11922; this is task break-down 2.
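A high-level sketch of the scan/ack loop described above; every type and 
method name is a hypothetical stand-in, and the real protocol is in the design 
doc under HDFS-11922.

{code}
import java.util.List;

public class DeletionBacklogWorker {
  interface Backlog {
    List<String> pendingKeys();
    List<String> blocksOf(String key);
    void purge(String key); // remove the key from the namespace
  }
  interface ScmClient {
    boolean sendDeleteBlocks(String key, List<String> blocks); // true on ACK
  }

  // One scan pass: ship block-deletion commands to SCM and purge a key only
  // after SCM acknowledges, so unacknowledged messages are retried next scan.
  static void runOnce(Backlog backlog, ScmClient scm) {
    for (String key : backlog.pendingKeys()) {
      if (scm.sendDeleteBlocks(key, backlog.blocksOf(key))) {
        backlog.purge(key);
      }
    }
  }
}
{code}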



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152221#comment-16152221
 ] 

Weiwei Yang commented on HDFS-12235:


The UT failure {{hadoop.ozone.ksm.TestKSMSQLCli}} was related; there is a bug 
in {{KeyManagerImpl#stop}}: it should call {{metadataManager.stop()}} to close 
the DB instance and release resources. Fixed in the v7 patch.
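The fix amounts to something like the following; only the 
{{metadataManager.stop()}} call is from the comment above, and the surrounding 
scaffolding is illustrative.

{code}
public class KeyManagerSketch {
  interface MetadataManager {
    void stop() throws Exception; // closes the backing DB instance
  }

  private final MetadataManager metadataManager;

  KeyManagerSketch(MetadataManager metadataManager) {
    this.metadataManager = metadataManager;
  }

  // stop() must shut down the metadata manager so the DB handle and its
  // resources (file descriptors, caches) are actually released.
  public void stop() throws Exception {
    if (metadataManager != null) {
      metadataManager.stop();
    }
  }
}
{code}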

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch
>
>
> KSM and SCM interaction for the delete-key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send 
> block-deletion commands to SCM; once SCM is fully aware of the message, KSM 
> removes the key completely from the namespace. See more in the design doc 
> under HDFS-11922; this is task break-down 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152221#comment-16152221
 ] 

Weiwei Yang edited comment on HDFS-12235 at 9/4/17 7:41 AM:


The UT failure {{hadoop.ozone.ksm.TestKSMSQLCli}} was related; there is a bug 
in {{KeyManagerImpl#stop}}: it should call {{metadataManager.stop()}} to close 
the DB instance and release resources. Fixed in the v7 patch.


was (Author: cheersyang):
The UT failure {{hadoop.ozone.ksm.TestKSMSQLCli}} was related; there is a bug 
in {{KeyManagerImpl#stop}}: it should call {{metadataManager.stop()}} to close 
the DB instance and release resources. Fixed in the v7 patch.

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch
>
>
> KSM and SCM interaction for the delete-key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send 
> block-deletion commands to SCM; once SCM is fully aware of the message, KSM 
> removes the key completely from the namespace. See more in the design doc 
> under HDFS-11922; this is task break-down 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Description: 
As described in [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
sub-process to get the rack info of a DN/Client, so we think it is a little 
heavy. We planned to use TableMapping, but TableMapping does not support 
refresh and cannot reload the rack info of newly added DataNodes.
So we implement refreshDNSToSwitch in dfsadmin.

  was:
As described in [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
sub-process to get the rack info of a DN/Client, so we think it is a little 
heavy. We planned to use TableMapping, but TableMapping does not support 
refresh and cannot reload the rack info of newly added DataNodes.
So we implement it.


> Supporting DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Fix For: 2.8.2
>
> Attachments: HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
> sub-process to get the rack info of a DN/Client, so we think it is a little 
> heavy. We planned to use TableMapping, but TableMapping does not support 
> refresh and cannot reload the rack info of newly added DataNodes.
> So we implement refreshDNSToSwitch in dfsadmin.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Fix Version/s: 2.8.2
   Status: Patch Available  (was: Open)

> Supporting DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Fix For: 2.8.2
>
> Attachments: HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
> sub-process to get the rack info of a DN/Client, so we think it is a little 
> heavy. We planned to use TableMapping, but TableMapping does not support 
> refresh and cannot reload the rack info of newly added DataNodes.
> So we implement it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12390:
-
Attachment: HDFS-12390-branch-2.8.2.001.patch

> Supporting DNS to switch mapping
> 
>
> Key: HDFS-12390
> URL: https://issues.apache.org/jira/browse/HDFS-12390
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
> Fix For: 2.8.2
>
> Attachments: HDFS-12390-branch-2.8.2.001.patch
>
>
> As described in 
> [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
> ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
> sub-process to get the rack info of a DN/Client, so we think it is a little 
> heavy. We planned to use TableMapping, but TableMapping does not support 
> refresh and cannot reload the rack info of newly added DataNodes.
> So we implement it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12390) Supporting DNS to switch mapping

2017-09-04 Thread Jiandan Yang (JIRA)
Jiandan Yang  created HDFS-12390:


 Summary: Supporting DNS to switch mapping
 Key: HDFS-12390
 URL: https://issues.apache.org/jira/browse/HDFS-12390
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs, hdfs-client
Reporter: Jiandan Yang 
Assignee: Jiandan Yang 


As described in [HDFS-12200|https://issues.apache.org/jira/browse/HDFS-12200], 
ScriptBasedMapping may drive NN CPU to 100%. ScriptBasedMapping runs a 
sub-process to get the rack info of a DN/Client, so we think it is a little 
heavy. We planned to use TableMapping, but TableMapping does not support 
refresh and cannot reload the rack info of newly added DataNodes.
So we implement it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org