[jira] [Commented] (HDFS-10254) DfsClient undervalidates args for PositionedReadable operations

2016-04-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233730#comment-15233730
 ] 

Steve Loughran commented on HDFS-10254:
---

fixed as part of HADOOP-12994

> DfsClient undervalidates args for PositionedReadable operations
> ---
>
> Key: HDFS-10254
> URL: https://issues.apache.org/jira/browse/HDFS-10254
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Fix For: 2.8.0
>
>
> HDFS can do stricter checking of its inputs:
> # raise an exception on a negative offset into the destination buffer
> # explicitly raise an EOFException if the file position is negative
> Optionally: short-circuit read/readFully operations if the byte range is 0.
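
The checks described above can be sketched as a small validation helper. This is illustrative only: the class and method names are hypothetical, and the exact exception choices are assumptions, not the actual DFSInputStream code.

```java
import java.io.EOFException;

public class PreadValidator {
    /**
     * Validate arguments for a positioned read into buffer[offset..offset+length).
     * Returns true if the read should proceed, false if it can be
     * short-circuited because zero bytes were requested.
     */
    public static boolean validate(long position, byte[] buffer,
                                   int offset, int length) throws EOFException {
        if (buffer == null) {
            throw new NullPointerException("Null destination buffer");
        }
        if (offset < 0 || length < 0 || length > buffer.length - offset) {
            // negative offset/length, or a range past the end of the buffer
            throw new IndexOutOfBoundsException(
                "offset=" + offset + ", length=" + length
                + ", buffer.length=" + buffer.length);
        }
        if (position < 0) {
            // explicitly an EOF condition rather than a generic failure
            throw new EOFException("Cannot read from negative position " + position);
        }
        return length != 0;   // zero-byte reads can return immediately
    }
}
```

A caller would check the boolean before entering its read loop, so zero-length requests never touch the datanode.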



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-10255) ByteRangeInputStream.readFully leaks stream handles on failure

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10255.
---
   Resolution: Fixed
Fix Version/s: 2.8.0

> ByteRangeInputStream.readFully leaks stream handles on failure
> --
>
> Key: HDFS-10255
> URL: https://issues.apache.org/jira/browse/HDFS-10255
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> In {{ByteRangeInputStream.readFully}}, if the requested amount of data is out 
> of range, the EOFException is thrown without closing the input stream.
> Fix: move the test into the try/finally clause.
> Using Java 7 try-with-resources would be cleaner, but it would make it harder 
> to switch to aborting TCP channels if that were felt to be needed here.
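
A simplified stand-in for the fix pattern: perform the range check inside the try/finally so the stream is closed on failure as well as success. This assumes the method owns a freshly opened stream for the duration of the call; the real ByteRangeInputStream logic is more involved.

```java
import java.io.Closeable;
import java.io.EOFException;
import java.io.IOException;

public class ReadFullyFix {
    /**
     * Illustration of the fix: the buggy version did the range check BEFORE
     * entering try/finally, leaking 'in' when the check threw EOFException.
     * Moving the check inside guarantees close() runs on every path.
     */
    public static int readFully(long fileLength, long position, int length,
                                Closeable in) throws IOException {
        try {
            if (position + length > fileLength) {
                throw new EOFException("Requested range [" + position + ", "
                    + (position + length) + ") is beyond EOF at " + fileLength);
            }
            return length;   // stand-in for the actual read loop
        } finally {
            in.close();      // now runs on both success and failure
        }
    }
}
```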





[jira] [Resolved] (HDFS-10254) DfsClient undervalidates args for PositionedReadable operations

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10254.
---
   Resolution: Fixed
Fix Version/s: 2.8.0

> DfsClient undervalidates args for PositionedReadable operations
> ---
>
> Key: HDFS-10254
> URL: https://issues.apache.org/jira/browse/HDFS-10254
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Fix For: 2.8.0
>





[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-09 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233621#comment-15233621
 ] 

John Zhuge commented on HDFS-10216:
---

Great job with the patch [~bwtakacy].

I assume the ad-hoc test passed with the patch. Is there any unit test that can 
reproduce this problem? If not, should we add a unit test?

Should we run related unit tests to exercise this code path?

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.&lt;init&gt;(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.&lt;init&gt;(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with the 
> {{-diff}} option. Trying to reproduce.
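
The root cause in the stack trace is reproducible with plain java.net.URI: the multi-argument constructor rejects a relative path whenever a scheme and authority are present, which is exactly what Path.initialize runs into with the relative snapshot path. A minimal illustration (the authority string is a hypothetical host, not the cluster above):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class RelativeUriDemo {
    /**
     * Mirrors what Path.initialize does: combine scheme + authority + path
     * into a URI. Returns "ok" on success, or the parser's reason on failure.
     */
    public static String tryBuild(String scheme, String authority, String path) {
        try {
            new URI(scheme, authority, path, null, null);
            return "ok";
        } catch (URISyntaxException e) {
            return e.getReason();
        }
    }
}
```

With an absolute path such as {{/user/systest/d1}} the URI builds fine, which matches the observation that absolute-path invocations of distcp worked.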





[jira] [Commented] (HDFS-9918) Erasure Coding: Sort located striped blocks based on decommissioned states

2016-04-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233561#comment-15233561
 ] 

Walter Su commented on HDFS-9918:
-

+1. Thanks, [~rakesh_r].

> Erasure Coding: Sort located striped blocks based on decommissioned states
> --
>
> Key: HDFS-9918
> URL: https://issues.apache.org/jira/browse/HDFS-9918
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-9918-001.patch, HDFS-9918-002.patch, 
> HDFS-9918-003.patch, HDFS-9918-004.patch, HDFS-9918-005.patch, 
> HDFS-9918-006.patch, HDFS-9918-007.patch, HDFS-9918-008.patch, 
> HDFS-9918-009.patch, HDFS-9918-010.patch, HDFS-9918-011.patch, 
> HDFS-9918-012.patch, HDFS-9918-013.patch
>
>
> This jira is a follow-on work of HDFS-8786, where we do decommissioning of 
> datanodes having striped blocks.
> Now, after decommissioning it requires to change the ordering of the storage 
> list so that the decommissioned datanodes should only be last node in list.
> For example, assume we have a block group with storage list:-
> d0, d1, d2, d3, d4, d5, d6, d7, d8, d9
> mapping to indices
> 0, 1, 2, 3, 4, 5, 6, 7, 8, 2
> Here the internal block b2 is duplicated, locating in d2 and d9. If d2 is a 
> decommissioning node then should switch d2 and d9 in the storage list.
> Thanks [~jingzhao] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-8786?focusedCommentId=15180415&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15180415]
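
One way to express the reordering described above is a stable partition that appends decommissioning storages after the live ones, so a duplicated internal block (d2/d9 in the example) is read from the live replica first. This is a sketch over plain strings; the actual code operates on DatanodeStorageInfo arrays and may simply swap the duplicated entries, as the example suggests.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class StripedBlockSorter {
    /**
     * Stable partition: live storages keep their relative order and
     * decommissioning/decommissioned ones are appended at the end.
     */
    public static List<String> sortByDecommissionState(
            List<String> storages, Set<String> decommissioning) {
        List<String> live = new ArrayList<>();
        List<String> decom = new ArrayList<>();
        for (String s : storages) {
            (decommissioning.contains(s) ? decom : live).add(s);
        }
        live.addAll(decom);
        return live;
    }
}
```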





[jira] [Commented] (HDFS-10273) Remove duplicate logSync() and log message in FSN#enterSafemode()

2016-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233543#comment-15233543
 ] 

Hadoop QA commented on HDFS-10273:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 23s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
36s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 236m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | 

[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-09 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Attachment: HDFS-10216.1.patch

Attached patch.

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch
>





[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-09 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Status: Patch Available  (was: Open)

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch
>





[jira] [Assigned] (HDFS-10216) distcp -diff relative path exception

2016-04-09 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi reassigned HDFS-10216:
--

Assignee: Takashi Ohnishi

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
>





[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-09 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233525#comment-15233525
 ] 

Takashi Ohnishi commented on HDFS-10216:


Thanks!

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Critical
>





[jira] [Updated] (HDFS-10274) Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode

2016-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10274:
-
Attachment: HDFS-10274-01.patch

Attaching the patch for the change.
Removed {{SafeMode#isInStartupSafeMode()}}, and moved to {{BlockManager}} and 
{{BlockManagerSafeMode}}.

Please review.

> Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode
> -
>
> Key: HDFS-10274
> URL: https://issues.apache.org/jira/browse/HDFS-10274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10274-01.patch
>
>
> To reduce the number of methods in the Namesystem interface and for a cleaner 
> refactor, it is better to move {{isInStartupSafeMode()}} to BlockManager and 
> BlockManagerSafeMode, as most of the callers are in BlockManager. This removes 
> one more piece of interface overhead.
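
A hypothetical sketch of the delegation shape after the move: callers ask BlockManager, which forwards to its BlockManagerSafeMode, instead of going through the wider Namesystem interface. Class structure and field names here are illustrative, not the actual patch.

```java
public class SafeModeRefactorSketch {
    /** Stand-in for BlockManagerSafeMode holding the startup-safe-mode state. */
    static class BlockManagerSafeMode {
        private final boolean inStartupSafeMode;
        BlockManagerSafeMode(boolean inStartupSafeMode) {
            this.inStartupSafeMode = inStartupSafeMode;
        }
        boolean isInStartupSafeMode() {
            return inStartupSafeMode;
        }
    }

    /** Stand-in for BlockManager: delegates instead of calling Namesystem. */
    static class BlockManager {
        private final BlockManagerSafeMode safeMode;
        BlockManager(BlockManagerSafeMode safeMode) {
            this.safeMode = safeMode;
        }
        boolean isInStartupSafeMode() {
            return safeMode.isInStartupSafeMode();
        }
    }
}
```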





[jira] [Updated] (HDFS-10274) Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode

2016-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10274:
-
Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode
> -
>
> Key: HDFS-10274
> URL: https://issues.apache.org/jira/browse/HDFS-10274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10274-01.patch
>
>





[jira] [Updated] (HDFS-10274) Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode

2016-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10274:
-
Issue Type: Improvement  (was: Bug)

> Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode
> -
>
> Key: HDFS-10274
> URL: https://issues.apache.org/jira/browse/HDFS-10274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>





[jira] [Created] (HDFS-10274) Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode

2016-04-09 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-10274:


 Summary: Move NameSystem#isInStartupSafeMode() to 
BlockManagerSafeMode
 Key: HDFS-10274
 URL: https://issues.apache.org/jira/browse/HDFS-10274
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B







[jira] [Updated] (HDFS-10273) Remove duplicate logSync() and log message in FSN#enterSafemode()

2016-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10273:
-
Attachment: HDFS-10273-01.patch

Attached simple patch to remove duplicate lines.

> Remove duplicate logSync() and log message in FSN#enterSafemode()
> -
>
> Key: HDFS-10273
> URL: https://issues.apache.org/jira/browse/HDFS-10273
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HDFS-10273-01.patch
>
>
> Remove duplicate logSync() and log message in FSN#enterSafemode()
> {code:title=FSN#enterSafemode(..)}
>   // Before Editlog is in OpenForWrite mode, editLogStream will be null.
>   // So, logSyncAll can be called only when Editlog is in OpenForWrite mode.
>   if (isEditlogOpenForWrite) {
> getEditLog().logSyncAll();
>   }
>   setManualAndResourceLowSafeMode(!resourcesLow, resourcesLow);
>   NameNode.stateChangeLog.info("STATE* Safe mode is ON.\n" +
>   getSafeModeTip());
>   if (isEditlogOpenForWrite) {
> getEditLog().logSyncAll();
>   }
>   NameNode.stateChangeLog.info("STATE* Safe mode is ON" + 
> getSafeModeTip());
> {code}
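
The duplication is easiest to see by recording the side effects of the snippet above: the sync and the log line each happen twice, and the fix reduces them to once. This is a stand-in model of the control flow, not the FSNamesystem code.

```java
import java.util.ArrayList;
import java.util.List;

public class EnterSafeModeSketch {
    /** Side effects of the snippet above: sync and log are each done twice. */
    public static List<String> buggy(boolean editlogOpenForWrite) {
        List<String> effects = new ArrayList<>();
        if (editlogOpenForWrite) effects.add("logSyncAll");
        effects.add("info:Safe mode is ON");
        if (editlogOpenForWrite) effects.add("logSyncAll");   // duplicate
        effects.add("info:Safe mode is ON");                  // duplicate
        return effects;
    }

    /** After removing the duplicate lines, each side effect happens once. */
    public static List<String> fixed(boolean editlogOpenForWrite) {
        List<String> effects = new ArrayList<>();
        if (editlogOpenForWrite) effects.add("logSyncAll");
        effects.add("info:Safe mode is ON");
        return effects;
    }
}
```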





[jira] [Updated] (HDFS-10273) Remove duplicate logSync() and log message in FSN#enterSafemode()

2016-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10273:
-
Affects Version/s: 2.9.0
   Status: Patch Available  (was: Open)

> Remove duplicate logSync() and log message in FSN#enterSafemode()
> -
>
> Key: HDFS-10273
> URL: https://issues.apache.org/jira/browse/HDFS-10273
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HDFS-10273-01.patch
>
>





[jira] [Created] (HDFS-10273) Remove duplicate logSync() and log message in FSN#enterSafemode()

2016-04-09 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-10273:


 Summary: Remove duplicate logSync() and log message in FSN#enterSafemode()
 Key: HDFS-10273
 URL: https://issues.apache.org/jira/browse/HDFS-10273
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor


Remove duplicate logSync() and log message in FSN#enterSafemode()
{code:title=FSN#enterSafemode(..)}
  // Before the Editlog is in OpenForWrite mode, editLogStream will be null.
  // So, logSyncAll can be called only when the Editlog is in OpenForWrite mode
  if (isEditlogOpenForWrite) {
    getEditLog().logSyncAll();
  }
  setManualAndResourceLowSafeMode(!resourcesLow, resourcesLow);
  NameNode.stateChangeLog.info("STATE* Safe mode is ON.\n" + getSafeModeTip());
  if (isEditlogOpenForWrite) {
    getEditLog().logSyncAll();
  }
  NameNode.stateChangeLog.info("STATE* Safe mode is ON" + getSafeModeTip());
{code}





[jira] [Commented] (HDFS-10269) Invalid value configured for dfs.datanode.failed.volumes.tolerated cause the datanode exit

2016-04-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233428#comment-15233428
 ] 

Andrew Wang commented on HDFS-10269:


Like Chris, I don't like the idea of falling back to a default value on 
misconfiguration, since it leads to ambiguity. The admin has explicitly set 
the value, so why should we ignore it and use some other one? It is much 
clearer to treat misconfiguration as a fatal error.
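This fail-fast position matches what the FsDatasetImpl constructor already does: reject a tolerated-failures value outside the valid range. A self-contained sketch of that check (the class and method names here are hypothetical stand-ins, and IllegalArgumentException is used in place of DiskChecker$DiskErrorException):

```java
public class VolumeToleranceCheck {
    // Validate dfs.datanode.failed.volumes.tolerated against the number of
    // configured volumes; misconfiguration is fatal rather than silently
    // replaced by a default.
    static int validateFailedVolumesTolerated(int tolerated, int volumeCount) {
        if (tolerated < 0 || tolerated >= volumeCount) {
            // Mirrors the "Invalid volume failure config value" error path
            throw new IllegalArgumentException(
                "Invalid volume failure config value: " + tolerated
                + " (must be in [0, " + (volumeCount - 1) + "] for "
                + volumeCount + " volume(s))");
        }
        return tolerated;
    }

    public static void main(String[] args) {
        System.out.println(validateFailedVolumesTolerated(0, 1)); // prints 0
        try {
            validateFailedVolumesTolerated(5, 1); // the reported misconfig
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```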

> Invalid value configured for dfs.datanode.failed.volumes.tolerated cause the 
> datanode exit
> --
>
> Key: HDFS-10269
> URL: https://issues.apache.org/jira/browse/HDFS-10269
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-10269.001.patch
>
>
> The datanode failed to start and exited when I reused a configuration with 
> dfs.datanode.failed.volumes.tolerated set to 5 from another cluster, while 
> the new cluster actually has only one data dir path. This led to an invalid 
> volume failure config value, a {{DiskErrorException}} was thrown, and the 
> datanode shut down. The info is below:
> {code}
> 2016-04-07 09:34:45,358 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: BP-1239160341-xx.xx.xx.xx-1459929303126 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /home/data/hdfs/data/current/BP-1239160341-xx.xx.xx.xx-1459929303126
> 2016-04-07 09:34:45,358 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool  (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000. Exiting.
> java.io.IOException: All specified directories are failed to load.
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
>         at java.lang.Thread.run(Thread.java:745)
> 2016-04-07 09:34:45,358 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool  (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000. Exiting.
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume failure  config value: 5
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:281)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1374)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
>         at java.lang.Thread.run(Thread.java:745)
> 2016-04-07 09:34:45,358 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool  (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000
> 2016-04-07 09:34:45,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool  (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000
> 2016-04-07 09:34:45,460 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool  (Datanode Uuid unassigned)
> 2016-04-07 09:34:47,460 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2016-04-07 09:34:47,462 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
> 2016-04-07 09:34:47,463 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> IMO, this makes for a poor user experience when only a single value is 
> configured incorrectly. Instead, we could log a warning and reset the value 
> to the default. That would be a better way to handle this case.
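For contrast, the reporter's warn-and-reset alternative (which Andrew Wang argues against above) could look like this hypothetical sketch; the class and method names, and the assumed default of 0 tolerated failures, are illustrations rather than the committed behavior:

```java
public class VolumeToleranceFallback {
    // Assumed default for dfs.datanode.failed.volumes.tolerated
    static final int DEFAULT_TOLERATED = 0;

    // Clamp an out-of-range tolerated-failures value to the default with a
    // warning instead of aborting datanode startup.
    static int toleratedOrDefault(int configured, int volumeCount) {
        if (configured < 0 || configured >= volumeCount) {
            System.err.println(
                "WARN: invalid dfs.datanode.failed.volumes.tolerated="
                + configured + " for " + volumeCount
                + " volume(s); falling back to " + DEFAULT_TOLERATED);
            return DEFAULT_TOLERATED;
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(toleratedOrDefault(5, 1)); // prints 0 (fallback)
        System.out.println(toleratedOrDefault(1, 3)); // prints 1 (valid)
    }
}
```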


