[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2020-08-28 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186763#comment-17186763
 ] 

Andras Bokor commented on HDFS-14353:
-

For git greppers: the commit message is missing the JIRA id, so you can find the 
commit by grepping for the title of this JIRA: {{Erasure Coding: metrics 
xmitsInProgress become to negative.}}
Or find it by commit hash: d6fc482a541310d83d9cf1393e8f6ed220ef4c1e
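
For example (a hedged sketch; run from a Hadoop git clone):
{noformat}
$ git log --all --grep='Erasure Coding: metrics xmitsInProgress'
$ git show d6fc482a541310d83d9cf1393e8f6ed220ef4c1e
{noformat}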

> Erasure Coding: metrics xmitsInProgress become to negative.
> ---
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, erasure-coding
>Affects Versions: 3.3.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.4.0, 3.1.5
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, 
> HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, 
> HDFS-14353.006.patch, HDFS-14353.007.patch, HDFS-14353.008.patch, 
> HDFS-14353.009.patch, HDFS-14353.010.patch, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-2585) dfs.https.port is a bit confusing

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-2585.

Resolution: Duplicate

> dfs.https.port is a bit confusing
> -
>
> Key: HDFS-2585
> URL: https://issues.apache.org/jira/browse/HDFS-2585
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Joe Crobak
>Priority: Trivial
>
> First off, dfs.https.address was renamed to dfs.namenode.https-address, so it 
> would make sense to deprecate dfs.https.port and rename it to 
> dfs.namenode.https-address for consistency. Yet, it also appears that 
> dfs.namenode.https-address includes the port number, so it's unclear to me 
> why a separate port property exists.
> In addition, in DFSConfigKeys.java, both DFS_NAMENODE_HTTPS_PORT_KEY and 
> DFS_HTTPS_PORT_KEY use this string as a key.
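
For illustration, the address property the reporter refers to already carries the port. A hedged sketch (the value shown is the long-standing pre-3.x default and is illustrative only):
{code:xml}
<property>
  <name>dfs.namenode.https-address</name>
  <value>0.0.0.0:50470</value>
</property>
{code}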



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-2585) dfs.https.port is a bit confusing

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-2585.

Resolution: Fixed

This ticket is no longer valid since it's superseded by the two linked issues.

> dfs.https.port is a bit confusing
> -
>
> Key: HDFS-2585
> URL: https://issues.apache.org/jira/browse/HDFS-2585
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Joe Crobak
>Priority: Trivial
>
> First off, dfs.https.address was renamed to dfs.namenode.https-address, so it 
> would make sense to deprecate dfs.https.port and rename it to 
> dfs.namenode.https-address for consistency. Yet, it also appears that 
> dfs.namenode.https-address includes the port number, so it's unclear to me 
> why a separate port property exists.
> In addition, in DFSConfigKeys.java, both DFS_NAMENODE_HTTPS_PORT_KEY and 
> DFS_HTTPS_PORT_KEY use this string as a key.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-2585) dfs.https.port is a bit confusing

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HDFS-2585:


> dfs.https.port is a bit confusing
> -
>
> Key: HDFS-2585
> URL: https://issues.apache.org/jira/browse/HDFS-2585
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Joe Crobak
>Priority: Trivial
>
> First off, dfs.https.address was renamed to dfs.namenode.https-address, so it 
> would make sense to deprecate dfs.https.port and rename it to 
> dfs.namenode.https-address for consistency. Yet, it also appears that 
> dfs.namenode.https-address includes the port number, so it's unclear to me 
> why a separate port property exists.
> In addition, in DFSConfigKeys.java, both DFS_NAMENODE_HTTPS_PORT_KEY and 
> DFS_HTTPS_PORT_KEY use this string as a key.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5696) Examples for httpfs REST API incorrect on apache.org

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-5696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5696.

Resolution: Duplicate

> Examples for httpfs REST API incorrect on apache.org
> 
>
> Key: HDFS-5696
> URL: https://issues.apache.org/jira/browse/HDFS-5696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
> Environment: NA
>Reporter: Casey Brotherton
>Priority: Trivial
>
> The examples provided for the httpfs REST API are incorrect.
> http://hadoop.apache.org/docs/r2.2.0/hadoop-hdfs-httpfs/index.html
> http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-hdfs-httpfs/index.html
> From the documentation:
> *
> HttpFS is a separate service from Hadoop NameNode.
> HttpFS itself is Java web-application and it runs using a preconfigured 
> Tomcat bundled with HttpFS binary distribution.
> HttpFS HTTP web-service API calls are HTTP REST calls that map to a HDFS file 
> system operation. For example, using the curl Unix command:
> $ curl http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt returns the 
> contents of the HDFS /user/foo/README.txt file.
> $ curl http://httpfs-host:14000/webhdfs/v1/user/foo?op=list returns the 
> contents of the HDFS /user/foo directory in JSON format.
> $ curl -X POST http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=mkdirs 
> creates the HDFS /user/foo.bar directory.
> ***
> The commands have incorrect "op"erations (verified through the source code in 
> HttpFSFileSystem.java).
> In addition, although the webhdfs documentation specifies user.name as 
> optional, on my cluster each action required a "user.name".
> It should be included in the short examples to allow for the greatest chance 
> of success.
> Three examples rewritten:
> curl -i -L 
> "http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=open&user.name=hdfsuser"
> curl -i 
> "http://httpfs-host:14000/webhdfs/v1/user/foo/?op=liststatus&user.name=hdfsuser"
> curl -i -X PUT 
> "http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=mkdirs&user.name=hdfsuser"
> Not sure what the convention should be for specifying the user.name. Use 
> hdfs? or a name that is obviously an example?
> It would also be beneficial if the HTTPfs page linked to the webhdfs 
> documentation page in the text instead of just on the menu sidebar.
> http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-2672) possible Cases for NullPointerException

2019-12-19 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-2672.

Resolution: Cannot Reproduce

> possible Cases for NullPointerException
> ---
>
> Key: HDFS-2672
> URL: https://issues.apache.org/jira/browse/HDFS-2672
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: kavita sharma
>Priority: Trivial
>
> A NullPointerException can be thrown, as null checks are missing in DFSClient 
> and UpgradeManagerNamenode.
> {noformat}
>  DFSClient.java
>Block newBlock = primary.getBlockInfo(last.getBlock());
>If newBlock comes back null, a NullPointerException will be thrown.
> {noformat}
> {noformat}
>  UpgradeManagerNamenode.java
>   uos = UpgradeObjectCollection.getDistributedUpgrades(-4,
> HdfsConstants.NodeType.NAME_NODE);
> If uos comes back null, a NullPointerException will be thrown.
> {noformat}
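
A hedged sketch of the kind of guard being suggested; the names follow the snippet above, not necessarily the current code base:
{code:java}
Block newBlock = primary.getBlockInfo(last.getBlock());
if (newBlock == null) {
  // fail fast with a descriptive message instead of a later NullPointerException
  throw new IOException("No block info returned for " + last.getBlock());
}
{code}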



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10486) "Cannot start secure datanode with unprivileged HTTP ports" should give config param

2019-12-13 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-10486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10486:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

This message no longer exists. It seems to be an obsolete ticket.

> "Cannot start secure datanode with unprivileged HTTP ports" should give 
> config param
> 
>
> Key: HDFS-10486
> URL: https://issues.apache.org/jira/browse/HDFS-10486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Yiqun Lin
>Priority: Trivial
> Attachments: HDFS-10486.001.patch
>
>
> The "Cannot start secure datanode with unprivileged HTTP ports" error should 
> really give users a hint as to which parameter should get changed.
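
A hedged sketch of the kind of hint being asked for; the wording and the configuration keys named here are illustrative assumptions, not the committed fix:
{code:java}
throw new RuntimeException(
    "Cannot start secure DataNode with unprivileged HTTP ports. "
    + "Bind dfs.datanode.http.address to a privileged port (< 1024), "
    + "or configure dfs.http.policy accordingly.");
{code}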



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10542) Failures in mvn install

2019-12-13 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-10542.
-
Resolution: Cannot Reproduce

> Failures in mvn install
> ---
>
> Key: HDFS-10542
> URL: https://issues.apache.org/jira/browse/HDFS-10542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: Ubuntu 14
>Reporter: Pankaj Maurya
>Priority: Trivial
> Attachments: apache-hadoop-hdfs-10542.txt
>
>
> test failures due to webapps/test directory missing in 'mvn install'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5006) Provide a link to symlink target in the web UI

2019-12-13 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5006.

Resolution: Won't Fix

Obsolete. We have a new UI now.

> Provide a link to symlink target in the web UI
> --
>
> Key: HDFS-5006
> URL: https://issues.apache.org/jira/browse/HDFS-5006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Priority: Minor
> Attachments: screenshot1.png, screenshot2.png
>
>
> Currently, it's difficult to see what is a symlink from the web UI.
> I've attached two screenshots. In screenshot 1, we see the symlink 
> _/user/schu/tf2-link_ which has a target _/user/schu/dir1/tf2_.
> If we click the tf2-link URL, we arrive at a page that shows the path of the 
> target (screenshot 2).
> It'd be useful if there was a link to this target web UI page and some way of 
> easily discerning if a file is a symlink.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-27 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916794#comment-16916794
 ] 

Andras Bokor edited comment on HDDS-1610 at 8/27/19 3:16 PM:
-

Shashikant Banerjee,

Can you help me understand why the HDFS-13101 fix was commented out from 
DirectoryWithSnapshotFeature.java?
It may have been unintended, since I do not see anything related in the pull 
request and this issue does not seem related to that part of the code.


was (Author: boky01):
Shashikant Banerjee,

Can you help me understand why the HDFS-13101 fix was commented out from 
DirectoryWithSnapshotFeature.java?
It may have been unintended, since I do not see anything related in the pull 
request and this issue does not seem related to that part of the code.

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-27 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916794#comment-16916794
 ] 

Andras Bokor commented on HDDS-1610:


Shashikant Banerjee,

Can you help me understand why the HDFS-13101 fix was commented out from 
DirectoryWithSnapshotFeature.java?
It may have been unintended, since I do not see anything related in the pull 
request and this issue does not seem related to that part of the code.

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5656) add some configuration keys to hdfs-default.xml

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5656.

Resolution: Duplicate

> add some configuration keys to hdfs-default.xml
> ---
>
> Key: HDFS-5656
> URL: https://issues.apache.org/jira/browse/HDFS-5656
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Colin P. McCabe
>Priority: Minor
>
> Some configuration keys like {{dfs.client.read.shortcircuit}} are not present 
> in {{hdfs-default.xml}} as they should be.
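
For example, an entry of the kind being asked for might look like this (a hedged sketch; the description text is illustrative, and false is the usual default):
{code:xml}
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>false</value>
  <description>If set to true, clients read blocks directly from the local
  file system, bypassing the DataNode (short-circuit local reads).</description>
</property>
{code}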



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13662) TestBlockReaderLocal#testStatisticsForErasureCodingRead is flaky

2018-09-19 Thread Andras Bokor (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16620845#comment-16620845
 ] 

Andras Bokor commented on HDFS-13662:
-

The problem is that the NN sometimes returns fewer blocks than expected, so when 
reading the file we do not even look for the missing chunk, and no decoding is 
required. I do not really understand the reason.
In my case the first datanode contains the following block in the data dir: 
{{blk_-9223372036854775785_1001.meta}}

But when we read the file the NN returns the following blocks (so the block 
from the first datanode is missing):
{code}LocatedBlock{BP-190215821-10.200.51.205-1537375777565:blk_-9223372036854775792_1001;
 getBlockSize()=4194304; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:51733,DS-dee4bb1f-2e28-4133-9f69-1c6337e3491a,DISK]]}
LocatedBlock{BP-190215821-10.200.51.205-1537375777565:blk_-9223372036854775791_1001;
 getBlockSize()=3145851; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:51712,DS-b697ac66-2305-40cc-b1a2-a300be2c8e49,DISK]]}
LocatedBlock{BP-190215821-10.200.51.205-1537375777565:blk_-9223372036854775790_1001;
 getBlockSize()=3145728; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:51737,DS-c3a76ba1-da50-415b-9383-3c0e814be239,DISK]]}
LocatedBlock{BP-190215821-10.200.51.205-1537375777565:blk_-9223372036854775789_1001;
 getBlockSize()=3145728; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:51717,DS-a311ee8d-58ed-4ec3-9819-97c3daeafe6c,DISK]]}
LocatedBlock{BP-190215821-10.200.51.205-1537375777565:blk_-9223372036854775788_1001;
 getBlockSize()=3145728; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:51741,DS-eab9f35a-4fdf-4e86-9c9a-7d4ee5b5036d,DISK]]}
LocatedBlock{BP-190215821-10.200.51.205-1537375777565:blk_-9223372036854775787_1001;
 getBlockSize()=3145728; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:51725,DS-82af61de-b11a-4813-ae55-e4d12791ab00,DISK]]}{code}

Does somebody have an explanation, or an idea of how to proceed?
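
For reference, a minimal sketch of how such a listing can be dumped while debugging (assuming a {{DFSClient}} handle from the test cluster; the path is made up):
{code:java}
LocatedBlocks lbs = dfsClient.getLocatedBlocks("/path/to/ec/file", 0, Long.MAX_VALUE);
for (LocatedBlock lb : lbs.getLocatedBlocks()) {
  System.out.println(lb); // prints entries like the LocatedBlock{...} lines above
}
{code}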

> TestBlockReaderLocal#testStatisticsForErasureCodingRead is flaky
> 
>
> Key: HDFS-13662
> URL: https://issues.apache.org/jira/browse/HDFS-13662
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Wei-Chiu Chuang
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> The test failed in this precommit for a patch that only modifies an unrelated 
> test.
> https://builds.apache.org/job/PreCommit-HDFS-Build/24401/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocal/testStatisticsForErasureCodingRead/
> This test also failed occasionally in our internal test.
> {noformat}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testStatisticsForErasureCodingRead(TestBlockReaderLocal.java:842)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13457) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)
Andras Bokor created HDFS-13457:
---

 Summary: LocalFilesystem#rename(Path, Path, Options.Rename...) 
does not handle crc files
 Key: HDFS-13457
 URL: https://issues.apache.org/jira/browse/HDFS-13457
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing, and 
FilterFileSystem does not take care of crc files. That leaves abandoned crc 
files behind in case of a rename.
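
A hedged illustration of the symptom (the paths are made up; checksum siblings follow the usual .<name>.crc convention):
{code:java}
// after this rename on a checksummed (e.g. local) file system...
fs.rename(new Path("/tmp/a"), new Path("/tmp/b"), Options.Rename.OVERWRITE);
// ...the old checksum file /tmp/.a.crc is left behind,
// and no /tmp/.b.crc is created for the renamed file
{code}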



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-7524) TestRetryCacheWithHA.testUpdatePipeline fails occasionally in trunk

2018-03-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-7524.

Resolution: Duplicate

> TestRetryCacheWithHA.testUpdatePipeline fails occasionally in trunk
> ---
>
> Key: HDFS-7524
> URL: https://issues.apache.org/jira/browse/HDFS-7524
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: Yongjun Zhang
>Priority: Major
>  Labels: flaky-test
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/
> Error Message
> {quote}
> After waiting the operation updatePipeline still has not taken effect on NN 
> yet
> Stacktrace
> java.lang.AssertionError: After waiting the operation updatePipeline still 
> has not taken effect on NN yet
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
> {quote}
> Found by tool proposed in HADOOP-11045:
> {quote}
> [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
> Hadoop-Hdfs-trunk -n 5 | tee bt.log
> Recently FAILED builds in url: 
> https://builds.apache.org//job/Hadoop-Hdfs-trunk
> THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, 
> as listed below:
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport 
> (2014-12-15 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> Failed test: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport 
> (2014-12-13 10:32:27)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport 
> (2014-12-13 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport 
> (2014-12-11 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
> Among 6 runs examined, all failed tests <#failedRuns: testName>:
> 3: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> 2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> 2: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> 1: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13113) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-27 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-13113:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HDFS-13113
> URL: https://issues.apache.org/jira/browse/HDFS-13113
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, nfs
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-10571-branch-3.0.002.patch, 
> HADOOP-10571.05.patch, HADOOP-10571.07.patch
>
>
> FYI, in HADOOP-10571, [~boky01] is going to clean up a lot of the log 
> statements, including some in Datanode and elsewhere.
> I'm provisionally +1 on that, but want to run it on the standalone tests 
> (Yetus has already done them), and give the HDFS developers warning of a 
> change which is going to touch their codebase.
> If anyone doesn't want the logging improvements, now is your chance to say so



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13113) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-20 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-13113:

Attachment: HADOOP-10571-branch-3.0.002.patch

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HDFS-13113
> URL: https://issues.apache.org/jira/browse/HDFS-13113
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, nfs
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-10571-branch-3.0.002.patch, 
> HADOOP-10571.05.patch, HADOOP-10571.07.patch
>
>
> FYI, in HADOOP-10571, [~boky01] is going to clean up a lot of the log 
> statements, including some in Datanode and elsewhere.
> I'm provisionally +1 on that, but want to run it on the standalone tests 
> (Yetus has already done them), and give the HDFS developers warning of a 
> change which is going to touch their codebase.
> If anyone doesn't want the logging improvements, now is your chance to say so



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2018-02-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10453:

Comment: was deleted

(was: Regarding branch-2.7:

After patch 008, an additional patch, 009, was uploaded, and it seems that one was 
[committed|https://github.com/apache/hadoop/commit/02f6030b35999f2f741a8c4b9363ee59f36f7e28].
The difference between the two patches is that the latter one does not include 
the unit test.
I do not see any comment about 009. Was committing 009 intended? Now the branch 
is missing the UT.)

> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453-branch-2.7.004.patch, 
> HDFS-10453-branch-2.7.005.patch, HDFS-10453-branch-2.7.006.patch, 
> HDFS-10453-branch-2.7.007.patch, HDFS-10453-branch-2.7.008.patch, 
> HDFS-10453-branch-2.7.009.patch, HDFS-10453-branch-2.8.001.patch, 
> HDFS-10453-branch-2.8.002.patch, HDFS-10453-branch-2.9.001.patch, 
> HDFS-10453-branch-2.9.002.patch, HDFS-10453-branch-3.0.001.patch, 
> HDFS-10453-branch-3.0.002.patch, HDFS-10453-trunk.001.patch, 
> HDFS-10453-trunk.002.patch, HDFS-10453.001.patch
>
>
> ReplicationMonitor thread could get stuck for a long time and lose data with 
> little probability. Consider the typical scenario:
> (1) create and close a file with the default replicas (3);
> (2) increase the replication (to 10) of the file;
> (3) delete the file while ReplicationMonitor is scheduling blocks belonging to 
> that file for replication.
> If the stuck ReplicationMonitor reappears, the NameNode will print logs like:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This is because 2 threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment:
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete blocks, then clears the 
> references in blocksmap, needReplications, etc. The block's NumBytes is set to 
> NO_ACK (Long.MAX_VALUE), which is used to indicate that the block deletion 
> does not need an explicit ACK from the node.
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node will be selected after 
> traversing the whole cluster because no node satisfies the goodness criteria 
> (the remaining space must reach the required size, Long.MAX_VALUE).
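
As an aside, a hedged illustration of why step (3) can never pick a node once the delete has run (simplified names, not the actual BlockManager code):
{code:java}
long requiredSize = block.getNumBytes();            // NO_ACK == Long.MAX_VALUE after the delete
boolean good = node.getRemaining() >= requiredSize; // false for every node in the cluster
{code}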

[jira] [Commented] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2018-02-13 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362473#comment-16362473
 ] 

Andras Bokor commented on HDFS-10453:
-

Regarding branch-2.7:

After patch 008, an additional patch, 009, was uploaded, and it seems that one was 
[committed|https://github.com/apache/hadoop/commit/02f6030b35999f2f741a8c4b9363ee59f36f7e28].
The difference between the two patches is that the latter one does not include 
the unit test.
I do not see any comment about 009. Was committing 009 intended? Now the branch 
is missing the UT.

> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453-branch-2.7.004.patch, 
> HDFS-10453-branch-2.7.005.patch, HDFS-10453-branch-2.7.006.patch, 
> HDFS-10453-branch-2.7.007.patch, HDFS-10453-branch-2.7.008.patch, 
> HDFS-10453-branch-2.7.009.patch, HDFS-10453-branch-2.8.001.patch, 
> HDFS-10453-branch-2.8.002.patch, HDFS-10453-branch-2.9.001.patch, 
> HDFS-10453-branch-2.9.002.patch, HDFS-10453-branch-3.0.001.patch, 
> HDFS-10453-branch-3.0.002.patch, HDFS-10453-trunk.001.patch, 
> HDFS-10453-trunk.002.patch, HDFS-10453.001.patch
>
>
> ReplicationMonitor thread could get stuck for a long time and lose data with 
> little probability. Consider the typical scenario:
> (1) create and close a file with the default replicas (3);
> (2) increase the replication (to 10) of the file;
> (3) delete the file while ReplicationMonitor is scheduling blocks belonging to 
> that file for replication.
> If the stuck ReplicationMonitor reappears, the NameNode will print logs like:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This is because 2 threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment:
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete blocks, then clears the 
> references in blocksmap, needReplications, etc. The block's NumBytes is set to 
> NO_ACK (Long.MAX_VALUE), which is used to indicate that the block deletion 
> does not need an explicit ACK from the node.
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node will be selected after 
> traversing the whole cluster because no node satisfies the goodness criteria 
> (the remaining space must reach the required size, Long.MAX_VALUE).

[jira] [Resolved] (HDFS-3638) backport HDFS-3568 (add security to fuse_dfs) to branch-1

2017-11-24 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-3638.

Resolution: Won't Fix

Branch-1 is EoL.

> backport HDFS-3568 (add security to fuse_dfs) to branch-1
> -
>
> Key: HDFS-3638
> URL: https://issues.apache.org/jira/browse/HDFS-3638
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 1.1.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Minor
>
> Backport HDFS-3568 to branch-1.  This will give fuse_dfs support for Kerberos 
> authentication, allowing FUSE to be used in a secure cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-4262) Backport HTTPFS to Branch 1

2017-11-15 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-4262.

Resolution: Won't Fix

> Backport HTTPFS to Branch 1
> ---
>
> Key: HDFS-4262
> URL: https://issues.apache.org/jira/browse/HDFS-4262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
> Environment: IBM JDK, RHEL 6.3
>Reporter: Eric Yang
>Assignee: Yu Li
> Attachments: 01-retrofit-httpfs-cdh3u4-for-hadoop1.patch, 
> 02-cookie-from-authenticated-url-is-not-getting-to-auth-filter.patch, 
> 03-resolve-proxyuser-related-issue.patch, HDFS-4262-github.patch
>
>
> There is interest in backporting HTTPFS to the Hadoop 1 branch. After the 
> initial investigation, there are quite a few changes in HDFS-2178, and several 
> related patches, including:
> HDFS-2284 Write Http access to HDFS
> HDFS-2646 Hadoop HttpFS introduced 4 findbug warnings
> HDFS-2649 eclipse:eclipse build fails for hadoop-hdfs-httpfs
> HDFS-2657 TestHttpFSServer and TestServerWebApp are failing on trunk
> HDFS-2658 HttpFS introduced 70 javadoc warnings
> The biggest challenge of backporting is that all these patches, including 
> HDFS-2178, are for 2.X, whose code base has been refactored a lot and is quite 
> different from 1.X, so it seems we have to backport the changes manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4312) fix test TestSecureNameNode and improve test TestSecureNameNodeWithExternalKdc

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-4312:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Seems obsolete. Java 6 is not used by any supported version.

> fix test TestSecureNameNode and improve test TestSecureNameNodeWithExternalKdc
> --
>
> Key: HDFS-4312
> URL: https://issues.apache.org/jira/browse/HDFS-4312
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>  Labels: BB2015-05-TBR
> Attachments: HDFS-4312-trunk--N2.patch, HDFS-4312.patch
>
>
> TestSecureNameNode does not work on Java6 without 
> "dfs.web.authentication.kerberos.principal" config property set.
> Also, the following were improved:
> 1) keytab files are checked for existence and readability to provide 
> fast-fail on config error.
> 2) added comment to TestSecureNameNode describing the required sys props.
> 3) string literals replaced with config constants.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12780) Fix spelling mistake in DistCpUtils.java

2017-11-07 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-12780:

Fix Version/s: (was: 3.0.0-beta1)

> Fix spelling mistake in DistCpUtils.java
> 
>
> Key: HDFS-12780
> URL: https://issues.apache.org/jira/browse/HDFS-12780
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
>  Labels: patch
> Attachments: HDFS-12780.patch
>
>
> We found a spelling mistake in DistCpUtils.java.  "* If checksums's can't be 
> retrieved," should be " * If checksums can't be retrieved,"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12780) Fix spelling mistake in DistCpUtils.java

2017-11-07 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-12780:

Target Version/s: 3.0.0  (was: 3.0.0-beta1)

> Fix spelling mistake in DistCpUtils.java
> 
>
> Key: HDFS-12780
> URL: https://issues.apache.org/jira/browse/HDFS-12780
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
>  Labels: patch
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12780.patch
>
>
> We found a spelling mistake in DistCpUtils.java.  "* If checksums's can't be 
> retrieved," should be " * If checksums can't be retrieved,"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-3821) Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt edit log)

2017-11-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-3821.

Resolution: Won't Fix

> Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt 
> edit log)
> -
>
> Key: HDFS-3821
> URL: https://issues.apache.org/jira/browse/HDFS-3821
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Priority: Major
>
> Per [Todd's 
> comment|https://issues.apache.org/jira/browse/HDFS-3626?focusedCommentId=13413509&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13413509]
>  this issue affects v1 as well though the problem isn't as obvious because 
> the shell doesn't use the Path(URI) constructor. To test the server side Todd 
> modified the touchz command to use new Path(new URI(src)) and was able to 
> reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12623) Add UT for the Test Command

2017-10-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-12623:

Component/s: test

> Add UT for the Test Command
> ---
>
> Key: HDFS-12623
> URL: https://issues.apache.org/jira/browse/HDFS-12623
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0
>Reporter: legend
> Attachments: HDFS-12623.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12373) Trying to construct Path for a file that has colon (":") throws IllegalArgumentException

2017-10-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-12373.
-
Resolution: Duplicate

HADOOP-14217 will fix this.
HADOOP-14217 already has a patch and more activity, so I think this one can be 
closed as "Duplicate".

> Trying to construct Path for a file that has colon (":") throws 
> IllegalArgumentException
> 
>
> Key: HDFS-12373
> URL: https://issues.apache.org/jira/browse/HDFS-12373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Andras Bokor
>
> In case a file has a colon in its name, an org.apache.hadoop.fs.Path cannot be 
> constructed. For example, I have the file "a:b" under /tmp, and new 
> Path("file:/tmp", "a:b") throws IllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-12373) Trying to construct Path for a file that has colon (":") throws IllegalArgumentException

2017-09-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-12373:

Comment: was deleted

(was: How to reproduce that? {{Path path = new Path("/tmp/a:b");}} does not 
throw any exception.)

> Trying to construct Path for a file that has colon (":") throws 
> IllegalArgumentException
> 
>
> Key: HDFS-12373
> URL: https://issues.apache.org/jira/browse/HDFS-12373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Andras Bokor
>
> In case a file has a colon in its name, an org.apache.hadoop.fs.Path cannot be 
> constructed. For example, I have the file "a:b" under /tmp, and new 
> Path("file:/tmp", "a:b") throws IllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12373) Trying to construct Path for a file that has colon (":") throws IllegalArgumentException

2017-09-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176160#comment-16176160
 ] 

Andras Bokor commented on HDFS-12373:
-

How to reproduce that? {{Path path = new Path("/tmp/a:b");}} does not throw any 
exception.
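
Based on the description, the contrast seems to be between the one-argument and two-argument constructors (assuming a local file /tmp/a:b exists):
{code:java}
Path ok  = new Path("/tmp/a:b");          // does not throw
Path bad = new Path("file:/tmp", "a:b");  // throws IllegalArgumentException per the report
{code}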

> Trying to construct Path for a file that has colon (":") throws 
> IllegalArgumentException
> 
>
> Key: HDFS-12373
> URL: https://issues.apache.org/jira/browse/HDFS-12373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Andras Bokor
>
> In case a file has a colon in its name, an org.apache.hadoop.fs.Path cannot be 
> constructed. For example, I have the file "a:b" under /tmp, and new 
> Path("file:/tmp", "a:b") throws IllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12373) Trying to construct Path for a file that has colon (":") throws IllegalArgumentException

2017-09-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-12373:
---

Assignee: Andras Bokor

> Trying to construct Path for a file that has colon (":") throws 
> IllegalArgumentException
> 
>
> Key: HDFS-12373
> URL: https://issues.apache.org/jira/browse/HDFS-12373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Andras Bokor
>
> In case a file has a colon in its name, an org.apache.hadoop.fs.Path cannot be 
> constructed. For example, I have the file "a:b" under /tmp, and new 
> Path("file:/tmp", "a:b") throws IllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12326) What is the correct way of retrying when failure occurs during writing

2017-09-08 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-12326.
-
Resolution: Not A Problem

It seems like a question, not a bug.

> What is the correct way of retrying when failure occurs during writing
> --
>
> Key: HDFS-12326
> URL: https://issues.apache.org/jira/browse/HDFS-12326
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs-client
>Reporter: ZhangBiao
>
> I'm using the hdfs client for golang, https://github.com/colinmarc/hdfs, to 
> write to HDFS, and I'm using hadoop 2.7.3.
> When the number of concurrently open files is larger, for example 200, I 
> always get the 'broken pipe' error.
> So I want to retry to continue writing. What is the correct way of retrying? 
> Because https://github.com/colinmarc/hdfs hasn't been able to recover the 
> stream status when an error occurs during writing, I have to reopen and get 
> a new stream. So I tried the following steps:
> 1. close the current stream
> 2. append the file to get a new stream
> But when I close the stream, I get the error "updateBlockForPipeline call 
> failed with ERROR_APPLICATION (java.io.IOException)",
> and it seems the namenode complains:
> {code:java}
> 2017-08-20 03:22:55,598 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 2 on 9000, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.updateBlockForPipeline from 
> 192.168.0.39:46827 Call#50183 Retry#-1
> java.io.IOException: 
> BP-1152809458-192.168.0.39-1502261411064:blk_1073825071_111401 does not exist 
> or is not under Constructionblk_1073825071_111401{UCState=COMMITTED, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-d61914ba-df64-467b-bb75-272875e5e865:NORMAL:192.168.0.39:50010|RBW],
>  
> ReplicaUC[[DISK]DS-1314debe-ab08-4001-ab9a-8e234f28f87c:NORMAL:192.168.0.38:50010|RBW]]}
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6241)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6309)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:806)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> 2017-08-20 03:22:56,333 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1073825071_111401{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-d61914ba-df64-467b-bb75-272875e5e865:NORMAL:192.168.0.39:50010|RBW],
>  
> ReplicaUC[[DISK]DS-1314debe-ab08-4001-ab9a-8e234f28f87c:NORMAL:192.168.0.38:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 <  minimum = 1) in 
> file 
> /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log
> {code}
> When I appended to get a new stream, I got the error 'append call failed with 
> ERROR_APPLICATION 
> (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException)', and the 
> corresponding error in namenode is:
> {code:java}
> 2017-08-20 03:22:56,335 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.append: Failed to APPEND_FILE 
> /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log
>  for go-hdfs-OAfvZiSUM2Eu894p on 192.168.0.39 because 
> go-hdfs-OAfvZiSUM2Eu894p is already the current lease holder.
> 2017-08-20 03:22:56,335 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 0 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.append from 
> 192.168.0.39:46827 Call#50186 Retry#-1: 
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to 
> APPEND_FILE 
> /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log
>  for go-hdfs-OAfvZiSUM2Eu894p on 192.168.0.39 because 
> go-hdfs-OAfvZiSUM2Eu894p is already the current lease holder.
> {code}
> Could you please suggest the correct way of retrying?

[jira] [Comment Edited] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2017-08-08 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16118235#comment-16118235
 ] 

Andras Bokor edited comment on HDFS-10429 at 8/8/17 11:06 PM:
--

I am not confident in this area, but I tried out the patch on a cluster with 4 
datanodes.
I tested with 12000 random files between 1-10 MB. Before the patch I got 
exceptions. After the patch I ran the copy 3 times (which means 36000 files) 
without exceptions, so it seems the patch fixes the issue.
Regarding the code of the patch:
* We should consider introducing some checks in finally so that we do not call 
close methods twice. Something like {{if(!getStreamer().isSocketClosed() || 
!getStreamer().isAlive())}} and call a forced close if something is not closed 
properly. What do you think?
* I would not hide the InterruptedExceptions where they are not expected. They 
can show when our thread handling is not clean, like in this case.

Other than the two bullet points I'd give a non-binding +1, since my test shows 
that the exceptions disappeared and all the files were copied to HDFS.


was (Author: boky01):
I am not confident in this area, but I tried out the patch on a cluster with 4 
datanodes.
I tested with 12000 random files between 1-10 MB. Before the patch I got 
exceptions. After the patch I ran the copy 3 times (which means 36000 files) 
without exceptions, so it seems the patch fixes the issue.
Regarding the code of the patch:
* We should consider introducing some checks in finally so that we do not call 
close methods twice. Something like {{if(!getStreamer().isSocketClosed() || 
!getStreamer().isAlive())}} and call a forced close if something is not closed 
properly. What do you think?
* I would not hide the InterruptedExceptions where they are not expected. They 
can show when our thread handling is not clean, like in this case.

Other than the two bullet points I'd give a non-binding +1, since my test shows 
that the exceptions disappeared and all the files were copied to HDFS.

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, HDFS-10429.2.patch, 
> HDFS-10429.3.patch
>
>
> Every time I use 'hdfs dfs -put' upload file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl  always forces threads to close, which causes 
> InterruptedException.
> A simple fix is to use debug level log instead of warning level.
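
For reference, the proposed fix boils down to demoting the log level in 
{{closeResponder}}; roughly (a sketch that assumes the method structure seen 
in the stack trace and a standard {{LOG}} field):
{code:java}
private void closeResponder() {
  if (response != null) {
    try {
      response.close();
      response.join();                     // the join seen in the stack trace
    } catch (InterruptedException e) {
      // Expected since HDFS-9812 forces threads to close: debug, not warn.
      LOG.debug("Caught exception while closing responder", e);
      Thread.currentThread().interrupt();  // restore the interrupt flag
    } finally {
      response = null;
    }
  }
}
{code}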



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2017-08-08 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10429:

Target Version/s: 3.0.0-beta1

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, HDFS-10429.2.patch, 
> HDFS-10429.3.patch
>
>
> Every time I use 'hdfs dfs -put' upload file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl  always forces threads to close, which causes 
> InterruptedException.
> A simple fix is to use debug level log instead of warning level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2017-08-08 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118235#comment-16118235
 ] 

Andras Bokor commented on HDFS-10429:
-

I am not confident in this area but I tried out the patch on a cluster with 4 
datanodes.
I tested with 12000 random files between 1-10 MB. Before the patch I got 
exceptions. After the patch I ran the copy 3 times (36000 files in total) 
without any exception, so it seems the patch fixes the issue.
Regarding the code of the patch:
* We should consider introducing some checks in {{finally}} so that we do not 
call the close methods twice. Something like 
{{if(!getStreamer().isSocketClosed() || !getStreamer().isAlive())}}, calling a 
forced close if something is not closed properly. What do you think?
* I would not hide the InterruptedExceptions where they are not expected. They 
can show when our thread handling is not clean, as in this case.

Other than the two bullet points I'd give a non-binding +1, since my test 
shows that the exceptions disappeared and all the files were copied to HDFS.

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, HDFS-10429.2.patch, 
> HDFS-10429.3.patch
>
>
> Every time I use 'hdfs dfs -put' upload file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl  always forces threads to close, which causes 
> InterruptedException.
> A simple fix is to use debug level log instead of warning level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8843) Hadoop dfs command with --verbose option

2017-08-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-8843:
---
Summary: Hadoop dfs command with --verbose option  (was: Hadoop dfs command 
with --verbose optoin)

> Hadoop dfs command with --verbose option
> 
>
> Key: HDFS-8843
> URL: https://issues.apache.org/jira/browse/HDFS-8843
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Neill Lima
>Priority: Minor
>
> Generally when copying large files from/to HDFS using 
> get/put/copyFromLocal/copyToLocal there is a lot going on under the hood that 
> are not aware of. 
> It would be handy to have a --verbose flag to show the status of the 
> files/folders being copied at the moment, so we can have a rough ETA on 
> completion. 
> A good sample is the curl -O command.
> Another option would be a recursive tree of files showing the progress of 
> each completed/total (%). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11786) Add support to make copyFromLocal multi threaded

2017-07-31 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106992#comment-16106992
 ] 

Andras Bokor commented on HDFS-11786:
-

Thanks [~anu] for your answer. I uploaded a fix for HADOOP-14698 to make the 
two commands identical again.
Could you guys please check?

> Add support to make copyFromLocal multi threaded
> 
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11786.001.patch, HDFS-11786.002.patch, 
> HDFS-11786.003.patch, HDFS-11786.004.patch, HDFS-11786.005.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> In case, where there are multiple files which need to be uploaded to the 
> hdfs, a single thread reads the file and then copies the data to the cluster.
> This copy to hdfs can be made faster by uploading multiple files in parallel.
> I am attaching the initial patch so that I can get some initial feedback.
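
As an illustration of the idea (this is not the attached patch; the class 
name, pool size and destination path are made up for the example):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: upload many local files in parallel instead of one by one.
public class ParallelPut {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dstDir = new Path("/user/example/uploads");   // hypothetical target
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> results = new ArrayList<>();
    for (String local : args) {
      results.add(pool.submit(() -> {
        fs.copyFromLocalFile(new Path(local), dstDir); // one file per task
        return null;
      }));
    }
    for (Future<?> f : results) {
      f.get();                                         // surface upload failures
    }
    pool.shutdown();
  }
}
{code}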



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11786) Add support to make copyFromLocal multi threaded

2017-07-28 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-11786:

Issue Type: Improvement  (was: Bug)

> Add support to make copyFromLocal multi threaded
> 
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11786.001.patch, HDFS-11786.002.patch, 
> HDFS-11786.003.patch, HDFS-11786.004.patch, HDFS-11786.005.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> In case, where there are multiple files which need to be uploaded to the 
> hdfs, a single thread reads the file and then copies the data to the cluster.
> This copy to hdfs can be made faster by uploading multiple files in parallel.
> I am attaching the initial patch so that I can get some initial feedback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11786) Add support to make copyFromLocal multi threaded

2017-07-28 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105754#comment-16105754
 ] 

Andras Bokor commented on HDFS-11786:
-

[~anu], [~msingh],

Is there any reason not to apply this threading feature to -put as well?
I think making the two commands non-identical makes their usage more 
complicated.
Please check HADOOP-14698 and share your thoughts.

> Add support to make copyFromLocal multi threaded
> 
>
> Key: HDFS-11786
> URL: https://issues.apache.org/jira/browse/HDFS-11786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11786.001.patch, HDFS-11786.002.patch, 
> HDFS-11786.003.patch, HDFS-11786.004.patch, HDFS-11786.005.patch
>
>
> CopyFromLocal/Put is not currently multithreaded.
> In case, where there are multiple files which need to be uploaded to the 
> hdfs, a single thread reads the file and then copies the data to the cluster.
> This copy to hdfs can be made faster by uploading multiple files in parallel.
> I am attaching the initial patch so that I can get some initial feedback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4262) Backport HTTPFS to Branch 1

2017-07-12 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083637#comment-16083637
 ] 

Andras Bokor commented on HDFS-4262:


This seems obsolete. Is it still intended to be fixed?

> Backport HTTPFS to Branch 1
> ---
>
> Key: HDFS-4262
> URL: https://issues.apache.org/jira/browse/HDFS-4262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
> Environment: IBM JDK, RHEL 6.3
>Reporter: Eric Yang
>Assignee: Yu Li
> Attachments: 01-retrofit-httpfs-cdh3u4-for-hadoop1.patch, 
> 02-cookie-from-authenticated-url-is-not-getting-to-auth-filter.patch, 
> 03-resolve-proxyuser-related-issue.patch, HDFS-4262-github.patch
>
>
> There are interests to backport HTTPFS back to Hadoop 1 branch.  After the 
> initial investigation, there're quite some changes in HDFS-2178, and several 
> related patches, including:
> HDFS-2284 Write Http access to HDFS
> HDFS-2646 Hadoop HttpFS introduced 4 findbug warnings
> HDFS-2649 eclipse:eclipse build fails for hadoop-hdfs-httpfs
> HDFS-2657 TestHttpFSServer and TestServerWebApp are failing on trunk
> HDFS-2658 HttpFS introduced 70 javadoc warnings
> The most challenge of backporting is all these patches, including HDFS-2178 
> are for 2.X, which  code base has been refactored a lot and quite different 
> from 1.X, so it seems we have to backport the changes manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-3821) Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt edit log)

2017-07-07 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077887#comment-16077887
 ] 

Andras Bokor edited comment on HDFS-3821 at 7/7/17 3:48 PM:


branch-1 is EoL. Is this ticket still intended to be fixed?


was (Author: boky01):
branch-1 is EoL. Is this ticket still intended to fix.

> Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt 
> edit log)
> -
>
> Key: HDFS-3821
> URL: https://issues.apache.org/jira/browse/HDFS-3821
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>
> Per [Todd's 
> comment|https://issues.apache.org/jira/browse/HDFS-3626?focusedCommentId=13413509=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13413509]
>  this issue affects v1 as well though the problem isn't as obvious because 
> the shell doesn't use the Path(URI) constructor. To test the server side Todd 
> modified the touchz command to use new Path(new URI(src)) and was able to 
> reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3821) Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt edit log)

2017-07-07 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077887#comment-16077887
 ] 

Andras Bokor commented on HDFS-3821:


branch-1 is EoL. Is this ticket still intended to be fixed?

> Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt 
> edit log)
> -
>
> Key: HDFS-3821
> URL: https://issues.apache.org/jira/browse/HDFS-3821
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>
> Per [Todd's 
> comment|https://issues.apache.org/jira/browse/HDFS-3626?focusedCommentId=13413509=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13413509]
>  this issue affects v1 as well though the problem isn't as obvious because 
> the shell doesn't use the Path(URI) constructor. To test the server side Todd 
> modified the touchz command to use new Path(new URI(src)) and was able to 
> reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2017-07-03 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072514#comment-16072514
 ] 

Andras Bokor commented on HDFS-9820:


Falcon uses the {{setUseDiff}} method. Once they bump up the HDFS version they 
will need to call {{setUseDiff}} with the new signature.
Is the incompatible flag needed in this case?

> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.009.patch, HDFS-9820.branch-2.002.patch, HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which creates some files. 
> # throw away the files changed and go back to sx.
> Another scenario (scenario 2) is, there is a production cluster and a backup 
> cluster, we periodically sync up the data from production cluster to the 
> backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexity and challenges.  Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is use snapshot diff to find out all files changed, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it to the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.
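
Rendered as code, the restore steps listed above look roughly like this 
({{Change}}, {{diffSinceSnapshot}}, {{renameBack}}, {{deleteFromTarget}} and 
{{runDistcpFromSnapshot}} are hypothetical stand-ins, not Hadoop APIs):
{code:java}
// Schematic only: classify what changed on the target since snapshot sx.
List<Path> copyList = new ArrayList<>();
for (Change c : diffSinceSnapshot("sx")) {
  switch (c.getType()) {
    case RENAME:                 // renamed: rename it back
      renameBack(c.getCurrentPath(), c.getPathInSnapshot());
      break;
    case CREATE:                 // created after sx: delete it
      deleteFromTarget(c.getCurrentPath());
      break;
    case MODIFY:                 // modified: put it on the copy list
      copyList.add(c.getPathInSnapshot());
      break;
  }
}
runDistcpFromSnapshot("sx", copyList); // copy from snapshot sx to restore
{code}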



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-4580) 0.95 site build failing with 'maven-project-info-reports-plugin: Could not find goal 'dependency-info''

2017-06-30 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-4580.

Resolution: Duplicate

> 0.95 site build failing with 'maven-project-info-reports-plugin: Could not 
> find goal 'dependency-info''
> ---
>
> Key: HDFS-4580
> URL: https://issues.apache.org/jira/browse/HDFS-4580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: stack
>Assignee: Andras Bokor
>
> Our report plugin is 2.4.  Says that 'dependency-info' is new since 2.5 on 
> the mvn report page:
> project-info-reports:dependency-info (new in 2.5>) is used to generate code 
> snippets to be added to build tools.
> http://maven.apache.org/plugins/maven-project-info-reports-plugin/
> Let me try upgrading our reports plugin.  I tried reproducing locally running 
> same mvn version but it just works for me.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10654) Move building of httpfs dependency analysis under "docs" profile

2017-05-12 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-10654.
-
Resolution: Duplicate

> Move building of httpfs dependency analysis under "docs" profile
> 
>
> Key: HDFS-10654
> URL: https://issues.apache.org/jira/browse/HDFS-10654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10654.001.patch
>
>
> When built with "-Pdist" but not "-Pdocs", httpfs still generates a 
> share/docs directory since the dependency report is run unconditionally. 
> Let's move it under the "docs" profile like the rest of the site.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10654) Move building of httpfs dependency analysis under "docs" profile

2017-05-12 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007800#comment-16007800
 ] 

Andras Bokor commented on HDFS-10654:
-

Fixed by HADOOP-14401. Closing.

> Move building of httpfs dependency analysis under "docs" profile
> 
>
> Key: HDFS-10654
> URL: https://issues.apache.org/jira/browse/HDFS-10654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10654.001.patch
>
>
> When built with "-Pdist" but not "-Pdocs", httpfs still generates a 
> share/docs directory since the dependency report is run unconditionally. 
> Let's move it under the "docs" profile like the rest of the site.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11810) Calling maven-site-plugin directly for docs profile is unnecessary

2017-05-11 Thread Andras Bokor (JIRA)
Andras Bokor created HDFS-11810:
---

 Summary: Calling maven-site-plugin directly for docs profile is 
unnecessary
 Key: HDFS-11810
 URL: https://issues.apache.org/jira/browse/HDFS-11810
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor
Priority: Minor


For a few modules:
* hadoop-auth
* hadoop-kms
* hadoop-hdfs-httpfs
* hadoop-sls

we call {{maven-site-plugin}} directly when the docs profile is active.
In the main pom we use {{excludeDefaults}} in the reporting section and allow 
only the javadoc and dependency plugins for the report. Since the javadoc 
plugin is set to {{inherited}} false, it won't be called on individual child 
modules, so {{maven-dependency-plugin:analyze-report}} is actually the only 
additional goal that will run.
I debugged the process with the {{mvn clean package -DskipTests 
-Dmaven.javadoc.skip=true -DskipShade -Pdocs -X}} command, and in all 4 
affected modules I found the following configuration for the site plugin:
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.10</version>
  <reportSets>
    <reportSet>
      <id>default</id>
      <reports>
        <report>analyze-report</report>
      </reports>
    </reportSet>
  </reportSets>
</plugin>
{code}

At this point I do not see the purpose of calling {{maven-site-plugin}} for 
the docs profile. It does not produce useful information. And if it does, why 
don't we call it for the other modules? It's inconsistent.
I am considering removing it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10654) Move building of httpfs dependency analysis under "docs" profile

2017-05-11 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006151#comment-16006151
 ] 

Andras Bokor commented on HDFS-10654:
-

[~andrew.wang],

I believe this will no longer be an issue once HADOOP-14401 is approved by the 
community; after that, this ticket can be resolved.
Do you mind if I assign it to myself so it doesn't get forgotten?

> Move building of httpfs dependency analysis under "docs" profile
> 
>
> Key: HDFS-10654
> URL: https://issues.apache.org/jira/browse/HDFS-10654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-10654.001.patch
>
>
> When built with "-Pdist" but not "-Pdocs", httpfs still generates a 
> share/docs directory since the dependency report is run unconditionally. 
> Let's move it under the "docs" profile like the rest of the site.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-4580) 0.95 site build failing with 'maven-project-info-reports-plugin: Could not find goal 'dependency-info''

2017-05-03 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-4580:
--

Assignee: Andras Bokor

> 0.95 site build failing with 'maven-project-info-reports-plugin: Could not 
> find goal 'dependency-info''
> ---
>
> Key: HDFS-4580
> URL: https://issues.apache.org/jira/browse/HDFS-4580
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: stack
>Assignee: Andras Bokor
>
> Our report plugin is 2.4.  Says that 'dependency-info' is new since 2.5 on 
> the mvn report page:
> project-info-reports:dependency-info (new in 2.5>) is used to generate code 
> snippets to be added to build tools.
> http://maven.apache.org/plugins/maven-project-info-reports-plugin/
> Let me try upgrading our reports plugin.  I tried reproducing locally running 
> same mvn version but it just works for me.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9066) expose truncate via webhdfs

2017-04-28 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-9066.

  Resolution: Duplicate
Target Version/s:   (was: )

> expose truncate via webhdfs
> ---
>
> Key: HDFS-9066
> URL: https://issues.apache.org/jira/browse/HDFS-9066
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>
> Truncate should be exposed to WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11707) TestDirectoryScanner#testThrottling fails on OSX

2017-04-26 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15985626#comment-15985626
 ] 

Andras Bokor commented on HDFS-11707:
-

2.5 GHz Intel Core i7
OS X 10.11.3

{code}mvn clean test -Dtest=TestDirectoryScanner -q

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.606 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0{code}

> TestDirectoryScanner#testThrottling fails on OSX
> 
>
> Key: HDFS-11707
> URL: https://issues.apache.org/jira/browse/HDFS-11707
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Priority: Minor
>
> In branch-2 and trunk, {{TestDirectoryScanner#testThrottling}} consistently 
> fails on OS X (I'm running 10.11 specifically) with:
> {code}
> java.lang.AssertionError: Throttle is too permissive
> {code}
> It seems to work alright on Unix systems.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11707) TestDirectoryScanner#testThrottling fails on OSX

2017-04-26 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15985605#comment-15985605
 ] 

Andras Bokor commented on HDFS-11707:
-

I cannot reproduce this with the same OS version. Does it fail consistently for you?

> TestDirectoryScanner#testThrottling fails on OSX
> 
>
> Key: HDFS-11707
> URL: https://issues.apache.org/jira/browse/HDFS-11707
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Priority: Minor
>
> In branch-2 and trunk, {{TestDirectoryScanner#testThrottling}} consistently 
> fails on OS X (I'm running 10.11 specifically) with:
> {code}
> java.lang.AssertionError: Throttle is too permissive
> {code}
> It seems to work alright on Unix systems.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7850) distribute-excludes and refresh-namenodes update to new shell framework

2017-04-24 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15981003#comment-15981003
 ] 

Andras Bokor commented on HDFS-7850:


[~aw],

What do you mean by "use new shell framework"?

> distribute-excludes and refresh-namenodes update to new shell framework
> ---
>
> Key: HDFS-7850
> URL: https://issues.apache.org/jira/browse/HDFS-7850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>
> These need to get updated to use new shell framework.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5567) CacheAdmin operations not supported with viewfs

2017-04-21 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5567.

  Resolution: Not A Bug
Target Version/s:   (was: )

> CacheAdmin operations not supported with viewfs
> ---
>
> Key: HDFS-5567
> URL: https://issues.apache.org/jira/browse/HDFS-5567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>
> On a federated cluster with viewfs configured, we'll run into the following 
> error when using CacheAdmin commands:
> {code}
> bash-4.1$ hdfs cacheadmin -listPools
> Exception in thread "main" java.lang.IllegalArgumentException: FileSystem 
> viewfs://cluster3/ is not an HDFS file system
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2017-04-21 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-8802.

Resolution: Duplicate

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Andras Bokor
> Attachments: HDFS-8802_01.patch, HDFS-8802_02.patch, HDFS-8802.patch
>
>
> It's a good timing to check other configurations about hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2017-04-21 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905274#comment-15905274
 ] 

Andras Bokor edited comment on HDFS-8802 at 4/21/17 1:07 PM:
-

Fixed by HDFS-8356.


was (Author: boky01):
Fixed by HDFS-8356.
I am not sure about the resolution. Duplicate, maybe?

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Andras Bokor
> Attachments: HDFS-8802_01.patch, HDFS-8802_02.patch, HDFS-8802.patch
>
>
> It's a good timing to check other configurations about hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-5567) CacheAdmin operations not supported with viewfs

2017-03-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-5567:
--

Assignee: Andras Bokor  (was: Colin P. McCabe)

> CacheAdmin operations not supported with viewfs
> ---
>
> Key: HDFS-5567
> URL: https://issues.apache.org/jira/browse/HDFS-5567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>
> On a federated cluster with viewfs configured, we'll run into the following 
> error when using CacheAdmin commands:
> {code}
> bash-4.1$ hdfs cacheadmin -listPools
> Exception in thread "main" java.lang.IllegalArgumentException: FileSystem 
> viewfs://cluster3/ is not an HDFS file system
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2017-03-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-8802:
--

Assignee: Andras Bokor  (was: Gururaj Shetty)

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Andras Bokor
> Attachments: HDFS-8802_01.patch, HDFS-8802_02.patch, HDFS-8802.patch
>
>
> It's a good timing to check other configurations about hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2017-03-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905274#comment-15905274
 ] 

Andras Bokor commented on HDFS-8802:


Fixed by HDFS-8356.
I am not sure about the resolution. Duplicate, maybe?

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802_01.patch, HDFS-8802_02.patch, HDFS-8802.patch
>
>
> It's a good timing to check other configurations about hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-5567) CacheAdmin operations not supported with viewfs

2017-03-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905213#comment-15905213
 ] 

Andras Bokor edited comment on HDFS-5567 at 3/10/17 3:26 PM:
-

CacheAdmin commands work with the -fs option.
For commands where {{AdminHelper}} is used, the help page misses the -fs 
option (since generic options are not printed).
If there is no objection I will resolve this.
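
For reference, a concrete invocation with the generic option (the namenode URI 
below is a placeholder):
{code}
hdfs cacheadmin -fs hdfs://nn1.example.com:8020 -listPools
{code}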


was (Author: boky01):
CacheAdmin commands work with -fs option.
Commands where {{AdminHelper}} is used the help page misses the -fs option 
(since generic options are not printed).
Can we close it?

> CacheAdmin operations not supported with viewfs
> ---
>
> Key: HDFS-5567
> URL: https://issues.apache.org/jira/browse/HDFS-5567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Colin P. McCabe
>
> On a federated cluster with viewfs configured, we'll run into the following 
> error when using CacheAdmin commands:
> {code}
> bash-4.1$ hdfs cacheadmin -listPools
> Exception in thread "main" java.lang.IllegalArgumentException: FileSystem 
> viewfs://cluster3/ is not an HDFS file system
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5567) CacheAdmin operations not supported with viewfs

2017-03-10 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905213#comment-15905213
 ] 

Andras Bokor commented on HDFS-5567:


CacheAdmin commands work with the -fs option.
For commands where {{AdminHelper}} is used, the help page misses the -fs 
option (since generic options are not printed).
Can we close it?

> CacheAdmin operations not supported with viewfs
> ---
>
> Key: HDFS-5567
> URL: https://issues.apache.org/jira/browse/HDFS-5567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Colin P. McCabe
>
> On a federated cluster with viewfs configured, we'll run into the following 
> error when using CacheAdmin commands:
> {code}
> bash-4.1$ hdfs cacheadmin -listPools
> Exception in thread "main" java.lang.IllegalArgumentException: FileSystem 
> viewfs://cluster3/ is not an HDFS file system
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.getDFS(CacheAdmin.java:96)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin.access$100(CacheAdmin.java:50)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:748)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11229) HDFS-11056 failed to close meta file

2016-12-19 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-11229:

Release Note: The fix for HDFS-11056 reads the meta file to load the last 
partial chunk checksum when a block is converted from finalized/temporary to 
rbw. However, it did not close the file explicitly, which may cause the number 
of open files to reach the system limit. This jira fixes it by closing the 
file explicitly after the meta file is read.  (was: The fix for HDFS-111056 
reads meta file to load last partial chunk checksum when a block is converted 
from finalized/temporary to rbw. However, it did not close the file 
explicitly, which may cause number of open files reaching system limit. This 
jira fixes it by closing the file explicitly after the meta file is read.)

> HDFS-11056 failed to close meta file
> 
>
> Key: HDFS-11229
> URL: https://issues.apache.org/jira/browse/HDFS-11229
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.4, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11229.001.patch, HDFS-11229.branch-2.patch
>
>
> The following code failed to close the file after it is read.
> {code:title=FsVolumeImpl#loadLastPartialChunkChecksum}
> RandomAccessFile raf = new RandomAccessFile(metaFile, "r");
> raf.seek(offsetInChecksum);
> raf.read(lastChecksum, 0, checksumSize);
> return lastChecksum;
> {code}
> This must be fixed because every append operation uses this piece of code. 
> Without an explicit close, open files can reach system limit before 
> RandomAccessFile objects are garbage collected.
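
The shape of such a fix is simply to scope the file handle, e.g. with 
try-with-resources (a sketch, not the committed patch; the variables are the 
ones from the snippet above):
{code:java}
byte[] lastChecksum = new byte[checksumSize];
try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
  raf.seek(offsetInChecksum);
  raf.read(lastChecksum, 0, checksumSize);
} // raf is closed here even if seek() or read() throws
return lastChecksum;
{code}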



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6262) HDFS doesn't raise FileNotFoundException if the source of a rename() is missing

2016-11-29 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-6262:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Closing this as "Won't Fix". Please check the [related 
comment|https://issues.apache.org/jira/browse/HDFS-303?focusedCommentId=15677026=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677026].

> HDFS doesn't raise FileNotFoundException if the source of a rename() is 
> missing
> ---
>
> Key: HDFS-6262
> URL: https://issues.apache.org/jira/browse/HDFS-6262
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6262.2.patch, HDFS-6262.patch
>
>
> HDFS's {{rename(src, dest)}} returns false if src does not exist -all the 
> other filesystems raise {{FileNotFoundException}}
> This behaviour is defined in {{FSDirectory.unprotectedRenameTo()}} -the 
> attempt is logged, but the operation then just returns false.
> I propose changing the behaviour of {{DistributedFileSystem}} to be the same 
> as that of the others -and of {{FileContext}}, which does reject renames with 
> nonexistent sources
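
Schematically, the divergence described above (paths are placeholders):
{code:java}
// src does not exist:
boolean renamed = fs.rename(new Path("/no/such/src"), new Path("/tmp/dst"));
// DistributedFileSystem: returns false (the attempt is only logged)
// Other FileSystems and FileContext: FileNotFoundException is raised instead
{code}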



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-303) Make contracts of LocalFileSystem and DistributedFileSystem consistent

2016-11-29 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-303.
---
Resolution: Done

> Make contracts of LocalFileSystem and DistributedFileSystem consistent
> --
>
> Key: HDFS-303
> URL: https://issues.apache.org/jira/browse/HDFS-303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tom White
>Assignee: Andras Bokor
> Attachments: HDFS-303-common-test-case.patch, HDFS-303.patch, 
> hadoop-4114.patch
>
>
> There are a number of edge cases that the two file system implementations 
> handle differently. In particular:
> * When trying to make a directory under an existing file, HDFS throws an 
> IOException while LocalFileSystem doesn't.
> * The FileSytem#listStatus(Path) method returns null for a non-existent file 
> on HDFS, while LocalFileSytem returns an empty FileStatus array.
> * When trying to rename a non-existent path, LocalFileSystem throws an 
> IOException, while HDFS returns false.
> * When renaming a file or directory to a non-existent directory (e.g. /a/b to 
> /c/d, where /c doesn't exist) LocalFileSystem succeeds (returns true) while 
> HDFS fails (false).
> * When renaming a file (or directory) as an existing file (or directory) 
> LocalFileSystem succeeds (returns true) while HDFS fails (false).
> We should document the expected behaviour for these cases in FileSystem's 
> javadoc, and make sure all implementations conform to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-303) Make contracts of LocalFileSystem and DistributedFileSystem consistent

2016-11-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677026#comment-15677026
 ] 

Andras Bokor commented on HDFS-303:
---

bq. When trying to make a directory under an existing file, HDFS throws an 
IOException while LocalFileSystem doesn't.
HADOOP-6229 solves this
bq. The FileSytem#listStatus(Path) method returns null for a non-existent file 
on HDFS, while LocalFileSytem returns an empty FileStatus array.
In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
throw FileNotFoundException instead of returning null, when the target 
directory does not exist.
In addition, in {{DistributedFileSystem}}:
{code:title=DistributedFileSystem.java:862}
if (thisListing == null) { // the directory does not exist
  throw new FileNotFoundException("File " + p + " does not exist.");
}
{code}
In {{RawLocalFileSystem}}:
{code}
if (!localf.exists()) {
  throw new FileNotFoundException("File " + f + " does not exist");
}
{code}
So it seems HDFS throws the exception too.
bq. When trying to rename a non-existent path, LocalFileSystem throws an 
IOException, while HDFS returns false.
This is HDFS-6262. I wrote a comment there saying that I think that ticket 
should be closed as "Won't Fix".
bq. When renaming a file or directory to a non-existent directory (e.g. /a/b to 
/c/d, where /c doesn't exist) LocalFileSystem succeeds (returns true) while 
HDFS fails (false).
I think it is the same as was discussed in HADOOP-13082
bq. When renaming a file (or directory) as an existing file (or directory) 
LocalFileSystem succeeds (returns true) while HDFS fails (false).
The discussion in HADOOP-13082 covers this as well.
Based on this I think the ticket can be closed. Do you have any objection?

> Make contracts of LocalFileSystem and DistributedFileSystem consistent
> --
>
> Key: HDFS-303
> URL: https://issues.apache.org/jira/browse/HDFS-303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tom White
>Assignee: Andras Bokor
> Attachments: HDFS-303-common-test-case.patch, HDFS-303.patch, 
> hadoop-4114.patch
>
>
> There are a number of edge cases that the two file system implementations 
> handle differently. In particular:
> * When trying to make a directory under an existing file, HDFS throws an 
> IOException while LocalFileSystem doesn't.
> * The FileSytem#listStatus(Path) method returns null for a non-existent file 
> on HDFS, while LocalFileSytem returns an empty FileStatus array.
> * When trying to rename a non-existent path, LocalFileSystem throws an 
> IOException, while HDFS returns false.
> * When renaming a file or directory to a non-existent directory (e.g. /a/b to 
> /c/d, where /c doesn't exist) LocalFileSystem succeeds (returns true) while 
> HDFS fails (false).
> * When renaming a file (or directory) as an existing file (or directory) 
> LocalFileSystem succeeds (returns true) while HDFS fails (false).
> We should document the expected behaviour for these cases in FileSystem's 
> javadoc, and make sure all implementations conform to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6262) HDFS doesn't raise FileNotFoundException if the source of a rename() is missing

2016-08-02 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15404338#comment-15404338
 ] 

Andras Bokor commented on HDFS-6262:


[~ajisakaa],

Any update on this ticket? Will it be closed or fixed?

My thoughts:
Based on my previous experience with rename-related issues (HADOOP-13082), it 
seems the ecosystem strongly depends on the current behavior of the rename 
methods, even though they are not consistent. Changing the rename behavior can 
definitely break compatibility (another discussion is HDFS-10385, which I have 
just closed as Later). Please check the linked issues.

I am asking because I am going through the points of HDFS-303 and the 3rd point 
is covered by this ticket. 

Thanks in advance.

> HDFS doesn't raise FileNotFoundException if the source of a rename() is 
> missing
> ---
>
> Key: HDFS-6262
> URL: https://issues.apache.org/jira/browse/HDFS-6262
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6262.2.patch, HDFS-6262.patch
>
>
> HDFS's {{rename(src, dest)}} returns false if src does not exist -all the 
> other filesystems raise {{FileNotFoundException}}
> This behaviour is defined in {{FSDirectory.unprotectedRenameTo()}} -the 
> attempt is logged, but the operation then just returns false.
> I propose changing the behaviour of {{DistributedFileSystem}} to be the same 
> as that of the others -and of {{FileContext}}, which does reject renames with 
> nonexistent sources



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-303) Make contracts of LocalFileSystem and DistributedFileSystem consistent

2016-08-02 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-303 started by Andras Bokor.
-
> Make contracts of LocalFileSystem and DistributedFileSystem consistent
> --
>
> Key: HDFS-303
> URL: https://issues.apache.org/jira/browse/HDFS-303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tom White
>Assignee: Andras Bokor
> Attachments: HDFS-303-common-test-case.patch, HDFS-303.patch, 
> hadoop-4114.patch
>
>
> There are a number of edge cases that the two file system implementations 
> handle differently. In particular:
> * When trying to make a directory under an existing file, HDFS throws an 
> IOException while LocalFileSystem doesn't.
> * The FileSytem#listStatus(Path) method returns null for a non-existent file 
> on HDFS, while LocalFileSytem returns an empty FileStatus array.
> * When trying to rename a non-existent path, LocalFileSystem throws an 
> IOException, while HDFS returns false.
> * When renaming a file or directory to a non-existent directory (e.g. /a/b to 
> /c/d, where /c doesn't exist) LocalFileSystem succeeds (returns true) while 
> HDFS fails (false).
> * When renaming a file (or directory) as an existing file (or directory) 
> LocalFileSystem succeeds (returns true) while HDFS fails (false).
> We should document the expected behaviour for these cases in FileSystem's 
> javadoc, and make sure all implementations conform to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-303) Make contracts of LocalFileSystem and DistributedFileSystem consistent

2016-08-02 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-303:
-

Assignee: Andras Bokor

> Make contracts of LocalFileSystem and DistributedFileSystem consistent
> --
>
> Key: HDFS-303
> URL: https://issues.apache.org/jira/browse/HDFS-303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tom White
>Assignee: Andras Bokor
> Attachments: HDFS-303-common-test-case.patch, HDFS-303.patch, 
> hadoop-4114.patch
>
>
> There are a number of edge cases that the two file system implementations 
> handle differently. In particular:
> * When trying to make a directory under an existing file, HDFS throws an 
> IOException while LocalFileSystem doesn't.
> * The FileSytem#listStatus(Path) method returns null for a non-existent file 
> on HDFS, while LocalFileSytem returns an empty FileStatus array.
> * When trying to rename a non-existent path, LocalFileSystem throws an 
> IOException, while HDFS returns false.
> * When renaming a file or directory to a non-existent directory (e.g. /a/b to 
> /c/d, where /c doesn't exist) LocalFileSystem succeeds (returns true) while 
> HDFS fails (false).
> * When renaming a file (or directory) as an existing file (or directory) 
> LocalFileSystem succeeds (returns true) while HDFS fails (false).
> We should document the expected behaviour for these cases in FileSystem's 
> javadoc, and make sure all implementations conform to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10385) LocalFileSystem rename() function should return false when destination file exists

2016-07-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-10385.
-
Resolution: Later

> LocalFileSystem rename() function should return false when destination file 
> exists
> --
>
> Key: HDFS-10385
> URL: https://issues.apache.org/jira/browse/HDFS-10385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Aihua Xu
>Assignee: Xiaobing Zhou
>
> Currently rename() of LocalFileSystem returns true and renames successfully 
> when the destination file exists. That seems to be different behavior from 
> DFSFileSystem. 
> If they can have the same behavior, then we can use one call to do the rename 
> rather than checking whether the destination exists and then making the 
> rename() call.
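
The "one call" the description asks for would replace the defensive pattern 
callers need today; a sketch, assuming {{fs}} is an already-obtained FileSystem 
instance and the paths are illustrative:

{code}
Path src = new Path("/tmp/src");
Path dst = new Path("/tmp/dst");
// Today: an extra exists() round trip guards against LocalFileSystem
// silently renaming onto an existing destination.
if (fs.exists(dst)) {
  throw new IOException("Destination already exists: " + dst);
}
boolean renamed = fs.rename(src, dst);
// With consistent semantics, rename() alone would return false when dst
// exists, and the exists() probe could be dropped.
{code}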



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-07-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389169#comment-15389169
 ] 

Andras Bokor commented on HDFS-10287:
-

Thanks [~ajisakaa]

> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.
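
With this change, test code can manage the cluster's lifecycle via 
try-with-resources; a minimal sketch (the configuration and the path are 
illustrative):

{code}
Configuration conf = new HdfsConfiguration();
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build()) {
  cluster.waitActive();
  FileSystem fs = cluster.getFileSystem();
  fs.mkdirs(new Path("/test"));
} // close() shuts the cluster down automatically, even if the body throws
{code}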



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-07-21 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387658#comment-15387658
 ] 

Andras Bokor commented on HDFS-10287:
-

Test failures are unrelated.

> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-21 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387468#comment-15387468
 ] 

Andras Bokor commented on HDFS-10425:
-

Thanks a lot [~ajisakaa]!

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10425.01.patch, HDFS-10425.02.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-07-21 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10287:

Attachment: HDFS-10287.03.patch

Thanks [~ajisakaa],

HDFS-10375 removed a duplicated test that was modified by my patch.
I am uploading [^HDFS-10287.03.patch].

> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Attachment: HDFS-10425.02.patch

Patch 02. The first one was no longer applicable.

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch, HDFS-10425.02.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-07-04 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361992#comment-15361992
 ] 

Andras Bokor commented on HDFS-9353:


[~templedf],

Does my patch look good to you?

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-9353.01.patch
>
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from the ENV first.
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.
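
The precedence being debated is, in effect, the following (a simplified 
sketch; the environment variable and configuration key names are patterned on 
the key store provider's constants but should be treated as illustrative):

{code}
// ENV wins: if the password is present in the environment, use it;
// only otherwise fall back to the password file named in the conf.
String password = System.getenv("HADOOP_KEYSTORE_PASSWORD");
if (password == null || password.isEmpty()) {
  String pwFile = conf.get(
      "hadoop.security.keystore.java-keystore-provider.password-file");
  // ... read the password from pwFile ...
}
{code}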



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-07-04 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-9353:
---
Status: Patch Available  (was: Reopened)

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-9353.01.patch
>
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from the ENV first.
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-07-04 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-9353:
---
Attachment: HDFS-9353.01.patch

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-9353.01.patch
>
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from the ENV first.
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-02 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313356#comment-15313356
 ] 

Andras Bokor commented on HDFS-5059:


Thanks [~ajisakaa] for the correction.

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create and delete a snapshot but accidentally 
> specify a non-existent directory to snapshot, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HDFS-9353:


[~templedf]

Thanks for taking a look at it.
First of all, a note:
I checked it again. This comment is no longer in the JavaKeyStoreProvider class; 
the related method was moved to ProviderUtils by HADOOP-13157.

Back to the base problem:
I checked the code and it seems pretty straightforward. If you think the comment 
is misleading, I would rather remove it; I do not feel we should explain this 
with in-line comments. Alternatively, we can add more explanation in the javadoc 
comment instead of inline.

What do you think?

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from the ENV first.
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-9353.

Resolution: Not A Problem

I checked with my team. We agreed that the comment is ok and no change is 
needed.

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from the ENV first.
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5059.

Resolution: Fixed

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create and delete a snapshot but accidentally 
> specify a non-existent directory to snapshot, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-10425:
---

Assignee: Andras Bokor

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-26 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Status: Open  (was: Patch Available)

Resubmit patch to kick Hadoop QA

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-26 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Status: Patch Available  (was: Open)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-24 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298287#comment-15298287
 ] 

Andras Bokor commented on HDFS-10430:
-

cc: [~cnauroth]. He added this method. He may be able to share some more 
thoughts about this.

> Refactor FileSystem#checkAccessPermissions for better reuse from tests
> --
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's in hadoop-common, which is not visible in some 
> cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10385) LocalFileSystem rename() function should return false when destination file exists

2016-05-24 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298267#comment-15298267
 ] 

Andras Bokor commented on HDFS-10385:
-

Who can/should resolve this?

[~cnauroth],
I have a similar one: HADOOP-9819 also changes a rename behavior, but that one 
seems to be a real bug.
Could you please check? I am unsure whether we should make that change or not.

> LocalFileSystem rename() function should return false when destination file 
> exists
> --
>
> Key: HDFS-10385
> URL: https://issues.apache.org/jira/browse/HDFS-10385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Aihua Xu
>Assignee: Xiaobing Zhou
>
> Currently rename() of LocalFileSystem returns true and renames successfully 
> when the destination file exists. That seems to be different behavior from 
> DFSFileSystem. 
> If they can have the same behavior, then we can use one call to do the rename 
> rather than checking whether the destination exists and then making the 
> rename() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-20 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293342#comment-15293342
 ] 

Andras Bokor commented on HDFS-10430:
-

I got your point.
As far as I understand, the {{InterfaceAudience}} annotation informs callers 
whether the method/class can/should be used from outside of the project.
If we add a public counterpart, syntactically we do not break the 
{{InterfaceAudience}} rules, but semantically we still call an 
{{InterfaceAudience.Private}} method.
Please check HDFS-6570. As far as I understand, 
{{FileSystem#checkAccessPermissions}} was made private due to some security 
issues.
Even if we add a counterpart to the class, we should consider using the 
@VisibleForTesting annotation.
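
One shape such a counterpart could take (a hypothetical sketch; the class and 
method names, and the grossly simplified permission check, are illustrative 
rather than the actual hadoop-common code):

{code}
import com.google.common.annotations.VisibleForTesting;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

class AccessCheckUtil {
  // Package-private production entry point, analogous to the existing
  // FileSystem#checkAccessPermissions.
  static void checkAccess(FileStatus stat, FsAction mode)
      throws AccessControlException {
    // Simplified: only the owner bits are consulted here.
    if (!stat.getPermission().getUserAction().implies(mode)) {
      throw new AccessControlException("Permission denied: " + stat.getPath());
    }
  }

  // Public counterpart, annotated so callers know it exists only for tests.
  @VisibleForTesting
  public static void checkAccessForTest(FileStatus stat, FsAction mode)
      throws AccessControlException {
    checkAccess(stat, mode);
  }
}
{code}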

> Refactor FileSystem#checkAccessPermissions for better reuse from tests
> --
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's in hadoop-common, which is not visible in some 
> cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-05-19 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292190#comment-15292190
 ] 

Andras Bokor commented on HDFS-10287:
-

[~ozawa] Can you help me with this? I could use it in one of the issues.

> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10385) LocalFileSystem rename() function should return false when destination file exists

2016-05-19 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291630#comment-15291630
 ] 

Andras Bokor commented on HDFS-10385:
-

Previously, I had a discussion with [~cnauroth] and [~ste...@apache.org] about 
changing rename behaviors, and that JIRA was finally resolved as "Later". 
Please check HADOOP-13082 for details. Even if we change the behaviors, it 
should be part of a bigger and more complex change because there are a lot of 
differences between rename behaviors across the different FS implementations. 
Based on that, this one should be closed as well. What do you guys think?

> LocalFileSystem rename() function should return false when destination file 
> exists
> --
>
> Key: HDFS-10385
> URL: https://issues.apache.org/jira/browse/HDFS-10385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Aihua Xu
>Assignee: Xiaobing Zhou
>
> Currently rename() of LocalFileSystem returns true and renames successfully 
> when the destination file exists. That seems to be different behavior from 
> DFSFileSystem. 
> If they can have the same behavior, then we can use one call to do the rename 
> rather than checking whether the destination exists and then making the 
> rename() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290009#comment-15290009
 ] 

Andras Bokor commented on HDFS-10430:
-

The access control modifier of the method is the default (package-private, i.e. 
visible only within the package). Changing it to public would make it visible 
from everywhere hadoop-common is available.
What do you mean here? Change it to public?

> Refactor FileSystem#checkAccessPermissions for better reuse from tests
> --
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's in hadoop-common, which is not visible in some 
> cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

  Assignee: (was: Andras Bokor)
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-1073)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289852#comment-15289852
 ] 

Andras Bokor commented on HDFS-2173:


Thanks a lot [~andrew.wang].

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Fix For: 2.9.0
>
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.02.patch, HDFS-2173.03.patch, HDFS-2173.04.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> never causes data loss, and would rarely occur in practice (the dir would 
> have to fail between writing the fsimage file and writing VERSION)
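
The shape of the fix is to contain the failure per storage directory instead 
of letting the IOException escape from saveNamespace; a sketch of the loop 
inside NNStorage (the calls follow the existing Storage/NNStorage API, but 
treat the exact names as illustrative):

{code}
for (Iterator<StorageDirectory> it = dirIterator(); it.hasNext();) {
  StorageDirectory sd = it.next();
  try {
    writeProperties(sd);           // write this directory's VERSION file
  } catch (IOException e) {
    LOG.error("Error writing VERSION file to " + sd.getRoot(), e);
    reportErrorsOnDirectory(sd);   // mark only this directory as failed
  }
}
// saveNamespace succeeds as long as at least one directory stays healthy.
{code}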



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Attachment: HDFS-10425.01.patch

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Status: Patch Available  (was: Open)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Affects Version/s: (was: 0.23.0)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Priority: Trivial  (was: Major)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Target Version/s:   (was: 2.8.0)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Fix Version/s: (was: 0.23.0)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Description: Since I was working with the NNStorage and TestSaveNamespace 
classes, it is a good time to take care of IDE and checkstyle warnings.  (was: 
Since I )

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 0.23.0
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Description: Since I   (was: This JIRA tracks a TODO in TestSaveNamespace. 
Currently, if, while writing the VERSION files in the storage directories, one 
of the directories fails, the entire operation throws IOE. This is unnecessary 
-- instead, just that directory should be marked as failed.

This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
never causes data loss, and would rarely occur in practice (the dir would have 
to fail between writing the fsimage file and writing VERSION))

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 0.23.0
>
>
> Since I 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)
Andras Bokor created HDFS-10425:
---

 Summary: Clean up NNStorage and TestSaveNamespace
 Key: HDFS-10425
 URL: https://issues.apache.org/jira/browse/HDFS-10425
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 0.23.0
Reporter: Andras Bokor
Assignee: Andras Bokor


This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing the 
VERSION files in the storage directories, one of the directories fails, the 
entire operation throws IOE. This is unnecessary -- instead, just that 
directory should be marked as failed.

This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
never causes data loss, and would rarely occur in practice (the dir would have 
to fail between writing the fsimage file and writing VERSION)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


