[jira] [Updated] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-12101:
--
Status: Patch Available  (was: Open)

> DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for 
> renames under a file
> 
>
> Key: HDFS-12101
> URL: https://issues.apache.org/jira/browse/HDFS-12101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14630-001.patch, HADOOP-14630-002.patch
>
>
> HADOOP-14630 adds some contract tests that try to create files or rename files 
> *under other files*.
> On a rename under an existing file (or a dir under an existing file), HDFS 
> fails by throwing 
> {{org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException)}}.
>  
> # Is throwing an exception here what people agree is the correct behaviour? 
> If so, it can go into the filesystem spec, with tests set up to expect it and 
> object stores tweaked for consistency. If not, HDFS needs a change.
> # At the very least, HDFS should be unwrapping the exception.
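On point 2, a minimal sketch of the unwrapping, assuming it lands in the client-side rename path; the wrapper class and the exception list below are illustrative, not the actual {{DFSClient}} code:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.ParentNotDirectoryException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.ipc.RemoteException;

class RenameUnwrapSketch {
  private final ClientProtocol namenode;

  RenameUnwrapSketch(ClientProtocol namenode) {
    this.namenode = namenode;
  }

  // Illustrative rename path: surface the wrapped ParentNotDirectoryException
  // instead of the raw RemoteException raised by the NameNode RPC.
  void rename(String src, String dst, Options.Rename... options)
      throws IOException {
    try {
      namenode.rename2(src, dst, options);
    } catch (RemoteException re) {
      // unwrapRemoteException() returns the wrapped exception when it matches
      // one of the listed classes, otherwise the RemoteException itself.
      throw re.unwrapRemoteException(
          ParentNotDirectoryException.class,
          FileAlreadyExistsException.class);
    }
  }
}
{code}

Callers would then see a plain {{ParentNotDirectoryException}}, which the contract tests (and the filesystem spec, if the behaviour is ratified) can assert on.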






[jira] [Updated] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-12101:
--
Attachment: HADOOP-14630-002.patch

> DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for 
> renames under a file
> 
>
> Key: HDFS-12101
> URL: https://issues.apache.org/jira/browse/HDFS-12101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14630-001.patch, HADOOP-14630-002.patch
>
>
> HADOOP-14630 adds some contract tests that try to create files or rename files 
> *under other files*.
> On a rename under an existing file (or a dir under an existing file), HDFS 
> fails by throwing 
> {{org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException)}}.
>  
> # Is throwing an exception here what people agree is the correct behaviour? 
> If so, it can go into the filesystem spec, with tests set up to expect it and 
> object stores tweaked for consistency. If not, HDFS needs a change.
> # At the very least, HDFS should be unwrapping the exception.






[jira] [Updated] (HDFS-12833) In Distcp, the Delete option does not have a proper usage message.

2017-11-24 Thread usharani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

usharani updated HDFS-12833:

Attachment: HDFS-12833.001.patch

Thanks [~surendrasingh] for the review. Attached the updated patch.
Please review.





> In Distcp, the Delete option does not have a proper usage message.
> -
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the Delete option is applicable only with the update or overwrite 
> options. I tried it as per the usage message and got the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] <target_path>
>   OPTIONS
>  -append                Reuse existing data in target files and
>                         append new data to them if possible
>  -async                 Should distcp execution be blocking
>  -atomic                Commit all changes or none
>  -bandwidth <arg>       Specify bandwidth per map in MB, accepts
>                         bandwidth as a fraction.
>  -blocksperchunk <arg>  If set to a positive value, files with more
>                         blocks than this value will be split into
>                         chunks of <blocksperchunk> blocks to be
>                         transferred in parallel, and reassembled on
>                         the destination. By default,
>                         <blocksperchunk> is 0 and the files will be
>                         transmitted in their entirety without
>                         splitting. This switch is only applicable
>                         when the source file system implements
>                         getBlockLocations method and the target
>                         file system implements concat method
>  -copybuffersize <arg>  Size of the copy buffer to use. By default
>                         <copybuffersize> is 8192B.
>  -delete                Delete from target, files missing in source
>  -diff <arg>            Use snapshot diff report to identify the
>                         difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.
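For reference, a sketch of an invocation the validator accepts, pairing {{-delete}} with {{-update}} (paths are the ones from the reproduction above):

{noformat}
bin:> ./hadoop distcp -update -delete /Dir1/distcpdir /Dir/distcpdir5
{noformat}

This is the restriction the usage string for {{-delete}} should state explicitly.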






[jira] [Resolved] (HDFS-3638) backport HDFS-3568 (add security to fuse_dfs) to branch-1

2017-11-24 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-3638.

Resolution: Won't Fix

Branch-1 is EoL.

> backport HDFS-3568 (add security to fuse_dfs) to branch-1
> -
>
> Key: HDFS-3638
> URL: https://issues.apache.org/jira/browse/HDFS-3638
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 1.1.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Minor
>
> Backport HDFS-3568 to branch-1.  This will give fuse_dfs support for Kerberos 
> authentication, allowing FUSE to be used in a secure cluster.






[jira] [Updated] (HDFS-12807) Ozone: Expose RocksDB stats via JMX for Ozone metadata stores

2017-11-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12807:

Status: Patch Available  (was: In Progress)

Rechecking with Jenkins.

> Ozone: Expose RocksDB stats via JMX for Ozone metadata stores
> 
>
> Key: HDFS-12807
> URL: https://issues.apache.org/jira/browse/HDFS-12807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Elek, Marton
> Attachments: HDFS-12807-HDFS-7240.001.patch, 
> HDFS-12807-HDFS-7240.002.patch, HDFS-12807-HDFS-7240.003.patch
>
>
> RocksDB JNI has an option to expose stats, which can be further surfaced to 
> graphs and monitoring applications. We should expose them from our RocksDB 
> metadata store implementation for troubleshooting metadata-related 
> performance issues.
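A minimal sketch of the idea, assuming the RocksDB Java API ({{org.rocksdb}}) and Hadoop's {{MBeans}} helper; the MXBean interface and class names here are illustrative, not the actual patch:

{code}
import org.apache.hadoop.metrics2.util.MBeans;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.Statistics;
import org.rocksdb.TickerType;

// Illustrative MXBean contract for the counters we want to publish.
interface RocksDBStoreMXBean {
  long getBlockCacheMissCount();
}

public class RocksDBStore implements RocksDBStoreMXBean {
  private final Statistics statistics = new Statistics();
  private RocksDB db;

  public void open(String path) throws Exception {
    // Attach a Statistics object so RocksDB maintains its ticker counters.
    Options options = new Options()
        .setCreateIfMissing(true)
        .setStatistics(statistics);
    db = RocksDB.open(options, path);
    // Register under JMX so monitoring tools can scrape the counters.
    MBeans.register("OzoneMetadataStore", "RocksDBStore", this);
  }

  @Override
  public long getBlockCacheMissCount() {
    return statistics.getTickerCount(TickerType.BLOCK_CACHE_MISS);
  }
}
{code}

From there, each ticker of interest becomes one read-only MXBean attribute.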






[jira] [Commented] (HDFS-12833) In Distcp, the Delete option does not have a proper usage message.

2017-11-24 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265151#comment-16265151
 ] 

Surendra Singh Lilhore commented on HDFS-12833:
---

Thanks [~peruguusha] for the patch.

Minor comment:
Please add a space between source and delete in {{DistCpOptionSwitch.java}}, 
and do the same in {{DistCp.md.vm}} for {{enable.Delete}}.

> In Distcp, the Delete option does not have a proper usage message.
> -
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833.patch
>
>
> Basically, the Delete option is applicable only with the update or overwrite 
> options. I tried it as per the usage message and got the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] <target_path>
>   OPTIONS
>  -append                Reuse existing data in target files and
>                         append new data to them if possible
>  -async                 Should distcp execution be blocking
>  -atomic                Commit all changes or none
>  -bandwidth <arg>       Specify bandwidth per map in MB, accepts
>                         bandwidth as a fraction.
>  -blocksperchunk <arg>  If set to a positive value, files with more
>                         blocks than this value will be split into
>                         chunks of <blocksperchunk> blocks to be
>                         transferred in parallel, and reassembled on
>                         the destination. By default,
>                         <blocksperchunk> is 0 and the files will be
>                         transmitted in their entirety without
>                         splitting. This switch is only applicable
>                         when the source file system implements
>                         getBlockLocations method and the target
>                         file system implements concat method
>  -copybuffersize <arg>  Size of the copy buffer to use. By default
>                         <copybuffersize> is 8192B.
>  -delete                Delete from target, files missing in source
>  -diff <arg>            Use snapshot diff report to identify the
>                         difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.






[jira] [Updated] (HDFS-12807) Ozone: Expose RocksDB stats via JMX for Ozone metadata stores

2017-11-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12807:

Attachment: HDFS-12807-HDFS-7240.003.patch

> Ozone: Expose RocksDB stats via JMX for Ozone metadata stores
> 
>
> Key: HDFS-12807
> URL: https://issues.apache.org/jira/browse/HDFS-12807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Elek, Marton
> Attachments: HDFS-12807-HDFS-7240.001.patch, 
> HDFS-12807-HDFS-7240.002.patch, HDFS-12807-HDFS-7240.003.patch
>
>
> RocksDB JNI has an option to expose stats, which can be further surfaced to 
> graphs and monitoring applications. We should expose them from our RocksDB 
> metadata store implementation for troubleshooting metadata-related 
> performance issues.






[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-11-24 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265126#comment-16265126
 ] 

Elek, Marton commented on HDFS-12799:
-

Thanks [~vagarychen] for the hints:

1. I rebased the patch on top of the latest HDFS-7240. Now it uses HDFS-12793 
(and also the latest minicluster).

2. I tried to use mapping.updateContainer/createContainer, but I can't: that 
only manipulates the SCM, whereas I would like to test the communication from 
SCM to datanode. So I also need the container creation on the datanode side, 
which (as far as I know) can be triggered by the client uploading a key. So I 
kept the original approach in the unit test.


> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch, 
> HDFS-12799-HDFS-7240.002.patch
>
>
> This issue is about extending the heartbeat (HB) response protocol between 
> SCM and DN with a command to ask the datanode to close a container. (This is 
> just about extending the protocol, not about fixing the implementation of 
> SCM to handle the state transitions.)
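A rough sketch, in proto2 syntax, of the kind of extension described; the field names and numbering below are illustrative guesses at the existing pattern, not the actual patch:

{code}
// Illustrative: a close-container command carried in the heartbeat response.
message SCMCloseContainerCmdResponseProto {
  required string containerName = 1;
}

message SCMCommandResponseProto {
  enum Type {
    nullCmd = 1;
    closeContainerCommand = 2;   // new command type
  }
  required Type cmdType = 1;
  optional SCMCloseContainerCmdResponseProto closeContainerProto = 2;
}
{code}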






[jira] [Updated] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-11-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12799:

Attachment: HDFS-12799-HDFS-7240.002.patch

> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch, 
> HDFS-12799-HDFS-7240.002.patch
>
>
> This issue is about extending the heartbeat (HB) response protocol between 
> SCM and DN with a command to ask the datanode to close a container. (This is 
> just about extending the protocol, not about fixing the implementation of 
> SCM to handle the state transitions.)






[jira] [Updated] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-11-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12698:

Status: Patch Available  (was: Open)

> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings, using a 
> time unit in the value (e.g. 10s, 5m, ...).
> Because of the new behavior, I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit to every configuration value. Unfortunately 
> we have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention, where any of the units can be used.
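The unit-aware pattern in question, as a short sketch against the existing {{Configuration.getTimeDuration()}} API; the key name follows the renaming proposed above and the default value is illustrative:

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeUnitConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("ozone.container.report.interval", "60s");  // unit in the value
    // "60s" parses cleanly to 60000 ms; a bare "60000" would trigger the
    // "No unit ... assuming MILLISECONDS" deprecation warning quoted above.
    long intervalMs = conf.getTimeDuration(
        "ozone.container.report.interval", 60_000, TimeUnit.MILLISECONDS);
    System.out.println(intervalMs);  // 60000
  }
}
{code}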






[jira] [Updated] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-11-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12698:

Attachment: HDFS-12698-HDFS-7240.003.patch

> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings, using a 
> time unit in the value (e.g. 10s, 5m, ...).
> Because of the new behavior, I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit to every configuration value. Unfortunately 
> we have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention, where any of the units can be used.






[jira] [Comment Edited] (HDFS-12833) In Distcp, the Delete option does not have a proper usage message.

2017-11-24 Thread usharani (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16264939#comment-16264939
 ] 

usharani edited comment on HDFS-12833 at 11/24/17 8:40 AM:
---

Harshakiran Reddy, thanks for reporting this.

It makes sense to fix it. Uploaded the patch; kindly review.


was (Author: peruguusha):
Harshakiran Reddy, thanks for reporting this.

It makes sense to fix this issue; please review.

> In Distcp, the Delete option does not have a proper usage message.
> -
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12833.patch
>
>
> Basically, the Delete option is applicable only with the update or overwrite 
> options. I tried it as per the usage message and got the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] <target_path>
>   OPTIONS
>  -append                Reuse existing data in target files and
>                         append new data to them if possible
>  -async                 Should distcp execution be blocking
>  -atomic                Commit all changes or none
>  -bandwidth <arg>       Specify bandwidth per map in MB, accepts
>                         bandwidth as a fraction.
>  -blocksperchunk <arg>  If set to a positive value, files with more
>                         blocks than this value will be split into
>                         chunks of <blocksperchunk> blocks to be
>                         transferred in parallel, and reassembled on
>                         the destination. By default,
>                         <blocksperchunk> is 0 and the files will be
>                         transmitted in their entirety without
>                         splitting. This switch is only applicable
>                         when the source file system implements
>                         getBlockLocations method and the target
>                         file system implements concat method
>  -copybuffersize <arg>  Size of the copy buffer to use. By default
>                         <copybuffersize> is 8192B.
>  -delete                Delete from target, files missing in source
>  -diff <arg>            Use snapshot diff report to identify the
>                         difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.


