[jira] [Comment Edited] (HADOOP-17277) Correct spelling errors for separator

2020-09-22 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200452#comment-17200452
 ] 

Fei Hui edited comment on HADOOP-17277 at 9/23/20, 1:49 AM:


[~ste...@apache.org] Thanks, both are OK!


was (Author: ferhui):
[~ste...@apache.org] Thanks, "Fei Hui" is right!

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> There are many misspellings of "separator"; correct them!






[jira] [Commented] (HADOOP-17277) Correct spelling errors for separator

2020-09-22 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200452#comment-17200452
 ] 

Fei Hui commented on HADOOP-17277:
--

[~ste...@apache.org] Thanks, "Fei Hui" is right!

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> There are many misspellings of "separator"; correct them!






[jira] [Commented] (HADOOP-17276) Extend CallerContext to make it include many items

2020-09-22 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17199943#comment-17199943
 ] 

Fei Hui commented on HADOOP-17276:
--

Submitted a GitHub PR.
[~aajisaka] [~elgoiri] Could you please take a look?

> Extend CallerContext to make it include many items
> --
>
> Key: HADOOP-17276
> URL: https://issues.apache.org/jira/browse/HADOOP-17276
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now the context is a plain string. We need to extend CallerContext because 
> the context may contain many items.
> Items include:
> * router ip
> * MR or CLI
> * etc






[jira] [Commented] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17199431#comment-17199431
 ] 

Fei Hui commented on HADOOP-17277:
--

[~ste...@apache.org] Thanks.
I have submitted a GitHub PR.
{quote}
If there's any typos in public APIs, constants we can't change them, but we 
could add correct spelling and @deprecate the old one.
{quote}
I have paid attention to that; there are no typos in public APIs.
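
For reference, the pattern from the quote would look roughly like this. A minimal, hypothetical sketch; {{PATH_SEPERATOR}} is an invented constant, not an actual Hadoop identifier:
{code:java}
public final class PathConstants {
  private PathConstants() {}

  /**
   * Misspelled constant kept so existing callers still compile.
   * @deprecated use {@link #PATH_SEPARATOR} instead.
   */
  @Deprecated
  public static final String PATH_SEPERATOR = "/";

  /** Correctly spelled replacement; same value, so behavior is unchanged. */
  public static final String PATH_SEPARATOR = PATH_SEPERATOR;
}
{code}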

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are many misspellings of "separator"; correct them!






[jira] [Commented] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17199354#comment-17199354
 ] 

Fei Hui commented on HADOOP-17277:
--

[~weichiu] [~aajisaka] [~ayushtkn] Could you please take a look? Thanks

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17277.001.patch
>
>
> There are many misspellings of "separator"; correct them!






[jira] [Updated] (HADOOP-17277) Correct spelling errors for separator

2020-09-20 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17277:
-
Status: Patch Available  (was: Open)

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17277.001.patch
>
>
> There are many misspellings of "separator"; correct them!






[jira] [Updated] (HADOOP-17277) Correct spelling errors for separator

2020-09-20 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17277:
-
Attachment: HADOOP-17277.001.patch

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17277.001.patch
>
>
> There are many misspellings of "separator"; correct them!






[jira] [Created] (HADOOP-17277) Correct spelling errors for separator

2020-09-20 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17277:


 Summary: Correct spelling errors for separator
 Key: HADOOP-17277
 URL: https://issues.apache.org/jira/browse/HADOOP-17277
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui
 Attachments: HADOOP-17277.001.patch

There are many misspellings of "separator"; correct them!






[jira] [Created] (HADOOP-17276) Extend CallerContext to make it include many items

2020-09-20 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17276:


 Summary: Extend CallerContext to make it include many items
 Key: HADOOP-17276
 URL: https://issues.apache.org/jira/browse/HADOOP-17276
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Fei Hui
Assignee: Fei Hui


Now the context is a plain string. We need to extend CallerContext because the 
context may contain many items.
Items include (a sketch follows the list):
* router ip
* MR or CLI
* etc
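
A minimal sketch of how callers might pack several such items into the context today, using the existing {{CallerContext}} API; the item keys and the "$" separator are illustrative assumptions, not the committed design:
{code:java}
import org.apache.hadoop.ipc.CallerContext;

public class CallerContextSketch {
  public static void main(String[] args) {
    // Pack multiple items into the single context string; the key names and
    // the "$" separator are assumptions for illustration only.
    String context = "routerIp:10.0.0.1" + "$" + "source:CLI";
    CallerContext.setCurrent(new CallerContext.Builder(context).build());
    System.out.println(CallerContext.getCurrent().getContext());
  }
}
{code}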






[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2020-09-06 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14176:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get errors like the following:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those set in distcp-default.xml, 
> the error may occur.
> We should remove those two configurations from distcp-default.xml.
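
The override comes from Hadoop's {{Configuration}} resource ordering: for a non-final key present in two resources, the last-added resource wins. A minimal sketch of that behavior (file names assumed to be on the classpath):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfOrderSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("mapred-site.xml");     // cluster-wide memory settings
    conf.addResource("distcp-default.xml");  // added later for the distcp job
    // Prints the distcp-default.xml value (1024), masking a larger
    // cluster-wide setting, unless the earlier value was marked final.
    System.out.println(conf.get("mapred.job.map.memory.mb"));
  }
}
{code}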






[jira] [Commented] (HADOOP-17235) Erasure Coding: Remove dead code from common side

2020-08-31 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187594#comment-17187594
 ] 

Fei Hui commented on HADOOP-17235:
--

[~hexiaoqiao][~weichiu][~ayushtkn] Could you please take a look? Thanks

> Erasure Coding: Remove dead code from common side
> -
>
> Key: HADOOP-17235
> URL: https://issues.apache.org/jira/browse/HADOOP-17235
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-17235.001.patch
>
>
> This code is unused, so remove it.






[jira] [Updated] (HADOOP-17235) Erasure Coding: Remove dead code from common side

2020-08-30 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17235:
-
Attachment: HADOOP-17235.001.patch

> Erasure Coding: Remove dead code from common side
> -
>
> Key: HADOOP-17235
> URL: https://issues.apache.org/jira/browse/HADOOP-17235
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-17235.001.patch
>
>
> This code is unused, so remove it.






[jira] [Updated] (HADOOP-17235) Erasure Coding: Remove dead code from common side

2020-08-30 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17235:
-
Status: Patch Available  (was: Open)

> Erasure Coding: Remove dead code from common side
> -
>
> Key: HADOOP-17235
> URL: https://issues.apache.org/jira/browse/HADOOP-17235
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-17235.001.patch
>
>
> This code is unused, so remove it.






[jira] [Created] (HADOOP-17235) Erasure Coding: Remove dead code from common side

2020-08-30 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17235:


 Summary: Erasure Coding: Remove dead code from common side
 Key: HADOOP-17235
 URL: https://issues.apache.org/jira/browse/HADOOP-17235
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui


This code is unused, so remove it.






[jira] [Commented] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17186187#comment-17186187
 ] 

Fei Hui commented on HADOOP-17232:
--

[~liuml07] Thanks for the review! Uploaded the v002 patch.

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17232.001.patch, HADOOP-17232.002.patch
>
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Updated] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17232:
-
Attachment: HADOOP-17232.002.patch

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17232.001.patch, HADOOP-17232.002.patch
>
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Updated] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17232:
-
Status: Patch Available  (was: Open)

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17232.001.patch
>
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Commented] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185813#comment-17185813
 ] 

Fei Hui commented on HADOOP-17232:
--

[~ayushtkn] [~weichiu] Could you please take a look? Thanks

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17232.001.patch
>
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Commented] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185672#comment-17185672
 ] 

Fei Hui commented on HADOOP-17232:
--

Uploaded the small fix.

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17232.001.patch
>
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Updated] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17232:
-
Attachment: HADOOP-17232.001.patch

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HADOOP-17232.001.patch
>
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Updated] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17232:
-
Environment: (was: When review ec document and code, find the typo.
Change "a erasure code" to "an erasure code")

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>







[jira] [Updated] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17232:
-
Description: 
While reviewing the EC document and code, I found this typo:
Change "a erasure code" to "an erasure code".

> Erasure Coding: Typo in document
> 
>
> Key: HADOOP-17232
> URL: https://issues.apache.org/jira/browse/HADOOP-17232
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>
> While reviewing the EC document and code, I found this typo:
> Change "a erasure code" to "an erasure code".






[jira] [Created] (HADOOP-17232) Erasure Coding: Typo in document

2020-08-27 Thread Fei Hui (Jira)
Fei Hui created HADOOP-17232:


 Summary: Erasure Coding: Typo in document
 Key: HADOOP-17232
 URL: https://issues.apache.org/jira/browse/HADOOP-17232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.3.0
 Environment: When review ec document and code, find the typo.
Change "a erasure code" to "an erasure code"
Reporter: Fei Hui
Assignee: Fei Hui









[jira] [Updated] (HADOOP-17209) Erasure Coding: Native library memory leak

2020-08-18 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-17209:
-
Summary: Erasure Coding: Native library memory leak  (was: Erasure coding: 
Native library memory leak)

> Erasure Coding: Native library memory leak
> --
>
> Key: HADOOP-17209
> URL: https://issues.apache.org/jira/browse/HADOOP-17209
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, 
> image-2020-08-17-11-26-04-276.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show memory usage growing beyond the {{-Xmx}} 
> value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the EC strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS is 48 GB. All the other 
> datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is 
> enabled), or the oom-killer kills the process.
>  
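
Native EC coders hold off-heap state, so RSS growth of this shape usually points at coder instances that are created but never released. A hedged sketch of the expected lifecycle (the codec and schema here are illustrative; this is not the attached patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.ErasureCodeConstants;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class CoderLifecycleSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    ErasureCoderOptions opts = new ErasureCoderOptions(6, 3); // RS-6-3
    RawErasureEncoder encoder = CodecUtil.createRawEncoder(
        conf, ErasureCodeConstants.RS_CODEC_NAME, opts);
    try {
      // ... encoder.encode(inputs, outputs) on each stripe ...
    } finally {
      encoder.release(); // frees native state; skipping this grows RSS
    }
  }
}
{code}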






[jira] [Comment Edited] (HADOOP-17209) ErasureCode native library memory leak

2020-08-18 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178706#comment-17178706
 ] 

Fei Hui edited comment on HADOOP-17209 at 8/18/20, 6:03 AM:


[~eddyxu] [~drankye] [~Sammi] [~weichiu] Could you please take a look?


was (Author: ferhui):
[~eddyxu] [~drankye] [~Sammi]] [~weichiu] Could you please take a look ?

> ErasureCode native library memory leak
> --
>
> Key: HADOOP-17209
> URL: https://issues.apache.org/jira/browse/HADOOP-17209
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, 
> image-2020-08-17-11-26-04-276.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show memory usage growing beyond the {{-Xmx}} 
> value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the EC strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS is 48 GB. All the other 
> datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is 
> enabled), or the oom-killer kills the process.
>  






[jira] [Comment Edited] (HADOOP-17209) ErasureCode native library memory leak

2020-08-18 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178706#comment-17178706
 ] 

Fei Hui edited comment on HADOOP-17209 at 8/18/20, 6:03 AM:


[~eddyxu] [~drankye] [~Sammi]] [~weichiu] Could you please take a look ?


was (Author: ferhui):
[~eddyxu] [~drankye] [~sammichen] [~weichiu] Could you please take a look ?

> ErasureCode native library memory leak
> --
>
> Key: HADOOP-17209
> URL: https://issues.apache.org/jira/browse/HADOOP-17209
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, 
> image-2020-08-17-11-26-04-276.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show memory usage growing beyond the {{-Xmx}} 
> value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the EC strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS is 48 GB. All the other 
> datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is 
> enabled), or the oom-killer kills the process.
>  






[jira] [Commented] (HADOOP-17209) ErasureCode native library memory leak

2020-08-17 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179358#comment-17179358
 ] 

Fei Hui commented on HADOOP-17209:
--

[~seanlook] Could you please change the title to "Erasure coding: Native 
library memory leak"? I see other EC issues did that.

> ErasureCode native library memory leak
> --
>
> Key: HADOOP-17209
> URL: https://issues.apache.org/jira/browse/HADOOP-17209
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, 
> image-2020-08-17-11-26-04-276.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show memory usage growing beyond the {{-Xmx}} 
> value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the EC strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS is 48 GB. All the other 
> datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is 
> enabled), or the oom-killer kills the process.
>  






[jira] [Commented] (HADOOP-17209) ErasureCode native library memory leak

2020-08-16 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178706#comment-17178706
 ] 

Fei Hui commented on HADOOP-17209:
--

[~eddyxu] [~drankye] [~sammichen] [~weichiu] Could you please take a look?

> ErasureCode native library memory leak
> --
>
> Key: HADOOP-17209
> URL: https://issues.apache.org/jira/browse/HADOOP-17209
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, 
> image-2020-08-17-11-26-04-276.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show memory usage growing beyond the {{-Xmx}} 
> value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the EC strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS is 48 GB. All the other 
> datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is 
> enabled), or the oom-killer kills the process.
>  






[jira] [Commented] (HADOOP-17209) ErasureCode native library memory leak

2020-08-16 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178683#comment-17178683
 ] 

Fei Hui commented on HADOOP-17209:
--

Good catch! Thanks for reporting this; it looks good!
[~ayushtkn] Could you please take a look?

> ErasureCode native library memory leak
> --
>
> Key: HADOOP-17209
> URL: https://issues.apache.org/jira/browse/HADOOP-17209
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show memory usage growing beyond the {{-Xmx}} 
> value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the EC strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS is 48 GB. All the other 
> datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is 
> enabled), or the oom-killer kills the process.
>  






[jira] [Commented] (HADOOP-17204) Fix typo in Hadoop KMS document

2020-08-11 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176021#comment-17176021
 ] 

Fei Hui commented on HADOOP-17204:
--

+1

> Fix typo in Hadoop KMS document
> ---
>
> Key: HADOOP-17204
> URL: https://issues.apache.org/jira/browse/HADOOP-17204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-17204.001.patch
>
>
> [https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#HTTP_Kerberos_Principals_Configuration]
> bq. In order to be able to access directly a specific KMS instance, the KMS 
> instance must also have Keberos service name with its own hostname. This is 
> required for monitoring and admin purposes.
> Keberos -> Kerberos






[jira] [Commented] (HADOOP-16814) Add dropped connections metric for Server

2020-03-15 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059929#comment-17059929
 ] 

Fei Hui commented on HADOOP-16814:
--

[~weichiu] Thanks for the ping. Will work on this later.

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Commented] (HADOOP-16814) Add dropped connections metric for Server

2020-01-25 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17023719#comment-17023719
 ] 

Fei Hui commented on HADOOP-16814:
--

[~elgoiri] Thanks for your review. Will fix the UT.

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Commented] (HADOOP-16814) Add dropped connections metric for Server

2020-01-20 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019338#comment-17019338
 ] 

Fei Hui commented on HADOOP-16814:
--

[~elgoiri] [~ayushtkn] Could you please take a look?

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Updated] (HADOOP-16814) Add dropped connections metric for Server

2020-01-19 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-16814:
-
Attachment: HADOOP-16814.001.patch

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Test
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Commented] (HADOOP-16814) Add dropped connections metric for Server

2020-01-19 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019234#comment-17019234
 ] 

Fei Hui commented on HADOOP-16814:
--

Uploaded the v001 patch.

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Updated] (HADOOP-16814) Add dropped connections metric for Server

2020-01-19 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-16814:
-
Issue Type: Improvement  (was: Test)

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Updated] (HADOOP-16814) Add dropped connections metric for Server

2020-01-19 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-16814:
-
Status: Patch Available  (was: Open)

> Add dropped connections metric for Server
> -
>
> Key: HADOOP-16814
> URL: https://issues.apache.org/jira/browse/HADOOP-16814
> Project: Hadoop Common
>  Issue Type: Test
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HADOOP-16814.001.patch
>
>
> With this metric we can see the number of handled RPCs whose responses were 
> never sent to clients.






[jira] [Created] (HADOOP-16814) Add dropped connections metric for Server

2020-01-19 Thread Fei Hui (Jira)
Fei Hui created HADOOP-16814:


 Summary: Add dropped connections metric for Server
 Key: HADOOP-16814
 URL: https://issues.apache.org/jira/browse/HADOOP-16814
 Project: Hadoop Common
  Issue Type: Test
  Components: common
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui


With this metric we can see the number of handled RPCs whose responses were 
never sent to clients.
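
A minimal sketch of how such a counter is typically wired with the Hadoop metrics2 library; the class name and metric description are assumptions for illustration, not the attached patch:
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "RPC server metrics", context = "rpc")
public class ServerMetricsSketch {
  // The metrics system instantiates this field when the bean is registered.
  @Metric("RPCs handled but whose responses were never delivered")
  MutableCounterLong connectionsDropped;

  /** Called by the server when a connection drops before the response is sent. */
  void incrConnectionsDropped() {
    connectionsDropped.incr();
  }
}
{code}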






[jira] [Commented] (HADOOP-7310) Trash location needs to be revisited

2019-09-07 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924818#comment-16924818
 ] 

Fei Hui commented on HADOOP-7310:
-

Extending solution A.
Solution A has a problem: admins must create a trash directory before users 
delete anything; otherwise, users are not allowed to create the trash 
directory themselves.

Create a new service that periodically moves /user/${user}/.Trash/* to 
/trash/user/${user}/.Trash/*.

With this we can
  1. resolve users' quota problem, and
  2. stay forward compatible.
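
A minimal sketch of the proposed mover's core pass, using the standard {{FileSystem}} API; the /trash layout and the lack of error handling are illustrative assumptions:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TrashMoverSketch {
  /** One pass of the periodic move from per-user homes into /trash. */
  static void movePass(Configuration conf) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    for (FileStatus user : fs.listStatus(new Path("/user"))) {
      Path src = new Path(user.getPath(), ".Trash");
      Path dst = new Path("/trash" + src.toUri().getPath());
      if (fs.exists(src)) {
        fs.mkdirs(dst.getParent());  // e.g. /trash/user/<name>
        fs.rename(src, dst);         // namespace-only move, no data copy
      }
    }
  }
}
{code}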

> Trash location needs to be revisited
> 
>
> Key: HADOOP-7310
> URL: https://issues.apache.org/jira/browse/HADOOP-7310
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Major
>







[jira] [Commented] (HADOOP-16341) ShutDownHookManager: Regressed performance on Hook removals after HADOOP-15679

2019-08-21 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912140#comment-16912140
 ] 

Fei Hui commented on HADOOP-16341:
--

[~gopalv] [~ste...@apache.org] This issue is still open, and I want to know why 
the patch was committed to trunk and other branches.
Was it discussed somewhere else?

> ShutDownHookManager: Regressed performance on Hook removals after HADOOP-15679
> --
>
> Key: HADOOP-16341
> URL: https://issues.apache.org/jira/browse/HADOOP-16341
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.1.2
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16341.branch-3.1.002.patch, 
> HADOOP-16341.branch-3.1.1.patch, shutdown-hook-removal.png
>
>
>  !shutdown-hook-removal.png! 






[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-17 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583576#comment-16583576
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks for your review

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch, HADOOP-15633.004.patch, HADOOP-15633.005.patch
>
>
> Reproduce it as follows:
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get these errors:
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582680#comment-16582680
 ] 

Fei Hui commented on HADOOP-15633:
--

Uploaded the v005 patch to fix checkstyle.

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch, HADOOP-15633.004.patch, HADOOP-15633.005.patch
>
>
> Reproduce it as follows:
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get these errors:
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: HADOOP-15633.005.patch

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch, HADOOP-15633.004.patch, HADOOP-15633.005.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: HADOOP-15633.004.patch

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch, HADOOP-15633.004.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582104#comment-16582104
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks for your quick reply.
{quote}
Remove the space after “new Path”
{quote}
Done
{quote}
you might add a line “--i;” before “continue;”
{quote}
Great idea. I just added some comments before --i so that users can understand it.
Uploaded v004 patch.
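For reference, a minimal sketch of the loop shape being discussed (hedged: the
wrapper method and the existsFilePath parameter are assumed for illustration,
not the exact TrashPolicyDefault source). When mkdirs hits a previously trashed
file on the destination path, the conflicting component of the trash
destination gets a timestamp suffix and the same attempt is retried, which is
why the --i goes before the continue:
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Time;

class TrashMkdirRetrySketch {
  private static final FsPermission PERMISSION =
      FsPermission.createImmutable((short) 0700);

  // Create baseTrashPath, resolving a conflict with an already-trashed file.
  static boolean mkdirsResolvingConflict(FileSystem fs, Path baseTrashPath,
      Path existsFilePath) throws IOException {
    for (int i = 0; i < 2; i++) {
      try {
        return fs.mkdirs(baseTrashPath, PERMISSION);
      } catch (FileAlreadyExistsException e) {
        // A trashed file occupies a component of the destination: append a
        // timestamp to that component (e.g. .../aaa/bbb1532625817927) so a
        // fresh directory path is used, then retry. The renamed path makes
        // a repeat of the same conflict effectively impossible.
        baseTrashPath = new Path(baseTrashPath.toString().replace(
            existsFilePath.toString(),
            existsFilePath.toString() + Time.now()));
        --i;       // the conflict retry should not consume an attempt
        continue;  // retry mkdirs with the renamed destination
      }
    }
    return false;
  }
}
{code}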

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
>

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: (was: HADOOP-15633-003.patch)

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch, 
> HADOOP-15633.003.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: HADOOP-15633.003.patch

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633-003.patch, HADOOP-15633.001.patch, 
> HADOOP-15633.002.patch, HADOOP-15633.003.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: HADOOP-15633-003.patch

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633-003.patch, HADOOP-15633.001.patch, 
> HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-16 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582011#comment-16582011
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks for your review.
Uploaded v003 patch. I fixed the code in TrashPolicyDefault.java and added a new
unit test according to your suggestions.
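As a rough illustration of what such a test exercises, here is a hedged,
self-contained repro sketch against the local filesystem (the class name and
paths are made up; the actual unit test in the patch may differ):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashNameConflictRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setLong("fs.trash.interval", 60);  // enable trash (minutes)
    FileSystem fs = FileSystem.getLocal(conf);

    Path dir = new Path("/tmp/trash-repro/aaa");
    Path child = new Path(dir, "bbb");
    fs.mkdirs(dir);
    fs.create(child).close();               // aaa/bbb as a file
    Trash.moveToAppropriateTrash(fs, child, conf);

    fs.mkdirs(child);                       // recreate aaa/bbb as a directory
    Path grandChild = new Path(child, "ccc");
    fs.create(grandChild).close();
    // Before the fix this call failed: the trash already held .../aaa/bbb
    // as a file, so creating the trash directory for ccc threw
    // FileAlreadyExistsException.
    boolean moved = Trash.moveToAppropriateTrash(fs, grandChild, conf);
    System.out.println("moved to trash: " + moved);
  }
}
{code}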

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-15 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581213#comment-16581213
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks for your review.

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-13 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578030#comment-16578030
 ] 

Fei Hui commented on HADOOP-15633:
--

ping [~jzhuge]

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-08-05 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569630#comment-16569630
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Could you please take a look again? Thanks

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-27 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560006#comment-16560006
 ] 

Fei Hui edited comment on HADOOP-15633 at 7/27/18 5:16 PM:
---

[~jzhuge] Thanks. Got it.
Uploaded the new patch v002; changes are as below. We need to make a new
baseTrashPath & trashPath with a timestamp.
{code:java}
baseTrashPath = new Path 
(baseTrashPath.toString().replace(existsFilePath.toString()
  , existsFilePath.toString() + Time.now()));
trashPath = new Path(baseTrashPath, trashPath.getName());
fs.mkdirs(baseTrashPath, PERMISSION);
{code}



was (Author: ferhui):
[~jzhuge] Thanks. Got it.
Uploaded the new patch; changes are as below. We need to make a new
baseTrashPath & trashPath with a timestamp.
{code:java}
baseTrashPath = new Path 
(baseTrashPath.toString().replace(existsFilePath.toString()
  , existsFilePath.toString() + Time.now()));
trashPath = new Path(baseTrashPath, trashPath.getName());
fs.mkdirs(baseTrashPath, PERMISSION);
{code}


> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-27 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: HADOOP-15633.002.patch

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch, HADOOP-15633.002.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-27 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560006#comment-16560006
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks. Got it.
Uploaded the new patch; changes are as below. We need to make a new
baseTrashPath & trashPath with a timestamp.
{code:java}
baseTrashPath = new Path 
(baseTrashPath.toString().replace(existsFilePath.toString()
  , existsFilePath.toString() + Time.now()));
trashPath = new Path(baseTrashPath, trashPath.getName());
fs.mkdirs(baseTrashPath, PERMISSION);
{code}


> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follow
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-27 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559330#comment-16559330
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Sorry, I didn't catch your point.
{quote}
what I meant was "name conflict resolution is not done for a parent dir on the 
path." for your command:
hadoop fs -rm /user/hadoop/aaa/bbb/ccc
{quote}
Do you mean that */user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927* 
should be 
*/user/hadoop/.Trash/Current/user/hadoop/aaa15326258x/bbb1532625817927*?

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559139#comment-16559139
 ] 

Fei Hui commented on HADOOP-15633:
--

CC [~andrew.wang] Could you please take a look? Thanks

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559130#comment-16559130
 ] 

Fei Hui edited comment on HADOOP-15633 at 7/27/18 1:46 AM:
---

[~jzhuge] Thanks for your reply
{quote}
If you run the following command after your test steps:
hadoop fs -rm -r /user/hadoop/aaa/bbb
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":
{quote}
Yes. The base trash dir */user/hadoop/.Trash/Current/user/hadoop/aaa* exists, and 
*fs.mkdirs(baseTrashPath, PERMISSION)* will succeed. The original code, shown 
below, resolves the name conflict:

{code:java}
try {
  // if the target path in Trash already exists, then append with
  // a current time in millisecs.
  String orig = trashPath.toString();

  while (fs.exists(trashPath)) {
    trashPath = new Path(orig + Time.now());
  }

  if (fs.rename(path, trashPath))   // move to current trash
    return true;
} catch (IOException e) {
  cause = e;
}
{code}
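
To make the gap concrete, a minimal illustration with the paths from this issue 
(pure {{Path}} arithmetic, hypothetical values, no cluster needed):

{code:java}
import org.apache.hadoop.fs.Path;

// Illustration only: the while loop above checks just the leaf trashPath,
// so a conflict on an ancestor of baseTrashPath is never resolved by it.
class LeafOnlyCheckSketch {
  public static void main(String[] args) {
    Path trashPath =
        new Path("/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb/ccc");
    Path baseTrashPath = trashPath.getParent();  // .../aaa/bbb
    // ".../aaa/bbb" already exists as a FILE (left by the earlier rm), so
    // fs.mkdirs(baseTrashPath, PERMISSION) throws FileAlreadyExistsException
    // before the rename loop over trashPath is ever reached.
    System.out.println("mkdirs target that collides: " + baseTrashPath);
  }
}
{code}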

{quote}
/user/hadoop/.Trash
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop
drwx------ - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc
Looks like name conflict resolution is not done for a parent dir on the path.
{quote}
The command *hadoop fs -rm -r /user/hadoop/aaa/bbb* ran successfully, and that 
means the name conflict does not exist, right?


was (Author: ferhui):
[~jzhuge] Thanks for your reply
{quote}
If you run the following command after your test steps:
hadoop fs -rm -r /user/hadoop/aaa/bbb
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":
{quote}
Yes. basetrashdir */user/hadoop/.Trash/Current/user/hadoop/aaa* exists, and 
*fs.mkdirs(baseTrashPath, PERMISSION)* will success. the original code as 
follow resolves the name conflict

{code:java}
  try {
// if the target path in Trash already exists, then append with 
// a current time in millisecs.
String orig = trashPath.toString();

while(fs.exists(trashPath)) {
  trashPath = new Path(orig + Time.now());
}

if (fs.rename(path, trashPath))   // move to current trash
  return true;
  } catch (IOException e) {
cause = e;
  }
{code}

{quote}
/user/hadoop/.Trash
drwx-- - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx-- - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx-- - hadoop hadoop 0 2018-07-26 17:21 
/user/hadoop/.Trash/Current/user/hadoop
drwx-- - hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc
Looks like name conflict resolution is not done for a parent dir on the path.
{quote}
The command *hadoop fs -rm -r /user/hadoop/aaa/bbb* run successfully, and it 
means name conflict does not exits, right ?

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559130#comment-16559130
 ] 

Fei Hui commented on HADOOP-15633:
--

[~jzhuge] Thanks for your reply
{quote}
If you run the following command after your test steps:
hadoop fs -rm -r /user/hadoop/aaa/bbb
You will see the name conflict resolution, "bbb" -> "bbb1532625817927":
{quote}
Yes. The base trash dir */user/hadoop/.Trash/Current/user/hadoop/aaa* exists, and 
*fs.mkdirs(baseTrashPath, PERMISSION)* will succeed. The original code, shown 
below, resolves the name conflict:

{code:java}
try {
  // if the target path in Trash already exists, then append with
  // a current time in millisecs.
  String orig = trashPath.toString();

  while (fs.exists(trashPath)) {
    trashPath = new Path(orig + Time.now());
  }

  if (fs.rename(path, trashPath))   // move to current trash
    return true;
} catch (IOException e) {
  cause = e;
}
{code}

{quote}
/user/hadoop/.Trash
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user
drwx------ - hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop
drwx------ - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:21 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
drwxr-xr-x - hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927
-rw-r--r-- 3 hadoop hadoop 0 2018-07-26 17:23 /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb1532625817927/ccc
Looks like name conflict resolution is not done for a parent dir on the path.
{quote}
The command *hadoop fs -rm -r /user/hadoop/aaa/bbb* ran successfully, and that 
means the name conflict does not exist, right?

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558184#comment-16558184
 ] 

Fei Hui commented on HADOOP-15633:
--

[~raviprak] Could you please take a look? Thanks

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Status: Patch Available  (was: Open)

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.7, 3.0.3, 3.1.0, 2.8.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 

[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558152#comment-16558152
 ] 

Fei Hui edited comment on HADOOP-15633 at 7/26/18 11:01 AM:


Uploaded the v1 patch.

{code:java}
public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())   // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;   // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {  // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
      } catch (FileAlreadyExistsException e) {
        // here we should catch FileAlreadyExistsException, then handle it
      } catch (IOException e) {
        LOG.warn("Can't create trash directory: " + baseTrashPath, e);
        cause = e;
        break;
      }
      ...
    }
{code}

In the moveToTrash function, catch the FileAlreadyExistsException, find the 
existing file path and rename it, then mkdir the base trash path at the end.
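
A minimal sketch of that idea, assuming the conflict is a plain file on the 
ancestor chain (the helper below is hypothetical; the real change is in the 
attached HADOOP-15633.001.patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.Time;

// Sketch only: clear a file that blocks creation of the trash dir tree.
class TrashConflictSketch {
  // Find the deepest existing ancestor of baseTrashPath; if it is a file,
  // rename it with a current-time suffix so that a retried
  // fs.mkdirs(baseTrashPath, PERMISSION) can succeed on the next loop pass.
  static void moveConflictingFileAside(FileSystem fs, Path baseTrashPath)
      throws IOException {
    Path existsFilePath = baseTrashPath;
    while (!fs.exists(existsFilePath)) {   // "/" always exists, so this ends
      existsFilePath = existsFilePath.getParent();
    }
    if (fs.getFileStatus(existsFilePath).isFile()) {
      fs.rename(existsFilePath,
          new Path(existsFilePath.toString() + Time.now()));
    }
  }
}
{code}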




was (Author: ferhui):
upload v1 patch

{code:java}
 public boolean moveToTrash(Path path) throws IOException {
if (!isEnabled())
  return false;

if (!path.isAbsolute())   // make path absolute
  path = new Path(fs.getWorkingDirectory(), path);

// check that path exists
fs.getFileStatus(path);
String qpath = fs.makeQualified(path).toString();

Path trashRoot = fs.getTrashRoot(path);
Path trashCurrent = new Path(trashRoot, CURRENT);
if (qpath.startsWith(trashRoot.toString())) {
  return false;   // already in trash
}

if (trashRoot.getParent().toString().startsWith(qpath)) {
  throw new IOException("Cannot move \"" + path +
"\" to the trash, as it contains the trash");
}

Path trashPath = makeTrashRelativePath(trashCurrent, path);
Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

IOException cause = null;

// try twice, in case checkpoint between the mkdirs() & rename()
for (int i = 0; i < 2; i++) {
  try {
if (!fs.mkdirs(baseTrashPath, PERMISSION)) {  // create current
  LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
  return false;
} catch (FileAlreadyExistsException e) {
// here we should catch FileAlreadyExistsException, then handle it
  } catch (IOException e) {
LOG.warn("Can't create trash directory: " + baseTrashPath, e);
cause = e;
break;
  }
{code}

In moveToTrash function, catch  FileAlreadyExistsException and find the 
existing file path, then rename it, mkdir the base trash path at last.



> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> 

[jira] [Comment Edited] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558152#comment-16558152
 ] 

Fei Hui edited comment on HADOOP-15633 at 7/26/18 11:00 AM:


Uploaded the v1 patch.

{code:java}
public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())   // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;   // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {  // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
      } catch (FileAlreadyExistsException e) {
        // here we should catch FileAlreadyExistsException, then handle it
      } catch (IOException e) {
        LOG.warn("Can't create trash directory: " + baseTrashPath, e);
        cause = e;
        break;
      }
{code}

In the moveToTrash function, catch the FileAlreadyExistsException, find the 
existing file path and rename it, then mkdir the base trash path at the end.




was (Author: ferhui):
upload v1 patch

{code:java}
 public boolean moveToTrash(Path path) throws IOException {
if (!isEnabled())
  return false;

if (!path.isAbsolute())   // make path absolute
  path = new Path(fs.getWorkingDirectory(), path);

// check that path exists
fs.getFileStatus(path);
String qpath = fs.makeQualified(path).toString();

Path trashRoot = fs.getTrashRoot(path);
Path trashCurrent = new Path(trashRoot, CURRENT);
if (qpath.startsWith(trashRoot.toString())) {
  return false;   // already in trash
}

if (trashRoot.getParent().toString().startsWith(qpath)) {
  throw new IOException("Cannot move \"" + path +
"\" to the trash, as it contains the trash");
}

Path trashPath = makeTrashRelativePath(trashCurrent, path);
Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

IOException cause = null;

// try twice, in case checkpoint between the mkdirs() & rename()
for (int i = 0; i < 2; i++) {
  try {
if (!fs.mkdirs(baseTrashPath, PERMISSION)) {  // create current
  LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
  return false;
}
{code}

In moveToTrash function, catch  FileAlreadyExistsException and find the 
existing file path, then rename it, mkdir the base trash path at last.



> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -mkdir /user/hadoop/aaa
> hadoop fs -touchz /user/hadoop/aaa/bbb
> hadoop fs -rm /user/hadoop/aaa/bbb
> hadoop fs -mkdir /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Description: 
Reproduce it as follows

{code:java}
hadoop fs -mkdir /user/hadoop/aaa
hadoop fs -touchz /user/hadoop/aaa/bbb
hadoop fs -rm /user/hadoop/aaa/bbb
hadoop fs -mkdir /user/hadoop/aaa/bbb
hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
hadoop fs -rm /user/hadoop/aaa/bbb/ccc
{code}

Then we get errors 

{code:java}
18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at 
org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException):
 Path is not a directory: /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Description: 
Reproduce it as follows

{code:java}
hadoop fs -mkdir /user/hadoop/aaa
hadoop fs -touchz /user/hadoop/aaa/bbb
hadoop fs -rm /user/hadoop/aaa
hadoop fs -mkdir /user/hadoop/aaa/bbb
hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
hadoop fs -rm /user/hadoop/aaa/bbb/ccc
{code}

Then we get errors 

{code:java}
18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at 
org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException):
 Path is not a directory: /user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 

[jira] [Commented] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558152#comment-16558152
 ] 

Fei Hui commented on HADOOP-15633:
--

Uploaded the v1 patch.

{code:java}
public boolean moveToTrash(Path path) throws IOException {
    if (!isEnabled())
      return false;

    if (!path.isAbsolute())   // make path absolute
      path = new Path(fs.getWorkingDirectory(), path);

    // check that path exists
    fs.getFileStatus(path);
    String qpath = fs.makeQualified(path).toString();

    Path trashRoot = fs.getTrashRoot(path);
    Path trashCurrent = new Path(trashRoot, CURRENT);
    if (qpath.startsWith(trashRoot.toString())) {
      return false;   // already in trash
    }

    if (trashRoot.getParent().toString().startsWith(qpath)) {
      throw new IOException("Cannot move \"" + path +
          "\" to the trash, as it contains the trash");
    }

    Path trashPath = makeTrashRelativePath(trashCurrent, path);
    Path baseTrashPath = makeTrashRelativePath(trashCurrent, path.getParent());

    IOException cause = null;

    // try twice, in case checkpoint between the mkdirs() & rename()
    for (int i = 0; i < 2; i++) {
      try {
        if (!fs.mkdirs(baseTrashPath, PERMISSION)) {  // create current
          LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
          return false;
        }
{code}

In the moveToTrash function, catch the FileAlreadyExistsException, find the 
existing file path and rename it, then mkdir the base trash path at the end.



> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -touchz /user/hadoop/aaa
> hadoop fs -rm /user/hadoop/aaa
> hadoop fs -mkdir -p /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Attachment: HADOOP-15633.001.patch

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15633
> URL: https://issues.apache.org/jira/browse/HADOOP-15633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.8.3, 3.1.0, 3.0.3, 2.7.7
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-15633.001.patch
>
>
> Reproduce it as follows
> {code:java}
> hadoop fs -touchz /user/hadoop/aaa
> hadoop fs -rm /user/hadoop/aaa
> hadoop fs -mkdir -p /user/hadoop/aaa/bbb
> hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
> hadoop fs -rm /user/hadoop/aaa/bbb/ccc
> {code}
> Then we get errors 
> {code:java}
> 18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
> hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
> org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
> /user/hadoop/.Trash/Current/user/hadoop/aaa
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
> at 
> org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
> at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
> at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
> at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
> at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> Caused by: 
> 

[jira] [Updated] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-15633:
-
Description: 
Reproduce it as follows:

{code:shell}
hadoop fs -touchz /user/hadoop/aaa
hadoop fs -rm /user/hadoop/aaa
hadoop fs -mkdir -p /user/hadoop/aaa/bbb
hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
hadoop fs -rm /user/hadoop/aaa/bbb/ccc
{code}

Then we get these errors:

{code:java}
18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
/user/hadoop/.Trash/Current/user/hadoop/aaa
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at 
org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException):
 Path is not a directory: /user/hadoop/.Trash/Current/user/hadoop/aaa
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 

[jira] [Created] (HADOOP-15633) fs.TrashPolicyDefault: Can't create trash directory

2018-07-26 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-15633:


 Summary: fs.TrashPolicyDefault: Can't create trash directory
 Key: HADOOP-15633
 URL: https://issues.apache.org/jira/browse/HADOOP-15633
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.7.7, 3.0.3, 3.1.0, 2.8.3
Reporter: Fei Hui
Assignee: Fei Hui


Reproduce it as follows:

{code:shell}
hadoop fs -touchz /user/hadoop/aaa
hadoop fs -rm /user/hadoop/aaa
hadoop fs -mkdir -p /user/hadoop/aaa/bbb
hadoop fs -touchz /user/hadoop/aaa/bbb/ccc
hadoop fs -rm /user/hadoop/aaa/bbb/ccc
{code}

Then we get these errors:

{code:java}
18/07/26 17:55:24 WARN fs.TrashPolicyDefault: Can't create trash directory: 
hdfs://xxx/user/hadoop/.Trash/Current/user/hadoop/aaa/bbb
org.apache.hadoop.fs.FileAlreadyExistsException: Path is not a directory: 
/user/hadoop/.Trash/Current/user/hadoop/aaa
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at 
org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:118)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:105)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException):
 Path is not a directory: /user/hadoop/.Trash/Current/user/hadoop/aaa
at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3961)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
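{code}

To make the failure mode concrete: the first {{-rm}} moves the *file* {{/user/hadoop/aaa}} into the trash, so the trash entry {{.Trash/Current/user/hadoop/aaa}} exists as a file. The second {{-rm}} then asks the trash policy to create the directory {{.../aaa/bbb}} underneath it, which the NameNode rejects. A minimal sketch of the colliding call (illustration only; the class name is hypothetical, and this is not the eventual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical demo class: reproduces the mkdirs collision behind the
// warning above against a reachable HDFS.
public class TrashCollisionDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // After the first deletion, this trash entry exists as a *file*.
    Path trashEntry = new Path("/user/hadoop/.Trash/Current/user/hadoop/aaa");
    // The second deletion needs .../aaa/bbb as a directory; since "aaa" is a
    // file, the NameNode throws FileAlreadyExistsException
    // ("Path is not a directory"), exactly as in the stack trace above.
    fs.mkdirs(new Path(trashEntry, "bbb"));
  }
}
{code}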
 

[jira] [Commented] (HADOOP-13114) DistCp should have option to compress data on write

2017-04-25 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982488#comment-15982488
 ] 

Fei Hui commented on HADOOP-13114:
--

[~snayakm] Could you please upload a patch for branch-2?

> DistCp should have option to compress data on write
> ---
>
> Key: HADOOP-13114
> URL: https://issues.apache.org/jira/browse/HADOOP-13114
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Suraj Nayak
>Assignee: Suraj Nayak
>Priority: Minor
>  Labels: distcp
> Attachments: HADOOP-13114.05.patch, HADOOP-13114.06.patch, 
> HADOOP-13114-trunk_2016-05-07-1.patch, HADOOP-13114-trunk_2016-05-08-1.patch, 
> HADOOP-13114-trunk_2016-05-10-1.patch, HADOOP-13114-trunk_2016-05-12-1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The DistCp utility should be able to store data in a user-specified 
> compression format. This avoids one extra hop of compressing the data after 
> transfer. Backup strategies to a different cluster also benefit by saving one 
> IO operation to and from HDFS, thus saving resources, time, and effort.
> * Create an option -compressOutput defaulting to 
> {{org.apache.hadoop.io.compress.BZip2Codec}}. 
> * Users will be able to change codec with {{-D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec}}
> * If distcp compression is enabled, suffix the filenames with default codec 
> extension to indicate the file is compressed. Thus users can be aware of what 
> codec was used to compress the data.
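For reference, the proposed {{-compressOutput}} behaviour presumably rides on the standard MapReduce output-compression keys named in the {{-D}} override above. A hedged sketch under that assumption (the helper class is hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;

// Illustration only -- -compressOutput is a proposal, not shipped API.
// These are the stock MapReduce keys the -D override in the description uses.
public class DistCpCompressionConf {
  public static Configuration withOutputCompression() {
    Configuration conf = new Configuration();
    conf.setBoolean("mapreduce.output.fileoutputformat.compress", true);
    conf.setClass("mapreduce.output.fileoutputformat.compress.codec",
        GzipCodec.class, CompressionCodec.class);
    return conf;
  }
}
{code}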



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-28 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945410#comment-15945410
 ] 

Fei Hui commented on HADOOP-14176:
--

I think we should pick the common parameters and set sensible defaults for 
them in distcp-default.xml. After that, most end users will not run into 
distcp problems. If we keep distcp-default.xml unchanged, many other users 
will hit this problem, won't they?
[~aw] [~jrottinghuis] [~cnauroth] [~raviprak]

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-28 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945394#comment-15945394
 ] 

Fei Hui commented on HADOOP-10738:
--

[~aw] I see that HADOOP-13587 added distcp-site.xml in Hadoop 3.0.
Should we add it to branch-2.x as well? Thanks

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-23 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15939630#comment-15939630
 ] 

Fei Hui commented on HADOOP-14176:
--

[~aw] What approach do you suggest for fixing this issue on branch-2? Should 
we remove {{mapred.job.[map/reduce].memory.mb}} as MAPREDUCE-5653 did? I think 
we should resolve it: if the default settings cause a problem, users will be 
troubled by it.

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-22 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935824#comment-15935824
 ] 

Fei Hui commented on HADOOP-10738:
--

[~aw] As discussed in HADOOP-14176, removing 
{{mapreduce.[map/reduce].memory.mb}} is not suitable.
We need to provide a way to set parameters for distcp. Distcp may use 
resources differently from other MR jobs, and it is frequently used.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-21 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935653#comment-15935653
 ] 

Fei Hui commented on HADOOP-10738:
--

[~arpitagarwal] Thanks for the reply. This was discussed in HADOOP-14176.
I need to reconfigure {{mapreduce.map.memory.mb}} and 
{{mapreduce.map.java.opts}} to make distcp succeed.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-10738:
-
Attachment: HADOOP-10738-branch-2.001.patch

Updated based on the latest branch-2.


> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934116#comment-15934116
 ] 

Fei Hui edited comment on HADOOP-10738 at 3/21/17 5:25 AM:
---

distcp takes its parameters
* from distcp-default.xml, which overrides hdfs-site.xml, yarn-site.xml, and 
mapred-site.xml
* from -D on the command line.

I think we should add another way to set parameters that override 
distcp-default.xml. Running distcp with -D flags every time is tedious, so 
adding distcp-site.xml would be useful.


was (Author: ferhui):
distcp takes its parameters
* from distcp-default.xml, which overrides hdfs-site.xml, yarn-site.xml, and 
mapred-site.xml
* from -D on the command line.
I think we should add another way to set parameters that override 
distcp-default.xml. Running distcp with -D flags every time is tedious, so 
adding distcp-site.xml would be useful.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934116#comment-15934116
 ] 

Fei Hui commented on HADOOP-10738:
--

distcp takes its parameters
* from distcp-default.xml, which overrides hdfs-site.xml, yarn-site.xml, and 
mapred-site.xml
* from -D on the command line.
I think we should add another way to set parameters that override 
distcp-default.xml. Running distcp with -D flags every time is tedious, so 
adding distcp-site.xml would be useful.
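A minimal sketch of the loading order being proposed, assuming the usual Hadoop {{Configuration}} rule that resources added later override earlier ones (this is not the HADOOP-13587 implementation, and the class name is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: loading distcp-site.xml after distcp-default.xml would let
// admins override the bundled defaults without passing -D on every run.
public class DistCpConfLoading {
  public static Configuration load() {
    Configuration conf = new Configuration();
    conf.addResource("distcp-default.xml"); // bundled in hadoop-distcp.jar
    conf.addResource("distcp-site.xml");    // proposed site-level override
    return conf;
  }
}
{code}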

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-20 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933950#comment-15933950
 ] 

Fei Hui commented on HADOOP-14176:
--

CC [~jrottinghuis] [~cnauroth]

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-18 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14176:
-
Attachment: HADOOP-14176-branch-2.004.patch

Updated the patch (see the sketch after this list):
* change {{mapred.job.map.memory.mb}} to {{mapreduce.map.memory.mb}}
* remove {{mapred.job.reduce.memory.mb}}
* add {{mapreduce.map.java.opts}} with the value {{-Xmx800m}}
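Expressed programmatically, the resulting defaults would look roughly like this (a sketch of the intent only; the actual change lives in distcp-default.xml, and the helper class is hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper mirroring what the 004 patch sets declaratively.
public class DistCpDefaultsSketch {
  public static Configuration apply(Configuration conf) {
    conf.setInt("mapreduce.map.memory.mb", 1024);    // was mapred.job.map.memory.mb
    conf.set("mapreduce.map.java.opts", "-Xmx800m"); // newly added
    // mapred.job.reduce.memory.mb is dropped entirely
    return conf;
  }
}
{code}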

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-18 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931076#comment-15931076
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/18/17 6:14 AM:
---

In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx800m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx800m
* remove {{mapred.job.reduce.memory.mb}}

[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?


was (Author: ferhui):
In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx818m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx818m
* remove {{mapred.job.reduce.memory.mb}}

[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?
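For the arithmetic behind both candidate values, a back-of-the-envelope sketch (not the MapReduce heap-sizing code; the class name is illustrative):

{code:java}
// With mapreduce.job.heap.memory-mb.ratio = 0.8 and a 1024 MB container,
// the derived heap is 0.8 * 1024 = 819.2 MB, i.e. roughly -Xmx818m; the
// final patch rounds down further to a conservative -Xmx800m.
public class HeapRatioMath {
  public static void main(String[] args) {
    int containerMb = 1024;   // mapreduce.map.memory.mb
    double ratio = 0.8;       // Hadoop 3 default heap ratio
    int heapMb = (int) (ratio * containerMb);
    System.out.println("derived heap: " + heapMb + " MB"); // prints 819
  }
}
{code}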

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-18 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931076#comment-15931076
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/18/17 6:10 AM:
---

In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx818m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx818m
* remove {{mapred.job.reduce.memory.mb}}

[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?


was (Author: ferhui):
In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx818m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx818m
* remove {{mapred.job.reduce.memory.mb}}
* add {{yarn.app.mapreduce.am.resource.mb}}, which is set to 1024

[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-18 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931076#comment-15931076
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/18/17 6:06 AM:
---

In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx818m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx818m
* remove {{mapred.job.reduce.memory.mb}}
* add {{yarn.app.mapreduce.am.resource.mb}}, which is set to 1024

[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?


was (Author: ferhui):
In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx818m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx818m
* remove {{mapred.job.reduce.memory.mb}}
* add {{yarn.app.mapreduce.am.resource.mb}}, which is set to 1024
[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-18 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15931076#comment-15931076
 ] 

Fei Hui commented on HADOOP-14176:
--

In Hadoop 3, {{mapreduce.job.heap.memory-mb.ratio}} is 0.8 by default. If 
{{mapreduce.map.java.opts}} is not set, the heap becomes 0.8 * 
{{mapreduce.map.memory.mb}}. So I think {{mapreduce.map.java.opts}} should be 
{{-Xmx818m}} for a {{mapreduce.map.memory.mb}} of 1024m.
The change is:
* {{mapred.job.map.memory.mb}} changes to {{mapreduce.map.memory.mb}}
* add {{mapreduce.map.java.opts}}, which is set to -Xmx818m
* remove {{mapred.job.reduce.memory.mb}}
* add {{yarn.app.mapreduce.am.resource.mb}}, which is set to 1024
[~raviprak] [~jrottinghuis] [~cnauroth] Is that OK?

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found the cause: the distcp configuration overrides 
> mapred-site.xml:
> {code:xml}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml with values larger than those in distcp-default.xml, 
> this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-17 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929504#comment-15929504
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/17/17 7:42 AM:
---

[~raviprak] I think {{mapreduce.map.memory.mb}} should appear together with 
{{mapreduce.map.java.opts}}.
What value of {{mapreduce.map.java.opts}} is suitable for a 1024m 
{{mapreduce.map.memory.mb}}?
We can keep {{mapred.reducer.new-api}} and {{mapreduce.reduce.class}}.


was (Author: ferhui):
[~raviprak] I think {{mapreduce.map.memory.mb}} should appear together with 
{{mapreduce.map.java.opts}}.
What value of {{mapreduce.map.java.opts}} is suitable for a 1024m 
{{mapreduce.map.memory.mb}}?
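To see why the two keys must travel together, take the numbers from the failure report above (sketch only, with hypothetical class and variable names):

{code:java}
// If distcp-default.xml pins the container to 1024 MB while the heap option
// is inherited from mapred-site.xml (-Xmx2120m in the report), the JVM may
// legitimately grow past the 1 GB container limit, and the NodeManager kills
// the task with exit code 143.
public class MismatchedLimits {
  public static void main(String[] args) {
    int containerLimitMb = 1024; // from distcp-default.xml
    int inheritedHeapMb = 2120;  // from mapred-site.xml's -Xmx2120m
    System.out.println("heap can exceed container: "
        + (inheritedHeapMb > containerLimitMb)); // true
  }
}
{code}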

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Deep into the code , i find that because distcp configuration covers 
> mapred-site.xml
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-17 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929504#comment-15929504
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/17/17 7:41 AM:
---

[~raviprak] I think {{mapreduce.map.memory.mb}} should appear together with
{{mapreduce.map.java.opts}}.
What value of {{mapreduce.map.java.opts}} is suitable when
{{mapreduce.map.memory.mb}} is 1024m?


was (Author: ferhui):
[~raviprak] I think *mapreduce.map.memory.mb* should appear together with 
*mapreduce.map.java.opts* .
What value of *mapreduce.map.java.opts* is suitable for 1024m  
*mapreduce.map.memory.mb* ?

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get the following errors:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-17 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929504#comment-15929504
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/17/17 6:58 AM:
---

[~raviprak] I think *mapreduce.map.memory.mb* should appear together with
*mapreduce.map.java.opts*.
What value of *mapreduce.map.java.opts* is suitable when
*mapreduce.map.memory.mb* is 1024m?


was (Author: ferhui):
[~raviprak] I think {code}mapreduce.map.memory.mb{code} should appear together 
with {code}mapreduce.map.java.opts{code}
what value of {code}mapreduce.map.java.opts{code} is suitable for 1024m  
{code}mapreduce.map.memory.mb{code} 

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get the following errors:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-17 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929504#comment-15929504
 ] 

Fei Hui edited comment on HADOOP-14176 at 3/17/17 6:59 AM:
---

[~raviprak] I think *mapreduce.map.memory.mb* should appear together with
*mapreduce.map.java.opts*.
What value of *mapreduce.map.java.opts* is suitable when
*mapreduce.map.memory.mb* is 1024m?


was (Author: ferhui):
[~raviprak] I think *mapreduce.map.memory.mb* should appear together with 
*mapreduce.map.java.opts*
what value of *mapreduce.map.java.opts* is suitable for 1024m  
*mapreduce.map.memory.mb* 

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get the following errors:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-17 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929504#comment-15929504
 ] 

Fei Hui commented on HADOOP-14176:
--

[~raviprak] I think {code}mapreduce.map.memory.mb{code} should appear together
with {code}mapreduce.map.java.opts{code}.
What value of {code}mapreduce.map.java.opts{code} is suitable when
{code}mapreduce.map.memory.mb{code} is 1024m?

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get the following errors:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14194:
-
Summary: Aliyun OSS should not use empty endpoint as default  (was: Alyun 
OSS should not use empty endpoint as default)

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint,
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is
> not provided (empty string) or is not valid, users will get an exception from
> the Aliyun OSS SDK with a raw message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the
> IllegalArgumentException and log it, wrapping the exception with a clearer
> message stating the misconfiguration in the endpoint or credentials.
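>
> A minimal sketch of such a check (it reuses ENDPOINT_KEY from the existing
> code; StringUtils is commons-lang, and the message text is an assumption):
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> if (StringUtils.isEmpty(endPoint)) {
>   // Fail fast with an actionable message instead of the raw URI error.
>   throw new IllegalArgumentException("Aliyun OSS endpoint should not be null "
>       + "or empty. Please set a proper endpoint with '" + ENDPOINT_KEY + "'.");
> }
> {code}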



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14189:
-
External issue URL:   (was: 
https://issues.apache.org/jira/browse/HADOOP-10738)
 External issue ID:   (was:  HADOOP-10738)

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14189:
-
External issue URL: https://issues.apache.org/jira/browse/HADOOP-10738
 External issue ID:  HADOOP-10738

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui resolved HADOOP-14189.
--
Resolution: Duplicate

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929348#comment-15929348
 ] 

Fei Hui commented on HADOOP-14189:
--

[~raviprak] I will close this JIRA as a duplicate.

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14189:
-
Issue Type: Bug  (was: Task)

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15927887#comment-15927887
 ] 

Fei Hui commented on HADOOP-14189:
--

[~raviprak] Could you please take a look? Thanks!

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14189:
-
Attachment: HADOOP-14189-branch-2.001.patch

Patch uploaded.
Add distcp-site.xml to override Hadoop parameters for distcp.
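
For illustration, a minimal sketch of how the site file could be layered on top
of the bundled defaults (the DISTCP_SITE_XML constant and the shape of
getDefaultConf are assumptions, not the actual patch):
{code}
private static final String DISTCP_DEFAULT_XML = "distcp-default.xml";
private static final String DISTCP_SITE_XML = "distcp-site.xml";  // proposed

private static Configuration getDefaultConf() {
  Configuration config = new Configuration();
  // Resources added later override earlier ones, so site settings win.
  config.addResource(DISTCP_DEFAULT_XML);
  config.addResource(DISTCP_SITE_XML);
  return config;
}
{code}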

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only
> uses distcp-default.xml.
> We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-16 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-14176:
-
Attachment: HADOOP-14176-branch-2.003.patch

Patch updated:
* override *mapreduce.map.memory.mb* and *mapreduce.map.java.opts*
* set *yarn.app.mapreduce.am.resource.mb* to be the same as
*mapreduce.map.memory.mb*
* remove the reducer options, which have no effect
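
For illustration, the resulting distcp-default.xml would then look roughly like
this (the -Xmx value is my assumption following the common ~80% heuristic; the
attached patch is authoritative):
{code}
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
{code}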

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch
>
>
> When I run distcp, I get the following errors:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-16 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15927768#comment-15927768
 ] 

Fei Hui commented on HADOOP-14176:
--

Agree with [~jrottinghuis] and [~cnauroth]:
* override *mapreduce.map.memory.mb* and *mapreduce.map.java.opts*
* set *yarn.app.mapreduce.am.resource.mb* to be the same as
*mapreduce.map.memory.mb*
* remove the reducer options, which have no effect

Maybe we need to file another JIRA to configure distcp for Hadoop 2.x: HADOOP-14189

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch
>
>
> When I run distcp, I get the following errors:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in
> mapred-default.xml, and the values are larger than those set in
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-16 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-14189:


 Summary: add distcp-site.xml for distcp on branch-2
 Key: HADOOP-14189
 URL: https://issues.apache.org/jira/browse/HADOOP-14189
 Project: Hadoop Common
  Issue Type: Task
  Components: tools/distcp
Reporter: Fei Hui


On Hadoop 2.x, we cannot configure Hadoop parameters for distcp; it only uses
distcp-default.xml.
We should add distcp-site.xml to override Hadoop parameters.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


