Re: Heads up: branch-2.1-beta

2013-06-06 Thread Arun C Murthy

On Jun 5, 2013, at 11:04 AM, Roman Shaposhnik wrote:
 
 On the Bigtop side of things, once we have a stable Bigtop 0.6.0 platform
 based on the Hadoop 2.0.x codeline, we plan to start running the same battery
 of integration tests on branch-2.1-beta.
 
 We plan to simply file JIRAs if anything gets detected and I will also
 publish the URL of the Jenkins job once it gets created.

Thanks Roman. Is there an ETA for this? Also, please file JIRAs with Blocker 
priority to catch attention.

thanks,
Arun




Hadoop-Hdfs-trunk - Build # 1422 - Still Failing

2013-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10388 lines...]
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[1:29:57.953s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [1.713s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:30:00.442s
[INFO] Finished at: Thu Jun 06 13:03:16 UTC 2013
[INFO] Final Memory: 30M/681M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs: ExecutionException; nested exception is 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without saying properly goodbye. VM crash or System.exit called ? 
- [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-8957
Updating HADOOP-9607
Updating HADOOP-9526
Updating HADOOP-9605
Updating HDFS-4053
Updating HADOOP-8982
Updating HADOOP-9593
Updating HDFS-4850
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1422

2013-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/changes

Changes:

[suresh] HADOOP-8982. TestSocketIOWithTimeout fails on Windows. Contributed by 
Chris Nauroth.

[suresh] HADOOP-9526. TestShellCommandFencer and TestShell fail on Windows. 
Contributed by Arpit Agarwal.

[suresh] Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, 
HADOOP-9605 to BUG FIXES

[acmurthy] HADOOP-9593. Changing CHANGES.txt to reflect merge to 
branch-2.1-beta.

[jing9] HDFS-4053. Move the jira description to release 2.1.0 section

[jing9] HDFS-4850. Fix OfflineImageViewer to work on fsimages with empty files 
or snapshots. Contributed by Jing Zhao.

[tgraves] Updating release date for 0.23.8

--
[...truncated 10195 lines...]
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.768 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.138 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.296 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.965 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.005 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 144.993 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.365 sec
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.523 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.185 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.592 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.537 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.299 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.313 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.232 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.187 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.493 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.192 sec
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.204 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.956 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.256 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.05 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.905 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.086 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.444 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.491 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.384 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.04 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.498 sec
Running 

[jira] [Created] (HDFS-4884) [Umbrella] Block Placement Policy Optimizer

2013-06-06 Thread Junping Du (JIRA)
Junping Du created HDFS-4884:


 Summary: [Umbrella] Block Placement Policy Optimizer
 Key: HDFS-4884
 URL: https://issues.apache.org/jira/browse/HDFS-4884
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 1.2.0
Reporter: Junping Du
Assignee: Junping Du


The BlockPlacementPolicy (BPP) is extensible, and multiple implementations 
already exist in the system: BlockPlacementPolicyDefault, 
BlockPlacementPolicyWithNodeGroup, BlockPlacementPolicyRAID, etc. When a 
cluster is switched from one BPP to another, HDFS does not check whether 
existing block locations conform to the new BPP. Today you can manually run 
fsck on a specific directory to identify blocks whose replica placement 
violates the policy, but so far there is no way to fix them. We should provide 
a way to fix it. Also, in the long term, we should allow multiple BPPs to 
co-exist on the same HDFS cluster for different kinds of data, e.g. old, 
infrequently accessed data/directories could be placed under a RAID policy to 
save space.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4885) Update verifyBlockPlacement() API in BlockPlacementPolicy

2013-06-06 Thread Junping Du (JIRA)
Junping Du created HDFS-4885:


 Summary: Update verifyBlockPlacement() API in BlockPlacementPolicy
 Key: HDFS-4885
 URL: https://issues.apache.org/jira/browse/HDFS-4885
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Junping Du
Assignee: Junping Du


verifyBlockPlacement() has an unused parameter, srcPath, since its 
responsibility is just to verify a single block rather than the files under a 
specific path. Also, the int return value does not make sense, because block 
placement can be violated in more ways than just the number of racks; a 
boolean return value would be better.
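The proposed signature change could look roughly like the following sketch. All class, method, and parameter names below are illustrative only, not Hadoop's actual types (the real abstract class is BlockPlacementPolicy in the HDFS namenode code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the HDFS-4885 proposal: drop the unused srcPath
// parameter and return a boolean (placement is valid or it is not) instead
// of an int. Names and types are illustrative, not Hadoop's.
abstract class BlockPlacementPolicySketch {
    /** @return true iff the replica locations of one block satisfy this policy. */
    abstract boolean verifyBlockPlacement(String[] rackOfEachReplica, int minRacks);
}

// A default-policy-like check: replicas should span at least minRacks racks
// (capped by the replica count, since r replicas cannot span more than r racks).
class DefaultPolicySketch extends BlockPlacementPolicySketch {
    @Override
    boolean verifyBlockPlacement(String[] rackOfEachReplica, int minRacks) {
        Set<String> racks = new HashSet<>(Arrays.asList(rackOfEachReplica));
        return racks.size() >= Math.min(minRacks, rackOfEachReplica.length);
    }
}
```

With a shape like this, fsck could simply report each block whose placement check returns false, instead of interpreting an integer rack count.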



[jira] [Created] (HDFS-4886) Override verifyBlockPlacement() API in BlockPlacementPolicyWithNodeGroup

2013-06-06 Thread Junping Du (JIRA)
Junping Du created HDFS-4886:


 Summary: Override verifyBlockPlacement() API in 
BlockPlacementPolicyWithNodeGroup
 Key: HDFS-4886
 URL: https://issues.apache.org/jira/browse/HDFS-4886
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Junping Du
Assignee: Junping Du


Each block placement policy implementation should override this method, so 
that fsck can identify blocks with illegal placement.



[jira] [Created] (HDFS-4887) TestNNThroughputBenchmark exits abruptly

2013-06-06 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4887:


 Summary: TestNNThroughputBenchmark exits abruptly
 Key: HDFS-4887
 URL: https://issues.apache.org/jira/browse/HDFS-4887
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: benchmarks, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Kihwal Lee


After HDFS-4840, TestNNThroughputBenchmark exits in the middle of the run. 
This is because ReplicationMonitor is stopped while the NN is still running.

This situation only arises during testing. In normal operation, the 
ReplicationMonitor thread runs all the time once started; in standby or safe 
mode it just skips calculating DN work. I think NNThroughputBenchmark should 
use ExitUtil to prevent termination, rather than modifying ReplicationMonitor.
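The ExitUtil idea can be illustrated with a stripped-down, self-contained sketch. The real class is org.apache.hadoop.util.ExitUtil, which throws an ExitException when exits are disabled; this toy version only records the call:

```java
// Minimal, self-contained sketch of the ExitUtil pattern (the real
// org.apache.hadoop.util.ExitUtil throws an ExitException when exits are
// disabled, whereas this toy version only records the call). Production
// code calls terminate() instead of System.exit(); a test first calls
// disableSystemExit(), so termination is recorded rather than performed
// and the benchmark JVM keeps running.
class MiniExitUtil {
    private static volatile boolean systemExitDisabled = false;
    private static volatile boolean terminated = false;

    static void disableSystemExit() { systemExitDisabled = true; }

    static boolean terminateCalled() { return terminated; }

    static void terminate(int status) {
        terminated = true;
        if (!systemExitDisabled) {
            System.exit(status); // outside of tests, really exit
        }
    }
}
```

This way the production exit path is unchanged, while a test harness can observe that termination was requested without the JVM actually dying mid-benchmark.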



[jira] [Created] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-06 Thread Ravi Prakash (JIRA)
Ravi Prakash created HDFS-4888:
--

 Summary: Refactor and fix FSNamesystem.getTurnOffTip to sanity
 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.8, 2.0.4-alpha, 3.0.0
Reporter: Ravi Prakash


For example, when resources are low, the command to leave safe mode is not 
printed. The method is also unnecessarily complex.



[jira] [Created] (HDFS-4889) passing -Dfs.trash.interval to command line is not respected.

2013-06-06 Thread Trupti Dhavle (JIRA)
Trupti Dhavle created HDFS-4889:
---

 Summary: passing -Dfs.trash.interval to command line is not 
respected.
 Key: HDFS-4889
 URL: https://issues.apache.org/jira/browse/HDFS-4889
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Trupti Dhavle
 Fix For: 2.1.0-beta


Ran hadoop dfs -Dfs.trash.interval=0 -rm /user/username/README
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Moved: 'hdfs://host:port/user/username/README' to trash at: 
hdfs://host:port/user/username/.Trash/Current

Expected the file not to go to Trash but to be deleted directly.



[jira] [Resolved] (HDFS-4867) metaSave NPEs when there are invalid blocks in repl queue.

2013-06-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-4867.
---

   Resolution: Fixed
Fix Version/s: (was: 2.0.5-alpha)
   (was: 3.0.0)
   0.23.9
   2.1.0-beta
 Release Note: I just committed this. Thank you Plamen and Ravi.
 Hadoop Flags: Reviewed

 metaSave NPEs when there are invalid blocks in repl queue.
 --

 Key: HDFS-4867
 URL: https://issues.apache.org/jira/browse/HDFS-4867
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.7, 2.0.4-alpha, 0.23.8
Reporter: Kihwal Lee
Assignee: Plamen Jeliazkov
 Fix For: 2.1.0-beta, 0.23.9

 Attachments: HDFS-4867.branch-0.23.patch, 
 HDFS-4867.branch-0.23.patch, HDFS-4867.branch-0.23.patch, 
 HDFS-4867.branch-0.23.patch, HDFS-4867.branch-2.patch, 
 HDFS-4867.branch2.patch, HDFS-4867.branch2.patch, HDFS-4867.branch2.patch, 
 HDFS-4867.trunk.patch, HDFS-4867.trunk.patch, HDFS-4867.trunk.patch, 
 HDFS-4867.trunk.patch, testMetaSave.log


 Since metaSave cannot get the inode holding an orphaned/invalid block, it NPEs 
 and stops generating the rest of the report. Normally ReplicationMonitor 
 removes such blocks quickly, but if the queue is huge it takes a very long 
 time. Also, in safe mode they remain.



[jira] [Created] (HDFS-4890) Stop generating/sending datanode work in standby mode

2013-06-06 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4890:


 Summary: Stop generating/sending datanode work in standby mode
 Key: HDFS-4890
 URL: https://issues.apache.org/jira/browse/HDFS-4890
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, namenode
Affects Versions: 2.1.0-beta
Reporter: Kihwal Lee


If the NN comes up in standby and stays that way, the repl queues are empty, 
so no work is generated even though ReplicationMonitor is running. But if an 
ANN transitions to standby, more work can be generated and sent, and any 
remaining work will still be sent to datanodes.

The current code drains existing work and stops generating new work in safe 
mode. HDFS-4832 will make it stop immediately in safe mode. The same can be 
done for standby.

This change was also suggested in HDFS-3744.



[jira] [Created] (HDFS-4891) Incorrect exit code when copying a file bigger than given quota

2013-06-06 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created HDFS-4891:
-

 Summary: Incorrect exit code when copying a file bigger than given 
quota
 Key: HDFS-4891
 URL: https://issues.apache.org/jira/browse/HDFS-4891
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Tassapol Athiapinya
 Fix For: 2.1.0-beta


The exit code of the hdfs command is incorrect.

===Repro steps===
1. Set a quota on a directory in HDFS.
2. Get a file whose size is bigger than the given quota.
3. Run $ hdfs fs -copyFromLocal to copy the file to HDFS.
4. There will be an exception; that is the expected error message.
   However, the exit code will be zero, which is incorrect.



[jira] [Created] (HDFS-4892) Number of transceivers reported by the datanode is incorrect

2013-06-06 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4892:
-

 Summary: Number of transceivers reported by the datanode is 
incorrect
 Key: HDFS-4892
 URL: https://issues.apache.org/jira/browse/HDFS-4892
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas


Currently a datanode reports its transceiver count to the namenode. The 
namenode aggregates this as the TotalLoad metric, which is used for monitoring 
cluster activity and also for making block placement decisions, to qualify 
whether a datanode is a good target.

Currently the transceiver count is 1 (for the XceiverServer) + 1 * (number of 
readers) + 2 * (number of writers in a pipeline) + 1 * (number of datanode 
replications) + 1 * (number of block recoveries).

Should the transceiver count just reflect the number of readers + writers, 
instead of being reported as it is currently? Separately, we should perhaps 
report readers and writers as separate counts.
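The formula above amounts to the following arithmetic (the method and parameter names here are invented for this sketch and are not Hadoop's actual fields):

```java
// Illustrative arithmetic for the transceiver count quoted above; names are
// made up for this sketch. Per the formula: the XceiverServer contributes a
// constant 1, each reader 1, each pipeline writer 2, and each replication
// or block recovery 1.
class XceiverCountSketch {
    static int xceiverCount(int readers, int pipelineWriters,
                            int replications, int blockRecoveries) {
        return 1 + readers + 2 * pipelineWriters + replications + blockRecoveries;
    }
}
```

So, for example, a datanode with 3 readers and 2 pipeline writers and nothing else would report 1 + 3 + 4 = 8, even though only 5 clients are actually reading or writing, which illustrates why the reported load can overstate real activity.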




[jira] [Created] (HDFS-4893) Report number of readers and writes from Datanode to Namenode

2013-06-06 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4893:
-

 Summary: Report number of readers and writes from Datanode to 
Namenode
 Key: HDFS-4893
 URL: https://issues.apache.org/jira/browse/HDFS-4893
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas


Currently the datanode reports the combined number of readers and writers as 
the transceiver count to the namenode in heartbeats. This jira proposes 
reporting the number of readers and the number of writers as separate fields 
in heartbeats.



[jira] [Created] (HDFS-4894) Multiple Replica Placement Policies support in one HDFS cluster

2013-06-06 Thread Junping Du (JIRA)
Junping Du created HDFS-4894:


 Summary: Multiple Replica Placement Policies support in one HDFS 
cluster
 Key: HDFS-4894
 URL: https://issues.apache.org/jira/browse/HDFS-4894
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Junping Du


We should allow multiple BPPs to co-exist on the same HDFS cluster for 
different kinds of data and different types of applications; e.g. old, 
infrequently accessed data/directories could be placed under a RAID policy to 
save space.



[jira] [Resolved] (HDFS-4889) passing -Dfs.trash.interval to command line is not respected.

2013-06-06 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-4889.
---

Resolution: Not A Problem

 passing -Dfs.trash.interval to command line is not respected.
 -

 Key: HDFS-4889
 URL: https://issues.apache.org/jira/browse/HDFS-4889
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Trupti Dhavle
 Fix For: 2.1.0-beta


 Ran hadoop dfs -Dfs.trash.interval=0 -rm /user/username/README
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Moved: 'hdfs://host:port/user/username/README' to trash at: 
 hdfs://host:port/user/username/.Trash/Current
 Expected the file not to go to Trash but to be deleted directly.
