[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-10-01 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466794#comment-13466794
 ] 

Daryn Sharp commented on HADOOP-8589:
-

I like the sanity check in {{TestLFS#delete}}, but should the local fs maybe 
just use a fs chrooted to the build test dir?  That would help ensure that 
viewfs doesn't try to mount /; I think it does, or used to.

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Attachments: HADOOP-8589.patch, HADOOP-8589.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root 
 directory is under the user's home directory, and the user's home dir is 
 deeper than 2 levels from /. This happens with the default 1-node 
 installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the second one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.
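The nesting conflict described above can be illustrated with a small sketch (a hypothetical helper, not Hadoop code): once /var is linked, any mount point nested under it collides with the existing link.

```shell
# Hypothetical sketch: a mount table cannot hold both a path and a path
# nested under it, which is the /var vs. /var/lib conflict above.
is_nested() {
  # true (0) when $2 lies strictly under $1
  case "$2" in
    "$1"/*) return 0 ;;
    *)      return 1 ;;
  esac
}

if is_nested /var /var/lib; then
  echo "conflict: /var/lib is nested under /var"
fi
```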

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-01 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466852#comment-13466852
 ] 

Hemanth Yamijala commented on HADOOP-8776:
--

Jianbin, Thanks for your interest.

I was told that native compilation / functionality is not working on Solaris 
either, so it looks like the issue is not specific to MacOS.

The way I see this, the core issue is with native compilation being broken. 
This JIRA is just providing a workaround that will serve a (hopefully) 
temporary purpose. I'd rather keep the workaround simple (given that there are 
other platforms as well that might need to be considered). Please let me know 
if you are fine with this.

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 



[jira] [Updated] (HADOOP-8386) hadoop script doesn't work if 'cd' prints to stdout (default behavior in Ubuntu)

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8386:


Fix Version/s: 0.23.5

I pulled this into branch-0.23

 hadoop script doesn't work if 'cd' prints to stdout (default behavior in 
 Ubuntu)
 

 Key: HADOOP-8386
 URL: https://issues.apache.org/jira/browse/HADOOP-8386
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.2
 Environment: Ubuntu
Reporter: Christopher Berner
Assignee: Christopher Berner
 Fix For: 1.2.0, 3.0.0, 0.23.5

 Attachments: hadoop-8386-1.diff, hadoop-8386-1.diff, 
 hadoop-8386.diff, hadoop.diff


 if the 'hadoop' script is run as 'bin/hadoop' on a distro where the 'cd' 
 command prints to stdout, the script will fail due to this line: 'bin=`cd 
 $bin; pwd`'
 Workaround: execute from the bin/ directory as './hadoop'
 Fix: change that line to 'bin=`cd $bin > /dev/null; pwd`'
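The failure mode and the fix can be reproduced with CDPATH, which is the usual reason cd echoes the resolved directory (a minimal sketch; the paths are illustrative):

```shell
# When cd resolves a directory through CDPATH it prints the result to
# stdout, polluting command substitution; redirecting cd's output fixes it.
mkdir -p /tmp/cdpath-demo/target/bin /tmp/cdpath-demo/empty
cd /tmp/cdpath-demo/empty
CDPATH=/tmp/cdpath-demo/target

broken=$(cd bin; pwd)              # captures cd's echo plus pwd's output
fixed=$(cd bin > /dev/null; pwd)   # captures only pwd's output
echo "$fixed"
```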



[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-01 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466929#comment-13466929
 ] 

Eli Collins commented on HADOOP-8783:
-

+1 to branch-2

 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.



[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-01 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466930#comment-13466930
 ] 

Jianbin Wei commented on HADOOP-8776:
-

Hemanth,

I agree that the core issue is the broken native compilation and that should be 
fixed.

My suggestion is to disable the native compilation for all broken platforms 
by default, file tickets to have them fixed and enable native compilation as 
part of the fix.

My purpose is to make things more convenient and efficient for other 
developers.

Every time I generate a patch, I run the patch test and it fails due to javac 
warnings.  I then have to check whether the warnings come from the native 
compilation or not.  The whole cycle of compiling and checking takes about 
3 minutes.

If we can save those 3 minutes for each patch and developer, we may easily 
save thousands of minutes before the native compilation is fixed.  That should 
justify a slightly more complicated workaround that may take you 60 minutes.

Just my 2 cents.


 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 



[jira] [Updated] (HADOOP-8791) rm Only deletes non empty directory and files.

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8791:


Fix Version/s: 0.23.5

I pulled this into branch-0.23

 rm Only deletes non empty directory and files.
 

 Key: HADOOP-8791
 URL: https://issues.apache.org/jira/browse/HADOOP-8791
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3, 3.0.0
Reporter: Bertrand Dechoux
Assignee: Jing Zhao
  Labels: documentation
 Fix For: 1.2.0, 3.0.0, 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-8791-branch-1.001.patch, 
 HADOOP-8791-branch-1.patch, HADOOP-8791-branch-1.patch, 
 HADOOP-8791-trunk.001.patch, HADOOP-8791-trunk.patch, HADOOP-8791-trunk.patch


 The documentation (1.0.3) is describing the opposite of what rm does.
 It should be "Only delete files and empty directories."
 With regard to files, the size of the file should not matter, should it?
 OR I am totally misunderstanding the semantic of this command and I am not 
 the only one.
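The corrected wording matches the POSIX behavior for directories, where rmdir removes only empty ones; a quick local analogy (paths are illustrative):

```shell
# Like the corrected doc says for HDFS rm: files and *empty* directories
# can be removed, while a non-empty directory cannot.
mkdir -p /tmp/rm-demo/empty /tmp/rm-demo/full
touch /tmp/rm-demo/full/file

rmdir /tmp/rm-demo/empty && echo "empty dir removed"
rmdir /tmp/rm-demo/full 2>/dev/null || echo "non-empty dir refused"
```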



[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466939#comment-13466939
 ] 

Colin Patrick McCabe commented on HADOOP-8776:
--

I don't have a strong opinion about this issue.  However, I will note that you 
can determine if the platform is Linux quite easily:

{code}
if uname -o | grep -q Linux; then
  ... set linux-specific defaults...
fi
{code}

So your patch shouldn't take an hour to prepare if you do choose to go this 
route :)
Of course you'll probably want to add an {{--enable-native}} option if you do 
choose to do this.
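A flag like that could be parsed along these lines (a sketch only; the option names are hypothetical, not the actual test-patch code):

```shell
# Hypothetical sketch of an --enable-native / --disable-native flag with a
# Linux-only default, in the spirit of the uname check above.
parse_native_flag() {
  NATIVE=auto
  for arg in "$@"; do
    case "$arg" in
      --enable-native)  NATIVE=true ;;
      --disable-native) NATIVE=false ;;
    esac
  done
  if [ "$NATIVE" = auto ]; then
    # uname -o is not available everywhere, hence the stderr redirect
    if uname -o 2>/dev/null | grep -q Linux; then
      NATIVE=true
    else
      NATIVE=false
    fi
  fi
}

parse_native_flag --disable-native
echo "native compile: $NATIVE"
```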

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 



[jira] [Commented] (HADOOP-8845) When looking for parent paths info, globStatus must filter out non-directory elements to avoid an AccessControlException

2012-10-01 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466965#comment-13466965
 ] 

Eli Collins commented on HADOOP-8845:
-

globPathsLevel is a generic method, and globStatus, which calls it, claims to 
return all matching path names, so why is it OK to unconditionally filter out 
all files from its results?  Since * can match the empty string, in other 
contexts it could be appropriate to return /tmp/testdir/testfile for 
/tmp/testdir/*/testfile.

I.e., is there a place where we know we should just be checking directory path 
elements? The comment in globStatusInternal (// list parent directories and 
then glob the results) by one of the cases indicates this is the intent, but 
it's valid to pass both files and directories to listStatus.


 When looking for parent paths info, globStatus must filter out non-directory 
 elements to avoid an AccessControlException
 

 Key: HADOOP-8845
 URL: https://issues.apache.org/jira/browse/HADOOP-8845
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
  Labels: glob
 Attachments: HADOOP-8845.patch, HADOOP-8845.patch, HADOOP-8845.patch


 A brief description from my colleague Stephen Fritz who helped discover it:
 {code}
 [root@node1 ~]# su - hdfs
 -bash-4.1$ echo "My Test String" > testfile -- just a text file, for testing 
 below
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir -- create a directory
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir/1 -- create a subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/1/testfile -- put the test 
 file in the subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/testfile -- put the test 
 file in the directory
 -bash-4.1$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 All files are where we expect them...OK, let's try reading
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/*/testfile
 My Test String -- success!  
 Note that we used an '*' in the cat command, and it correctly found the 
 subdirectory '/tmp/testdir/1' and ignored the regular file 
 '/tmp/testdir/testfile'
 -bash-4.1$ exit
 logout
 [root@node1 ~]# su - testuser -- let's try it as a different user:
 [testuser@node1 ~]$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- so far so good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/*/testfile
 cat: org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=EXECUTE, 
 inode=/tmp/testdir/testfile:hdfs:hadoop:-rw-r--r--
 {code}
 Essentially, we hit an ACE with access=EXECUTE on the file 
 /tmp/testdir/testfile because we tried to access 
 /tmp/testdir/testfile/testfile as a path. This shouldn't happen, as testfile 
 is a file and not a parent path to be looked up.
 {code}
 2012-09-25 07:24:27,406 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 2 on 8020, call getFileInfo(/tmp/testdir/testfile/testfile)
 {code}
 Surprisingly, the superuser avoids the error as a result of bypassing 
 permission checks; whether it is fine to leave it like that can be looked at 
 in another JIRA.
 This JIRA targets a client-side fix to avoid such /path/file/dir or 
 /path/file/file kinds of lookups.
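The behavior the fix aims for mirrors how shell globbing treats a non-directory in the middle of a pattern: it is skipped silently rather than probed as a parent (paths are illustrative):

```shell
# /tmp/glob-demo/testfile is a plain file, so */testfile never probes
# testfile/testfile; only the real subdirectory matches.
mkdir -p /tmp/glob-demo/1
echo x > /tmp/glob-demo/1/testfile
echo x > /tmp/glob-demo/testfile
printf '%s\n' /tmp/glob-demo/*/testfile   # prints only /tmp/glob-demo/1/testfile
```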



[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466979#comment-13466979
 ] 

Ivan A. Veselovsky commented on HADOOP-8851:


For example, add the following test:

  /**
   * Test OOME: with the -XX:+HeapDumpOnOutOfMemoryError option the memory
   * dump should be created by the JVM.
   * @throws Exception
   */
  public void testOOME() throws Exception {
    final List<Object> list = new LinkedList<Object>();
    while (true) {
      Object placeHolder = new HashMap<Object, Object>();
      list.add(placeHolder);
    }
  }

Typical output will look like the following:
-
Running org.apache.hadoop.fs.permission.TestStickyBit
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid14838.hprof ...
Heap dump file created [1515333529 bytes in 21.641 secs]
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 85.68 sec  
FAILURE!
-

The heap dump (a huge binary file) will be named 
java_pid<process_pid>.hprof, and will be created by the JVM in the current 
directory of the test run process (e.g. 
.../hadoop-common/hadoop-hdfs-project/hadoop-hdfs/ in my case). The heap dump 
can be opened and investigated with almost any profiler, including NetBeans.

Note, however, that the -XX options ( 
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html ) 
are HotSpot options; they may not work on other JVM implementations. 
But, afaik, most of the testing is done on Oracle's JVMs 1.6.0_XX, so the 
option will work and will be helpful in case of OOME problems. 
As experience shows, if there are no OOMEs, this option does not appear to 
introduce any problems or performance penalties.

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.



[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466986#comment-13466986
 ] 

Ivan A. Veselovsky commented on HADOOP-8851:


If you're using Jenkins (Hudson) builds, it's also a good idea to save 
**/*.hprof artifacts to protect the memory dumps from being deleted upon 
workspace cleanup. 
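Outside of Jenkins' built-in artifact archiving, the same can be done with a small sketch (the paths and variable names are illustrative):

```shell
# Copy any heap dumps out of the workspace before it is cleaned.
WORKSPACE=${WORKSPACE:-.}
DEST=/tmp/hprof-archive
mkdir -p "$DEST"
find "$WORKSPACE" -name '*.hprof' -exec cp {} "$DEST" \;
ls "$DEST"
```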

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.



[jira] [Updated] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8755:


Fix Version/s: 0.23.5

I just pulled this into branch-0.23

 Print thread dump when tests fail due to timeout 
 -

 Key: HADOOP-8755
 URL: https://issues.apache.org/jira/browse/HADOOP-8755
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-8755.patch, HADOOP-8755.patch, HADOOP-8755.patch, 
 HDFS-3762-branch-0.23.patch, HDFS-3762.patch, HDFS-3762.patch, 
 HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch


 When a test fails due to a timeout it's often not clear what the root cause 
 is. See HDFS-3364 as an example.
 We can print a dump of all threads in this case; this may help in finding 
 the cause.



[jira] [Updated] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8851:
---

  Component/s: test
 Target Version/s: 2.0.3-alpha
Affects Version/s: 2.0.1-alpha

Thanks a lot for the info, Ivan, and for testing out the patch. I've also just 
tested out this option with OpenJDK and confirmed that it works similarly to 
the Sun JDK.

I'm going to go ahead and commit this shortly.

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.



[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-10-01 Thread Tony Kew (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467000#comment-13467000
 ] 

Tony Kew commented on HADOOP-8568:
--

Any objection to listing this issue as fixed?

Tony

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}



[jira] [Updated] (HADOOP-8819) Should use && instead of & in a few places in FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8819:


Fix Version/s: 0.23.5

I just pulled this into branch-0.23

 Should use && instead of & in a few places in 
 FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs
 ---

 Key: HADOOP-8819
 URL: https://issues.apache.org/jira/browse/HADOOP-8819
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 1.2.0, 3.0.0, 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-8819.branch-1.patch, HADOOP-8819.patch


 Should use && instead of & in a few places in 
 FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs.



[jira] [Updated] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8851:
---

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Ivan.

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.



[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-01 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467005#comment-13467005
 ] 

Jianbin Wei commented on HADOOP-8776:
-

In hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh the platform 
check is done through

{quote}
MAC_OSX=false
case `uname` in
Darwin*) MAC_OSX=true;;
esac
if $MAC_OSX; then
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
fi
{quote}

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 



[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467012#comment-13467012
 ] 

Hudson commented on HADOOP-8851:


Integrated in Hadoop-Hdfs-trunk-Commit #2861 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2861/])
HADOOP-8851. Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked 
tests. Contributed by Ivan A. Veselovsky. (Revision 1392466)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392466
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.



[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467014#comment-13467014
 ] 

Hudson commented on HADOOP-8851:


Integrated in Hadoop-Common-trunk-Commit #2798 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2798/])
HADOOP-8851. Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked 
tests. Contributed by Ivan A. Veselovsky. (Revision 1392466)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392466
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.



[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-10-01 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467037#comment-13467037
 ] 

Andy Isaacson commented on HADOOP-8568:
---

Tony,

no need to delete old patches when uploading a new one; it can be useful to 
reviewers to have the old patches available, to use {{interdiff}} or similar 
tools, or simply to review the progression of a change.  I tend to use a new 
name (hdfs-123.patch, hdfs-123-1.patch, etc.) for each upload, but that's just 
for my convenience, since Jira keeps track of different uploads with the same 
name just fine.

{code}
+  byte rawaddr[] = hostIp.getAddress();
...
+  String[] parts = hostaddr.split("\\.");
+  reverseIP = parts[3] + "." + parts[2] + "." + parts[1] + "."
++ parts[0] + ".in-addr.arpa";
{code}
I think the {{byte[]}} version of this code, used for IPv6, is significantly 
superior to the regex based string version used for IPv4.  Could you rewrite 
the IPv4 section of the code using {{getAddress()}}?  This may also result in 
greater code sharing between the two branches.
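A {{byte[]}}-based IPv4 version along the lines suggested might look like the following (a sketch, not the actual patch; the method name {{reverseName}} is made up here):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseName {
    // Builds the in-addr.arpa name for an IPv4 address from its raw
    // bytes, avoiding the string split on "." entirely.
    static String reverseName(InetAddress hostIp) {
        byte[] raw = hostIp.getAddress();
        StringBuilder sb = new StringBuilder();
        // in-addr.arpa lists the octets in reverse order
        for (int i = raw.length - 1; i >= 0; i--) {
            sb.append(raw[i] & 0xff).append('.');
        }
        return sb.append("in-addr.arpa").toString();
    }

    public static void main(String[] args) throws UnknownHostException {
        // getByName on a literal address does no DNS lookup
        System.out.println(reverseName(InetAddress.getByName("1.2.3.4")));
        // prints 4.3.2.1.in-addr.arpa
    }
}
```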


 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8789:


Fix Version/s: 0.23.5

I pulled this into branch-0.23

 Tests setLevel(Level.OFF) should be Level.ERROR
 ---

 Key: HADOOP-8789
 URL: https://issues.apache.org/jira/browse/HADOOP-8789
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: hdfs-3911.txt


 Multiple tests have code like
 {code}
 ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
 {code}
 Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
 and makes debugging other test failures, especially intermittent test 
 failures like HDFS-3664, difficult.  Instead the code should use 
 {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467091#comment-13467091
 ] 

Hudson commented on HADOOP-8851:


Integrated in Hadoop-Mapreduce-trunk-Commit #2820 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2820/])
HADOOP-8851. Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked 
tests. Contributed by Ivan A. Veselovsky. (Revision 1392466)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392466
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8851-vs-trunk.patch


 This can help reveal the cause of an issue in the event of an OOME in tests.
 Suggested patch attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-10-01 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467092#comment-13467092
 ] 

Andy Isaacson commented on HADOOP-8568:
---

{code}
+// rawaddr bytes are of type unsigned int - this converts the given
+// byte to a (signed) int rawintaddr
+int rawintaddr = rawaddr[i] & 0xff;
+// format rawintaddr into a hex String
+String addressbyte = String.format("%02x", rawintaddr);
{code}
This can be more simply and clearly written as
{code}
String addressbyte = String.format("%02x", rawaddr[i] & 0xff);
{code}
It's actually sufficient to say {{format("%02x", rawaddr[i])}} but I find that 
a little too magic; making the 8-bit truncation explicit seems to more clearly 
express the intent to me.  (The mask-free version only gives the correct 
two-nibble output because of the overspecified {{FormatSpecifier#print(byte, 
Locale)}} implementation in {{java.util.Formatter}}, and breaks if you change 
to a local int variable for example.)
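The distinction can be seen in a few lines (a standalone illustration of the point above, not code from the patch):

```java
public class HexByte {
    public static void main(String[] args) {
        byte b = (byte) 0xfe; // -2 as a signed byte
        // With the explicit mask: always the low 8 bits, regardless of type
        System.out.println(String.format("%02x", b & 0xff)); // fe
        // Without the mask, the Formatter's byte overload happens to
        // print the unsigned two-nibble value too...
        System.out.println(String.format("%02x", b));        // fe
        // ...but widening to int first gives the sign-extended value
        int i = b;
        System.out.println(String.format("%02x", i));        // fffffffe
    }
}
```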

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8775) MR2 distcp permits non-positive value to -bandwidth option which causes job never to complete

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8775:


Fix Version/s: 0.23.5

I pulled this into branch-0.23

 MR2 distcp permits non-positive value to -bandwidth option which causes job 
 never to complete
 -

 Key: HADOOP-8775
 URL: https://issues.apache.org/jira/browse/HADOOP-8775
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.2-alpha, 0.23.5

 Attachments: diff2.txt, HADOOP-8775.patch


 The likelihood that someone would want to enter a non-positive value for 
 -bandwidth seems really low. However, the job would never complete if a 
 non-positive value was specified. It'd just get stuck at map 100%. Luckily, a 
 positive value would always lead to the job completing.
 {noformat}
 bash-4.1$ hadoop distcp -bandwidth 0 
 hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir 
 hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir58
 hadoop distcp -bandwidth 0 
 hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir 
 hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir58
 12/05/23 15:53:01 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsiz\
 e', sourceFileListing=null, 
 sourcePaths=[hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir], 
 targetPath=hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir58}
 12/05/23 15:53:02 WARN conf.Configuration: io.sort.mb is deprecated. Instead, 
 use mapreduce.task.io.sort.mb
 12/05/23 15:53:02 WARN conf.Configuration: io.sort.factor is deprecated. 
 Instead, use mapreduce.task.io.sort.factor
 12/05/23 15:53:02 INFO util.NativeCodeLoader: Loaded the native-hadoop library
 12/05/23 15:53:03 INFO mapreduce.JobSubmitter: number of splits:3
 12/05/23 15:53:04 WARN conf.Configuration: mapred.jar is deprecated. Instead, 
 use mapreduce.job.jar
 12/05/23 15:53:04 WARN conf.Configuration: 
 mapred.map.tasks.speculative.execution is deprecated. Instead, use 
 mapreduce.map.speculative
 12/05/23 15:53:04 WARN conf.Configuration: mapred.reduce.tasks is deprecated. 
 Instead, use mapreduce.job.reduces
 12/05/23 15:53:04 WARN conf.Configuration: mapred.mapoutput.value.class is 
 deprecated. Instead, use mapreduce.map.output.value.class
 12/05/23 15:53:04 WARN conf.Configuration: mapreduce.map.class is deprecated. 
 Instead, use mapreduce.job.map.class
 12/05/23 15:53:04 WARN conf.Configuration: mapred.job.name is deprecated. 
 Instead, use mapreduce.job.name
 12/05/23 15:53:04 WARN conf.Configuration: mapreduce.inputformat.class is 
 deprecated. Instead, use mapreduce.job.inputformat.class
 12/05/23 15:53:04 WARN conf.Configuration: mapred.output.dir is deprecated. 
 Instead, use mapreduce.output.fileoutputformat.outputdir
 12/05/23 15:53:04 WARN conf.Configuration: mapreduce.outputformat.class is 
 deprecated. Instead, use mapreduce.job.outputformat.class
 12/05/23 15:53:04 WARN conf.Configuration: mapred.map.tasks is deprecated. 
 Instead, use mapreduce.job.maps
 12/05/23 15:53:04 WARN conf.Configuration: mapred.mapoutput.key.class is 
 deprecated. Instead, use mapreduce.map.output.key.class
 12/05/23 15:53:04 WARN conf.Configuration: mapred.working.dir is deprecated. 
 Instead, use mapreduce.job.working.dir
 12/05/23 15:53:04 INFO mapred.ResourceMgrDelegate: Submitted application 
 application_1337808305464_0014 to ResourceManager at 
 c1204.hal.cloudera.com/172.29.98.195:8040
 12/05/23 15:53:04 INFO mapreduce.Job: The url to track the job: 
 http://auto0:8088/proxy/application_1337808305464_0014/
 12/05/23 15:53:04 INFO tools.DistCp: DistCp job-id: job_1337808305464_0014
 12/05/23 15:53:04 INFO mapreduce.Job: Running job: job_1337808305464_0014
 12/05/23 15:53:09 INFO mapreduce.Job: Job job_1337808305464_0014 running in 
 uber mode : false
 12/05/23 15:53:09 INFO mapreduce.Job:  map 0% reduce 0%
 12/05/23 15:53:14 INFO mapreduce.Job:  map 33% reduce 0%
 12/05/23 15:53:19 INFO mapreduce.Job:  map 100% reduce 0%
 {noformat}
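The fix amounts to rejecting non-positive values at option-parsing time. A hypothetical sketch of that check (the real DistCp option handling differs; {{parseBandwidth}} is a made-up name):

```java
public class BandwidthOption {
    // Parses a -bandwidth argument, rejecting values that would make
    // the throttled copy loop never make progress.
    static int parseBandwidth(String value) {
        int mb = Integer.parseInt(value);
        if (mb <= 0) {
            throw new IllegalArgumentException(
                "Bandwidth (MB/s) must be positive: " + value);
        }
        return mb;
    }

    public static void main(String[] args) {
        System.out.println(parseBandwidth("10"));
        try {
            parseBandwidth("0");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```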

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-10-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8851:


Fix Version/s: 0.23.5

I pulled this into branch-0.23 too

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-8851-vs-trunk.patch


 This can help to reveal the cause of issue in the event of OOME in tests.
 Suggested patch attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8845) When looking for parent paths info, globStatus must filter out non-directory elements to avoid an AccessControlException

2012-10-01 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467110#comment-13467110
 ] 

Harsh J commented on HADOOP-8845:
-

bq. Since * can match the empty string, in other contexts it could be 
appropriate to return /tmp/testdir/testfile for /tmp/testdir/*/testfile.

Nice catch. I will add a test for this to see if we aren't handling it already.

bq. Ie is there a place where we know we should just be checking directory path 
elements? The comment in globStatusInternal (// list parent directories and 
then glob the results) by one of the cases indicates the intent, but it's 
valid to pass both files and directories to listStatus.

The parts I've changed try to fetch parents, which can't mean anything but 
directories AFAICT.

 When looking for parent paths info, globStatus must filter out non-directory 
 elements to avoid an AccessControlException
 

 Key: HADOOP-8845
 URL: https://issues.apache.org/jira/browse/HADOOP-8845
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
  Labels: glob
 Attachments: HADOOP-8845.patch, HADOOP-8845.patch, HADOOP-8845.patch


 A brief description from my colleague Stephen Fritz who helped discover it:
 {code}
 [root@node1 ~]# su - hdfs
 -bash-4.1$ echo "My Test String" > testfile -- just a text file, for testing 
 below
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir -- create a directory
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir/1 -- create a subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/1/testfile -- put the test 
 file in the subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/testfile -- put the test 
 file in the directory
 -bash-4.1$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 All files are where we expect them...OK, let's try reading
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/*/testfile
 My Test String -- success!  
 Note that we used an '*' in the cat command, and it correctly found the 
 subdirectory '/tmp/testdir/1' and ignored the regular file 
 '/tmp/testdir/testfile'
 -bash-4.1$ exit
 logout
 [root@node1 ~]# su - testuser -- lets try it as a different user:
 [testuser@node1 ~]$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- so far so good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/*/testfile
 cat: org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=EXECUTE, 
 inode=/tmp/testdir/testfile:hdfs:hadoop:-rw-r--r--
 {code}
 Essentially, we hit an ACE with access=EXECUTE on file /tmp/testdir/testfile 
 because we tried to access /tmp/testdir/testfile/testfile as a path. This 
 shouldn't happen, as testfile is a file, not a parent path to be looked up.
 {code}
 2012-09-25 07:24:27,406 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 2 on 8020, call getFileInfo(/tmp/testdir/testfile/testfile)
 {code}
 Surprisingly, the superuser avoids hitting the error, as a result of 
 bypassing permissions; whether it is fine to leave it like that can be 
 looked at in another JIRA. This JIRA targets a client-side fix to avoid 
 such /path/file/dir or /path/file/file lookups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8694) Create true symbolic links on Windows

2012-10-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467151#comment-13467151
 ] 

Suresh Srinivas commented on HADOOP-8694:
-

+1 for the patch. I do not see any DOS mode characters in the patch.

 Create true symbolic links on Windows
 -

 Key: HADOOP-8694
 URL: https://issues.apache.org/jira/browse/HADOOP-8694
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-8694-branch-1-win-2.patch, 
 HADOOP-8694-branch-1-win.patch, secpol.png


 In branch-1-win, we currently copy files for symbolic links in Hadoop on 
 Windows. We have talked to [~davidlao] who made the original fix, and did 
 some investigation on Windows. Windows began to support symbolic links 
 (symlinks) with Vista/Server 2008. The original reason to copy files instead 
 of creating actual symlinks is that only Administrators have the privilege to 
 create symlinks on Windows _by default_. After talking to NTFS folks, we 
 learned that the reason is mostly security, and this default behavior may 
 not change in the near future. The behavior can, however, be changed via the 
 Local Security Policy management console, i.e. secpol.msc, under Security 
 Settings\Local Policies\User Rights Assignment\Create symbolic links.
  
 In Hadoop, symlinks are mostly used for DistributedCache and attempt logs. 
 We felt these usages are important enough for us to provide true symlink 
 support; users need to have the symlink creation privilege enabled on 
 Windows to use Hadoop.
 This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8694) Create true symbolic links on Windows

2012-10-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8694:


   Resolution: Fixed
Fix Version/s: 1-win
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Chuan.

 Create true symbolic links on Windows
 -

 Key: HADOOP-8694
 URL: https://issues.apache.org/jira/browse/HADOOP-8694
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 1-win

 Attachments: HADOOP-8694-branch-1-win-2.patch, 
 HADOOP-8694-branch-1-win.patch, secpol.png


 In branch-1-win, we currently copy files for symbolic links in Hadoop on 
 Windows. We have talked to [~davidlao] who made the original fix, and did 
 some investigation on Windows. Windows began to support symbolic links 
 (symlinks) with Vista/Server 2008. The original reason to copy files instead 
 of creating actual symlinks is that only Administrators have the privilege to 
 create symlinks on Windows _by default_. After talking to NTFS folks, we 
 learned that the reason is mostly security, and this default behavior may 
 not change in the near future. The behavior can, however, be changed via the 
 Local Security Policy management console, i.e. secpol.msc, under Security 
 Settings\Local Policies\User Rights Assignment\Create symbolic links.
  
 In Hadoop, symlinks are mostly used for DistributedCache and attempt logs. 
 We felt these usages are important enough for us to provide true symlink 
 support; users need to have the symlink creation privilege enabled on 
 Windows to use Hadoop.
 This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8836) UGI should throw exception in case winutils.exe cannot be loaded

2012-10-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467157#comment-13467157
 ] 

Suresh Srinivas commented on HADOOP-8836:
-

+1 for the patch.

 UGI should throw exception in case winutils.exe cannot be loaded
 

 Key: HADOOP-8836
 URL: https://issues.apache.org/jira/browse/HADOOP-8836
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8836.branch-1-win.1.patch


 In upstream projects like Hive it's hard to see why getting user group 
 information failed, because the API swallows the exception. One such case 
 is when winutils is not present where Hadoop expects it to be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8845) When looking for parent paths info, globStatus must filter out non-directory elements to avoid an AccessControlException

2012-10-01 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467162#comment-13467162
 ] 

Robert Joseph Evans commented on HADOOP-8845:
-

I would argue that even if there is a specific need for non-standard globbing 
we don't want to support it.  POSIX compliance is what most people would expect 
from HDFS; when we deviate from it, users will get confused and angry, 
especially if rm deletes more files than they want.

 When looking for parent paths info, globStatus must filter out non-directory 
 elements to avoid an AccessControlException
 

 Key: HADOOP-8845
 URL: https://issues.apache.org/jira/browse/HADOOP-8845
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
  Labels: glob
 Attachments: HADOOP-8845.patch, HADOOP-8845.patch, HADOOP-8845.patch


 A brief description from my colleague Stephen Fritz who helped discover it:
 {code}
 [root@node1 ~]# su - hdfs
 -bash-4.1$ echo "My Test String" > testfile -- just a text file, for testing 
 below
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir -- create a directory
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir/1 -- create a subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/1/testfile -- put the test 
 file in the subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/testfile -- put the test 
 file in the directory
 -bash-4.1$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 All files are where we expect them...OK, let's try reading
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/*/testfile
 My Test String -- success!  
 Note that we used an '*' in the cat command, and it correctly found the 
 subdirectory '/tmp/testdir/1' and ignored the regular file 
 '/tmp/testdir/testfile'
 -bash-4.1$ exit
 logout
 [root@node1 ~]# su - testuser -- lets try it as a different user:
 [testuser@node1 ~]$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- so far so good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/*/testfile
 cat: org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=EXECUTE, 
 inode=/tmp/testdir/testfile:hdfs:hadoop:-rw-r--r--
 {code}
 Essentially, we hit an ACE with access=EXECUTE on file /tmp/testdir/testfile 
 because we tried to access /tmp/testdir/testfile/testfile as a path. This 
 shouldn't happen, as testfile is a file, not a parent path to be looked up.
 {code}
 2012-09-25 07:24:27,406 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 2 on 8020, call getFileInfo(/tmp/testdir/testfile/testfile)
 {code}
 Surprisingly, the superuser avoids hitting the error, as a result of 
 bypassing permissions; whether it is fine to leave it like that can be 
 looked at in another JIRA. This JIRA targets a client-side fix to avoid 
 such /path/file/dir or /path/file/file lookups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-10-01 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467164#comment-13467164
 ] 

Arun C Murthy commented on HADOOP-8738:
---

For now I propose we revert this to unblock 2.0.2-alpha.

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-10-01 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467165#comment-13467165
 ] 

Arun C Murthy commented on HADOOP-8738:
---

Tucu?

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-10-01 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467168#comment-13467168
 ] 

Alejandro Abdelnur commented on HADOOP-8738:


Agree, we should put back the junit and mockito JARs to be able to run things 
like TestDFSIO until we toolize those.

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8310) FileContext#checkPath should handle URIs with no port

2012-10-01 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8310:
---

Fix Version/s: 0.23.5

Thanks Aaron, I pulled this into branch-0.23.

 FileContext#checkPath should handle URIs with no port
 -

 Key: HADOOP-8310
 URL: https://issues.apache.org/jira/browse/HADOOP-8310
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0-alpha, 0.23.5

 Attachments: HADOOP-8310.patch


 AbstractFileSystem#checkPath is used to verify that a given path is for the 
 same file system as represented by the AbstractFileSystem instance.
 The original intent of the code was to allow for no port to be provided in 
 the checked path, if the default port was being used by the 
 AbstractFileSystem instance. However, before performing port handling, 
 AFS#checkPath compares the full URI authorities for equality. Since the URI 
 authority includes the port, the port handling code is never reached, and 
 thus valid paths may be erroneously considered invalid.
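In outline, the intended comparison looks like this (a simplified sketch, not the actual AbstractFileSystem code; the method name {{matches}} is made up here):

```java
import java.net.URI;

public class CheckPathSketch {
    // Compares a path URI against the file system's URI, treating a
    // missing port in either URI as "the default port". Comparing raw
    // authority strings (host:port vs. host) would wrongly reject a
    // path that omits the port.
    static boolean matches(URI fsUri, URI pathUri, int defaultPort) {
        if (!fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme())) {
            return false;
        }
        if (!fsUri.getHost().equalsIgnoreCase(pathUri.getHost())) {
            return false;
        }
        int fsPort = fsUri.getPort() == -1 ? defaultPort : fsUri.getPort();
        int pathPort = pathUri.getPort() == -1 ? defaultPort : pathUri.getPort();
        return fsPort == pathPort;
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://nn.example.com:8020");
        URI noPort = URI.create("hdfs://nn.example.com/some/file");
        // A port-less path on the default port should be accepted
        System.out.println(matches(fs, noPort, 8020)); // true
    }
}
```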

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-10-01 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467224#comment-13467224
 ] 

Daryn Sharp commented on HADOOP-8568:
-

I'm still puzzled why hbase is using a class marked:
{noformat}@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Unstable{noformat}

Why isn't hbase using {{NetUtils}}?


 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}
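A hedged sketch of the guard the report implies; the class and method names here are hypothetical, not the actual DNS.java code. The existing code splits the address on dots and indexes part [3], which is exactly what throws ArrayIndexOutOfBoundsException for an IPv6 literal; checking the address shape first fails fast with a clear message instead:

```java
public class ReverseDnsSketch {
  // Builds the PTR query name for an IPv4 dotted-quad address: the octets
  // reversed, under in-addr.arpa. IPv6 literals (which contain colons) are
  // rejected explicitly rather than failing with an array-index exception.
  static String reverseDnsQueryName(String hostIp) {
    if (hostIp.contains(":")) {
      throw new IllegalArgumentException("IPv6 address not supported: " + hostIp);
    }
    String[] parts = hostIp.split("\\.");
    if (parts.length != 4) {
      throw new IllegalArgumentException("Not a dotted-quad IPv4 address: " + hostIp);
    }
    return parts[3] + "." + parts[2] + "." + parts[1] + "." + parts[0] + ".in-addr.arpa";
  }

  public static void main(String[] args) {
    System.out.println(reverseDnsQueryName("10.1.2.3")); // 3.2.1.10.in-addr.arpa
  }
}
```

A full fix would instead build the nibble-reversed ip6.arpa name for IPv6, but even the guard above turns the opaque index error into an actionable one.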

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8845) When looking for parent paths info, globStatus must filter out non-directory elements to avoid an AccessControlException

2012-10-01 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467229#comment-13467229
 ] 

Andy Isaacson commented on HADOOP-8845:
---

(sorry for the markup messup in my last comment.)

The currently pending patch specifically checks in {{pTestClosure6}} that the 
case I mentioned is handled correctly, so I think we're all on the same page. :)

Code-wise, one minor comment:
{code}
+  public boolean apply(FileStatus input) {
+    return input.isDirectory() ? true : false;
+  }
{code}

This is an anti-pattern; {{foo() ? true : false}} is the same as {{foo()}}.

Other than that, LGTM on the code level. I haven't carefully read the 
GlobFilter implementation to see if there's a cleaner/simpler way to implement 
this bugfix.
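Both review points in miniature, using java.io.File as a stand-in for Hadoop's FileStatus (the class and method names below are hypothetical, not the patch's code): when expanding a glob component, only directories qualify as parents for the next component, and the directory predicate returns its boolean directly rather than through a conditional expression.

```java
import java.io.File;
import java.io.IOException;

public class DirOnlyFilterSketch {
  // Keep only entries that can act as parents for the next path component.
  // Note: File::isDirectory already yields the boolean we want -- wrapping it
  // as "isDirectory() ? true : false" would be the anti-pattern noted above.
  static File[] candidateParents(File dir) {
    return dir.listFiles(File::isDirectory);
  }

  public static void main(String[] args) throws IOException {
    File root = new File(System.getProperty("java.io.tmpdir"), "globsketch");
    new File(root, "1").mkdirs();               // a subdirectory
    new File(root, "testfile").createNewFile(); // a plain file
    for (File f : candidateParents(root)) {
      System.out.println(f.getName());          // only "1" is printed
    }
  }
}
```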

 When looking for parent paths info, globStatus must filter out non-directory 
 elements to avoid an AccessControlException
 

 Key: HADOOP-8845
 URL: https://issues.apache.org/jira/browse/HADOOP-8845
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
  Labels: glob
 Attachments: HADOOP-8845.patch, HADOOP-8845.patch, HADOOP-8845.patch


 A brief description from my colleague Stephen Fritz who helped discover it:
 {code}
 [root@node1 ~]# su - hdfs
 -bash-4.1$ echo "My Test String" > testfile -- just a text file, for testing below
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir -- create a directory
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir/1 -- create a subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/1/testfile -- put the test 
 file in the subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/testfile -- put the test 
 file in the directory
 -bash-4.1$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 All files are where we expect them...OK, let's try reading
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/*/testfile
 My Test String -- success!  
 Note that we used an '*' in the cat command, and it correctly found the 
 subdirectory '/tmp/testdir/1' and ignored the regular file 
 '/tmp/testdir/testfile'
 -bash-4.1$ exit
 logout
 [root@node1 ~]# su - testuser -- lets try it as a different user:
 [testuser@node1 ~]$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- so far so good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/*/testfile
 cat: org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=EXECUTE, 
 inode=/tmp/testdir/testfile:hdfs:hadoop:-rw-r--r--
 {code}
 Essentially, we hit an ACE with access=EXECUTE on file /tmp/testdir/testfile 
 because we tried to access /tmp/testdir/testfile/testfile as a path. This 
 shouldn't happen, as testfile is a file, not a parent directory to be looked 
 up under.
 {code}
 2012-09-25 07:24:27,406 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 2 on 8020, call getFileInfo(/tmp/testdir/testfile/testfile)
 {code}
 Surprisingly, the superuser avoids hitting the error as a result of 
 bypassing permissions; whether it is fine to leave that behavior as-is can 
 be examined in another JIRA.
 This JIRA targets a client-side fix to avoid such /path/file/dir or 
 /path/file/file lookups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467281#comment-13467281
 ] 

Hudson commented on HADOOP-8738:


Integrated in Hadoop-Hdfs-trunk-Commit #2862 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2862/])
HADOOP-8738. Reverted since it broke MR based system tests. (Revision 
1392675)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392675
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This causes the junit JAR to show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8804:
---

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

Marking patch available so that Jenkins runs.

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha, 1.0.0
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg "0 Hadoop Map/Reduce 
 Administration" and "NameNode '0.0.0.0:18021' (active)". Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467327#comment-13467327
 ] 

Hudson commented on HADOOP-8738:


Integrated in Hadoop-Mapreduce-trunk-Commit #2821 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2821/])
HADOOP-8738. Reverted since it broke MR based system tests. (Revision 
1392675)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392675
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This causes the junit JAR to show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8608) Add Configuration API for parsing time durations

2012-10-01 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8608:


Description: 
Hadoop has a lot of configurations which specify durations or intervals of 
time. Unfortunately these different configurations have little consistency in 
units - eg some are in milliseconds, some in seconds, and some in minutes. This 
makes it difficult for users to configure, since they have to always refer back 
to docs to remember the unit for each property.

The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
which allows the user to specify the units with a postfix. For example, "10ms", 
"10s", "10m", "10h", or even "10d". For backwards-compatibility, if the user 
does not specify a unit, the API can specify the default unit, and warn the 
user that they should specify an explicit unit instead.

  was:
Hadoop has a lot of configurations which specify durations or intervals of 
time. Unfortunately these different configurations have little consistency in 
units - eg some are in milliseconds, some in seconds, and some in minutes. This 
makes it difficult for users to configure, since they have to always refer back 
to docs to remember the unit for each property.

The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
which allows the user to specify the units with a prefix. For example, "10ms", 
"10s", "10m", "10h", or even "10d". For backwards-compatibility, if the user 
does not specify a unit, the API can specify the default unit, and warn the 
user that they should specify an explicit unit instead.


 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon

 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - eg some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 "10ms", "10s", "10m", "10h", or even "10d". For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.
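A minimal sketch of the proposed parsing, assuming a millisecond return value; the class and method names are hypothetical, not the eventual Configuration API. Note the "ms" suffix must be checked before the bare "s" suffix:

```java
import java.util.concurrent.TimeUnit;

public class TimeDurationSketch {
  // Parses a duration like "10s" or "10ms" into milliseconds. A value with
  // no suffix is interpreted in the caller-supplied default unit, matching
  // the backwards-compatibility behavior described above.
  static long parseMillis(String value, TimeUnit defaultUnit) {
    String v = value.trim();
    TimeUnit unit = defaultUnit;
    int numEnd = v.length();
    if (v.endsWith("ms"))     { unit = TimeUnit.MILLISECONDS; numEnd -= 2; }
    else if (v.endsWith("s")) { unit = TimeUnit.SECONDS;      numEnd -= 1; }
    else if (v.endsWith("m")) { unit = TimeUnit.MINUTES;      numEnd -= 1; }
    else if (v.endsWith("h")) { unit = TimeUnit.HOURS;        numEnd -= 1; }
    else if (v.endsWith("d")) { unit = TimeUnit.DAYS;         numEnd -= 1; }
    long n = Long.parseLong(v.substring(0, numEnd));
    return unit.toMillis(n);
  }

  public static void main(String[] args) {
    System.out.println(parseMillis("10s", TimeUnit.MILLISECONDS)); // 10000
    System.out.println(parseMillis("10", TimeUnit.SECONDS));       // 10000
  }
}
```

A real implementation would also emit the deprecation warning for unit-less values; that is omitted here.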

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8616:
---

Target Version/s: 2.0.3-alpha  (was: 2.0.2-alpha)

+1, the latest patch looks good to me. I'm going to commit this momentarily.

 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Attachments: HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}
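For reference, the same mount entry with the trailing slash in place, which avoids the failure (a hypothetical rendering; the element markup is reconstructed):
{noformat}
  <property>
    <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
    <value>hdfs://ha-nn-uri/</value>
  </property>
{noformat}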

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8616:
---

Attachment: HADOOP-8616.patch

Right after I initially committed the patch, I realized that the new test file 
was missing the Apache license header. Here's an updated patch which just adds 
that missing license.

I've reverted my initial commit, and am going to commit this updated patch now, 
since it differs only in the comment.

 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Attachments: HADOOP-8616.patch, HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8616:
---

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Sandy.

 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8616.patch, HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467400#comment-13467400
 ] 

Hudson commented on HADOOP-8616:


Integrated in Hadoop-Common-trunk-Commit #2801 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2801/])
HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by 
Sandy Ryza. (Revision 1392707)
Revert an errant commit of HADOOP-8616. (Revision 1392705)
HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by 
Sandy Ryza. (Revision 1392703)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392707
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsURIs.java

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392705
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392703
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java


 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8616.patch, HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467402#comment-13467402
 ] 

Hudson commented on HADOOP-8616:


Integrated in Hadoop-Hdfs-trunk-Commit #2863 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2863/])
HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by 
Sandy Ryza. (Revision 1392707)
Revert an errant commit of HADOOP-8616. (Revision 1392705)
HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by 
Sandy Ryza. (Revision 1392703)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392707
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsURIs.java

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392705
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392703
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java


 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8616.patch, HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467408#comment-13467408
 ] 

Hudson commented on HADOOP-8616:


Integrated in Hadoop-Mapreduce-trunk-Commit #2823 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2823/])
HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by 
Sandy Ryza. (Revision 1392703)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392703
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java


 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8616.patch, HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467415#comment-13467415
 ] 

Hadoop QA commented on HADOOP-8804:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12546891/HADOOP-8804-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestBlockReport
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1549//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1549//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1549//console

This message is automatically generated.

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg "0 Hadoop Map/Reduce 
 Administration" and "NameNode '0.0.0.0:18021' (active)". Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467424#comment-13467424
 ] 

Hudson commented on HADOOP-8616:


Integrated in Hadoop-Mapreduce-trunk-Commit #2824 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2824/])
HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by 
Sandy Ryza. (Revision 1392707)
Revert an errant commit of HADOOP-8616. (Revision 1392705)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392707
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsURIs.java

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392705
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java


 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8616.patch, HADOOP-8616.patch, HADOOP-8616.patch


 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8868) FileUtil#symlink and FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-01 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8868:
--

 Summary: FileUtil#symlink and FileUtil#chmod should normalize the 
path before calling into shell APIs
 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic


We have seen cases where paths passed in from FileUtil#symlink or 
FileUtil#chmod to Shell APIs can contain both forward and backward slashes on 
Windows.

This causes problems, since some Windows APIs do not work well with mixed 
slashes.
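A hedged sketch of the kind of normalization the issue describes (a hypothetical helper, not the actual FileUtil code): unify both slash styles to a single separator before handing the path to a shell command, so Windows APIs never see mixed separators.

```java
public class SlashNormalizer {
    // Hypothetical normalization: replace both forward and backward
    // slashes with the given separator character. In FileUtil-like code
    // the separator would be File.separatorChar ('\\' on Windows).
    static String normalize(String path, char sep) {
        return path.replace('/', sep).replace('\\', sep);
    }

    public static void main(String[] args) {
        // A mixed-slash path as described in the issue, normalized for Windows.
        System.out.println(normalize("C:/hadoop\\tmp/file.txt", '\\'));
        // prints C:\hadoop\tmp\file.txt
    }
}
```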



[jira] [Updated] (HADOOP-8868) FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-01 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8868:
---

Description: 
We have seen cases where paths passed in from FileUtil#chmod to Shell APIs can 
contain both forward and backward slashes on Windows.

This causes problems, since some Windows APIs do not work well with mixed 
slashes.

  was:
We have seen cases where paths passed in from FileUtil#symlink or 
FileUtil#chmod to Shell APIs can contain both forward and backward slashes on 
Windows.

This causes problems, since some Windows APIs do not work well with mixed 
slashes.

Summary: FileUtil#chmod should normalize the path before calling into 
shell APIs  (was: FileUtil#symlink and FileUtil#chmod should normalize the path 
before calling into shell APIs)

 FileUtil#chmod should normalize the path before calling into shell APIs
 -----------------------------------------------------------------------

 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic

 We have seen cases where paths passed in from FileUtil#chmod to Shell APIs 
 can contain both forward and backward slashes on Windows.
 This causes problems, since some Windows APIs do not work well with mixed 
 slashes.



[jira] [Created] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9

2012-10-01 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8869:
--

 Summary: Links at the bottom of the jobdetails page do not render 
correctly in IE9
 Key: HADOOP-8869
 URL: https://issues.apache.org/jira/browse/HADOOP-8869
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: IE9.png, OtherBrowsers.png

See the attached screenshots IE9.png vs OtherBrowsers.png



[jira] [Work started] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9

2012-10-01 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-8869 started by Ivan Mitic.

 Links at the bottom of the jobdetails page do not render correctly in IE9
 -------------------------------------------------------------------------

 Key: HADOOP-8869
 URL: https://issues.apache.org/jira/browse/HADOOP-8869
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: Fixed_IE_Chrome_FF.png, 
 HADOOP-8869.branch-1-win.ie_links.patch, IE9.png, OtherBrowsers.png


 See the attached screenshots IE9.png vs OtherBrowsers.png



[jira] [Updated] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9

2012-10-01 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8869:
---

Status: Patch Available  (was: In Progress)

 Links at the bottom of the jobdetails page do not render correctly in IE9
 -------------------------------------------------------------------------

 Key: HADOOP-8869
 URL: https://issues.apache.org/jira/browse/HADOOP-8869
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: Fixed_IE_Chrome_FF.png, 
 HADOOP-8869.branch-1-win.ie_links.patch, IE9.png, OtherBrowsers.png


 See the attached screenshots IE9.png vs OtherBrowsers.png



[jira] [Updated] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9

2012-10-01 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8869:
---

Attachment: HADOOP-8869.branch-1-win.ie_links.patch
Fixed_IE_Chrome_FF.png

Attaching the fix.

The problem is caused by invalid markup: a table tag does not have a matching 
end tag. Different browsers seem to have different recovery mechanisms for 
invalid markup, hence the difference.

Also attaching screenshots taken after the fix is applied.

 Links at the bottom of the jobdetails page do not render correctly in IE9
 -------------------------------------------------------------------------

 Key: HADOOP-8869
 URL: https://issues.apache.org/jira/browse/HADOOP-8869
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: Fixed_IE_Chrome_FF.png, 
 HADOOP-8869.branch-1-win.ie_links.patch, IE9.png, OtherBrowsers.png


 See the attached screenshots IE9.png vs OtherBrowsers.png



[jira] [Commented] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9

2012-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467475#comment-13467475
 ] 

Hadoop QA commented on HADOOP-8869:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547338/HADOOP-8869.branch-1-win.ie_links.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1550//console

This message is automatically generated.

 Links at the bottom of the jobdetails page do not render correctly in IE9
 -------------------------------------------------------------------------

 Key: HADOOP-8869
 URL: https://issues.apache.org/jira/browse/HADOOP-8869
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: Fixed_IE_Chrome_FF.png, 
 HADOOP-8869.branch-1-win.ie_links.patch, IE9.png, OtherBrowsers.png


 See the attached screenshots IE9.png vs OtherBrowsers.png
