[jira] [Commented] (HADOOP-8912) adding .gitattributes file to prevent CRLF and LF mismatches for source and text files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474842#comment-13474842
 ] 

Hudson commented on HADOOP-8912:


Integrated in Hadoop-Mapreduce-trunk-Commit #2877 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2877/])
HADOOP-8912. Adding missing CHANGES.txt changes in the previous commit 
1397437. (Revision 1397438)
HADOOP-8912. Add .gitattributes file to prevent CRLF and LF mismatches for 
source and text files. Contributed by Raja Aluri. (Revision 1397437)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397438
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397437
Files : 
* /hadoop/common/trunk/.gitattributes


 adding .gitattributes file to prevent CRLF and LF mismatches for source and 
 text files
 --

 Key: HADOOP-8912
 URL: https://issues.apache.org/jira/browse/HADOOP-8912
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha, 1-win
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 3.0.0, 1-win, 2.0.3-alpha

 Attachments: HADOOP-8912.branch-1-win.patch, 
 HADOOP-8912.branch-2.patch, HADOOP-8912.trunk.patch


 Source code in the hadoop-common repo has a bunch of files with CRLF line 
 endings. With more development happening on Windows, there is a higher 
 chance of more CRLF files getting into the source tree.
 I would like to avoid that by creating a .gitattributes file, which 
 prevents text files in the sources from having CRLF endings.
 I am adding a couple of links here to give more background on what exactly 
 the issue is and how we are trying to fix it:
 # http://git-scm.com/docs/gitattributes#_checking_out_and_checking_in
 # http://stackoverflow.com/questions/170961/whats-the-best-crlf-handling-strategy-with-git
  
 This issue is for adding a .gitattributes file to the tree.
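 A minimal .gitattributes along the lines described above might look like 
 the sketch below (illustrative patterns only, not the file actually 
 committed for this issue):

```
# Normalize line endings: text files are stored with LF in the repo.
* text=auto
# Shell scripts must be LF even on Windows checkouts.
*.sh text eol=lf
# Windows batch files genuinely need CRLF.
*.cmd text eol=crlf
# Never touch binary formats.
*.png binary
*.jar binary
```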

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-12 Thread Senthil V Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474857#comment-13474857
 ] 

Senthil V Kumar commented on HADOOP-8804:
-

Hi Eli,

Thanks for committing this patch. I have also included a patch for branch-1.1. 
Could you look into that one as well, since it fixes the jobtracker too?

Regards
Senthil

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Fix For: 2.0.3-alpha

 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (i.e. the default config), the NN, 
 JT (and probably RM etc.) Web UIs are a little goofy, e.g. 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.
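 The intended improvement can be sketched in shell (a hypothetical 
 illustration with made-up variable names; the real fix lives in the web UI 
 code): when the configured bind host is the wildcard, substitute the local 
 hostname for display.

```shell
# Sketch: replace the wildcard bind address with the local hostname
# for display purposes only (illustrative variable names).
bind_host="0.0.0.0"
port=18021
if [ "$bind_host" = "0.0.0.0" ]; then
  display_host="$(hostname)"
else
  display_host="$bind_host"
fi
echo "NameNode '${display_host}:${port}' (active)"
```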

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8920) Add more javadoc to metrics2 related classes

2012-10-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474882#comment-13474882
 ] 

Steve Loughran commented on HADOOP-8920:


+1 -makes me happy too

 Add more javadoc to metrics2 related classes
 

 Key: HADOOP-8920
 URL: https://issues.apache.org/jira/browse/HADOOP-8920
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Suresh Srinivas
 Attachments: HADOOP-8920.patch


 Metrics2-related code is very sparsely documented. Here is a patch that 
 adds javadoc that should make some of the code easier to browse and 
 understand.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474884#comment-13474884
 ] 

Steve Loughran commented on HADOOP-8916:


HADOOP-8166 lets forrest run without java5; I'd turned forrest off entirely. I 
hadn't seen HADOOP-8399; it looks more like the duplicate. I'll compare them.

 make it possible to build hadoop tarballs without java5+ forrest
 

 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial
 Fix For: 1.1.1

 Attachments: HADOOP-8916.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Although you can build hadoop binaries without java5 and Forrest, you can't 
 build the tarballs, as {{tar}} depends on {{packaged}}, which depends on 
 {{docs}} and {{cn-docs}}, which both depend on the forrest/java5 checks. 
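 The dependency chain described above can be sketched in Ant terms as 
 follows (target bodies elided; the {{docs.skip}} property is a hypothetical 
 illustration, not the committed fix):

```xml
<!-- Sketch of the chain: tar cannot run without the docs targets. -->
<target name="tar" depends="packaged"/>
<target name="packaged" depends="docs, cn-docs"/>
<!-- docs and cn-docs both run the forrest/java5 checks; making them
     skippable, e.g. unless="docs.skip", would decouple the tarball
     build from forrest. -->
```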

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)
Gopal V created HADOOP-8921:
---

 Summary: ant build.xml in branch-1 ignores -Dcompile.native
 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial


ant -Dcompile.native=false still runs autoconf and libtoolize.

According to the Ant 1.8 manual, a target's if condition is checked only after 
its dependencies have run. The current if condition in the code therefore 
fails to prevent the autoconf/libtool steps from running.

The fix moves the if condition up into the compile-native target and changes 
it to a param substitution instead of evaluating it as a condition.
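The behaviour can be sketched in Ant terms as follows (illustrative target 
names, not the actual branch-1 build.xml):

```xml
<!-- Problem: Ant runs a target's depends= list BEFORE evaluating its
     if= attribute, so create-native-configure executes even when the
     condition fails: -->
<target name="compile-native" depends="create-native-configure"
        if="compile.native">
  <!-- ... native compilation ... -->
</target>

<!-- Sketch of the described fix: gate the dependency itself, and use
     property expansion (Ant 1.8+) so -Dcompile.native=false really
     evaluates to false rather than merely counting as "property set": -->
<target name="create-native-configure" if="${compile.native}">
  <exec executable="autoreconf" failonerror="true"/>
</target>
```

Note that plain if="compile.native" tests only whether the property is set, 
so even -Dcompile.native=false would satisfy it; the ${...} substitution is 
what makes the value itself matter.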

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: HADOOP-8921.patch

Patch to build.xml

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474919#comment-13474919
 ] 

Gopal V commented on HADOOP-8921:
-

After patch

{code}
$ ant compile-native  -Dcompile.native=false
Buildfile: /home/gopal/hw/hadoop-1/build.xml

compile-native:

BUILD SUCCESSFUL
Total time: 0 seconds



$ ant compile-native  -Dcompile.native=true
Buildfile: /home/gopal/hw/hadoop-1/build.xml

compile-native:

create-native-configure:
 [exec] configure.ac:42: warning: AC_COMPILE_IFELSE was called before 
AC_USE_SYSTEM_EXTENSIONS

{code}

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: HADOOP-8921.2.patch

Expand patch to cover the compile-core target as well

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.2.patch, HADOOP-8921.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: (was: HADOOP-8921.patch)

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.2.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: (was: HADOOP-8921.2.patch)

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.3.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: HADOOP-8921.3.patch

Re-update patch to run native builds by default, but disable when 
-Dcompile.native=false is provided.

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.3.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: HADOOP-8921.4.patch

Enable native libraries only after checking for x86_64/x86 availability.

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.3.patch, HADOOP-8921.4.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Attachment: (was: HADOOP-8921.3.patch)

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.4.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-8921:


Release Note: Prevent autotools dependency when native libraries are 
disabled/unavailable  (was: Prevent autotools dependency when native libraries 
are disabled)
  Status: Patch Available  (was: Open)

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.4.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474929#comment-13474929
 ] 

Hadoop QA commented on HADOOP-8921:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548878/HADOOP-8921.4.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1621//console

This message is automatically generated.

 ant build.xml in branch-1 ignores -Dcompile.native
 --

 Key: HADOOP-8921
 URL: https://issues.apache.org/jira/browse/HADOOP-8921
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: Mac OS X 10.7.4
Reporter: Gopal V
Priority: Trivial
  Labels: ant, autoconf, patch
 Attachments: HADOOP-8921.4.patch


 ant -Dcompile.native=false still runs autoconf and libtoolize.
 According to the Ant 1.8 manual, a target's if condition is checked only 
 after its dependencies have run. The current if condition in the code 
 therefore fails to prevent the autoconf/libtool steps from running.
 The fix moves the if condition up into the compile-native target and 
 changes it to a param substitution instead of evaluating it as a condition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8918) dev-support/test-patch.sh is parsing modified files wrong

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474971#comment-13474971
 ] 

Hudson commented on HADOOP-8918:


Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
HADOOP-8918. test-patch.sh is parsing modified files wrong. Contributed by 
Raja Aluri. (Revision 1397411)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397411
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 dev-support/test-patch.sh is parsing modified files wrong
 -

 Key: HADOOP-8918
 URL: https://issues.apache.org/jira/browse/HADOOP-8918
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 3.0.0

 Attachments: HADOOP-8918.trunk.patch


 dev-support/test-patch.sh is parsing the modified files from the patch wrong.
 In test-patch.sh, before running the findbugs command, the script tries to 
 find the modified files with the following command:
 {code}
 $GREP '^+++\|^---' $PATCH_DIR/patch | cut -c '5-' | $GREP -v /dev/null | sort 
 | uniq > $TMP
 {code}
 A patch file can contain removed lines (here, an XML license header with 
 CRLF endings) that also match this pattern and get treated as filenames. 
 If you look at the last line of the text below, it would be parsed as the 
 filename '^M':
 {code}
 -<?xml version="1.0"?>^M
 -<!--^M
 -   Licensed to the Apache Software Foundation (ASF) under one or more^M
 -   contributor license agreements.  See the NOTICE file distributed with^M
 -   this work for additional information regarding copyright ownership.^M
 -   The ASF licenses this file to You under the Apache License, Version 2.0^M
 -   (the "License"); you may not use this file except in compliance with^M
 -   the License.  You may obtain a copy of the License at^M
 -^M
 -   http://www.apache.org/licenses/LICENSE-2.0^M
 -^M
 -   Unless required by applicable law or agreed to in writing, software^M
 -   distributed under the License is distributed on an "AS IS" BASIS,^M
 -   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.^M
 -   See the License for the specific language governing permissions and^M
 -   limitations under the License.^M
 --->^M
 {code}
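 The misparse can be reproduced with a tiny synthetic patch (a sketch; 
 fake.patch and the whitespace-anchored pattern at the end are illustrative, 
 not the committed fix):

```shell
# Reproduce the misparse with a synthetic patch fragment. The deleted
# XML comment terminator "-->" carries a trailing CR, so it appears in
# the patch body as "--->\r" and matches the ^--- file-header pattern.
printf -- '--- a/pom.xml\n+++ b/pom.xml\n--->\r\n' > fake.patch

# The original pipeline: the third output entry is just a bare CR.
grep '^+++\|^---' fake.patch | cut -c 5- | sort | uniq

# One possible guard (illustrative): real diff file headers have a
# space after the +++/--- marker, so anchor on it.
grep '^+++ \|^--- ' fake.patch | cut -c 5- | sort | uniq
```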

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8909) Hadoop Common Maven protoc calls must not depend on external sh script

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474972#comment-13474972
 ] 

Hudson commented on HADOOP-8909:


Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
HADOOP-8909. Hadoop Common Maven protoc calls must not depend on external 
sh script. Contributed by Chris Nauroth. (Revision 1397338)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397338
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml


 Hadoop Common Maven protoc calls must not depend on external sh script
 --

 Key: HADOOP-8909
 URL: https://issues.apache.org/jira/browse/HADOOP-8909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8909.patch, HADOOP-8909.patch, HADOOP-8909.patch


 Currently, several pom.xml files rely on external shell scripting to call 
 protoc.  The sh binary may not be available on all developers' machines (e.g. 
 Windows without Cygwin).  This issue tracks removal of that dependency in 
 Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8912) adding .gitattributes file to prevent CRLF and LF mismatches for source and text files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474974#comment-13474974
 ] 

Hudson commented on HADOOP-8912:


Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
HADOOP-8912. Adding missing CHANGES.txt changes in the previous commit 
1397437. (Revision 1397438)
HADOOP-8912. Add .gitattributes file to prevent CRLF and LF mismatches for 
source and text files. Contributed by Raja Aluri. (Revision 1397437)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397438
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397437
Files : 
* /hadoop/common/trunk/.gitattributes


 adding .gitattributes file to prevent CRLF and LF mismatches for source and 
 text files
 --

 Key: HADOOP-8912
 URL: https://issues.apache.org/jira/browse/HADOOP-8912
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha, 1-win
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 3.0.0, 1-win, 2.0.3-alpha

 Attachments: HADOOP-8912.branch-1-win.patch, 
 HADOOP-8912.branch-2.patch, HADOOP-8912.trunk.patch


 Source code in the hadoop-common repo has a bunch of files with CRLF line 
 endings. With more development happening on Windows, there is a higher 
 chance of more CRLF files getting into the source tree.
 I would like to avoid that by creating a .gitattributes file, which 
 prevents text files in the sources from having CRLF endings.
 I am adding a couple of links here to give more background on what exactly 
 the issue is and how we are trying to fix it:
 # http://git-scm.com/docs/gitattributes#_checking_out_and_checking_in
 # http://stackoverflow.com/questions/170961/whats-the-best-crlf-handling-strategy-with-git
  
 This issue is for adding a .gitattributes file to the tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8911) CRLF characters in source and text files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474977#comment-13474977
 ] 

Hudson commented on HADOOP-8911:


Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
HADOOP-8911. CRLF characters in source and text files. Contributed by Raja 
Aluri. (Revision 1397432)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397432
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/ContextFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsRecord.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/file/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/AbstractMetricsContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/MetricsRecordImpl.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestClientProtocolProviderImpls.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestYarnClientProtocolProvider.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMean.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMedian.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordStandardDeviation.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestWordStats.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/sample/data.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/sample/data2.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/HashingDistributionPolicy.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/IdentityLocalAnalysis.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocLocalAnalysis.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocRecordReader.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocTextAndOp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/RoundRobinDistributionPolicy.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/LuceneIndexFileNameFilter.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/LuceneUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/MixedDeletionPolicy.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/MixedDirectory.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/RAMDirectoryUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/ShardWriter.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/main/UpdateIndex.java

[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-10-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474984#comment-13474984
 ] 

Daryn Sharp commented on HADOOP-8589:
-

Is it possible to just make sure everything is rooted under build.test.data?  
Then we may not need to worry about how deep the home dir is in the directory 
structure?

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Attachments: HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails in case when the test root 
 directory is under the user's home directory, and the user's home dir is 
 deeper than 2 levels from /. This happens with the default 1-node 
 installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one as the /var is mounted 
 already.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.
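The conflict described above (linking /var first makes the later /var/lib link illegal) can be sketched with a toy mount-link tree. This is an illustration of the failure mode only, not the actual InodeTree code:

```python
class LinkConflict(Exception):
    """Raised when a mount link would collide with an existing entry."""

def create_links(paths):
    # Toy model: each link must terminate in a fresh leaf, so after
    # '/var' is linked, '/var/lib' cannot be linked beneath it.
    tree = {}
    for p in paths:
        parts = p.strip('/').split('/')
        node = tree
        for part in parts[:-1]:
            child = node.setdefault(part, {})
            if child is None:  # an ancestor is already a link (a leaf)
                raise LinkConflict('Path /%s already exists; cannot create link %s'
                                   % (part, p))
            node = child
        if parts[-1] in node:
            raise LinkConflict('Path %s already exists' % p)
        node[parts[-1]] = None  # mark the link target as a leaf
```

Sibling links such as /a/b and /a/c coexist fine; it is only the nested pair that collides, which is exactly what happens when the home dir is deep enough to need both /var and /var/lib.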

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7105) [IPC] Improvement of lock mechanism in Listener and Reader thread

2012-10-12 Thread jinglong.liujl (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474994#comment-13474994
 ] 

jinglong.liujl commented on HADOOP-7105:


There are duplicate synchronized locks in registerChannel(): one is the 
Reader's own registerChannel() lock, and the other is inside 
SocketChannel.register() in the JDK. It is safe to remove one of them.
Following Todd's suggestion, we have added a non-blocking queue in the RPC 
server so that accept() is not blocked by the readers. That patch will be 
released under a separate issue.
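The queue-based handoff mentioned above can be sketched as follows. This is an illustration of the pattern, not the actual Hadoop IPC code; the class and method names are hypothetical:

```python
import collections

class Reader:
    """Sketch: the Listener hands accepted channels to a Reader through
    a queue, so accept() never has to wait on the Reader's lock."""
    def __init__(self):
        self.pending = collections.deque()  # non-blocking handoff queue
        self.registered = []                # stands in for the selector

    def add_connection(self, channel):
        # Called from the Listener thread; deque.append is thread-safe
        # and returns immediately instead of taking the Reader's lock.
        self.pending.append(channel)

    def run_once(self):
        # Called on the Reader thread: drain the queue and register
        # each pending channel with this Reader's selector.
        while self.pending:
            self.registered.append(self.pending.popleft())
```

The Listener only ever appends; the Reader drains the queue on its own thread, so the two never contend for one synchronized lock.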

 [IPC] Improvement of lock mechanism in Listener and Reader thread
 -

 Key: HADOOP-7105
 URL: https://issues.apache.org/jira/browse/HADOOP-7105
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.21.0
Reporter: jinglong.liujl
 Attachments: improveListenerLock2.patch, improveListenerLock.patch


 Under heavy concurrent client access, the single-threaded Listener becomes a 
 bottleneck: many clients cannot be served and hit connection timeouts.
 To improve the Listener's capacity, we made two modifications:
 1. Tune ipc.server.listen.queue.size to a larger value to avoid client 
 retries.
 2. In the current implementation, the Listener calls registerChannel() and 
 finishAdd() in the Reader, both of which acquire the Reader's synchronized 
 lock, so the Listener spends too much time waiting for that lock.
 We ran a test:
 ./bin/hadoop org.apache.hadoop.hdfs.NNThroughputBenchmark  -op create 
 -threads 1 -files 1
 case 1: current code 
 does not pass, and reports 
 hadoop-rd101.jx.baidu.com/10.65.25.166:59310. Already tried 0 time(s).
 case 2: backlog tuned to 10240
 average cost: 1285.72 ms
 case 3: backlog tuned to 10240, plus the improved lock mechanism in the patch
 average cost: 941.32 ms
 the average cost improves by about 26%
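The quoted improvement figure can be checked directly from the two average costs reported above:

```python
# Figures quoted in the benchmark results above (cases 2 and 3).
before, after = 1285.72, 941.32        # average cost in ms
improvement = (before - after) / before * 100
# improvement is roughly 26.8%, consistent with the ~26% quoted
```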



[jira] [Commented] (HADOOP-8918) dev-support/test-patch.sh is parsing modified files wrong

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474997#comment-13474997
 ] 

Hudson commented on HADOOP-8918:


Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
HADOOP-8918. test-patch.sh is parsing modified files wrong. Contributed by 
Raja Aluri. (Revision 1397411)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397411
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 dev-support/test-patch.sh is parsing modified files wrong
 -

 Key: HADOOP-8918
 URL: https://issues.apache.org/jira/browse/HADOOP-8918
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 3.0.0

 Attachments: HADOOP-8918.trunk.patch


 dev-support/test-patch.sh parses the modified files from the patch 
 incorrectly.
 In the test-patch.sh script, to run the findbugs command, it tries to find 
 the modified files with the following command:
 {code}
 $GREP '^+++\|^---' $PATCH_DIR/patch | cut -c '5-' | $GREP -v /dev/null | sort 
 | uniq  $TMP
 {code}
 A patch file can contain removed XML comment lines, which the command 
 mistakes for filenames. Looking at the last line of the text below, the 
 command would extract the filename '^M':
 {code}
 -<?xml version="1.0"?>^M
 -<!--^M
 -   Licensed to the Apache Software Foundation (ASF) under one or more^M
 -   contributor license agreements.  See the NOTICE file distributed with^M
 -   this work for additional information regarding copyright ownership.^M
 -   The ASF licenses this file to You under the Apache License, Version 2.0^M
 -   (the "License"); you may not use this file except in compliance with^M
 -   the License.  You may obtain a copy of the License at^M
 -^M
 -   http://www.apache.org/licenses/LICENSE-2.0^M
 -^M
 -   Unless required by applicable law or agreed to in writing, software^M
 -   distributed under the License is distributed on an "AS IS" BASIS,^M
 -   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.^M
 -   See the License for the specific language governing permissions and^M
 -   limitations under the License.^M
 --->^M
 {code}
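A stricter parse avoids the problem. The sketch below is hypothetical (not the committed test-patch.sh fix): it requires the unified-diff header shape, a path after `--- ` or `+++ `, so a removed comment closer such as `---^M` no longer matches, and it strips any trailing carriage return from the path:

```python
import re

def modified_files(patch_text):
    """Extract modified file paths from a unified diff, ignoring
    removed-content lines that merely start with dashes."""
    files = set()
    for line in patch_text.splitlines():
        # '--- ' / '+++ ' followed by a non-space path is a diff header;
        # '---\r' (a removed XML comment closer) does not match.
        m = re.match(r'^(?:\+\+\+|---) (\S+)', line)
        if m and m.group(1) != '/dev/null':
            files.add(m.group(1).rstrip('\r'))
    return sorted(files)
```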



[jira] [Commented] (HADOOP-8909) Hadoop Common Maven protoc calls must not depend on external sh script

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474998#comment-13474998
 ] 

Hudson commented on HADOOP-8909:


Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
HADOOP-8909. Hadoop Common Maven protoc calls must not depend on external 
sh script. Contributed by Chris Nauroth. (Revision 1397338)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397338
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml


 Hadoop Common Maven protoc calls must not depend on external sh script
 --

 Key: HADOOP-8909
 URL: https://issues.apache.org/jira/browse/HADOOP-8909
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8909.patch, HADOOP-8909.patch, HADOOP-8909.patch


 Currently, several pom.xml files rely on external shell scripting to call 
 protoc.  The sh binary may not be available on all developers' machines (e.g. 
 Windows without Cygwin).  This issue tracks removal of that dependency in 
 Hadoop Common.



[jira] [Commented] (HADOOP-8912) adding .gitattributes file to prevent CRLF and LF mismatches for source and text files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475000#comment-13475000
 ] 

Hudson commented on HADOOP-8912:


Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
HADOOP-8912. Adding missing CHANGES.txt changes in the previous commit 
1397437. (Revision 1397438)
HADOOP-8912. Add .gitattributes file to prevent CRLF and LF mismatches for 
source and text files. Contributed by Raja Aluri. (Revision 1397437)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397438
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397437
Files : 
* /hadoop/common/trunk/.gitattributes


 adding .gitattributes file to prevent CRLF and LF mismatches for source and 
 text files
 --

 Key: HADOOP-8912
 URL: https://issues.apache.org/jira/browse/HADOOP-8912
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha, 1-win
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 3.0.0, 1-win, 2.0.3-alpha

 Attachments: HADOOP-8912.branch-1-win.patch, 
 HADOOP-8912.branch-2.patch, HADOOP-8912.trunk.patch


 Source code in the hadoop-common repo has a bunch of files with CRLF line 
 endings.
 With more development happening on Windows, there is a higher chance of more 
 CRLF files getting into the source tree.
 I would like to avoid that by creating a .gitattributes file, which prevents 
 text files in the source tree from having CRLF endings.
 I am adding a couple of links here to give more background on what exactly 
 the issue is and how we are trying to fix it.
 # http://git-scm.com/docs/gitattributes#_checking_out_and_checking_in
 # 
 http://stackoverflow.com/questions/170961/whats-the-best-crlf-handling-strategy-with-git
  
 This issue is for adding a .gitattributes file to the tree.
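A file of the kind described above might look like this minimal sketch (illustrative only, not the attached patch):

```
# Auto-detect text files and normalize their line endings to LF in the repo
*       text=auto
# Explicitly mark common source/text types
*.java  text
*.xml   text
*.txt   text
*.sh    text eol=lf
# Never touch binary artifacts
*.jar   binary
*.png   binary
```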



[jira] [Commented] (HADOOP-8911) CRLF characters in source and text files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475003#comment-13475003
 ] 

Hudson commented on HADOOP-8911:


Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
HADOOP-8911. CRLF characters in source and text files. Contributed by Raja 
Aluri. (Revision 1397432)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397432
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/ContextFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsRecord.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/file/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/AbstractMetricsContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/MetricsRecordImpl.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestClientProtocolProviderImpls.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestYarnClientProtocolProvider.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMean.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMedian.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordStandardDeviation.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestWordStats.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/sample/data.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/sample/data2.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/HashingDistributionPolicy.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/IdentityLocalAnalysis.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocLocalAnalysis.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocRecordReader.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/LineDocTextAndOp.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/example/RoundRobinDistributionPolicy.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/LuceneIndexFileNameFilter.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/LuceneUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/MixedDeletionPolicy.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/MixedDirectory.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/RAMDirectoryUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/lucene/ShardWriter.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index/src/java/org/apache/hadoop/contrib/index/main/UpdateIndex.java

[jira] [Updated] (HADOOP-7468) hadoop-core JAR contains a log4j.properties file

2012-10-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-7468:
---

Affects Version/s: 1.0.4
   1.0.3

still there in 1.0.4+

 hadoop-core JAR contains a log4j.properties file
 

 Key: HADOOP-7468
 URL: https://issues.apache.org/jira/browse/HADOOP-7468
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 0.20.203.0, 1.0.3, 1.0.4
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: hadoop-7468-for-204.patch, hadoop-7468.txt


 the hadoop-core JAR in the distribution and in the Maven repositories 
 contains a log4j.properties file. This can break the logging of any client 
 programs that import the JAR to do things like DFSClient work. It should be 
 stripped from future releases. This should not impact server-side 
 deployments, as the properties file in conf/ should be picked up instead. 



[jira] [Commented] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli

2012-10-12 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475076#comment-13475076
 ] 

Arpit Gupta commented on HADOOP-8888:
-

@Harsh

These are not bad config references. The issue is that when a user runs 
hadoop dfs|jar etc. commands on trunk, they get a deprecation warning. What I 
am suggesting is that the warnings should appear by default, but if for 
whatever reason I as a user want to keep using the same commands, this would 
give me the ability to suppress them.

 add the ability to suppress the deprecated warnings when using hadoop cli
 -

 Key: HADOOP-8888
 URL: https://issues.apache.org/jira/browse/HADOOP-8888
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta

 Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1.
 Maybe we can introduce
 HADOOP_DEPRECATED_WARN_SUPPRESS
 which, if set to yes, will suppress the various warnings that are thrown.
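The proposed switch could behave like this sketch. HADOOP_DEPRECATED_WARN_SUPPRESS is the proposed name from the description, not an existing variable, and the function is purely illustrative:

```python
import os

def maybe_warn(message, env=None):
    """Warn by default; stay quiet only when the suppress variable is
    explicitly set to 'yes'."""
    env = os.environ if env is None else env
    if env.get('HADOOP_DEPRECATED_WARN_SUPPRESS', '').lower() != 'yes':
        return 'DEPRECATED: %s' % message
    return None
```

This matches the default-not-suppressive behavior requested later in the thread: the warning shows unless the user opts out.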



[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-10-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475084#comment-13475084
 ] 

Daryn Sharp commented on HADOOP-8589:
-

Maybe use a chroot fs over local fs to lock it down to build.test.data?

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Attachments: HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails in case when the test root 
 directory is under the user's home directory, and the user's home dir is 
 deeper than 2 levels from /. This happens with the default 1-node 
 installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one as the /var is mounted 
 already.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.



[jira] [Commented] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli

2012-10-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475087#comment-13475087
 ] 

Harsh J commented on HADOOP-8888:
-

Oh, I see: by deprecated you meant the scripts and not the configs. My 
apologies then; I am fine with doing that as long as the default is not to 
suppress. Thanks for clarifying, Arpit!

 add the ability to suppress the deprecated warnings when using hadoop cli
 -

 Key: HADOOP-8888
 URL: https://issues.apache.org/jira/browse/HADOOP-8888
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta

 Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1.
 Maybe we can introduce
 HADOOP_DEPRECATED_WARN_SUPPRESS
 which, if set to yes, will suppress the various warnings that are thrown.



[jira] [Updated] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli

2012-10-12 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8888:


Description: 
Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1.

Maybe we can introduce

HADOOP_DEPRECATED_WARN_SUPPRESS

which, if set to yes, will suppress the various warnings that are thrown.

For example commands like

{code}
hadoop dfs
hadoop jar
{code}

etc. will print out deprecation warnings.

  was:
Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1.

Maybe we can introduce

HADOOP_DEPRECATED_WARN_SUPPRESS

which, if set to yes, will suppress the various warnings that are thrown.


 add the ability to suppress the deprecated warnings when using hadoop cli
 -

 Key: HADOOP-8888
 URL: https://issues.apache.org/jira/browse/HADOOP-8888
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta

 Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1.
 Maybe we can introduce
 HADOOP_DEPRECATED_WARN_SUPPRESS
 which, if set to yes, will suppress the various warnings that are thrown.
 For example commands like
 {code}
 hadoop dfs
 hadoop jar
 {code}
 etc. will print out deprecation warnings.



[jira] [Created] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to enable javascript in browser dashboard

2012-10-12 Thread Damien Hardy (JIRA)
Damien Hardy created HADOOP-8922:


 Summary: Provide alternate JSONP output for JMXJsonServlet to 
enable javascript in browser dashboard
 Key: HADOOP-8922
 URL: https://issues.apache.org/jira/browse/HADOOP-8922
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Damien Hardy
Priority: Trivial


JMXJsonServlet may provide a JSONP alternative to JSON so that JavaScript 
running in a browser GUI can make requests.
For security reasons (XSS), browsers restrict requests to other 
domains[¹|#ref1], so metrics from cluster nodes cannot be used in a pure JS 
interface.
An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for 
ElasticSearch.

To achieve this, the servlet should detect a GET parameter (callback=) and 
modify the response by surrounding the JSON value with the callback name, 
"(" and ");" [³|#ref3]; the callback name is variable and should be provided 
by the client as the value of the callback parameter.

{anchor:ref1}[1] 
https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript
{anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk
{anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP
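The proposed wrapping can be sketched in a few lines. This is an illustration of the JSONP convention described above, not the JMXJsonServlet implementation, and the function name is hypothetical:

```python
import json

def jmx_response(beans, callback=None):
    """Return plain JSON by default; when a callback name is supplied
    (as the 'callback' GET parameter would be), wrap the JSON as
    callback(...); so a <script> tag on another origin can consume it."""
    body = json.dumps(beans)
    if callback:
        return '%s(%s);' % (callback, body)
    return body
```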



[jira] [Updated] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8906:


Attachment: HADOOP-8906.patch
HADOOP-8906-branch_0.23.patch

Added more tests.  As noted by Jason, return null for non-glob queries that 
filter out all results.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*
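The expectation behind the failing cases is that a glob should select the same files whether a component is literal or wildcarded. A component-wise matcher over a sample listing illustrates that; this is a toy stand-in, not FileSystem.globStatus:

```python
import fnmatch

def glob_paths(paths, pattern):
    """Match each path against the pattern component by component, so
    mixing literal and wildcard components behaves uniformly."""
    pat = pattern.split('/')
    return [p for p in paths
            if len(p.split('/')) == len(pat)
            and all(fnmatch.fnmatch(a, b)
                    for a, b in zip(p.split('/'), pat))]

listing = ['date1/user/stuff/file', 'date2/user/stuff/file']
```

Under this model, `date*/user/stuff/file`, `date*/user/stu*/*`, and `date*/user/stuff/f*` all select the same two paths, which is the behavior the patch restores.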



[jira] [Updated] (HADOOP-8784) Improve IPC.Client's token use

2012-10-12 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8784:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2.  Thanks Owen!

 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.



[jira] [Commented] (HADOOP-8784) Improve IPC.Client's token use

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475127#comment-13475127
 ] 

Hudson commented on HADOOP-8784:


Integrated in Hadoop-Hdfs-trunk-Commit #2918 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2918/])
HADOOP-8784. Improve IPC.Client's token use (daryn) (Revision 1397634)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397634
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.



[jira] [Commented] (HADOOP-8784) Improve IPC.Client's token use

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475130#comment-13475130
 ] 

Hudson commented on HADOOP-8784:


Integrated in Hadoop-Common-trunk-Commit #2856 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2856/])
HADOOP-8784. Improve IPC.Client's token use (daryn) (Revision 1397634)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397634
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.



[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to enable JavaScript in browser dashboard

2012-10-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475131#comment-13475131
 ] 

Harsh J commented on HADOOP-8922:
-

Hi Damien,

So this would allow folks to build their own applications and pull in metrics 
more easily (via JS)?

Is the policy you link to a proper standard across major browsers, or 
otherwise?

Would you be working on a patch to allow this, and perhaps post a 
demonstration use case to support its addition?

Thanks a ton!

 Provide alternate JSONP output for JMXJsonServlet to enable JavaScript in 
 browser dashboard
 

 Key: HADOOP-8922
 URL: https://issues.apache.org/jira/browse/HADOOP-8922
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Damien Hardy
Priority: Trivial

 JMXJsonServlet could provide a JSONP alternative to JSON so that JavaScript 
 in a browser GUI can make requests.
 As an XSS protection, browsers restrict requests to other 
 domains[¹|#ref1], so metrics from cluster nodes cannot be used in a full 
 JS interface.
 An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for 
 ElasticSearch.
 To achieve this, the servlet should detect a GET parameter (callback=) and 
 modify the response by surrounding the JSON value with the callback name, 
 an opening (, and a trailing ); [³|#ref3]
 The callback name is variable and should be provided by the client as the 
 callback parameter value.
 {anchor:ref1}[1] 
 https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript
 {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk
 {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP
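The wrapping step described above can be sketched as a small helper. This is an illustrative sketch, not the JMXJsonServlet patch; the class and method names are hypothetical, and the callback name is restricted to a safe identifier to avoid script injection:

```java
import java.util.regex.Pattern;

// Illustrative sketch of JSONP wrapping; not the actual JMXJsonServlet
// change. Class and method names are hypothetical.
public class JsonpWrapper {
    // Restrict callback names to plain identifiers to avoid script injection.
    private static final Pattern SAFE_CALLBACK =
        Pattern.compile("[A-Za-z_$][A-Za-z0-9_$.]*");

    /** Returns callback(json); when a callback= parameter was supplied,
     *  or the plain JSON response when it was not. */
    public static String wrap(String json, String callback) {
        if (callback == null || callback.isEmpty()) {
            return json;
        }
        if (!SAFE_CALLBACK.matcher(callback).matches()) {
            throw new IllegalArgumentException("invalid callback name: " + callback);
        }
        return callback + "(" + json + ");";
    }
}
```

A dashboard could then request /jmx?callback=render and receive render({...}); which the browser evaluates as a script.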



[jira] [Created] (HADOOP-8923) WEBUI shows an intermediary page when the cookie expires.

2012-10-12 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8923:


 Summary: WEBUI shows an intermediary page when the cookie 
expires.
 Key: HADOOP-8923
 URL: https://issues.apache.org/jira/browse/HADOOP-8923
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor


The WEBUI performs authentication (SPNEGO/custom) and then drops a cookie. 
Once the cookie expires, the WEBUI displays a page saying that the 
authentication token has expired, and the user has to refresh the page to get 
authenticated again. This intermediate page can be avoided: the user can be 
re-authenticated without it ever being shown.
Also, when the cookie expires, a warning is logged, but this routine event is 
not significant and need not be logged.
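The proposed behavior can be sketched as a small decision routine: on an expired cookie, silently restart authentication (e.g. re-send the 401/WWW-Authenticate challenge for SPNEGO) rather than render an intermediate page. The names here are purely hypothetical, not the attached patch:

```java
// Hypothetical sketch of the proposed behavior; not the attached patch.
public class AuthCookiePolicy {
    /** Decides what the WEBUI should do for a request: silently restart
     *  authentication on a missing or expired cookie instead of showing a
     *  "token expired" page (and skip the warning log for this routine
     *  event); serve the page when the cookie is still valid. */
    public static String onRequest(Long cookieExpiryMillis, long nowMillis) {
        if (cookieExpiryMillis == null) {
            return "CHALLENGE";   // no cookie yet: start authentication
        }
        if (nowMillis >= cookieExpiryMillis) {
            return "CHALLENGE";   // expired: re-authenticate, no interstitial
        }
        return "SERVE";           // valid cookie: serve the page
    }
}
```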



[jira] [Commented] (HADOOP-8784) Improve IPC.Client's token use

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475177#comment-13475177
 ] 

Hudson commented on HADOOP-8784:


Integrated in Hadoop-Mapreduce-trunk-Commit #2879 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2879/])
HADOOP-8784. Improve IPC.Client's token use (daryn) (Revision 1397634)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397634
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.



[jira] [Updated] (HADOOP-8923) WEBUI shows an intermediary page when the cookie expires.

2012-10-12 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-8923:
-

Attachment: HADOOP-8923.patch

 WEBUI shows an intermediary page when the cookie expires.
 ---

 Key: HADOOP-8923
 URL: https://issues.apache.org/jira/browse/HADOOP-8923
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HADOOP-8923.patch


 The WEBUI performs authentication (SPNEGO/custom) and then drops a cookie. 
 Once the cookie expires, the WEBUI displays a page saying that the 
 authentication token has expired, and the user has to refresh the page to 
 get authenticated again. This intermediate page can be avoided: the user 
 can be re-authenticated without it ever being shown.
 Also, when the cookie expires, a warning is logged, but this routine event 
 is not significant and need not be logged.



[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-10-12 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475204#comment-13475204
 ] 

Sanjay Radia commented on HADOOP-8589:
--

I had planned to make the chroot fs a full-fledged fs, but it turns out it 
cannot be, and it is now used mostly internally in viewfs as an implementation 
detail. (I can't remember the reasons why it cannot be made a full-fledged 
file system.)
Are you suggesting this because the trash test can delete your home dir? BTW, 
the issue of a trash test deleting the home dir is also possible in the other 
trash tests (i.e. the non-viewfs trash tests).
I suggest we move the trash issue to another JIRA.

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Attachments: HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root directory is 
 under the user's home directory and the user's home dir is deeper than 2 
 levels from /. This happens with the default 1-node installation of 
 Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036, but it was later reverted in HADOOP-8129.
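The conflict described above amounts to one mount link being nested under another; a simplified sketch of that check (not the actual InodeTree code):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the mount-link conflict described in this issue;
// not the actual InodeTree code.
public class MountConflict {
    /** Returns mount points that are nested under another mount point,
     *  e.g. /var/lib when /var is also being mounted as a link. */
    public static List<String> conflicts(List<String> mounts) {
        List<String> bad = new ArrayList<>();
        for (String outer : mounts) {
            for (String inner : mounts) {
                if (!inner.equals(outer) && inner.startsWith(outer + "/")) {
                    bad.add(inner);  // cannot link here: parent already mounted
                }
            }
        }
        return bad;
    }
}
```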



[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to enable JavaScript in browser dashboard

2012-10-12 Thread Damien Hardy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475208#comment-13475208
 ] 

Damien Hardy commented on HADOOP-8922:
--

Hi Harsh,

Yes, it allows loading a page with some JS that periodically pulls data as 
JSON to build live graphs, for example.

Also yes :) it is described in more general sources: 
  * http://www.w3.org/Security/wiki/Same_Origin_Policy 
  * http://en.wikipedia.org/wiki/Same_origin_policy

I can try to make a patch (I am more of an admin than a dev), but it should 
be quite simple, and I would love to have my name in the hadoop-common 
changelog :D

 Provide alternate JSONP output for JMXJsonServlet to enable JavaScript in 
 browser dashboard
 

 Key: HADOOP-8922
 URL: https://issues.apache.org/jira/browse/HADOOP-8922
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Damien Hardy
Priority: Trivial

 JMXJsonServlet could provide a JSONP alternative to JSON so that JavaScript 
 in a browser GUI can make requests.
 As an XSS protection, browsers restrict requests to other 
 domains[¹|#ref1], so metrics from cluster nodes cannot be used in a full 
 JS interface.
 An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for 
 ElasticSearch.
 To achieve this, the servlet should detect a GET parameter (callback=) and 
 modify the response by surrounding the JSON value with the callback name, 
 an opening (, and a trailing ); [³|#ref3]
 The callback name is variable and should be provided by the client as the 
 callback parameter value.
 {anchor:ref1}[1] 
 https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript
 {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk
 {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP



[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475210#comment-13475210
 ] 

Hadoop QA commented on HADOOP-8906:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548911/HADOOP-8906.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1622//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1622//console

This message is automatically generated.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*
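The glob semantics the failing cases expect can be demonstrated with the JDK's own matcher. This sketch uses java.nio.file.PathMatcher purely as an illustration of the expected multi-segment behavior; it is unrelated to the Hadoop FileSystem.globStatus implementation being patched:

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

// Illustrates expected multi-segment glob semantics via the JDK's
// PathMatcher; not the Hadoop globStatus code.
public class MultiGlobDemo {
    /** True when the path matches the glob; * never crosses a / boundary. */
    public static boolean matches(String glob, String path) {
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:" + glob);
        return m.matches(Paths.get(path));
    }
}
```

With wildcards in several segments, e.g. date*/user/stu*/*, an existing path such as date20121012/user/stuff/file should match.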



[jira] [Commented] (HADOOP-8910) Add examples to GlobExpander#expand method

2012-10-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475244#comment-13475244
 ] 

Daryn Sharp commented on HADOOP-8910:
-

+1 I hope you can find a way to forgive me for doubting you :)

 Add examples to GlobExpander#expand method
 --

 Key: HADOOP-8910
 URL: https://issues.apache.org/jira/browse/HADOOP-8910
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HADOOP-8910.patch


 Every time I review code related to glob I end up having to relearn how the 
 code works. Adding few examples should help understand some of this code 
 better.
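As a sketch of the kind of example being requested, {a,b} alternation expansion can be re-implemented in a few lines. This is a simplified illustration, not the actual GlobExpander code (which also handles escaped braces):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified re-implementation of {a,b} alternation expansion, the kind
// of behavior the requested examples would document. NOT the actual
// GlobExpander code, which also handles escaped braces.
public class BraceExpander {
    /** Expands the first {...} group and recurses until none remain. */
    public static List<String> expand(String s) {
        List<String> out = new ArrayList<>();
        int open = s.indexOf('{');
        if (open < 0) {
            out.add(s);                          // nothing to expand
            return out;
        }
        List<String> alts = new ArrayList<>();
        int depth = 0, close = -1, start = open + 1;
        for (int i = open; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '{') {
                depth++;
            } else if (c == '}') {
                if (--depth == 0) { close = i; break; }
            } else if (c == ',' && depth == 1) {
                alts.add(s.substring(start, i)); // top-level alternative
                start = i + 1;
            }
        }
        if (close < 0) {                         // unbalanced brace: leave as-is
            out.add(s);
            return out;
        }
        alts.add(s.substring(start, close));
        String prefix = s.substring(0, open);
        String suffix = s.substring(close + 1);
        for (String alt : alts) {
            out.addAll(expand(prefix + alt + suffix));
        }
        return out;
    }
}
```

For instance, /{usr,lib}/local expands to /usr/local and /lib/local, and nested groups expand recursively.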



[jira] [Updated] (HADOOP-8910) Add examples to GlobExpander#expand method

2012-10-12 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8910:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk.

 Add examples to GlobExpander#expand method
 --

 Key: HADOOP-8910
 URL: https://issues.apache.org/jira/browse/HADOOP-8910
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8910.patch


 Every time I review code related to glob I end up having to relearn how the 
 code works. Adding few examples should help understand some of this code 
 better.



[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to enable JavaScript in browser dashboard

2012-10-12 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475258#comment-13475258
 ] 

Robert Joseph Evans commented on HADOOP-8922:
-

+1 for the idea.  There are a lot of JS frameworks that really do prefer to 
have the JSONP format.

 Provide alternate JSONP output for JMXJsonServlet to enable JavaScript in 
 browser dashboard
 

 Key: HADOOP-8922
 URL: https://issues.apache.org/jira/browse/HADOOP-8922
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Damien Hardy
Priority: Trivial

 JMXJsonServlet could provide a JSONP alternative to JSON so that JavaScript 
 in a browser GUI can make requests.
 As an XSS protection, browsers restrict requests to other 
 domains[¹|#ref1], so metrics from cluster nodes cannot be used in a full 
 JS interface.
 An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for 
 ElasticSearch.
 To achieve this, the servlet should detect a GET parameter (callback=) and 
 modify the response by surrounding the JSON value with the callback name, 
 an opening (, and a trailing ); [³|#ref3]
 The callback name is variable and should be provided by the client as the 
 callback parameter value.
 {anchor:ref1}[1] 
 https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript
 {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk
 {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP



[jira] [Commented] (HADOOP-8910) Add examples to GlobExpander#expand method

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475260#comment-13475260
 ] 

Hudson commented on HADOOP-8910:


Integrated in Hadoop-Hdfs-trunk-Commit #2919 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2919/])
HADOOP-8910. Add examples to GlobExpander#expand method. Contributed by 
Suresh Srinivas. (Revision 1397691)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397691
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobExpander.java


 Add examples to GlobExpander#expand method
 --

 Key: HADOOP-8910
 URL: https://issues.apache.org/jira/browse/HADOOP-8910
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8910.patch


 Every time I review code related to glob I end up having to relearn how the 
 code works. Adding few examples should help understand some of this code 
 better.



[jira] [Commented] (HADOOP-8910) Add examples to GlobExpander#expand method

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475261#comment-13475261
 ] 

Hudson commented on HADOOP-8910:


Integrated in Hadoop-Common-trunk-Commit #2857 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2857/])
HADOOP-8910. Add examples to GlobExpander#expand method. Contributed by 
Suresh Srinivas. (Revision 1397691)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397691
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobExpander.java


 Add examples to GlobExpander#expand method
 --

 Key: HADOOP-8910
 URL: https://issues.apache.org/jira/browse/HADOOP-8910
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8910.patch


 Every time I review code related to glob I end up having to relearn how the 
 code works. Adding few examples should help understand some of this code 
 better.



[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475267#comment-13475267
 ] 

Bikas Saha commented on HADOOP-8847:


Is there anything else left for me to do wrt getting this patch ready for 
commit?

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may 
 not be present by default on all platforms, e.g. Windows, so changing this 
 to use Java APIs would help make it more cross-platform. FileUtil.unZip() 
 already takes this approach.
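The pure-Java extraction approach can be sketched with java.util.zip, which is what makes FileUtil.unZip() portable; a tar equivalent would need a library such as Apache Commons Compress, since the JDK ships no tar codec. An illustrative sketch, not the attached patch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Sketch of pure-Java archive extraction; not the attached patch.
// java.util.zip covers zip; untar would need a tar codec such as
// Apache Commons Compress, which the JDK lacks.
public class ArchiveSketch {
    /** Scans the zip stream and returns the bytes of the named entry,
     *  or null if absent -- no external process involved. */
    public static byte[] extractEntry(byte[] zipBytes, String name) throws IOException {
        try (ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            ZipEntry e;
            while ((e = in.getNextEntry()) != null) {
                if (e.getName().equals(name)) {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);
                    }
                    return out.toByteArray();
                }
            }
        }
        return null;
    }
}
```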



[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475270#comment-13475270
 ] 

Jason Lowe commented on HADOOP-8906:


+1, thanks Daryn.  I'll commit this shortly.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*



[jira] [Commented] (HADOOP-8868) FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475279#comment-13475279
 ] 

Bikas Saha commented on HADOOP-8868:


OK. This also avoids cases where a path is composed of a root path from 
config/defaults, which might contain /, and a subpath from the local FS that 
contains a \.
+1

 FileUtil#chmod should normalize the path before calling into shell APIs
 ---

 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8868.branch-1-win.chmod.patch


 We have seen cases where paths passed in from FileUtil#chmod to Shell APIs 
 can contain both forward and backward slashes on Windows.
 This causes problems, since some Windows APIs do not work well with mixed 
 slashes.
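The normalization in question amounts to collapsing mixed separators to a single kind before handing the path to the shell. A minimal sketch under that assumption; the actual FileUtil#chmod change may differ:

```java
// Minimal sketch of slash normalization; the actual FileUtil#chmod
// change may differ.
public class PathNormalize {
    /** Rewrites both / and \ to the given separator so shell APIs never
     *  see a mixed-slash path like C:/hadoop\tmp/file. */
    public static String normalize(String path, char separator) {
        return path.replace('/', separator).replace('\\', separator);
    }
}
```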



[jira] [Commented] (HADOOP-8914) Automate release builds

2012-10-12 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475287#comment-13475287
 ] 

Robert Joseph Evans commented on HADOOP-8914:
-

+1 for that.  I have the beginnings of a script that I would be happy to 
donate to the effort; however, that is the script that caused the signing 
issues.  The big problem I see is: how do we sign the builds?  I am not 
really sure I trust putting my PGP private key on the Jenkins build hosts 
and giving Jenkins my password to decrypt it.

 Automate release builds
 ---

 Key: HADOOP-8914
 URL: https://issues.apache.org/jira/browse/HADOOP-8914
 Project: Hadoop Common
  Issue Type: Task
Reporter: Eli Collins

 Hadoop releases are currently created manually by the RM (following 
 http://wiki.apache.org/hadoop/HowToRelease), which means various aspects of 
 the build are ad hoc, e.g. which toolchain was used to compile the Java and 
 native code varies from release to release. Other steps can be inconsistent 
 since they're done manually, e.g. recently the checksums for an RC were 
 incorrect. Let's use the Jenkins toolchain and create a job that automates 
 creating release builds, so that the only manual step in releasing is 
 publishing to Maven central.



[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475294#comment-13475294
 ] 

Hudson commented on HADOOP-8906:


Integrated in Hadoop-Hdfs-trunk-Commit #2920 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2920/])
HADOOP-8906. paths with multiple globs are unreliable. Contributed by Daryn 
Sharp. (Revision 1397704)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397704
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*



[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475296#comment-13475296
 ] 

Hudson commented on HADOOP-8906:


Integrated in Hadoop-Common-trunk-Commit #2858 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2858/])
HADOOP-8906. paths with multiple globs are unreliable. Contributed by Daryn 
Sharp. (Revision 1397704)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397704
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*



[jira] [Updated] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8906:
---

   Resolution: Fixed
Fix Version/s: 0.23.5
   2.0.3-alpha
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2, and branch-0.23.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--


[jira] [Commented] (HADOOP-8910) Add examples to GlobExpander#expand method

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475302#comment-13475302
 ] 

Hudson commented on HADOOP-8910:


Integrated in Hadoop-Mapreduce-trunk-Commit #2880 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2880/])
HADOOP-8910. Add examples to GlobExpander#expand method. Contributed by 
Suresh Srinivas. (Revision 1397691)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397691
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobExpander.java


 Add examples to GlobExpander#expand method
 --

 Key: HADOOP-8910
 URL: https://issues.apache.org/jira/browse/HADOOP-8910
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8910.patch


 Every time I review code related to glob I end up having to relearn how the 
 code works. Adding a few examples should help in understanding some of this code 
 better.
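One such example: GlobExpander#expand flattens curly-brace alternation in a path into a list of plain paths before glob matching. A simplified sketch of the idea (not Hadoop's actual implementation, which also handles escapes and nested braces):

```java
import java.util.ArrayList;
import java.util.List;

public class BraceExpand {
    // Expand the first {a,b,...} group found, then recurse until no
    // braces remain. Simplified: escapes and nested braces are ignored.
    public static List<String> expand(String s) {
        int open = s.indexOf('{');
        if (open < 0) {
            return List.of(s);
        }
        int close = s.indexOf('}', open);
        List<String> out = new ArrayList<>();
        for (String alt : s.substring(open + 1, close).split(",", -1)) {
            out.addAll(expand(s.substring(0, open) + alt + s.substring(close + 1)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(expand("/user/{alice,bob}/part-{0,1}"));
    }
}
```

Concrete input/output pairs like this in the javadoc are exactly what the patch adds, so reviewers no longer need to re-derive the behavior from the code.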

--


[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-10-12 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475333#comment-13475333
 ] 

Sanjay Radia commented on HADOOP-8589:
--

{quote}
This is indeed still a problem on trunk:
/home/foo/bar
...
/home/harsh
..
/foo
..
{quote}
Harsh, what did you mean in your comment above - did you run the tests with the 
home directory or working directory set to the above 3 paths?

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Attachments: HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails in case when the test root 
 directory is under the user's home directory, and the user's home dir is 
 deeper than 2 levels from /. This happens with the default 1-node 
 installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one as the /var is mounted 
 already.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.
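The conflict can be modeled with a small hypothetical sketch (this is not Hadoop's InodeTree, just the constraint it enforces): a mount link is a leaf node, so once "/var" has been linked, no link can be created beneath it at "/var/lib".

```java
import java.util.HashMap;
import java.util.Map;

public class MountTree {
    // Hypothetical model of the constraint behind the failure above:
    // a mount link is a leaf, so a second link cannot be created below it.
    private static final Object LINK = new Object();
    private final Map<String, Object> root = new HashMap<>();

    @SuppressWarnings("unchecked")
    public void createLink(String path) {
        Map<String, Object> dir = root;
        String[] parts = path.substring(1).split("/");
        for (int i = 0; i < parts.length; i++) {
            Object child = dir.get(parts[i]);
            boolean last = (i == parts.length - 1);
            if (child == LINK || (last && child != null)) {
                throw new IllegalStateException(
                    "Path /" + parts[i] + " already exists; cannot create link here");
            }
            if (last) {
                dir.put(parts[i], LINK);
            } else {
                if (child == null) {
                    child = new HashMap<String, Object>();
                    dir.put(parts[i], child);
                }
                dir = (Map<String, Object>) child;
            }
        }
    }
}
```

With a home dir more than two levels deep, the test setup derives both "/var" and "/var/lib" as mount points, and the second createLink fails just as in the log.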

--


[jira] [Commented] (HADOOP-8901) GZip and Snappy support may not work without unversioned libraries

2012-10-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475345#comment-13475345
 ] 

Todd Lipcon commented on HADOOP-8901:
-

+1, will commit this this afternoon unless there are any comments.

 GZip and Snappy support may not work without unversioned libraries
 --

 Key: HADOOP-8901
 URL: https://issues.apache.org/jira/browse/HADOOP-8901
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8901.001.patch, HADOOP-8901.002.patch, 
 HADOOP-8901.003.patch


 Currently, we use {{dlopen}} to open {{libz.so}} and {{libsnappy.so}}, to get 
 Gzip and Snappy support, respectively.
 However, this is not correct; we should be dlopening {{libsnappy.so.1}} 
 instead.  The versionless form of the shared library is not commonly 
 installed except by development packages.  Also, we may run into subtle 
 compatibility problems if a new version of libsnappy comes out.
 Thanks to Brandon Vargo for reporting this bug.

--


[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-10-12 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475348#comment-13475348
 ] 

Andrey Klochkov commented on HADOOP-8589:
-

Sanjay,
The idea of moving TestLFS into ViewFileSystemTestSetup was to make all tests 
related to viewfs use the same configuration, with 1) the home dir located under 
the test dir and 2) any delete operations outside the test dir prevented. I agree 
that using a chroot fs would be even better - all changes to the local FS would 
be confined under the test dir, which is the right thing - but I really don't 
know whether chroot is capable of that.

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Attachments: HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails in case when the test root 
 directory is under the user's home directory, and the user's home dir is 
 deeper than 2 levels from /. This happens with the default 1-node 
 installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one as the /var is mounted 
 already.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.

--


[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475349#comment-13475349
 ] 

Hudson commented on HADOOP-8906:


Integrated in Hadoop-Mapreduce-trunk-Commit #2881 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2881/])
HADOOP-8906. paths with multiple globs are unreliable. Contributed by Daryn 
Sharp. (Revision 1397704)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397704
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-8906-branch_0.23.patch, 
 HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--


[jira] [Created] (HADOOP-8924) Hadoop Common creating version annotation must not depend on sh

2012-10-12 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-8924:
-

 Summary: Hadoop Common creating version annotation must not depend 
on sh
 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Currently, the build process relies on saveVersion.sh to generate 
package-info.java with a version annotation.  The sh binary may not be 
available on all developers' machines (e.g. Windows without Cygwin). This issue 
tracks removal of that dependency in Hadoop Common.

--


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh

2012-10-12 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Summary: Hadoop Common creating package-info.java must not depend on sh  
(was: Hadoop Common creating version annotation must not depend on sh)

 Hadoop Common creating package-info.java must not depend on sh
 --

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth

 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--


[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh

2012-10-12 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924-branch-trunk-win.patch

This patch converts saveVersion.sh to an equivalent saveVersion.py.  I also 
expanded the template and the arguments to include additional fields, so that 
the same script can be used from both the Hadoop Common and Yarn modules.

I have tested mvn generate-sources in the following combinations:

1. Mac/git
2. Windows/git
3. Windows + Cygwin/git
4. Mac/svn
5. Windows/svn
6. Windows + Cygwin/svn

All tests produced the correct package-info.java, with the same value for 
srcChecksum.

I'm starting this change in branch-trunk-win, because the driver for this 
change is to get a working Windows build for a branch off of trunk.
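The core of that generation step is template substitution: fill build metadata into a package-info.java template. A rough, hypothetical illustration of the idea (placeholder names here are invented; the actual saveVersion.py and its template differ):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SaveVersionSketch {
    // Substitute @KEY@ placeholders in a package-info.java template.
    // Key names are illustrative, not the real template's fields.
    public static String render(String template, Map<String, String> fields) {
        String out = template;
        for (Map.Entry<String, String> f : fields.entrySet()) {
            out = out.replace("@" + f.getKey() + "@", f.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("VERSION", "3.0.0-SNAPSHOT");
        fields.put("SRCCHECKSUM", "d41d8cd98f00b204e9800998ecf8427e");
        String template =
            "@HadoopVersionAnnotation(version=\"@VERSION@\", srcChecksum=\"@SRCCHECKSUM@\")\n"
            + "package org.apache.hadoop;\n";
        System.out.print(render(template, fields));
    }
}
```

Because the logic is pure string substitution plus a source checksum, any scripting runtime works; Python was chosen since it is available on Windows without Cygwin.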

 Hadoop Common creating package-info.java must not depend on sh
 --

 Key: HADOOP-8924
 URL: https://issues.apache.org/jira/browse/HADOOP-8924
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8924-branch-trunk-win.patch


 Currently, the build process relies on saveVersion.sh to generate 
 package-info.java with a version annotation.  The sh binary may not be 
 available on all developers' machines (e.g. Windows without Cygwin). This 
 issue tracks removal of that dependency in Hadoop Common.

--


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475389#comment-13475389
 ] 

Steve Loughran commented on HADOOP-8847:


Bikas - is there any reason not to make the {{close()}} operation in 
{{unpackEntries()}} part of the try/finally logic? Other than that it looks 
good to me - the {{Shell.WINDOWS}} checks ensure that the Unix untars will 
continue to use the existing shell commands and not break anything.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.
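The "use Java APIs instead of spawning a process" approach can be sketched with the JDK alone. Tar support needs an external library (e.g. Apache commons-compress), so zip is shown here, mirroring what FileUtil.unZip() does with the built-in java.util.zip classes; this is an illustration, not Hadoop's code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class UnzipDemo {
    // Build an in-memory zip from name -> content pairs.
    public static byte[] zip(Map<String, String> files) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zout = new ZipOutputStream(bos)) {
            for (Map.Entry<String, String> f : files.entrySet()) {
                zout.putNextEntry(new ZipEntry(f.getKey()));
                zout.write(f.getValue().getBytes(StandardCharsets.UTF_8));
                zout.closeEntry();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // Extract entries without shelling out to any external utility.
    public static Map<String, String> unzip(byte[] zipBytes) {
        Map<String, String> files = new LinkedHashMap<>();
        try (ZipInputStream zin = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                // ZipInputStream signals EOF at the end of each entry,
                // so readAllBytes() stops at the entry boundary.
                files.put(entry.getName(),
                          new String(zin.readAllBytes(), StandardCharsets.UTF_8));
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return files;
    }
}
```

Doing the extraction in-process removes the dependency on a platform tar binary, which is the whole point of the patch.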

--


[jira] [Commented] (HADOOP-8900) BuiltInGzipDecompressor : java.io.IOException: stored gzip size doesn't match decompressed size (Slavik Krassovsky)

2012-10-12 Thread Slavik Krassovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475397#comment-13475397
 ] 

Slavik Krassovsky commented on HADOOP-8900:
---

Andy, sounds good, I'll port to branch-1.

 BuiltInGzipDecompressor : java.io.IOException: stored gzip size doesn't match 
 decompressed size (Slavik Krassovsky)
 ---

 Key: HADOOP-8900
 URL: https://issues.apache.org/jira/browse/HADOOP-8900
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win, 2.0.1-alpha
 Environment: Encountered failure when processing large GZIP file
Reporter: Slavik Krassovsky
Assignee: Andy Isaacson
 Attachments: BuiltInGzipDecompressor2.patch, hadoop8900-2.txt, 
 hadoop8900.txt


 Encountered failure when processing large GZIP file
 • Gz: Failed in 1 hr, 13 min, 57 sec with the error:
  java.io.IOException: IO error in map input file 
 hdfs://localhost:9000/Halo4/json_m/gz/NewFileCat.txt.gz
  at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:242)
  at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)
  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:435)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:371)
  at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
  at org.apache.hadoop.mapred.Child.main(Child.java:260)
  Caused by: java.io.IOException: stored gzip size doesn't match decompressed 
 size
  at 
 org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeTrailerState(BuiltInGzipDecompressor.java:389)
  at 
 org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:224)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:82)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:76)
  at java.io.InputStream.read(InputStream.java:102)
  at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
  at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:136)
  at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:40)
  at 
 org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:66)
  at 
 org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:32)
  at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:67)
  at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:236)
  ... 9 more
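For background on the error: the last four bytes of a gzip stream (the ISIZE trailer field, per RFC 1952) hold the uncompressed length modulo 2^32, little-endian. BuiltInGzipDecompressor compares that value against the byte count it actually produced, and a mismatch raises exactly the exception in the trace above. A small JDK-only sketch of the trailer check:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

public class GzipTrailer {
    // Read the ISIZE trailer: uncompressed length mod 2^32, little-endian,
    // stored in the final four bytes of the gzip stream.
    public static long isize(byte[] gz) {
        int n = gz.length;
        return (gz[n - 4] & 0xFFL)
             | (gz[n - 3] & 0xFFL) << 8
             | (gz[n - 2] & 0xFFL) << 16
             | (gz[n - 1] & 0xFFL) << 24;
    }

    // Compress a buffer in memory so the trailer can be inspected.
    public static byte[] gzip(byte[] data) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
            out.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }
}
```

Because ISIZE is only the length modulo 2^32, a decompressor must apply the same modulus when comparing against files larger than 4 GB, which is where a "large GZIP file" can trip a naive check.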

--


[jira] [Commented] (HADOOP-8716) Users/Groups are not created during installation of DEB package

2012-10-12 Thread David Dossot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475406#comment-13475406
 ] 

David Dossot commented on HADOOP-8716:
--

For the sake of others having the same issue, I've been able to workaround the 
problem by running:

{noformat}sudo groupadd -r hadoop
sudo useradd --comment "Hadoop MapReduce" --shell /bin/bash -M -r --groups hadoop --home /var/lib/hadoop/mapred mapred
sudo useradd --comment "Hadoop HDFS" --shell /bin/bash -M -r --groups hadoop --home /var/lib/hadoop/hdfs hdfs{noformat}

 Users/Groups are not created during installation of DEB package
 ---

 Key: HADOOP-8716
 URL: https://issues.apache.org/jira/browse/HADOOP-8716
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.3
 Environment: Ubuntu 12.04 LTS
 x64
Reporter: Mikhail
  Labels: install

 During DEB x64 package installation I got the following errors:
 mak@mak-laptop:~/Downloads$ sudo dpkg -i hadoop_1.0.3-1_x86_64.deb 
 [sudo] password for mak: 
 Selecting previously unselected package hadoop.
 (Reading database ... 195000 files and directories currently installed.)
 Unpacking hadoop (from hadoop_1.0.3-1_x86_64.deb) ...
 groupadd: GID '123' already exists
 Setting up hadoop (1.0.3) ...
 chown: invalid group: `root:hadoop'
 chown: invalid group: `root:hadoop'
 Processing triggers for ureadahead ...
 ureadahead will be reprofiled on next reboot
 Group with ID=123 already exists and belongs to 'saned' according to my 
 /etc/group: saned:x:123:
 Also, during first run I see the following:
 mak@mak-laptop:~/Downloads$ sudo service hadoop-namenode start
  * Starting Apache Hadoop Name Node server hadoop-namenode
   start-stop-daemon: user 'hdfs' not found
 This user wasn't created during installation.

--


[jira] [Created] (HADOOP-8925) Remove packaging

2012-10-12 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8925:
---

 Summary: Remove packaging
 Key: HADOOP-8925
 URL: https://issues.apache.org/jira/browse/HADOOP-8925
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins


Per discussion on HADOOP-8809, now that Bigtop is TLP and supports Hadoop v2 
let's remove the Hadoop packaging from trunk and branch-2. We should remove it 
anyway since it is no longer part of the build post-mavenization, was not updated 
post-MR1 (there is no MR2/YARN packaging), and is not maintained.

--


[jira] [Resolved] (HADOOP-8809) RPMs should skip useradds if the users already exist

2012-10-12 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HADOOP-8809.
-

Resolution: Won't Fix

Filed HADOOP-8925.

 RPMs should skip useradds if the users already exist
 

 Key: HADOOP-8809
 URL: https://issues.apache.org/jira/browse/HADOOP-8809
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Minor

 The hadoop.spec preinstall script creates users - but it does this even if 
 they already exist. This may cause problems if the installation already has 
 those users with different uids. A check with {{id}} can avoid this.

--


[jira] [Assigned] (HADOOP-6616) Improve documentation for rack awareness

2012-10-12 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HADOOP-6616:
---

Assignee: Adam Faris

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
Assignee: Adam Faris
  Labels: newbie
 Attachments: hadoop-6616.patch, hadoop-6616.patch.2, 
 hadoop-6616.patch.3


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--


[jira] [Commented] (HADOOP-6616) Improve documentation for rack awareness

2012-10-12 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475467#comment-13475467
 ] 

Jakob Homan commented on HADOOP-6616:
-

bq.   #  1) each rack is it's own layer 3 network with a /24 subnet, which 
could be typical where each rack has it's own 

bq.   # can create it's 'off-rack' block copy.

s/it's/its/g
Otherwise looks good and ready for commit.
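The subnet convention quoted above - one /24 per rack - reduces to deriving the rack name from the first three octets of a node's IPv4 address. A hedged sketch of that mapping (illustrative naming only; the actual sample script in the patch is a shell script):

```java
public class RackMapper {
    // With one /24 subnet per rack, all hosts sharing the first three
    // octets map to the same rack. Rack-name format is invented here.
    public static String rack(String ipv4) {
        int lastDot = ipv4.lastIndexOf('.');
        return "/rack-" + ipv4.substring(0, lastDot).replace('.', '-');
    }

    public static void main(String[] args) {
        // Nodes in 10.2.3.0/24 all resolve to the same rack.
        System.out.println(rack("10.2.3.47"));
    }
}
```

Hadoop invokes the configured topology script with a host address and reads the rack path from stdout, so any mapping with this shape works.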

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
Assignee: Adam Faris
  Labels: newbie
 Attachments: hadoop-6616.patch, hadoop-6616.patch.2, 
 hadoop-6616.patch.3


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8887:
-

Attachment: HADOOP-8887.005.patch

* move org.apache.maven.plugin.cmake.ng.* to org.apache.hadoop.cmake.maven.ng.*

* removed CleanMojo

* merged GenerateMojo and CompileMojo

* clearer test output.

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
 HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475480#comment-13475480
 ] 

Colin Patrick McCabe commented on HADOOP-8887:
--

By the way, one thing that's cool about this patch is that you can now run

{code}
mvn test -Pnative -Dtest=test_native_mini_dfs
{code}

and it will run the native test, just like you would expect.

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
 HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475481#comment-13475481
 ] 

Colin Patrick McCabe commented on HADOOP-8887:
--

oh, also, I added the default parameters as tucu suggested.

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
 HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--


[jira] [Commented] (HADOOP-6616) Improve documentation for rack awareness

2012-10-12 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475502#comment-13475502
 ] 

Joep Rottinghuis commented on HADOOP-6616:
--

LGTM

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
Assignee: Adam Faris
  Labels: newbie
 Attachments: hadoop-6616.patch, hadoop-6616.patch.2, 
 hadoop-6616.patch.3


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--