[jira] [Commented] (HADOOP-10087) UserGroupInformation.getGroupNames() fails to return primary group first when JniBasedUnixGroupsMappingWithFallback is used
[ https://issues.apache.org/jira/browse/HADOOP-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846372#comment-13846372 ] Hudson commented on HADOOP-10087: - FAILURE: Integrated in Hadoop-Hdfs-trunk #1610 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1610/]) HADOOP-10087. UserGroupInformation.getGroupNames() fails to return primary group first when JniBasedUnixGroupsMappingWithFallback is used (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1550229) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/hadoop_user_info.c UserGroupInformation.getGroupNames() fails to return primary group first when JniBasedUnixGroupsMappingWithFallback is used --- Key: HADOOP-10087 URL: https://issues.apache.org/jira/browse/HADOOP-10087 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.1.0-beta, 2.2.0 Environment: SUSE Linux Enterprise Server 11 (x86_64) Reporter: Yu Gao Assignee: Colin Patrick McCabe Labels: security Fix For: 2.3.0 Attachments: HADOOP-10087.001.patch, HADOOP-10087.002.patch When JniBasedUnixGroupsMappingWithFallback is used as the group mapping resolution provider, UserGroupInformation.getGroupNames() fails to return the primary group first in the list as documented. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Updated] (HADOOP-10162) BACKPORT HADOOP-10052 to Branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-10162: - Assignee: Mit Desai (was: Jonathan Eagles) BACKPORT HADOOP-10052 to Branch-2 - Key: HADOOP-10162 URL: https://issues.apache.org/jira/browse/HADOOP-10162 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.4.0 Reporter: Mit Desai Assignee: Mit Desai We need to backport HADOOP-10052 to branch-2, as TestFileContextResolveAfs started failing after HADOOP-10020 went in. TestStat is failing for the same reason and needs to be fixed as well.
[jira] [Updated] (HADOOP-10162) BACKPORT HADOOP-10052 to Branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mit Desai updated HADOOP-10162: --- Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-10162) BACKPORT HADOOP-10052 to Branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846435#comment-13846435 ] Hadoop QA commented on HADOOP-10162: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12618431/HADOOP-10162.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:red}-1 javac{color}. The patch appears to cause the build to fail. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3357//console This message is automatically generated.
[jira] [Updated] (HADOOP-10162) BACKPORT HADOOP-10052 to Branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mit Desai updated HADOOP-10162: --- Attachment: HADOOP-10162.patch Patch posted ONLY for branch-2. This change does not go into trunk.
[jira] [Commented] (HADOOP-10162) BACKPORT HADOOP-10052 to Branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846452#comment-13846452 ] Mit Desai commented on HADOOP-10162: It was supposed to fail, as this change is only for branch-2.
[jira] [Created] (HADOOP-10163) Enhance jenkinsPrecommitAdmin.py to pass attachment Id for last tested patch
Ted Yu created HADOOP-10163: --- Summary: Enhance jenkinsPrecommitAdmin.py to pass attachment Id for last tested patch Key: HADOOP-10163 URL: https://issues.apache.org/jira/browse/HADOOP-10163 Project: Hadoop Common Issue Type: Improvement Reporter: Ted Yu In HBASE-10044, an attempt was made to filter attachments according to known file extensions. However, that change alone wouldn't work: when a non-patch file is attached, the QA bot doesn't provide the attachment Id of the last tested patch, so the modified test-patch.sh seeks backward and launches a duplicate test run for the last tested patch. If the attachment Id of the last tested patch were provided, test-patch.sh could decide whether there is a need to run tests.
[jira] [Updated] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-10163: Summary: Attachment Id for last tested patch should be passed to test-patch.sh (was: Enhance jenkinsPrecommitAdmin.py to pass attachment Id for last tested patch)
[jira] [Updated] (HADOOP-10162) Fix symlink-related test failures in TestFileContextResolveAfs and TestStat in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HADOOP-10162: -- Summary: Fix symlink-related test failures in TestFileContextResolveAfs and TestStat in branch-2 (was: BACKPORT HADOOP-10052 to Branch-2)
[jira] [Commented] (HADOOP-10162) Fix symlink-related test failures in TestFileContextResolveAfs and TestStat in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846582#comment-13846582 ] Colin Patrick McCabe commented on HADOOP-10162: --- Thanks for looking at this, Mit. Since this patch is for branch-2, it does not require a Jenkins run, and we usually don't press "patch available". I renamed this to "Fix symlink-related test failures in TestFileContextResolveAfs and TestStat in branch-2", since that's really what this patch is about (the disabling patch is already backported to these branches). I have run the tests and verified that it fixes them as promised. +1.
[jira] [Updated] (HADOOP-10044) Improve the javadoc of rpc code
[ https://issues.apache.org/jira/browse/HADOOP-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sanjay Radia updated HADOOP-10044: -- Resolution: Fixed Target Version/s: 2.3.0 Status: Resolved (was: Patch Available) The failed test timeout is unrelated (and I also ran it successfully). Committed. Improve the javadoc of rpc code --- Key: HADOOP-10044 URL: https://issues.apache.org/jira/browse/HADOOP-10044 Project: Hadoop Common Issue Type: Improvement Reporter: Sanjay Radia Assignee: Sanjay Radia Priority: Minor Attachments: HADOOP-10044.20131014.patch, hadoop-10044.patch
[jira] [Updated] (HADOOP-10162) Fix symlink-related test failures in TestFileContextResolveAfs and TestStat in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HADOOP-10162: -- Resolution: Fixed Fix Version/s: 2.4.0 Target Version/s: 2.4.0 Status: Resolved (was: Patch Available)
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846592#comment-13846592 ] Ted Yu commented on HADOOP-10163: - That approach would imply scanning comments until "ATTACHMENT ID" is found, right?
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846585#comment-13846585 ] Brock Noland commented on HADOOP-10163: --- We handled this scenario in Hive by placing "ATTACHMENT ID: <attachment id>" in the HiveQA comment: https://issues.apache.org/jira/browse/HIVE-5973?focusedCommentId=13846235&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13846235 If Hadoop/HBase did that, then they could eliminate duplicate runs as we have done in Hive.
[jira] [Commented] (HADOOP-10044) Improve the javadoc of rpc code
[ https://issues.apache.org/jira/browse/HADOOP-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846602#comment-13846602 ] Hudson commented on HADOOP-10044: - SUCCESS: Integrated in Hadoop-trunk-Commit #4874 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/4874/]) HADOOP-10044 Improve the javadoc of rpc code (sanjay Radia) (sradia: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1550486) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
[jira] [Commented] (HADOOP-10162) Fix symlink-related test failures in TestFileContextResolveAfs and TestStat in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846622#comment-13846622 ] Mit Desai commented on HADOOP-10162: Thanks Colin. I will keep this in mind from now on :-)
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846634#comment-13846634 ] Brock Noland commented on HADOOP-10163: --- Yep
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846646#comment-13846646 ] Ted Yu commented on HADOOP-10163: - Do you have sample code on how to iterate through JIRA comments? Thanks
How can I make a custom counter in the MapReduce engine?
I tried to add my own counter to measure the running time of MapTask.run and ReduceTask.run, using a global counter and even LOG.info(). In pseudo-distributed mode, using a mapreduce-core-snapshot.jar that I built after modifying MapTask.java and ReduceTask.java, it works exactly as I expected. However, on a real cluster, the code I injected doesn't run and only the system-default counters show up. I suspect the logging logic or security settings differ between pseudo-distributed and cluster mode, but I am not sure about that. Can anyone shed light on this? Thanks!
[jira] [Commented] (HADOOP-10110) hadoop-auth has a build break due to missing dependency
[ https://issues.apache.org/jira/browse/HADOOP-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846691#comment-13846691 ] Arpit Agarwal commented on HADOOP-10110: What's the target branch for 2.0.x? Thanks. hadoop-auth has a build break due to missing dependency --- Key: HADOOP-10110 URL: https://issues.apache.org/jira/browse/HADOOP-10110 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0, 2.0.6-alpha Reporter: Chuan Liu Assignee: Chuan Liu Priority: Blocker Fix For: 3.0.0, 2.3.0 Attachments: HADOOP-10110.patch We have a build break in hadoop-auth if built with the maven cache cleaned. The error looks like the following. The problem exists on both Windows and Linux. If you have old jetty jars in your maven cache, you won't see the error.
{noformat}
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:29.469s
[INFO] Finished at: Mon Nov 18 12:30:36 PST 2013
[INFO] Final Memory: 37M/120M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hadoop-auth: Compilation failure: Compilation failure:
[ERROR] /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[84,13] cannot access org.mortbay.component.AbstractLifeCycle
[ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
[ERROR] server = new Server(0);
[ERROR] /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[94,29] cannot access org.mortbay.component.LifeCycle
[ERROR] class file for org.mortbay.component.LifeCycle not found
[ERROR] server.getConnectors()[0].setHost(host);
[ERROR] /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,10] cannot find symbol
[ERROR] symbol  : method start()
[ERROR] location: class org.mortbay.jetty.Server
[ERROR] /home/chuan/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[102,12] cannot find symbol
[ERROR] symbol  : method stop()
[ERROR] location: class org.mortbay.jetty.Server
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-auth
{noformat}
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846759#comment-13846759 ] Brock Noland commented on HADOOP-10163: --- We don't iterate JIRA comments; it's a simple check that the JIRA text contains "ATTACHMENT ID: XXX". Not sure if this holds for other projects, but in Hive a specific attachment is only tested once.
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846765#comment-13846765 ] Ted Yu commented on HADOOP-10163: - Here is the proposed change from HBASE-10044:
{code}
+relativePatchURL=`$GREP -o '/jira/secure/attachment/[0-9]*/[^"]*' $PATCH_DIR/jira | $EGREP '(\.txt$|\.patch$|\.diff$)' | sort | tail -1 | $GREP -o '/jira/secure/attachment/[0-9]*/[^"]*'`
{code}
Suppose patch A was tested on day 1 - the JIRA stayed in Patch Available status. Some more comments were then added, and a thread dump named thread.out was attached. The QA bot would be triggered by the new attachment. However, the above change would filter out the dump file and go back to patch A, which has already been run. The attachment ID of the last tested patch should be known so that test-patch.sh can decide that no new patch has been attached and bail out.
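Ted's proposal amounts to comparing the newest patch-like attachment against the ID of the last tested patch. A minimal sketch, assuming jenkinsPrecommitAdmin.py passes that ID in (the variable names and sample values here are hypothetical):

```shell
# Hypothetical: ID of the last patch the QA bot tested, as the admin script would pass it.
LAST_TESTED_ID=12618431

# Newest attachment URL that survives the extension filter (thread dumps, .htm files dropped).
PATCH_URL='/jira/secure/attachment/12618431/HADOOP-10162.patch'

# The attachment ID is the directory component of the attachment URL.
ATTACHMENT_ID=$(basename "$(dirname "$PATCH_URL")")

if [ "$ATTACHMENT_ID" = "$LAST_TESTED_ID" ]; then
  echo "Attachment $ATTACHMENT_ID already tested; bailing out"
else
  echo "New attachment $ATTACHMENT_ID; running test-patch.sh"
fi
# prints: Attachment 12618431 already tested; bailing out
```

With the ID available, test-patch.sh can bail out without re-running a patch that has already been tested.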
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846820#comment-13846820 ] Ted Yu commented on HADOOP-10163: - The above scenario may actually happen to Hive: patch A was tested on day 1; B.htm is attached to the same JIRA on day 2. test-patch.sh would filter out B.htm and run patch A again.
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846844#comment-13846844 ] Brock Noland commented on HADOOP-10163: --- Here is what we do with Hive, where $JIRA_TEXT is a file containing the entire HTML page:
{noformat}
# pull attachments from JIRA (hack stolen from hadoop since rest api doesn't show attachments)
PATCH_URL=$(grep -o '/jira/secure/attachment/[0-9]*/[^"]*' $JIRA_TEXT | \
  grep -v -e 'htm[l]*$' | sort | tail -1 | \
  grep -o '/jira/secure/attachment/[0-9]*/[^"]*')
if [[ -z "$PATCH_URL" ]]
then
  echo "Unable to find attachment for $JIRA_NAME"
  exit 1
fi
# ensure attachment has not already been tested
ATTACHMENT_ID=$(basename $(dirname $PATCH_URL))
if grep -q "ATTACHMENT ID: $ATTACHMENT_ID" $JIRA_TEXT
then
  echo "Attachment $ATTACHMENT_ID is already tested for $JIRA_NAME"
  exit 1
fi
{noformat}
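A quick way to sanity-check the filter Brock describes is to run the same pipeline over a fabricated sample of JIRA page text (the attachment URLs below are made up, and the quoting in the pasted script above is mangled, so treat this as a reconstruction):

```shell
# Fabricated sample of a JIRA HTML page containing three attachment links.
JIRA_TEXT=$(mktemp)
cat > "$JIRA_TEXT" <<'EOF'
href="/jira/secure/attachment/100/HADOOP-1.patch"
href="/jira/secure/attachment/200/thread.htm"
href="/jira/secure/attachment/150/HADOOP-1.2.patch"
EOF

# Pull attachment URLs, drop .htm/.html files, take the last one.
# (sort is lexicographic, which works while attachment IDs have the same width.)
PATCH_URL=$(grep -o '/jira/secure/attachment/[0-9]*/[^"]*' "$JIRA_TEXT" | \
  grep -v -e 'htm[l]*$' | sort | tail -1 | \
  grep -o '/jira/secure/attachment/[0-9]*/[^"]*')
rm -f "$JIRA_TEXT"

echo "$PATCH_URL"
# prints: /jira/secure/attachment/150/HADOOP-1.2.patch
```

Note that thread.htm (ID 200) is the newest attachment but gets filtered out, which is exactly Ted's scenario: without knowing the last tested attachment ID, the pipeline falls back to an already-tested patch.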
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846852#comment-13846852 ] Brock Noland commented on HADOOP-10163: --- Also, later we match the file name.
{noformat}
# validate the patch name, parse branch if needed
shopt -s nocasematch
PATCH_NAME=$(basename $PATCH_URL)
# Test examples:
# HIVE-123.patch HIVE-123.1.patch HIVE-123.D123.patch HIVE-123.D123.1.patch HIVE-123-tez.patch HIVE-123.1-tez.patch
# HIVE-.patch, HIVE-.XX.patch HIVE-.XX-branch.patch HIVE--branch.patch
if [[ $PATCH_NAME =~ ^HIVE-[0-9]+(\.[0-9]+)?(-[a-z0-9-]+)?\.(patch|patch\.txt)$ ]]
then
  if [[ -n "${BASH_REMATCH[2]}" ]]
  then
    BRANCH=${BASH_REMATCH[2]#*-}
  else
    echo "Assuming branch $BRANCH"
  fi
# HIVE-.D.patch or HIVE-.D.XX.patch
elif [[ $PATCH_NAME =~ ^(HIVE-[0-9]+\.)?D[0-9]+(\.[0-9]+)?\.(patch|patch\.txt)$ ]]
then
  echo "Assuming branch $BRANCH"
else
  echo "Patch $PATCH_NAME does not appear to be a patch"
  exit 1
fi
{noformat}
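The branch-parsing behavior above can be exercised standalone. The regex below is reconstructed from the pasted script (quoting and escapes in the JIRA text are mangled), and the default branch name is hypothetical, so treat it as an approximation:

```shell
# Reconstructed (approximate) Hive patch-name check: a trailing "-<branch>"
# component in the patch name selects the branch to test against.
shopt -s nocasematch
BRANCH=trunk   # hypothetical default
PATCH_NAME=HIVE-123.1-tez.patch

if [[ $PATCH_NAME =~ ^HIVE-[0-9]+(\.[0-9]+)?(-[a-z0-9-]+)?\.(patch|patch\.txt)$ ]]
then
  if [[ -n "${BASH_REMATCH[2]}" ]]
  then
    # BASH_REMATCH[2] is "-tez"; strip the leading dash.
    BRANCH=${BASH_REMATCH[2]#*-}
  fi
fi
echo "$BRANCH"
# prints: tez
```

So HIVE-123.1-tez.patch selects the "tez" branch, while HIVE-123.1.patch would leave the default in place.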
[jira] [Updated] (HADOOP-8753) LocalDirAllocator throws ArithmeticException: / by zero when there is no available space on configured local dir
[ https://issues.apache.org/jira/browse/HADOOP-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hitesh Shah updated HADOOP-8753: Resolution: Fixed Fix Version/s: 2.4.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-2. Thanks for the patience [~benoyantony] LocalDirAllocator throws ArithmeticException: / by zero when there is no available space on configured local dir -- Key: HADOOP-8753 URL: https://issues.apache.org/jira/browse/HADOOP-8753 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.2-alpha Reporter: Nishan Shetty, Huawei Assignee: Benoy Antony Priority: Minor Fix For: 2.4.0 Attachments: HADOOP-8753.1.copy.patch, HADOOP-8753.1.patch, YARN-16.patch
12/08/09 13:59:49 INFO mapreduce.Job: Task Id : attempt_1344492468506_0023_m_00_0, Status : FAILED
java.lang.ArithmeticException: / by zero
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:371)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:257)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)
Instead of throwing the exception directly, we can log a warning saying there is no available space on the configured local dir.
[jira] [Commented] (HADOOP-8753) LocalDirAllocator throws ArithmeticException: / by zero when there is no available space on configured local dir
[ https://issues.apache.org/jira/browse/HADOOP-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846897#comment-13846897 ] Hudson commented on HADOOP-8753: FAILURE: Integrated in Hadoop-trunk-Commit #4876 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/4876/]) HADOOP-8753. LocalDirAllocator throws ArithmeticException: divide by zero when there is no available space on configured local dir. Contributed by Benoy Antony. (hitesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1550570) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java LocalDirAllocator throws ArithmeticException: / by zero when there is no available space on configured local dir -- Key: HADOOP-8753 URL: https://issues.apache.org/jira/browse/HADOOP-8753 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.2-alpha Reporter: Nishan Shetty, Huawei Assignee: Benoy Antony Priority: Minor Fix For: 2.4.0 Attachments: HADOOP-8753.1.copy.patch, HADOOP-8753.1.patch, YARN-16.patch
12/08/09 13:59:49 INFO mapreduce.Job: Task Id : attempt_1344492468506_0023_m_00_0, Status : FAILED
java.lang.ArithmeticException: / by zero
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:371)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:257)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)
Instead of throwing the exception directly, we can log a warning saying there is no available space on the configured local dirs. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (HADOOP-10163) Attachment Id for last tested patch should be passed to test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847064#comment-13847064 ] Ted Yu commented on HADOOP-10163: - I will try to digest the above code. However, for a JIRA with a long discussion, e.g. HBASE-8755, searching the comments for the attachment Id seems inefficient. Attachment Id for last tested patch should be passed to test-patch.sh - Key: HADOOP-10163 URL: https://issues.apache.org/jira/browse/HADOOP-10163 Project: Hadoop Common Issue Type: Improvement Reporter: Ted Yu In HBASE-10044, an attempt was made to filter attachments according to known file extensions. However, that change alone wouldn't work because when a non-patch file is attached, the QA bot doesn't provide the attachment Id of the last tested patch. This causes the modified test-patch.sh to seek backward and launch a duplicate test run for the last tested patch. If the attachment Id of the last tested patch were provided, test-patch.sh could decide whether a test run is needed. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Commented] (HADOOP-10149) Create ByteBuffer-based cipher API
[ https://issues.apache.org/jira/browse/HADOOP-10149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847237#comment-13847237 ] Yi Liu commented on HADOOP-10149: - Hi Owen, as for “we should use the standard javax.crypto.Cipher API with a custom provider that is based on openssl”, we have already done that work; you can find it in HADOOP-10150. Create ByteBuffer-based cipher API -- Key: HADOOP-10149 URL: https://issues.apache.org/jira/browse/HADOOP-10149 Project: Hadoop Common Issue Type: Bug Reporter: Owen O'Malley Assignee: Owen O'Malley As part of HDFS-5143, [~hitliuyi] included a ByteBuffer-based API for encryption and decryption. Especially because of the zero-copy work, this seems like an important piece of work. This API should be discussed independently instead of just as part of HDFS-5143. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
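For readers unfamiliar with the standard API being referenced: the JCE `javax.crypto.Cipher` class already has ByteBuffer-to-ByteBuffer `update`/`doFinal` overloads, and with direct buffers a native (e.g. openssl-backed) provider can transform data without copying it through the Java heap — which is why it matters for the zero-copy work. The sketch below is a minimal, hypothetical round-trip using the stock SunJCE provider, not the HDFS-5143 or HADOOP-10150 code; the fixed all-zero IV is for demonstration only and must never be reused with the same key in real code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

// Minimal sketch of ByteBuffer-based encryption via the standard JCE API.
public class ByteBufferCipherDemo {
    static String roundTrip(String msg) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        IvParameterSpec iv = new IvParameterSpec(new byte[16]); // demo-only fixed IV

        byte[] msgBytes = msg.getBytes(StandardCharsets.UTF_8);
        ByteBuffer plain = ByteBuffer.allocateDirect(msgBytes.length);
        plain.put(msgBytes);
        plain.flip();

        // Encrypt ByteBuffer -> ByteBuffer; no intermediate byte[] needed.
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, iv);
        ByteBuffer encrypted = ByteBuffer.allocateDirect(msgBytes.length);
        enc.doFinal(plain, encrypted);
        encrypted.flip();

        // Decrypt the same way and recover the original text.
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, iv);
        ByteBuffer decrypted = ByteBuffer.allocateDirect(msgBytes.length);
        dec.doFinal(encrypted, decrypted);
        decrypted.flip();

        byte[] out = new byte[decrypted.remaining()];
        decrypted.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello, zero-copy world"));
    }
}
```

CTR is a stream mode, so ciphertext length equals plaintext length and the output buffers can be sized exactly; a custom openssl-backed provider would plug in behind the same `Cipher.getInstance` call.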