[jira] [Created] (HADOOP-11206) TestCryptoCodec.testOpensslAesCtrCryptoCodec fails on master without native code compiled
Robert Joseph Evans created HADOOP-11206: Summary: TestCryptoCodec.testOpensslAesCtrCryptoCodec fails on master without native code compiled Key: HADOOP-11206 URL: https://issues.apache.org/jira/browse/HADOOP-11206 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Robert Joseph Evans I tried to run the unit tests recently for another issue, and didn't turn on native code. I got the following error. {code} Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.71 sec FAILURE! - in org.apache.hadoop.crypto.TestCryptoCodec testOpensslAesCtrCryptoCodec(org.apache.hadoop.crypto.TestCryptoCodec) Time elapsed: 0.064 sec ERROR! java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method) at org.apache.hadoop.crypto.TestCryptoCodec.testOpensslAesCtrCryptoCodec(TestCryptoCodec.java:66) {code} Looks like that test needs an assume that native code is loaded/compiled. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-10164) Allow UGI to login with a known Subject
Robert Joseph Evans created HADOOP-10164: Summary: Allow UGI to login with a known Subject Key: HADOOP-10164 URL: https://issues.apache.org/jira/browse/HADOOP-10164 Project: Hadoop Common Issue Type: Improvement Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans Attachments: login-from-subject-branch-0.23.txt, login-from-subject.txt For Storm, I would love to let Hadoop initialize based on credentials that were already populated in a Subject. This is not currently possible because logging in a user always creates a new blank Subject. This change allows a user to be logged in from a pre-existing Subject through a new method. -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory
Robert Joseph Evans created HADOOP-9438: --- Summary: LocalFileContext does not throw an exception on mkdir for already existing directory Key: HADOOP-9438 URL: https://issues.apache.org/jira/browse/HADOOP-9438 Project: Hadoop Common Issue Type: Bug Reporter: Robert Joseph Evans Priority: Critical According to http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29 mkdir should throw a FileAlreadyExistsException if the directory already exists. I tested this, and {code} FileContext lfc = FileContext.getLocalFSFileContext(new Configuration()); Path p = new Path("/tmp/bobby.12345"); FsPermission cachePerms = new FsPermission((short) 0755); lfc.mkdir(p, cachePerms, false); lfc.mkdir(p, cachePerms, false); {code} never throws an exception. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
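For contrast, java.nio.file implements the contract the FileContext javadoc describes: creating an already-existing directory throws FileAlreadyExistsException. A minimal JDK-only sketch (class and path names are illustrative, not the Hadoop API):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirContract {
    // Returns true when a second mkdir of the same path fails, which is the
    // behavior the FileContext javadoc promises and LocalFileContext lacks.
    static boolean secondMkdirFails() throws IOException {
        Path p = Files.createTempDirectory("mkdir-demo").resolve("bobby.12345");
        Files.createDirectory(p);          // first call succeeds
        try {
            Files.createDirectory(p);      // second call should throw
            return false;                  // no exception: the reported bug
        } catch (FileAlreadyExistsException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(secondMkdirFails());
    }
}
```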
[jira] [Resolved] (HADOOP-9419) CodecPool should avoid OOMs with buggy codecs
[ https://issues.apache.org/jira/browse/HADOOP-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans resolved HADOOP-9419. - Resolution: Won't Fix Never mind. I created a patch, and it is completely useless in fixing this problem. The tasks still OOM because the codec itself is so small and the MergeManager creates new codecs so quickly that on a job with lots of reduces it literally uses up all of the address space with direct byte buffers. Some of the processes get killed by the NM for going over the virtual address space before they OOM. We could try to have the CodecPool detect that the codec is doing the wrong thing and correct it for the codec, but that is too heavy-handed in my opinion. CodecPool should avoid OOMs with buggy codecs - Key: HADOOP-9419 URL: https://issues.apache.org/jira/browse/HADOOP-9419 Project: Hadoop Common Issue Type: Improvement Reporter: Robert Joseph Evans I recently found a bug in the gpl compression libraries that was causing map tasks for a particular job to OOM. https://github.com/omalley/hadoop-gpl-compression/issues/3 Now granted it does not make a lot of sense for a job to use the LzopCodec for map output compression over the LzoCodec, but arguably other codecs could be doing similar things and causing the same sort of memory leaks. I propose that we do a sanity check when creating a new decompressor/compressor. If the codec's newly created object does not match the value from getType()... it should turn off caching for that codec. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
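The proposed sanity check can be sketched in plain Java (all names here are hypothetical; the real CodecPool API differs): if the object a codec's factory creates does not match the type the codec declares, stop caching objects for that codec.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical sketch of the proposed sanity check: when a factory returns an
// object of a different class than it declares (analogous to getType()),
// disable pooling for that codec so buggy instances are never cached.
public class SanityCheckedPool<T> {
    private final Deque<T> pool = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final Class<?> declaredType;   // stands in for Codec.getType()
    private boolean cachingEnabled = true;

    public SanityCheckedPool(Supplier<T> factory, Class<?> declaredType) {
        this.factory = factory;
        this.declaredType = declaredType;
    }

    public T borrow() {
        T obj = pool.poll();
        if (obj == null) {
            obj = factory.get();
            if (!declaredType.isInstance(obj)) {
                cachingEnabled = false;    // buggy codec: stop caching for it
            }
        }
        return obj;
    }

    public void release(T obj) {
        if (cachingEnabled) pool.push(obj);
    }

    boolean isCachingEnabled() { return cachingEnabled; }

    public static void main(String[] args) {
        // Factory declares Integer but produces a String: caching is turned off.
        SanityCheckedPool<Object> p =
            new SanityCheckedPool<>(() -> "oops", Integer.class);
        p.release(p.borrow());
        System.out.println(p.isCachingEnabled()); // false
    }
}
```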
[jira] [Created] (HADOOP-9169) Bring branch-0.23 ExitUtil up to same level as branch-2
Robert Joseph Evans created HADOOP-9169: --- Summary: Bring branch-0.23 ExitUtil up to same level as branch-2 Key: HADOOP-9169 URL: https://issues.apache.org/jira/browse/HADOOP-9169 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.5 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans ExitUtil in 0.23 is behind branch-2 because a number of changes went in as part of HDFS JIRAs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9025) org.apache.hadoop.tools.TestCopyListing failing
Robert Joseph Evans created HADOOP-9025: --- Summary: org.apache.hadoop.tools.TestCopyListing failing Key: HADOOP-9025 URL: https://issues.apache.org/jira/browse/HADOOP-9025 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Robert Joseph Evans https://builds.apache.org/job/PreCommit-HADOOP-Build/1732//testReport/org.apache.hadoop.tools/TestCopyListing/testDuplicates/ -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8996) Error in Hadoop installation
[ https://issues.apache.org/jira/browse/HADOOP-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans resolved HADOOP-8996. - Resolution: Invalid Error in Hadoop installation Key: HADOOP-8996 URL: https://issues.apache.org/jira/browse/HADOOP-8996 Project: Hadoop Common Issue Type: Bug Environment: fedora 15 Reporter: Shiva I am trying to install Hadoop on a Fedora machine by following http://hadoop.apache.org/docs/r0.15.2/quickstart.html; 1. Installed Java (and verified that it exists with `java -version`) 2. ssh is already installed (since it is Linux) 3. Downloaded the latest version, hadoop 1.0.4, from http://apache.techartifact.com/mirror/hadoop/common/hadoop-1.0.4/; I followed the process shown in the installation tutorial (link given above) as below: $ mkdir input $ cp conf/*.xml input $ bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 'dfs[a-z.]+' Then I got the following error, which I am unable to understand: sh-4.2$ bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 'dfs[a-z.]+' 12/10/31 16:14:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library 12/10/31 16:14:35 WARN snappy.LoadSnappy: Snappy native library not loaded 12/10/31 16:14:35 INFO mapred.FileInputFormat: Total input paths to process : 8 12/10/31 16:14:35 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-thomas/mapred/staging/thomas-857393825/.staging/job_local_0001 12/10/31 16:14:35 ERROR security.UserGroupInformation: PriviledgedActionException as:thomas cause:java.io.IOException: Not a file: file:/home/local/thomas/Hadoop/hadoop-1.0.4/input/conf java.io.IOException: Not a file: file:/home/local/thomas/Hadoop/hadoop-1.0.4/input/conf at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215) at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989) at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981) at 
org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:416) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824) at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261) at org.apache.hadoop.examples.Grep.run(Grep.java:69) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.hadoop.examples.Grep.main(Grep.java:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68) at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139) at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.util.RunJar.main(RunJar.java:156) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8986) Server$Call object is never released after it is sent
Robert Joseph Evans created HADOOP-8986: --- Summary: Server$Call object is never released after it is sent Key: HADOOP-8986 URL: https://issues.apache.org/jira/browse/HADOOP-8986 Project: Hadoop Common Issue Type: Bug Components: ipc Affects Versions: 0.23.4, 2.0.2-alpha Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans Priority: Critical When an IPC response cannot be returned without blocking, the Server$Call object is attached to the SelectionKey of the write selector. However, the call object is never removed from the SelectionKey after the response is sent. So a long-lived connection that only occasionally has large responses retains a lot of data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
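The leak can be seen in stdlib NIO terms (a sketch, not the actual Hadoop patch): whatever is attached to a SelectionKey stays strongly referenced until it is explicitly replaced, so once a deferred response has been fully written the server should attach(null) to release it.

```java
import java.io.IOException;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class DetachDemo {
    // Attach a large "pending response" to a write-interest key, then detach
    // it once the write is done; returns true when the reference is dropped.
    static boolean attachmentClearedAfterSend() throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);
        SelectionKey key = pipe.sink().register(selector, SelectionKey.OP_WRITE);

        key.attach(new byte[64 * 1024]);   // stands in for the Server$Call
        // ... the write selector would drain the response here ...
        key.attach(null);                  // without this, the data is retained
                                           // for the life of the connection
        boolean cleared = (key.attachment() == null);
        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return cleared;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(attachmentClearedAfterSend());
    }
}
```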
[jira] [Created] (HADOOP-8826) Docs still refer to 0.20.205 as stable line
Robert Joseph Evans created HADOOP-8826: --- Summary: Docs still refer to 0.20.205 as stable line Key: HADOOP-8826 URL: https://issues.apache.org/jira/browse/HADOOP-8826 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans Priority: Minor The main docs page still refers to 0.20.205 as the stable line; 1.0 is the stable line now. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8824) in docs yarn-default.xml is pointing to the wrong spot
[ https://issues.apache.org/jira/browse/HADOOP-8824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans resolved HADOOP-8824. - Resolution: Duplicate in docs yarn-default.xml is pointing to the wrong spot -- Key: HADOOP-8824 URL: https://issues.apache.org/jira/browse/HADOOP-8824 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3, 3.0.0, 2.0.2-alpha Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans Attachments: HADOOP-8824.txt A simple change: just update site.xml to point to where the file is placed. I assume this happened when YARN was split off from MR. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8822) relnotes.py was deleted post mavenization
Robert Joseph Evans created HADOOP-8822: --- Summary: relnotes.py was deleted post mavenization Key: HADOOP-8822 URL: https://issues.apache.org/jira/browse/HADOOP-8822 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.3 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans relnotes.py was removed post-mavenization. It needs to be added back in so we can generate release notes, and it should be updated to deal with YARN and the separate release notes files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8724) Add improved APIs for globbing
Robert Joseph Evans created HADOOP-8724: --- Summary: Add improved APIs for globbing Key: HADOOP-8724 URL: https://issues.apache.org/jira/browse/HADOOP-8724 Project: Hadoop Common Issue Type: Improvement Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans After the discussion on HADOOP-8709 it was decided that we need better APIs for globbing to remove some of the inconsistencies with other APIs. In order to maintain backwards compatibility we should deprecate the existing APIs and add in new ones. See HADOOP-8709 for more information about exactly how those APIs should look and behave. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-8551) fs -mkdir creates parent directories without the -p option
[ https://issues.apache.org/jira/browse/HADOOP-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans reopened HADOOP-8551:
-mkdir a
-mkdir a/b/ (Fails)
-mkdir a/b (Succeeds)
I am going to revert this until it is fixed; thanks for catching this, John. fs -mkdir creates parent directories without the -p option -- Key: HADOOP-8551 URL: https://issues.apache.org/jira/browse/HADOOP-8551 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0 Reporter: Robert Joseph Evans Assignee: Daryn Sharp Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8551.patch, HADOOP-8551.patch hadoop fs -mkdir foo/bar will work even if bar is not present. It should only work if -p is given and foo is not present. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8573) Configuration tries to read from an inputstream resource multiple times.
Robert Joseph Evans created HADOOP-8573: --- Summary: Configuration tries to read from an inputstream resource multiple times. Key: HADOOP-8573 URL: https://issues.apache.org/jira/browse/HADOOP-8573 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 1.0.2, 0.23.3, 2.0.1-alpha, 3.0.0 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans If someone calls Configuration.addResource(InputStream) and reloadConfiguration is then called for any reason, Configuration will try to reread the contents of the InputStream after it has already closed it. This never showed up in 1.0 because the framework itself does not call addResource with an InputStream, and typically by the time user code starts running that might call this, all of the default and site resources have already been loaded. In 0.23 mapreduce is now a client library, and mapred-site.xml and mapred-default.xml are loaded much later in the process. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
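One way to make a stream resource safely replayable can be sketched with the JDK alone (hypothetical class; not the actual fix that went into Hadoop): drain the InputStream into a byte[] when the resource is added, and hand each reload a fresh ByteArrayInputStream instead of the original, now-closed stream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: buffer the stream's contents once at addResource time
// so that every subsequent reloadConfiguration can re-parse them.
public class BufferedResource {
    private final byte[] contents;

    public BufferedResource(InputStream in) throws IOException {
        contents = in.readAllBytes();  // read once; the stream may then close
        in.close();
    }

    // Each reload gets an independent, replayable stream over the same bytes.
    public InputStream open() {
        return new ByteArrayInputStream(contents);
    }
}
```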
[jira] [Resolved] (HADOOP-8559) PMML Support in Hadoop Cluster
[ https://issues.apache.org/jira/browse/HADOOP-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans resolved HADOOP-8559. - Resolution: Won't Fix PMML Support in Hadoop Cluster -- Key: HADOOP-8559 URL: https://issues.apache.org/jira/browse/HADOOP-8559 Project: Hadoop Common Issue Type: New Feature Components: util Environment: Software Platform Reporter: Duraimurugan Priority: Minor Labels: newbie I would like to request support for PMML. With that, once predictive models are built and provided in PMML format, we should be able to import them into a Hadoop cluster for scoring. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8550) hadoop fs -touchz automatically created parent directories
Robert Joseph Evans created HADOOP-8550: --- Summary: hadoop fs -touchz automatically created parent directories Key: HADOOP-8550 URL: https://issues.apache.org/jira/browse/HADOOP-8550 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.3, 2.0.1-alpha, 3.0.0 Reporter: Robert Joseph Evans Recently many of the fsShell commands were updated to be more POSIX compliant. touchz appears to have been missed, or has regressed. If it has regressed then the target version should be 0.23.3. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8551) fs -mkdir creates parent directories without the -p option
Robert Joseph Evans created HADOOP-8551: --- Summary: fs -mkdir creates parent directories without the -p option Key: HADOOP-8551 URL: https://issues.apache.org/jira/browse/HADOOP-8551 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.3, 2.0.1-alpha, 3.0.0 Reporter: Robert Joseph Evans Assignee: Daryn Sharp hadoop fs -mkdir foo/bar will work even if bar is not present. It should only work if -p is given and foo is not present. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8542) TestViewFsTrash failed several times on Precommit test
[ https://issues.apache.org/jira/browse/HADOOP-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans resolved HADOOP-8542. - Resolution: Duplicate TestViewFsTrash failed several times on Precommit test -- Key: HADOOP-8542 URL: https://issues.apache.org/jira/browse/HADOOP-8542 Project: Hadoop Common Issue Type: Bug Components: fs, test Reporter: Junping Du I have hit this error several times before (with different patches); the latest is in HADOOP-8472, where it is unrelated to the patch. The error comes and goes, and I cannot reproduce it in my local dev environment. The error log from the precommit test is as below: junit.framework.AssertionFailedError: -expunge failed expected:0 but was:1 at junit.framework.Assert.fail(Assert.java:47) at junit.framework.Assert.failNotEquals(Assert.java:283) at junit.framework.Assert.assertEquals(Assert.java:64) at junit.framework.Assert.assertEquals(Assert.java:195) at org.apache.hadoop.fs.TestTrash.trashShell(TestTrash.java:322) at org.apache.hadoop.fs.viewfs.TestViewFsTrash.testTrash(TestViewFsTrash.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189) at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165) at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8525) Provide Improved Traceability for Configuration
Robert Joseph Evans created HADOOP-8525: --- Summary: Provide Improved Traceability for Configuration Key: HADOOP-8525 URL: https://issues.apache.org/jira/browse/HADOOP-8525 Project: Hadoop Common Issue Type: Improvement Reporter: Robert Joseph Evans Priority: Trivial Configuration provides basic traceability to see where a config setting came from, but once the configuration is written out, that information is reduced to a comment in the XML and then lost the next time the configuration is read back in. It would be great to store a complete history of where the config came from in the XML, so that it can be retrieved later for debugging. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8477) Pull in Yahoo! Hadoop Tutorial and update it accordingly.
Robert Joseph Evans created HADOOP-8477: --- Summary: Pull in Yahoo! Hadoop Tutorial and update it accordingly. Key: HADOOP-8477 URL: https://issues.apache.org/jira/browse/HADOOP-8477 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 1.1.0, 2.0.1-alpha Reporter: Robert Joseph Evans I was able to get the Yahoo! Hadoop tutorial released under an Apache 2.0 license. This allows us to make it an official part of the Hadoop project. This ticket is to pull in the tutorial and update it as needed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8460) Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR
Robert Joseph Evans created HADOOP-8460: --- Summary: Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR Key: HADOOP-8460 URL: https://issues.apache.org/jira/browse/HADOOP-8460 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.0.0-alpha, 1.0.3 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans We should document that in a properly set up cluster, HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but to a directory that normal users do not have access to. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8461) Programatically prevent symlink attacks on hadoop pid files
Robert Joseph Evans created HADOOP-8461: --- Summary: Programatically prevent symlink attacks on hadoop pid files Key: HADOOP-8461 URL: https://issues.apache.org/jira/browse/HADOOP-8461 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.0-alpha, 1.0.3 Reporter: Robert Joseph Evans pid files stored in HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR are vulnerable to symlink attacks when those directories are not properly set. We should programmatically prevent symlink attacks on these files even if the directories are set to something that others can write to. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-8351) Add exclude/include file , need restart NN or RM.
[ https://issues.apache.org/jira/browse/HADOOP-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans reopened HADOOP-8351: - Add exclude/include file , need restart NN or RM. - Key: HADOOP-8351 URL: https://issues.apache.org/jira/browse/HADOOP-8351 Project: Hadoop Common Issue Type: Bug Components: util Environment: suse Reporter: xieguiming The default value of yarn.resourcemanager.nodes.include-path is empty, so if we need to add an include file we must restart the RM. I suggest that adding an include or exclude file should not require restarting the RM, only executing the refresh command. The NN is the same. Modify the HostsFileReader class: public HostsFileReader(String inFile, String exFile) to public HostsFileReader(Configuration conf, String NODES_INCLUDE_FILE_PATH, String DEFAULT_NODES_INCLUDE_FILE_PATH, String NODES_EXCLUDE_FILE_PATH, String DEFAULT_NODES_EXCLUDE_FILE_PATH) so that we can read the config file dynamically, with no need to restart the RM/NN. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
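The requested behavior can be sketched in plain Java (a hypothetical class; the real HostsFileReader manages both include and exclude lists): keep the file path rather than only the parsed contents, and re-read the file on refresh() so no restart is needed to pick up edits.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: the reader remembers where the include file lives and
// reloads it on demand, so a refresh command can replace a daemon restart.
public class RefreshableHostsReader {
    private final Path includeFile;
    private final Set<String> includes = new HashSet<>();

    public RefreshableHostsReader(Path includeFile) throws IOException {
        this.includeFile = includeFile;
        refresh();
    }

    // Re-read the include file; called by the refresh command, not a restart.
    public synchronized void refresh() throws IOException {
        includes.clear();
        if (Files.exists(includeFile)) {
            for (String line : Files.readAllLines(includeFile)) {
                String host = line.trim();
                if (!host.isEmpty()) includes.add(host);
            }
        }
    }

    public synchronized Set<String> getIncludedHosts() {
        return new HashSet<>(includes);
    }
}
```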
[jira] [Created] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools
Robert Joseph Evans created HADOOP-8341: --- Summary: Fix or filter findbugs issues in hadoop-tools Key: HADOOP-8341 URL: https://issues.apache.org/jira/browse/HADOOP-8341 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans Now that the precommit build can test hadoop-tools we need to fix or filter the many findbugs warnings that are popping up in there. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.
[ https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans resolved HADOOP-8172. - Resolution: Fixed Fix Version/s: 3.0.0 2.0.0 Thanks Anupam, I put this into trunk and branch-2. +1 Configuration no longer sets all keys in a deprecated key list. --- Key: HADOOP-8172 URL: https://issues.apache.org/jira/browse/HADOOP-8172 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 0.23.3, 0.24.0 Reporter: Robert Joseph Evans Assignee: Anupam Seth Priority: Critical Fix For: 2.0.0, 3.0.0 Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch I did not look at the patch for HADOOP-8167 previously, but I did in response to a recent test failure. The patch appears to have changed the following code (I am just paraphrasing the code) {code} if(!deprecated(key)) { set(key, value); } else { for(String newKey: deprecatedKeyMap.get(key)) { set(newKey, value); } } {code} to be {code} set(key, value); if(deprecatedKeyMap.contains(key)) { set(deprecatedKeyMap.get(key)[0], value); } else if(reverseKeyMap.contains(key)) { set(reverseKeyMap.get(key), value); } {code} If a key is deprecated and is mapped to more than one new key, only the first one in the list will be set, whereas previously all of them would be set. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
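The fan-out behavior the report describes can be illustrated with a stdlib-only sketch (key names are made up; the real Configuration code differs): a deprecated key mapped to several new keys must set every one of them, not just the first entry in the list.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the pre-HADOOP-8167 semantics: a deprecated key
// fans out to ALL of its replacement keys.
public class DeprecatedKeySet {
    static final Map<String, List<String>> DEPRECATED =
        Map.of("old.key", List.of("new.key.a", "new.key.b"));

    static Map<String, String> set(String key, String value) {
        Map<String, String> props = new HashMap<>();
        List<String> newKeys = DEPRECATED.get(key);
        if (newKeys == null) {
            props.put(key, value);          // not deprecated: set directly
        } else {
            for (String newKey : newKeys) { // deprecated: set every mapping,
                props.put(newKey, value);   // not just newKeys.get(0)
            }
        }
        return props;
    }
}
```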
[jira] [Created] (HADOOP-8326) test-patch can leak processes in some cases
Robert Joseph Evans created HADOOP-8326: --- Summary: test-patch can leak processes in some cases Key: HADOOP-8326 URL: https://issues.apache.org/jira/browse/HADOOP-8326 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans test-patch.sh can leak processes in some cases. These leaked processes can cause subsequent tests to fail because they hold resources, such as ports, that the other tests may need to execute correctly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8312) testpatch.sh should provide a simpler way to see which warnings changed
Robert Joseph Evans created HADOOP-8312:
----------------------------------------
            Summary: testpatch.sh should provide a simpler way to see which warnings changed
                Key: HADOOP-8312
                URL: https://issues.apache.org/jira/browse/HADOOP-8312
            Project: Hadoop Common
         Issue Type: Improvement
           Reporter: Robert Joseph Evans
           Assignee: Robert Joseph Evans

test-patch.sh reports that a specific number of warnings has changed, but it does not provide an easy way to see which ones changed. For at least the javac warnings we should be able to provide a diff of the warnings in addition to the total count, because we capture the full compile log both before and after applying the patch. For javadoc warnings it would be nice to provide a filtered list of the warnings based on the files that were modified in the patch.
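One simple way to surface which warnings changed, assuming the before/after compile logs have been reduced to one warning per line, is a unified diff of the two lists. The file names and warning text below are made up for illustration:

```python
import difflib

# Hypothetical warning lists extracted from the compile logs captured
# before and after applying the patch (one warning per line).
before = [
    "Foo.java:10: warning: [deprecation] oldMethod() is deprecated",
    "Bar.java:22: warning: [rawtypes] found raw type: List",
]
after = [
    "Foo.java:10: warning: [deprecation] oldMethod() is deprecated",
    "Baz.java:7: warning: [unchecked] unchecked cast",
]

# unified_diff shows exactly which warnings were added or removed,
# instead of only reporting that the total count changed.
diff = list(difflib.unified_diff(before, after,
                                 fromfile="warnings-before",
                                 tofile="warnings-after",
                                 lineterm=""))
print("\n".join(diff))
```

Unchanged warnings appear as context lines, so a reviewer only has to scan the `+`/`-` lines to see what a patch introduced or removed.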
[jira] [Created] (HADOOP-7670) test-patch should not -1 for just build changes
test-patch should not -1 for just build changes
-----------------------------------------------
                Key: HADOOP-7670
                URL: https://issues.apache.org/jira/browse/HADOOP-7670
            Project: Hadoop Common
         Issue Type: Improvement
         Components: build
   Affects Versions: 0.24.0
           Reporter: Robert Joseph Evans
           Assignee: Robert Joseph Evans
           Priority: Minor

If all a patch changed was build scripts, then it should be exempt from the requirement to include test changes.
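The proposed exemption could be sketched as a simple check over the patch's list of changed files. The file-classification rules here are invented for illustration; test-patch.sh's real checks differ:

```python
# Hypothetical classification of changed paths; the real test-patch.sh
# logic is different, this only illustrates the proposed exemption.
BUILD_FILE_NAMES = {"pom.xml", "build.xml", "build.gradle"}
BUILD_SCRIPT_SUFFIXES = (".sh",)

def is_build_change(path):
    """Treat well-known build files and shell scripts as build-only changes."""
    name = path.rsplit("/", 1)[-1]
    return name in BUILD_FILE_NAMES or name.endswith(BUILD_SCRIPT_SUFFIXES)

def needs_test_changes(changed_paths):
    """Require new/updated tests only when something besides build files changed."""
    return not all(is_build_change(p) for p in changed_paths)

print(needs_test_changes(["hadoop-common-project/pom.xml",
                          "dev-support/test-patch.sh"]))  # False: build-only patch
print(needs_test_changes(["src/main/java/Foo.java"]))     # True: code changed
```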
[jira] [Created] (HADOOP-7598) smart-apply-patch.sh does not work on some older versions of BASH
smart-apply-patch.sh does not work on some older versions of BASH
-----------------------------------------------------------------
                Key: HADOOP-7598
                URL: https://issues.apache.org/jira/browse/HADOOP-7598
            Project: Hadoop Common
         Issue Type: Bug
         Components: build
   Affects Versions: 0.23.0, 0.24.0
           Reporter: Robert Joseph Evans
           Assignee: Robert Joseph Evans
            Fix For: 0.23.0, 0.24.0

I don't really know why, but on some versions of bash (including the one I use, which is on the build servers):

bash --version
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

the line

{code}
elif [[ $PREFIX_DIRS =~ ^(hadoop-common-project|hadoop-hdfs-project|hadoop-mapreduce-project)$ ]]; then
{code}

evaluates to false, but if the test is moved out of the elif statement it works correctly.
[jira] [Created] (HADOOP-7589) Prefer mvn test -DskipTests over mvn compile in test-patch.sh
Prefer mvn test -DskipTests over mvn compile in test-patch.sh
-------------------------------------------------------------
                Key: HADOOP-7589
                URL: https://issues.apache.org/jira/browse/HADOOP-7589
            Project: Hadoop Common
         Issue Type: Bug
         Components: build
   Affects Versions: 0.23.0, 0.24.0
           Reporter: Robert Joseph Evans
           Assignee: Robert Joseph Evans
            Fix For: 0.23.0, 0.24.0

I got a failure running test-patch with a clean .m2 directory. To quote Alejandro:

{quote}
The reason for this failure is how Maven reactor/dependency resolution works (IMO a bug). Maven reactor/dependency resolution is smart enough to create the classpath using the classes from all modules being built. However, this smartness falls short just a bit. The dependencies are resolved using the deepest Maven phase used by the current mvn invocation. If you are doing 'mvn compile' you don't get to the test-compile phase. This means that the TEST classes are not resolved from the build but from the cache/repo. The solution is to run 'mvn test -DskipTests' instead of 'mvn compile'. This will include the TEST classes from the build.
{quote}

So this is to replace mvn compile in test-patch.sh with mvn test -DskipTests.