[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9321:
    Attachment: HADOOP-9321-trunk-c.patch

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, HADOOP-9321-trunk-c.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23

--
This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13808064#comment-13808064 ]

Ivan A. Veselovsky commented on HADOOP-9321:

    Hi Mit, the patch is updated. Items 1) and 2) are fixed. Item 3) is fixed by adding a timeout and checking for the absence of errors during server startup. Besides that, the following is fixed:
    a) added a check for an exception in the server thread;
    b) the server start method is no longer @Before, since the server thread is needed by only 2 of the 4 test cases;
    c) the server runnable class is made static and renamed according to its type;
    d) the server thread is now joined in @After.

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, HADOOP-9321-trunk-d.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23
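The server-thread handling described in items a) through d) amounts to a reusable test pattern. A minimal plain-Java sketch of it follows; the class and method names are hypothetical, not taken from the actual patch, and the JUnit annotations are indicated only in comments:

```java
/**
 * A minimal sketch (hypothetical names, not the patch's actual test code) of
 * the test-server pattern described above: a static, descriptively named
 * Runnable, an exception captured inside the server thread, and the thread
 * joined with a timeout during teardown so failures surface in the test.
 */
public class ServerThreadPatternSketch {

    /** Item c): a static nested class, named after what it is -- a server runnable. */
    static class ServerRunnable implements Runnable {
        volatile Throwable error; // item a): failure captured for the test thread

        @Override
        public void run() {
            try {
                // a real test server would accept client connections here
            } catch (Throwable t) {
                error = t; // never let a server failure vanish silently
            }
        }
    }

    /**
     * Items b) and d): the server is started only by the tests that need it
     * (not in an @Before method), and joined with a timeout (as @After would).
     * Returns the error observed in the server thread, or null if it finished
     * cleanly.
     */
    static Throwable runServerOnce() {
        ServerRunnable server = new ServerRunnable();
        Thread t = new Thread(server, "test-server");
        t.start();
        try {
            t.join(10_000); // item 3): a timeout so a hung server cannot hang the build
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        return server.error;
    }

    public static void main(String[] args) {
        if (runServerOnce() != null) {
            throw new AssertionError("server thread failed");
        }
        System.out.println("server thread finished cleanly");
    }
}
```

The point of returning the captured Throwable (rather than asserting inside `run()`) is that JUnit only sees failures thrown on the test thread, so the server thread's error must be carried across and re-checked after the join.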
[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9321:
    Attachment: (was: HADOOP-9321-trunk-c.patch)

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, HADOOP-9321-trunk-d.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23
[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9321:
    Attachment: HADOOP-9321-trunk-d.patch

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, HADOOP-9321-trunk-d.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23
[jira] [Updated] (HADOOP-9166) Cover authentication with Kerberos ticket cache with unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9166:
    Affects Version/s: (was: 0.23.6)
                       (was: 2.0.3-alpha)
                       2.3.0

    Key: HADOOP-9166
    URL: https://issues.apache.org/jira/browse/HADOOP-9166
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9166-branch-0.23--b.patch, HADOOP-9166-branch-0.23--c.patch, HADOOP-9166-branch-0.23--N1.patch, HADOOP-9166-branch-2--b.patch, HADOOP-9166-branch-2--c.patch, HADOOP-9166--branch-2-N11.patch, HADOOP-9166-branch-2--N1.patch, HADOOP-9166-trunk--b.patch, HADOOP-9166-trunk--c.patch, HADOOP-9166--trunk-N11.patch, HADOOP-9166-trunk--N2.patch, HADOOP-9166-trunk--N4(1).patch, HADOOP-9166-trunk--N7.patch, HADOOP-9166-trunk--N8.patch
[jira] [Updated] (HADOOP-9166) Cover authentication with Kerberos ticket cache with unit tests
[ https://issues.apache.org/jira/browse/HADOOP-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9166:
    Attachment: HADOOP-9166--trunk-N11.patch
                HADOOP-9166--branch-2-N11.patch

    The patch is reworked to use MiniKdc:
    1) The apacheds version used in the MiniKdc module is advanced to 2.0.0-M16-SNAPSHOT, because only this version has a sufficient client part (Kinit). Dependencies are changed accordingly.
    2) The test checking the ticket-renewal thread in UserGroupInformation is dropped, because it is too difficult to make that functionality work with MiniKdc.

    Key: HADOOP-9166
    URL: https://issues.apache.org/jira/browse/HADOOP-9166
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9166-branch-0.23--b.patch, HADOOP-9166-branch-0.23--c.patch, HADOOP-9166-branch-0.23--N1.patch, HADOOP-9166-branch-2--b.patch, HADOOP-9166-branch-2--c.patch, HADOOP-9166--branch-2-N11.patch, HADOOP-9166-branch-2--N1.patch, HADOOP-9166-trunk--b.patch, HADOOP-9166-trunk--c.patch, HADOOP-9166--trunk-N11.patch, HADOOP-9166-trunk--N2.patch, HADOOP-9166-trunk--N4(1).patch, HADOOP-9166-trunk--N7.patch, HADOOP-9166-trunk--N8.patch
[jira] [Updated] (HADOOP-9016) org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long) must never return negative value.
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9016:
    Description: The patch fixes a bug in the method org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long):

        // the contract is described in java.io.InputStream.skip(long):
        // this method returns the number of bytes actually skipped, so,
        // the return value should never be negative.

    The patch adds tests for the fixed functionality, plus tests covering other previously uncovered methods of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.
        (was: unit-test coverage of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is zero. Suggested to provide unit tests covering these classes.)
    Summary: org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long) must never return negative value.
        (was: Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream)

    Key: HADOOP-9016
    URL: https://issues.apache.org/jira/browse/HADOOP-9016
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Priority: Minor
    Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, HADOOP-9016-branch-0.23--e.patch, HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016--e.patch, HADOOP-9016--f.patch, HADOOP-9016.patch
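The skip(long) contract quoted in the description can be illustrated with a small hypothetical wrapper method (not the actual HarFsInputStream fix): it clamps a delegate stream's result so the value returned to the caller is never negative.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

/**
 * Hypothetical illustration (not the actual HarFsInputStream code) of the
 * java.io.InputStream.skip(long) contract: skip() returns the number of bytes
 * actually skipped, so the result must never be negative.
 */
public class SkipContractDemo {

    /** Skip up to n bytes of the delegate, clamping the result to >= 0. */
    static long safeSkip(InputStream in, long n) {
        try {
            // a buggy implementation might propagate a negative value here
            // (e.g. when the caller passes a negative n); clamp it instead
            return Math.max(0L, in.skip(n));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[4]);
        System.out.println(safeSkip(in, 10)); // only 4 bytes available: prints 4
        System.out.println(safeSkip(in, -1)); // negative request: prints 0, never a negative value
    }
}
```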
[jira] [Updated] (HADOOP-9016) org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long) must never return negative value.
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9016:
    Issue Type: Bug (was: Test)

    Key: HADOOP-9016
    URL: https://issues.apache.org/jira/browse/HADOOP-9016
    Project: Hadoop Common
    Issue Type: Bug
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Priority: Minor
    Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, HADOOP-9016-branch-0.23--e.patch, HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016--e.patch, HADOOP-9016--f.patch, HADOOP-9016.patch

    The patch fixes a bug in the method org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long):

        // the contract is described in java.io.InputStream.skip(long):
        // this method returns the number of bytes actually skipped, so,
        // the return value should never be negative.

    The patch adds tests for the fixed functionality, plus tests covering other previously uncovered methods of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.
[jira] [Commented] (HADOOP-9016) org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long) must never return negative value.
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802824#comment-13802824 ]

Ivan A. Veselovsky commented on HADOOP-9016:

    Hi Jonathan, splitting the patch is difficult because the 2 patches would change the same files and would depend on each other. I converted this JIRA to a bug and changed the title and description; it is now positioned as a bugfix plus tests that check the fix.

    Key: HADOOP-9016
    URL: https://issues.apache.org/jira/browse/HADOOP-9016
    Project: Hadoop Common
    Issue Type: Bug
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Priority: Minor
    Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, HADOOP-9016-branch-0.23--e.patch, HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016--e.patch, HADOOP-9016--f.patch, HADOOP-9016.patch

    The patch fixes a bug in the method org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.skip(long):

        // the contract is described in java.io.InputStream.skip(long):
        // this method returns the number of bytes actually skipped, so,
        // the return value should never be negative.

    The patch adds tests for the fixed functionality, plus tests covering other previously uncovered methods of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream.
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9291:
    Attachment: HADOOP-9291--N8.patch

    Key: HADOOP-9291
    URL: https://issues.apache.org/jira/browse/HADOOP-9291
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291--N7.patch, HADOOP-9291--N8.patch, HADOOP-9291-trunk--N4.patch, HADOOP-9291-trunk--N5.patch, HADOOP-9291-trunk--N6.patch
[jira] [Commented] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797174#comment-13797174 ]

Ivan A. Veselovsky commented on HADOOP-9291:

    Nathan, thanks for the review. The patch is updated with the corrections: HADOOP-9291--N8.patch.

    Key: HADOOP-9291
    URL: https://issues.apache.org/jira/browse/HADOOP-9291
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291--N7.patch, HADOOP-9291--N8.patch, HADOOP-9291-trunk--N4.patch, HADOOP-9291-trunk--N5.patch, HADOOP-9291-trunk--N6.patch
[jira] [Commented] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789234#comment-13789234 ]

Ivan A. Veselovsky commented on HADOOP-9078:

    Hi Robert, please use the patch HADOOP-9078-trunk--N8.patch for both branch-2 and trunk. In the past there was some difference between them, but now this patch is equally applicable to both branches.

    Key: HADOOP-9078
    URL: https://issues.apache.org/jira/browse/HADOOP-9078
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2--N2.patch, HADOOP-9078-branch-2--N3.patch, HADOOP-9078-branch-2--N4.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, HADOOP-9078-trunk--N1.patch, HADOOP-9078-trunk--N2.patch, HADOOP-9078-trunk--N6.patch, HADOOP-9078-trunk--N8.patch
[jira] [Updated] (HADOOP-9281) Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API
[ https://issues.apache.org/jira/browse/HADOOP-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9281:
    Attachment: HADOOP-9281-trunk--N6--MAPREDUCE-part.patch
                HADOOP-9281-trunk--N6--HADOOP-COMMON-part.patch

    Hi Daryn, please see the xxx--N6--yyy patches (they apply to both trunk and branch-2).
    1) I removed unnecessary changes from the patch. Some unnecessary changes were initially made to compensate for pre-commit audit warnings (mostly deprecations, since the patch deprecates the old MetricsServlet), so the patch may now have problems with the number of warnings.
    2) I split the patch into 2 pieces (a hadoop-common part and an MR part). A YARN part is unnecessary.
    3) ServletSink.numInstances was indeed not thread-safe. That was because creating several servlet instances is quite a rare case, though it is possible in theory. I fixed the thread-safety issue.

    Key: HADOOP-9281
    URL: https://issues.apache.org/jira/browse/HADOOP-9281
    Project: Hadoop Common
    Issue Type: Improvement
    Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9281-branch-0.23--N5.patch, HADOOP-9281-trunk--N5.patch, HADOOP-9281-trunk--N6--HADOOP-COMMON-part.patch, HADOOP-9281-trunk--N6--MAPREDUCE-part.patch, HADOOP-9281-trunk--N6.patch

    The following is done:
    1) o.a.h.metrics.MetricsServlet is reworked into o.a.h.metrics2.lib.MetricsServlet2;
    2) the class org.apache.hadoop.mapreduce.task.reduce.ShuffleClientMetrics is rewritten to use the metrics2 API;
    3) the class org.apache.hadoop.mapred.LocalJobRunnerMetrics is rewritten to use the new metrics2 API.
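The thread-safety problem described in item 3) is the classic lost-update race on a plain static counter. A sketch of the usual fix, replacing the counter with an AtomicInteger, follows; the class and method names here are illustrative, not the patch's actual code:

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * A sketch (illustrative names, not the patch's code) of making an instance
 * counter like ServletSink.numInstances thread-safe: a plain `static int` with
 * `numInstances++` is a non-atomic read-modify-write that can lose updates
 * when several instances are constructed concurrently.
 */
public class InstanceCounterDemo {
    private static final AtomicInteger numInstances = new AtomicInteger();

    public InstanceCounterDemo() {
        numInstances.incrementAndGet(); // atomic, unlike `numInstances++`
    }

    /** Construct n instances from n threads and return the observed count delta. */
    static int createConcurrently(int n) {
        int before = numInstances.get();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(InstanceCounterDemo::new);
            threads[i].start();
        }
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        return numInstances.get() - before;
    }

    public static void main(String[] args) {
        System.out.println(createConcurrently(8)); // prints 8: no lost updates
    }
}
```

Even when concurrent construction is "quite a rare case", as the comment notes, the atomic counter costs nothing in the common path, which is why it is the standard choice here.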
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9470:
    Affects Version/s: (was: 2.0.4-alpha)
                       (was: 0.23.7)
                       2.3.0

    Key: HADOOP-9470
    URL: https://issues.apache.org/jira/browse/HADOOP-9470
    Project: Hadoop Common
    Issue Type: Improvement
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: find-duplicate-fqns.sh, HADOOP-9470-branch-0.23.patch, HADOOP-9470-branch-2--N1.patch, HADOOP-9470-trunk--N1.patch, HADOOP-9470-trunk.patch

    Different modules of the Hadoop project contain tests with identical FQNs (fully qualified names). For example, a test with the FQN org.apache.hadoop.util.TestRunJar is contained in 2 modules:
    ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
    ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/util/TestRunJar.java
    This situation causes problems with test-result reporting and other code-analysis tools (such as Clover), because almost all such tools identify tests by their Java FQN. So I suggest renaming all such test classes to avoid duplicate FQNs in different modules. I'm attaching a simple shell script that finds all such problematic test classes.

    Currently Hadoop trunk has 9 such test classes:

    $ ~/bin/find-duplicate-fqns.sh
    # Module [./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-classes] has 7 duplicate FQN tests:
    org.apache.hadoop.ipc.TestSocketFactory
    org.apache.hadoop.mapred.TestFileOutputCommitter
    org.apache.hadoop.mapred.TestJobClient
    org.apache.hadoop.mapred.TestJobConf
    org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter
    org.apache.hadoop.util.TestReflectionUtils
    org.apache.hadoop.util.TestRunJar
    # Module [./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/target/test-classes] has 2 duplicate FQN tests:
    org.apache.hadoop.yarn.TestRecordFactory
    org.apache.hadoop.yarn.TestRPCFactories
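The check performed by the attached find-duplicate-fqns.sh (group test FQNs by module, report any FQN seen in more than one module) can be sketched in plain Java over an in-memory module-to-FQNs map. The script itself is shell and is not reproduced here; this class and its sample data are purely illustrative:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

/**
 * An illustrative sketch (not part of any patch) of the duplicate-FQN check:
 * count, for each test FQN, how many modules contain it, and report every FQN
 * present in more than one module.
 */
public class DuplicateFqnFinder {

    /** Return, sorted, every FQN that occurs in more than one module. */
    static Set<String> duplicateFqns(Map<String, List<String>> testsByModule) {
        Map<String, Integer> moduleCount = new HashMap<>();
        for (List<String> fqns : testsByModule.values()) {
            for (String fqn : new HashSet<>(fqns)) { // count each module at most once
                moduleCount.merge(fqn, 1, Integer::sum);
            }
        }
        Set<String> duplicates = new TreeSet<>();
        for (Map.Entry<String, Integer> e : moduleCount.entrySet()) {
            if (e.getValue() > 1) {
                duplicates.add(e.getKey());
            }
        }
        return duplicates;
    }

    /** Sample run mirroring the TestRunJar example from the issue description. */
    static Set<String> demo() {
        Map<String, List<String>> modules = new HashMap<>();
        modules.put("hadoop-common", Arrays.asList(
                "org.apache.hadoop.util.TestRunJar",
                "org.apache.hadoop.util.TestReflectionUtils"));
        modules.put("hadoop-mapreduce-client-jobclient", Arrays.asList(
                "org.apache.hadoop.util.TestRunJar"));
        return duplicateFqns(modules);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [org.apache.hadoop.util.TestRunJar]
    }
}
```

A real audit tool would populate the map by scanning each module's target/test-classes directory for .class files, as the shell script does.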
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9470:
    Attachment: HADOOP-9470-trunk--N1.patch
                HADOOP-9470-branch-2--N1.patch

    Patches updated. HADOOP-9470-branch-2--N1.patch and HADOOP-9470-trunk--N1.patch are the versions for branch-2 and trunk, respectively. The attached script can be used as an audit tool to avoid such problems in the future.

    Key: HADOOP-9470
    URL: https://issues.apache.org/jira/browse/HADOOP-9470
    Project: Hadoop Common
    Issue Type: Improvement
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: find-duplicate-fqns.sh, HADOOP-9470-branch-0.23.patch, HADOOP-9470-branch-2--N1.patch, HADOOP-9470-trunk--N1.patch, HADOOP-9470-trunk.patch

    Different modules of the Hadoop project contain tests with identical FQNs (fully qualified names). For example, a test with the FQN org.apache.hadoop.util.TestRunJar is contained in 2 modules:
    ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
    ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/util/TestRunJar.java
    This situation causes problems with test-result reporting and other code-analysis tools (such as Clover), because almost all such tools identify tests by their Java FQN. So I suggest renaming all such test classes to avoid duplicate FQNs in different modules. I'm attaching a simple shell script that finds all such problematic test classes.

    Currently Hadoop trunk has 9 such test classes:

    $ ~/bin/find-duplicate-fqns.sh
    # Module [./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-classes] has 7 duplicate FQN tests:
    org.apache.hadoop.ipc.TestSocketFactory
    org.apache.hadoop.mapred.TestFileOutputCommitter
    org.apache.hadoop.mapred.TestJobClient
    org.apache.hadoop.mapred.TestJobConf
    org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter
    org.apache.hadoop.util.TestReflectionUtils
    org.apache.hadoop.util.TestRunJar
    # Module [./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/target/test-classes] has 2 duplicate FQN tests:
    org.apache.hadoop.yarn.TestRecordFactory
    org.apache.hadoop.yarn.TestRPCFactories
[jira] [Assigned] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky reassigned HADOOP-9321:
    Assignee: Ivan A. Veselovsky (was: Aleksey Gorshkov)

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23
[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9321:
    Attachment: HADOOP-9321-trunk-b.patch

    Patch updated. It applies to both branch-2 and trunk.

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23
[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net
[ https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9321:
    Affects Version/s: (was: 0.23.5)
                       (was: 2.0.3-alpha)
                       2.3.0

    Key: HADOOP-9321
    URL: https://issues.apache.org/jira/browse/HADOOP-9321
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Aleksey Gorshkov
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9321-trunk-a.patch, HADOOP-9321-trunk-b.patch, HADOOP-9321-trunk.patch

    fix coverage org.apache.hadoop.net
    HADOOP-9321-trunk.patch: patch for trunk, branch-2, branch-0.23
[jira] [Commented] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788498#comment-13788498 ]

Ivan A. Veselovsky commented on HADOOP-9470:

    Hi Daryn, no, I don't have examples of problems caused by tests in different packages with the same basename. But we do have examples of problems caused by tests with the same *fully qualified Java names* in different Hadoop modules (test-result reporting, Clover coverage calculation).

    Key: HADOOP-9470
    URL: https://issues.apache.org/jira/browse/HADOOP-9470
    Project: Hadoop Common
    Issue Type: Improvement
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: find-duplicate-fqns.sh, HADOOP-9470-branch-0.23.patch, HADOOP-9470-branch-2--N1.patch, HADOOP-9470-trunk--N1.patch, HADOOP-9470-trunk.patch

    Different modules of the Hadoop project contain tests with identical FQNs (fully qualified names). For example, a test with the FQN org.apache.hadoop.util.TestRunJar is contained in 2 modules:
    ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
    ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/util/TestRunJar.java
    This situation causes problems with test-result reporting and other code-analysis tools (such as Clover), because almost all such tools identify tests by their Java FQN. So I suggest renaming all such test classes to avoid duplicate FQNs in different modules. I'm attaching a simple shell script that finds all such problematic test classes.

    Currently Hadoop trunk has 9 such test classes:

    $ ~/bin/find-duplicate-fqns.sh
    # Module [./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-classes] has 7 duplicate FQN tests:
    org.apache.hadoop.ipc.TestSocketFactory
    org.apache.hadoop.mapred.TestFileOutputCommitter
    org.apache.hadoop.mapred.TestJobClient
    org.apache.hadoop.mapred.TestJobConf
    org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter
    org.apache.hadoop.util.TestReflectionUtils
    org.apache.hadoop.util.TestRunJar
    # Module [./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/target/test-classes] has 2 duplicate FQN tests:
    org.apache.hadoop.yarn.TestRecordFactory
    org.apache.hadoop.yarn.TestRPCFactories
[jira] [Updated] (HADOOP-9598) test coverage for org.apache.hadoop.yarn.server.resourcemanager.tools.TestRMAdmin
[ https://issues.apache.org/jira/browse/HADOOP-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9598:
    Affects Version/s: (was: 0.23.8)

    Key: HADOOP-9598
    URL: https://issues.apache.org/jira/browse/HADOOP-9598
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.0.5-alpha
    Reporter: Aleksey Gorshkov
    Assignee: Aleksey Gorshkov
    Attachments: HADOOP-9598-branch-0.23.patch, HADOOP-9598-branch-0.23-v1.patch, HADOOP-9598-trunk.patch, HADOOP-9598-trunk-v1.patch, HADOOP-9598-trunk-v2.patch
[jira] [Updated] (HADOOP-9016) Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9016:
    Attachment: HADOOP-9016--f.patch

    Hi Robert, sure, the patch is updated (HADOOP-9016--f.patch). It is equally applicable to both trunk and branch-2.

    Key: HADOOP-9016
    URL: https://issues.apache.org/jira/browse/HADOOP-9016
    Project: Hadoop Common
    Issue Type: Improvement
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Priority: Minor
    Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, HADOOP-9016-branch-0.23--e.patch, HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016--e.patch, HADOOP-9016--f.patch, HADOOP-9016.patch

    unit-test coverage of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is zero. It is suggested to provide unit tests covering these classes.
[jira] [Updated] (HADOOP-9016) Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream
[ https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9016:
    Affects Version/s: 2.3.0
                       3.0.0
    Issue Type: Test (was: Improvement)

    Key: HADOOP-9016
    URL: https://issues.apache.org/jira/browse/HADOOP-9016
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Priority: Minor
    Attachments: HADOOP-9016--b.patch, HADOOP-9016-branch-0.23--d.patch, HADOOP-9016-branch-0.23--e.patch, HADOOP-9016--c.patch, HADOOP-9016--d.patch, HADOOP-9016--e.patch, HADOOP-9016--f.patch, HADOOP-9016.patch

    unit-test coverage of the classes org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is zero. It is suggested to provide unit tests covering these classes.
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9291:
    Affects Version/s: (was: 0.23.7)
                       (was: 2.0.3-alpha)
                       2.3.0

    Key: HADOOP-9291
    URL: https://issues.apache.org/jira/browse/HADOOP-9291
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291--N7.patch, HADOOP-9291-trunk--N4.patch, HADOOP-9291-trunk--N5.patch, HADOOP-9291-trunk--N6.patch
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan A. Veselovsky updated HADOOP-9291:
    Attachment: HADOOP-9291--N7.patch

    Robert, sure, the patch is updated (HADOOP-9291--N7.patch). It is equally applicable to trunk and branch-2.

    Key: HADOOP-9291
    URL: https://issues.apache.org/jira/browse/HADOOP-9291
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.3.0
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291--N7.patch, HADOOP-9291-trunk--N4.patch, HADOOP-9291-trunk--N5.patch, HADOOP-9291-trunk--N6.patch
[jira] [Commented] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
[ https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784968#comment-13784968 ]

Ivan A. Veselovsky commented on HADOOP-9063:

    Hi Jason, is it possible to commit it to branch-0.23 also?

    Key: HADOOP-9063
    URL: https://issues.apache.org/jira/browse/HADOOP-9063
    Project: Hadoop Common
    Issue Type: Test
    Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
    Reporter: Ivan A. Veselovsky
    Assignee: Ivan A. Veselovsky
    Priority: Minor
    Fix For: 3.0.0, 2.3.0
    Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, HADOOP-9063-branch-0.23--c.patch, HADOOP-9063-branch-2--N1.patch, HADOOP-9063-branch-2--N2.patch, HADOOP-9063.patch, HADOOP-9063-trunk--c.patch, HADOOP-9063-trunk--N2.patch, HADOOP-9063-trunk--N3.patch, HADOOP-9063-trunk--N6.patch

    Some methods of the class org.apache.hadoop.fs.FileUtil are covered by unit tests poorly or not covered at all. Enhance the coverage.
[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9078: --- Attachment: HADOOP-9078-trunk--N8.patch HADOOP-9078-branch-2--N3.patch Patches for branch-2 and trunk have been updated to merge over incoming changes. enhance unit-test coverage of class org.apache.hadoop.fs.FileContext Key: HADOOP-9078 URL: https://issues.apache.org/jira/browse/HADOOP-9078 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2--N2.patch, HADOOP-9078-branch-2--N3.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, HADOOP-9078-trunk--N1.patch, HADOOP-9078-trunk--N2.patch, HADOOP-9078-trunk--N6.patch, HADOOP-9078-trunk--N8.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9046: --- Resolution: Cannot Reproduce Status: Resolved (was: Patch Available) Closing this request because: 1) After https://issues.apache.org/jira/browse/HADOOP-9549 the coverage of DelegationTokenRenewer is high enough in branch-2 and trunk. 2) In branch-0.23 the class DelegationTokenRenewer resides in another package, so its coverage does not directly affect the coverage of package o.a.h.fs. (The fix was initially done in order to raise the coverage of the o.a.h.fs package.) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT -- Key: HADOOP-9046 URL: https://issues.apache.org/jira/browse/HADOOP-9046 Project: Hadoop Common Issue Type: Test Affects Versions: 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9046-branch-0.23--c.patch, HADOOP-9046-branch-0.23--d.patch, HADOOP-9046-branch-0.23-over-9049.patch, HADOOP-9046-branch-0.23--over-HDFS-4567.patch, HADOOP-9046-branch-0.23.patch, HADOOP-9046-branch-2--over-HDFS-4567.patch, HADOOP-9046--c.patch, HADOOP-9046--d.patch, HADOOP-9046--e.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch, HADOOP-9046-trunk--over-HDFS-4567.patch The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT has zero coverage in the entire cumulative test run. Provide test(s) to cover this class. Note: the request was submitted to the HDFS project because the class is likely to be tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9046: --- Affects Version/s: (was: 2.0.3-alpha) (was: 3.0.0) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT -- Key: HADOOP-9046 URL: https://issues.apache.org/jira/browse/HADOOP-9046 Project: Hadoop Common Issue Type: Test Affects Versions: 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9046-branch-0.23--c.patch, HADOOP-9046-branch-0.23--d.patch, HADOOP-9046-branch-0.23-over-9049.patch, HADOOP-9046-branch-0.23--over-HDFS-4567.patch, HADOOP-9046-branch-0.23.patch, HADOOP-9046-branch-2--over-HDFS-4567.patch, HADOOP-9046--c.patch, HADOOP-9046--d.patch, HADOOP-9046--e.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch, HADOOP-9046-trunk--over-HDFS-4567.patch The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT has zero coverage in the entire cumulative test run. Provide test(s) to cover this class. Note: the request was submitted to the HDFS project because the class is likely to be tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13656918#comment-13656918 ] Ivan A. Veselovsky commented on HADOOP-9046: Since the fix https://issues.apache.org/jira/browse/HADOOP-9549, the patch is no longer applicable to trunk and branch-2 (it needs a merge). There seems to be no strong reason to merge it over 9549, because after that fix the coverage of the target classes is sufficient. So, the suggested patch is now applicable to branch-0.23 only. Modifying the Affects Versions field accordingly. provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT -- Key: HADOOP-9046 URL: https://issues.apache.org/jira/browse/HADOOP-9046 Project: Hadoop Common Issue Type: Test Affects Versions: 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9046-branch-0.23--c.patch, HADOOP-9046-branch-0.23--d.patch, HADOOP-9046-branch-0.23-over-9049.patch, HADOOP-9046-branch-0.23--over-HDFS-4567.patch, HADOOP-9046-branch-0.23.patch, HADOOP-9046-branch-2--over-HDFS-4567.patch, HADOOP-9046--c.patch, HADOOP-9046--d.patch, HADOOP-9046--e.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch, HADOOP-9046-trunk--over-HDFS-4567.patch The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT has zero coverage in the entire cumulative test run. Provide test(s) to cover this class. Note: the request was submitted to the HDFS project because the class is likely to be tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY
[ https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9481: --- Description: The problem is a regression introduced by the recent fix https://issues.apache.org/jira/browse/HADOOP-8562 . That fix makes some improvements for the Windows platform, but breaks native code compilation on Unix. Namely, let's see the HADOOP-8562 diff of the file (Note: there was a typo in the commit message: 8952 was written in place of 8562: HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, Ramya Bharathi Nimmagadda. git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 13f79535-47bb-0310-9956-ffa450edef68 ) Broken conditional logic with HADOOP_SNAPPY_LIBRARY --- Key: HADOOP-9481 URL: https://issues.apache.org/jira/browse/HADOOP-9481 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Vadim Bondarev Priority: Minor Attachments: HADOOP-9481-trunk--N1.patch The problem is a regression introduced by the recent fix https://issues.apache.org/jira/browse/HADOOP-8562 . That fix makes some improvements for the Windows platform, but breaks native code compilation on Unix. Namely, let's see the HADOOP-8562 diff of the file (Note: there was a typo in the commit message: 8952 was written in place of 8562: HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows Azure environments.
Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, Ramya Bharathi Nimmagadda. git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 13f79535-47bb-0310-9956-ffa450edef68 ) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY
[ https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9481: --- Description: The problem is a regression introduced by the recent fix https://issues.apache.org/jira/browse/HADOOP-8562 . That fix makes some improvements for the Windows platform, but breaks native code compilation on Unix. Namely, let's see the HADOOP-8562 diff of the file hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c :
{noformat}
--- hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
+++ hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
@@ -16,12 +16,18 @@
  * limitations under the License.
  */
-#include <dlfcn.h>
+
+#if defined HADOOP_SNAPPY_LIBRARY
+
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
+#ifdef UNIX
+#include <dlfcn.h>
 #include "config.h"
+#endif // UNIX
+
 #include "org_apache_hadoop_io_compress_snappy.h"
 #include "org_apache_hadoop_io_compress_snappy_SnappyCompressor.h"
@@ -81,7 +87,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
 UNLOCK_CLASS(env, clazz, "SnappyCompressor");
 if (uncompressed_bytes == 0) {
-return 0;
+return (jint)0;
 }
 // Get the output direct buffer
@@ -90,7 +96,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
 UNLOCK_CLASS(env, clazz, "SnappyCompressor");
 if (compressed_bytes == 0) {
-return 0;
+return (jint)0;
 }
 /* size_t should always be 4 bytes or larger. */
@@ -109,3 +115,5 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
 (*env)->SetIntField(env, thisj, SnappyCompressor_uncompressedDirectBufLen, 0);
 return (jint)buf_len;
 }
+
+#endif //define HADOOP_SNAPPY_LIBRARY
{noformat}
Here we see that the entire class implementation got enclosed in an #if defined HADOOP_SNAPPY_LIBRARY directive, and the point is that HADOOP_SNAPPY_LIBRARY is *not* defined. This makes the class implementation effectively empty, which, in turn, causes an UnsatisfiedLinkError to be thrown upon any attempt to invoke the native methods implemented there. The actual intention of the authors of HADOOP-8562 was (we suppose) to include config.h, where HADOOP_SNAPPY_LIBRARY is defined. But currently it is *not* included, because that #include resides *inside* the #if defined HADOOP_SNAPPY_LIBRARY block. The situation with #ifdef UNIX is similar: UNIX or WINDOWS is defined in org_apache_hadoop.h, which is included indirectly through #include "org_apache_hadoop_io_compress_snappy.h", and in the current code this happens *after* the #ifdef UNIX block. The suggested patch fixes the described problems by reordering the #include and #if preprocessor directives, this way making the methods of class org.apache.hadoop.io.compress.snappy.SnappyCompressor work again. Of course, the Snappy native libraries must be installed to build and invoke the snappy native methods. (Note: there was a typo in the commit message: 8952 was written in place of 8562: HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, Ramya Bharathi Nimmagadda.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 13f79535-47bb-0310-9956-ffa450edef68 ) was: The problem is a regression introduced by recent fix https://issues.apache.org/jira/browse/HADOOP-8562 . That fix makes some improvements for Windows platform, but breaks native code compilation on Unix. Namely, let's see the diff HADOOP-8562 of the file (Note: there was a mistype in commit message: 8952 written in place of 8562: HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, Ramya Bharathi Nimmagadda. git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 13f79535-47bb-0310-9956-ffa450edef68 ) Broken conditional logic with HADOOP_SNAPPY_LIBRARY --- Key: HADOOP-9481 URL: https://issues.apache.org/jira/browse/HADOOP-9481 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Vadim Bondarev
[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY
[ https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9481: --- Description: The problem is a regression introduced by the recent fix https://issues.apache.org/jira/browse/HADOOP-8562 . That fix makes some improvements for the Windows platform, but breaks the operation of the native code on Unix. Namely, let's see the HADOOP-8562 diff of the file hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c :
{noformat}
--- hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
+++ hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
@@ -16,12 +16,18 @@
  * limitations under the License.
  */
-#include <dlfcn.h>
+
+#if defined HADOOP_SNAPPY_LIBRARY
+
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
+#ifdef UNIX
+#include <dlfcn.h>
 #include "config.h"
+#endif // UNIX
+
 #include "org_apache_hadoop_io_compress_snappy.h"
 #include "org_apache_hadoop_io_compress_snappy_SnappyCompressor.h"
@@ -81,7 +87,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
 UNLOCK_CLASS(env, clazz, "SnappyCompressor");
 if (uncompressed_bytes == 0) {
-return 0;
+return (jint)0;
 }
 // Get the output direct buffer
@@ -90,7 +96,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
 UNLOCK_CLASS(env, clazz, "SnappyCompressor");
 if (compressed_bytes == 0) {
-return 0;
+return (jint)0;
 }
 /* size_t should always be 4 bytes or larger. */
@@ -109,3 +115,5 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
 (*env)->SetIntField(env, thisj, SnappyCompressor_uncompressedDirectBufLen, 0);
 return (jint)buf_len;
 }
+
+#endif //define HADOOP_SNAPPY_LIBRARY
{noformat}
Here we see that the entire class implementation got enclosed in an #if defined HADOOP_SNAPPY_LIBRARY directive, and the point is that HADOOP_SNAPPY_LIBRARY is *not* defined. This makes the class implementation effectively empty, which, in turn, causes an UnsatisfiedLinkError to be thrown at runtime upon any attempt to invoke the native methods implemented there. The actual intention of the authors of HADOOP-8562 was (we suppose) to include config.h, where HADOOP_SNAPPY_LIBRARY is defined. But currently it is *not* included, because that #include resides *inside* the #if defined HADOOP_SNAPPY_LIBRARY block. The situation with #ifdef UNIX is similar: the UNIX and WINDOWS macros are defined in org_apache_hadoop.h, which is included indirectly through #include "org_apache_hadoop_io_compress_snappy.h", and in the current code this happens *after* the #ifdef UNIX block, so in the current code the #ifdef UNIX block is *not* compiled on UNIX. The suggested patch fixes the described problems by reordering the #include and #if preprocessor directives accordingly, bringing the methods of class org.apache.hadoop.io.compress.snappy.SnappyCompressor back to work. Of course, the Snappy native libraries must be installed to build and invoke the snappy native methods. (Note: there was a typo in the commit message: 8952 was written in place of 8562: HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, Ramya Bharathi Nimmagadda.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 13f79535-47bb-0310-9956-ffa450edef68 ) was: The problem is a regression introduced by recent fix https://issues.apache.org/jira/browse/HADOOP-8562 . That fix makes some improvements for Windows platform, but breaks native code compilation on Unix. Namely, let's see the diff HADOOP-8562 of the file hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c : {noformat} --- hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c +++ hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c @@ -16,12 +16,18 @@ * limitations under the License. */ -#include dlfcn.h + +#if defined HADOOP_SNAPPY_LIBRARY + #include stdio.h #include stdlib.h #include string.h +#ifdef UNIX +#include dlfcn.h #include config.h +#endif // UNIX + #include org_apache_hadoop_io_compress_snappy.h #include org_apache_hadoop_io_compress_snappy_SnappyCompressor.h @@ -81,7 +87,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso UNLOCK_CLASS(env, clazz,
[jira] [Commented] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY
[ https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13636310#comment-13636310 ] Ivan A. Veselovsky commented on HADOOP-9481: The patch verification complains that no tests were added or changed. We do not add tests in this patch because comprehensive tests for Snappy are suggested in another patch: https://issues.apache.org/jira/browse/HADOOP-9225 . So, these 2 fixes can be reviewed and applied together. Broken conditional logic with HADOOP_SNAPPY_LIBRARY --- Key: HADOOP-9481 URL: https://issues.apache.org/jira/browse/HADOOP-9481 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Vadim Bondarev Priority: Minor Attachments: HADOOP-9481-trunk--N1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY
[ https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13636596#comment-13636596 ] Ivan A. Veselovsky commented on HADOOP-9481: Hi, Chris, what version of cmake do you use? We use cmake 2.8.8. Broken conditional logic with HADOOP_SNAPPY_LIBRARY --- Key: HADOOP-9481 URL: https://issues.apache.org/jira/browse/HADOOP-9481 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Vadim Bondarev Priority: Minor Attachments: HADOOP-9481-trunk--N1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY
[ https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13636607#comment-13636607 ] Ivan A. Veselovsky commented on HADOOP-9481: In our experiments config.h is generated by cmake and its content is okay. The problem is that config.h is not included into hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c because the #if defined HADOOP_SNAPPY_LIBRARY condition is false (see the description). Broken conditional logic with HADOOP_SNAPPY_LIBRARY --- Key: HADOOP-9481 URL: https://issues.apache.org/jira/browse/HADOOP-9481 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Vadim Bondarev Priority: Minor Attachments: HADOOP-9481-trunk--N1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9470: --- Attachment: (was: HADOOP-9470-trunk.patch) eliminate duplicate FQN tests in different Hadoop modules - Key: HADOOP-9470 URL: https://issues.apache.org/jira/browse/HADOOP-9470 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: find-duplicate-fqns.sh Different modules of the Hadoop project contain tests with identical FQNs (fully qualified names). For example, a test with the FQN org.apache.hadoop.util.TestRunJar is contained in 2 modules: ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/util/TestRunJar.java This situation causes problems for test-result reporting and other code-analysis tools (such as Clover), because almost all such tools identify tests by their Java FQN. So, I suggest renaming all such test classes to avoid duplicate FQNs in different modules. I'm attaching a simple shell script that can find all such problematic test classes.
Currently Hadoop trunk has 9 such test classes, they are: $ ~/bin/find-duplicate-fqns.sh # Module [./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-classes] has 7 duplicate FQN tests: org.apache.hadoop.ipc.TestSocketFactory org.apache.hadoop.mapred.TestFileOutputCommitter org.apache.hadoop.mapred.TestJobClient org.apache.hadoop.mapred.TestJobConf org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter org.apache.hadoop.util.TestReflectionUtils org.apache.hadoop.util.TestRunJar # Module [./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/target/test-classes] has 2 duplicate FQN tests: org.apache.hadoop.yarn.TestRecordFactory org.apache.hadoop.yarn.TestRPCFactories -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9470: --- Attachment: HADOOP-9470-trunk.patch HADOOP-9470-branch-0.23.patch Trunk patch is also applicable to branch-2. Separate version for branch-0.23.
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9470: --- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
[ https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9063: --- Attachment: HADOOP-9063-trunk--N6.patch HADOOP-9063-branch-2--N1.patch re-submitting the patches since branch-2 now needs a separate version. enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil - Key: HADOOP-9063 URL: https://issues.apache.org/jira/browse/HADOOP-9063 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, HADOOP-9063-branch-0.23--c.patch, HADOOP-9063-branch-2--N1.patch, HADOOP-9063.patch, HADOOP-9063-trunk--c.patch, HADOOP-9063-trunk--c.patch, HADOOP-9063-trunk--N2.patch, HADOOP-9063-trunk--N3.patch, HADOOP-9063-trunk--N6.patch Some methods of class org.apache.hadoop.fs.FileUtil are covered by unit-tests poorly or not covered at all. Enhance the coverage. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9291: --- Attachment: HADOOP-9291-trunk--N6.patch N6: trunk version of the patch updated because of merge over HADOOP-9467. enhance unit-test coverage of package o.a.h.metrics2 Key: HADOOP-9291 URL: https://issues.apache.org/jira/browse/HADOOP-9291 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291-trunk--N4.patch, HADOOP-9291-trunk--N5.patch, HADOOP-9291-trunk--N6.patch, HADOOP-9291-trunk--N6.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
Ivan A. Veselovsky created HADOOP-9470: -- Summary: eliminate duplicate FQN tests in different Hadoop modules Key: HADOOP-9470 URL: https://issues.apache.org/jira/browse/HADOOP-9470 Project: Hadoop Common Issue Type: Improvement Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9470: --- Attachment: find-duplicate-fqns.sh
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9470: --- Attachment: HADOOP-9470-trunk.patch Attaching patch for trunk. Only test renames are there, no other changes.
[jira] [Updated] (HADOOP-9470) eliminate duplicate FQN tests in different Hadoop modules
[ https://issues.apache.org/jira/browse/HADOOP-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9470: --- Affects Version/s: 2.0.4-alpha 0.23.7 3.0.0
[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9078: --- Attachment: HADOOP-9078-trunk--N6.patch new patch for trunk: 1) fixed merge conflict; 2) reverted changes complementary to fix H-9357 since that fix was rolled back from trunk; enhance unit-test coverage of class org.apache.hadoop.fs.FileContext Key: HADOOP-9078 URL: https://issues.apache.org/jira/browse/HADOOP-9078 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2--N2.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, HADOOP-9078-trunk--N1.patch, HADOOP-9078-trunk--N2.patch, HADOOP-9078-trunk--N6.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9078: --- Attachment: HADOOP-9078-trunk--N2.patch HADOOP-9078-branch-2--N2.patch Added new versions of the patches that fix problems in tests caused by fix HADOOP-9357.
[jira] [Updated] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9200: --- Attachment: HADOOP-9200-trunk--N2.patch Hi, Kihwal, I'm attaching a new version of the patch in which I have rewritten the implementation of the NetgroupCache class. Please review it. I believe the data-access problems are now solved without any blocking. enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache Key: HADOOP-9200 URL: https://issues.apache.org/jira/browse/HADOOP-9200 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9200-trunk--N2.patch, HADOOP-9200-trunk.patch The class org.apache.hadoop.security.NetgroupCache has poor unit-test coverage. Enhance it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616171#comment-13616171 ] Ivan A. Veselovsky commented on HADOOP-9200: I see that you already fixed H-9436, so now we probably need to choose which fix is better, or merge them somehow.
[jira] [Updated] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9046: --- Attachment: HADOOP-9046-branch-2--over-HDFS-4567.patch Attaching also separate patch version for branch-2 since trunk version does not apply cleanly. provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT -- Key: HADOOP-9046 URL: https://issues.apache.org/jira/browse/HADOOP-9046 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9046-branch-0.23--c.patch, HADOOP-9046-branch-0.23--d.patch, HADOOP-9046-branch-0.23-over-9049.patch, HADOOP-9046-branch-0.23--over-HDFS-4567.patch, HADOOP-9046-branch-0.23.patch, HADOOP-9046-branch-2--over-HDFS-4567.patch, HADOOP-9046--c.patch, HADOOP-9046--d.patch, HADOOP-9046--e.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch, HADOOP-9046-trunk--over-HDFS-4567.patch The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT has zero coverage in entire cumulative test run. Provide test(s) to cover this class. Note: the request submitted to HDFS project because the class likely to be tested by tests in that project. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
[ https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9046: --- Attachment: HADOOP-9046-trunk--over-HDFS-4567.patch HADOOP-9046-branch-0.23--over-HDFS-4567.patch Attaching new versions of the patches merged over fix HDFS-4567.
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Affects Version/s: 0.23.7 org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch, HADOOP-9337-branch-0.23--a.patch The test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) fails on Mac OS because the method org.apache.hadoop.fs.DF.getMount() does not work correctly there. The problem is that on Mac OS the command df -k path produces output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, the Mac output has 3 additional tokens. I can suggest 2 ways to fix the problem: (a) use the -P (POSIX) option when invoking the df command, which will probably ensure uniform output on all Unix systems; (b) move the Mac branch to a specific case branch and treat it specifically (as is currently done for AIX, DF.java, line 214). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Attachment: HADOOP-9337-branch-0.23--a.patch Adding patch (a) version for branch-0.23.
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9291: --- Attachment: HADOOP-9291-trunk--N6.patch Added timeout for the added test to satisfy the patch verification requirement.
[jira] [Commented] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13589454#comment-13589454 ] Ivan A. Veselovsky commented on HADOOP-9337: Hi, Andrew, regarding way (a): the -P option of the df command is specifically designed to produce output in the standard POSIX format. I cannot speak for all Unix systems, but I have experimentally confirmed that this option is supported on CentOS 6.3 and Mac 10.8. I don't have an AIX box, but the documentation of AIX 6.1 also says that this option is supported, see http://pic.dhe.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.cmds/doc/aixcmds2/df.htm . Okay, I will write the test. org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
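To illustrate why option (a) simplifies parsing, here is a minimal, self-contained sketch (hypothetical code, not the actual DF.java implementation): with `df -Pk`, the data row is guaranteed to end in the mount point, so taking the last whitespace-separated token of the second output line works the same on every platform. The class and method names below are invented for illustration.

```java
import java.util.StringTokenizer;

// Hypothetical sketch of parsing POSIX "df -Pk <path>" output.
// The mount point is always the final column, so the last token of
// the data row is sufficient -- no per-OS column counting needed.
public class DfPosixParseSketch {
  static String getMount(String dfOutput) {
    String[] lines = dfOutput.split("\n");
    // lines[0] is the header row; lines[1] is the data row
    StringTokenizer tok = new StringTokenizer(lines[1], " ");
    String last = null;
    while (tok.hasMoreTokens()) {
      last = tok.nextToken();  // keep advancing to the final column
    }
    return last;               // "Mounted on" column
  }

  public static void main(String[] args) {
    String sample =
        "Filesystem 1024-blocks Used Available Capacity Mounted on\n"
      + "/dev/mapper/vg_home 420545160 15978372 383204308 5% /home";
    System.out.println(getMount(sample)); // prints /home
  }
}
```

Note that the last-token approach would also tolerate Mac's non-POSIX output (which merely inserts extra columns before "Mounted on"), but it breaks for mount points containing spaces; the -P flag at least pins down the column layout.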
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Attachment: (was: HADOOP-9337--b.patch) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Attachment: (was: HADOOP-9337--a.patch) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Attachment: HADOOP-9337--a.patch HADOOP-9337--b.patch Attaching patches with the fixed/added tests. The patches correspond to fix options (a) and (b): (a) uses the -P df option and expects standard uniform POSIX output on all operating systems, while (b) does not use -P, instead providing special output handling for Mac. I would recommend solution (a) if we're able to test it on all supported OSs. If not, I would recommend solution (b). org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Affects Version/s: 3.0.0 Status: Patch Available (was: Open) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Affects Version/s: 2.0.4-beta org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.0.4-beta Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9291: --- Attachment: HADOOP-9291-trunk--N5.patch Small correction to the patch for trunk. enhance unit-test coverage of package o.a.h.metrics2 Key: HADOOP-9291 URL: https://issues.apache.org/jira/browse/HADOOP-9291 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291-trunk--N4.patch, HADOOP-9291-trunk--N5.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
Ivan A. Veselovsky created HADOOP-9337: -- Summary: org.apache.hadoop.fs.DF.getMount() does not work on Mac OS Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9067) provide test for method org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long)
[ https://issues.apache.org/jira/browse/HADOOP-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13588466#comment-13588466 ] Ivan A. Veselovsky commented on HADOOP-9067: Hi, Eric, I found the reason for the failure: o.a.h.fs.DF#getMount() does not work correctly on Mac OS. The Java version does not play a role there. I created issue https://issues.apache.org/jira/browse/HADOOP-9337 to address the problem. In 9337's description I suggest 2 ways to fix it: can you please share your opinion on which one is preferable? provide test for method org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) -- Key: HADOOP-9067 URL: https://issues.apache.org/jira/browse/HADOOP-9067 Project: Hadoop Common Issue Type: Test Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Fix For: 3.0.0, 2.0.3-alpha, 0.23.7 Attachments: HADOOP-9067--b.patch, HADOOP-9067.patch this method is not covered by the existing unit tests. Provide a test to cover it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9337) org.apache.hadoop.fs.DF.getMount() does not work on Mac OS
[ https://issues.apache.org/jira/browse/HADOOP-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9337: --- Attachment: HADOOP-9337--b.patch HADOOP-9337--a.patch The attached patches --a and --b correspond to fix options (a) and (b), respectively. org.apache.hadoop.fs.DF.getMount() does not work on Mac OS -- Key: HADOOP-9337 URL: https://issues.apache.org/jira/browse/HADOOP-9337 Project: Hadoop Common Issue Type: Bug Environment: Mac OS 10.8 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9337--a.patch, HADOOP-9337--b.patch test org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure() (added in HADOOP-9067) appears to fail on MacOS because method org.apache.hadoop.fs.DF.getMount() does not work correctly. The problem is that df -k path command returns on MacOS output like the following: --- Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on /dev/disk0s4 194879828 100327120 94552708 52% 25081778 23638177 51% /Volumes/Data --- while the following is expected: --- Filesystem 1024-blocks Used Available Capacity Mounted on /dev/mapper/vg_iveselovskyws-lv_home 420545160 15978372 383204308 5% /home --- So, we see that Mac's output has 3 additional tokens. I can suggest 2 ways to fix the problem. (a) use -P (POSIX) option when invoking df command. This will probably ensure uniform output on all Unix systems; (b) move Mac branch to specific case branch and treat it specifically (like we currently have for AIX, DF.java, line 214) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9067) provide test for method org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long)
[ https://issues.apache.org/jira/browse/HADOOP-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13581252#comment-13581252 ] Ivan A. Veselovsky commented on HADOOP-9067: Hi, Eric, It looks like I cannot reproduce this failure. We have hadoop builds running on a regular basis, and this test never fails. It also passes in Apache builds, e.g. https://builds.apache.org/view/Hadoop/job/Hadoop-Common-trunk/689/consoleFull . Can you please provide more info on this failure: which OS/Java/Maven options do you use? Also, a full log at debug level may be helpful. provide test for method org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) -- Key: HADOOP-9067 URL: https://issues.apache.org/jira/browse/HADOOP-9067 Project: Hadoop Common Issue Type: Test Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Fix For: 3.0.0, 2.0.3-alpha, 0.23.7 Attachments: HADOOP-9067--b.patch, HADOOP-9067.patch this method is not covered by the existing unit tests. Provide a test to cover it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9291: --- Attachment: HADOOP-9291-trunk--N4.patch HADOOP-9291-branch-0.23--N4.patch The patch for trunk is also applicable to branch-2. enhance unit-test coverage of package o.a.h.metrics2 Key: HADOOP-9291 URL: https://issues.apache.org/jira/browse/HADOOP-9291 Project: Hadoop Common Issue Type: Test Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291-trunk--N4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
[ https://issues.apache.org/jira/browse/HADOOP-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9291: --- Affects Version/s: 0.23.7 2.0.3-alpha 3.0.0 Status: Patch Available (was: Open) enhance unit-test coverage of package o.a.h.metrics2 Key: HADOOP-9291 URL: https://issues.apache.org/jira/browse/HADOOP-9291 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9291-branch-0.23--N4.patch, HADOOP-9291-trunk--N4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9291) enhance unit-test coverage of package o.a.h.metrics2
Ivan A. Veselovsky created HADOOP-9291: -- Summary: enhance unit-test coverage of package o.a.h.metrics2 Key: HADOOP-9291 URL: https://issues.apache.org/jira/browse/HADOOP-9291 Project: Hadoop Common Issue Type: Test Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9281) Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API
Ivan A. Veselovsky created HADOOP-9281: -- Summary: Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API Key: HADOOP-9281 URL: https://issues.apache.org/jira/browse/HADOOP-9281 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky The following was done: 1) o.a.h.metrics.MetricsServlet reworked to o.a.h.metrics2.lib.MetricsServlet2 2) class org.apache.hadoop.mapreduce.task.reduce.ShuffleClientMetrics rewritten to use the metrics2 API. 3) class org.apache.hadoop.mapred.LocalJobRunnerMetrics rewritten to use the new metrics2 API. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9281) Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API
[ https://issues.apache.org/jira/browse/HADOOP-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9281: --- Status: Patch Available (was: Open) Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API Key: HADOOP-9281 URL: https://issues.apache.org/jira/browse/HADOOP-9281 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9281-branch-0.23--N5.patch, HADOOP-9281-trunk--N5.patch The following was done: 1) o.a.h.metrics.MetricsServlet reworked to o.a.h.metrics2.lib.MetricsServlet2 2) class org.apache.hadoop.mapreduce.task.reduce.ShuffleClientMetrics rewritten to use the metrics2 API. 3) class org.apache.hadoop.mapred.LocalJobRunnerMetrics rewritten to use the new metrics2 API. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9281) Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API
[ https://issues.apache.org/jira/browse/HADOOP-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9281: --- Attachment: HADOOP-9281-trunk--N5.patch HADOOP-9281-branch-0.23--N5.patch Patch HADOOP-9281-trunk--N5.patch is to be applied to trunk and branch-2. Rework all usages of o.a.h.metrics to o.a.h.metrics2 new metrics API Key: HADOOP-9281 URL: https://issues.apache.org/jira/browse/HADOOP-9281 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9281-branch-0.23--N5.patch, HADOOP-9281-trunk--N5.patch The following was done: 1) o.a.h.metrics.MetricsServlet reworked to o.a.h.metrics2.lib.MetricsServlet2 2) class org.apache.hadoop.mapreduce.task.reduce.ShuffleClientMetrics rewritten to use the metrics2 API. 3) class org.apache.hadoop.mapred.LocalJobRunnerMetrics rewritten to use the new metrics2 API. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9052) fix 6 failing tests in hadoop-streaming
[ https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13570105#comment-13570105 ] Ivan A. Veselovsky commented on HADOOP-9052: These tests now pass okay in trunk and branch-0.23; however, they still fail in branch-2: Caused by: java.lang.IllegalStateException: Queue configuration missing child queue names for root at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:328) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:255) fix 6 failing tests in hadoop-streaming --- Key: HADOOP-9052 URL: https://issues.apache.org/jira/browse/HADOOP-9052 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9052.patch The following 6 tests in hadoop-tools/hadoop-streaming are failing because of the absence of the 2 yarn.scheduler.capacity... properties: # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
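The exact property names are elided in the issue description above, so the following is only a sketch based on the "Queue configuration missing child queue names for root" error: a minimal CapacityScheduler setup typically declares the root queue's children and gives the default child its capacity, e.g. in capacity-scheduler.xml:

```
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>100</value>
</property>
```

Without the first property, CapacityScheduler.parseQueue finds no child queues under root and throws the IllegalStateException quoted above.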
[jira] [Commented] (HADOOP-9052) fix 6 failing tests in hadoop-streaming
[ https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13570108#comment-13570108 ] Ivan A. Veselovsky commented on HADOOP-9052: Application of patch https://issues.apache.org/jira/browse/MAPREDUCE-4884 to branch-2 resolves the problem. fix 6 failing tests in hadoop-streaming --- Key: HADOOP-9052 URL: https://issues.apache.org/jira/browse/HADOOP-9052 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9052.patch The following 6 tests in hadoop-tools/hadoop-streaming are failing because of absence of the 2 yarn.scheduler.capacity... properties: # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java # modified: hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9269) o.a.h.metrics2: annotation @Metric(...,always=true) does not work as expected
Ivan A. Veselovsky created HADOOP-9269: -- Summary: o.a.h.metrics2: annotation @Metric(...,always=true) does not work as expected Key: HADOOP-9269 URL: https://issues.apache.org/jira/browse/HADOOP-9269 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky {noformat}
Metrics2: if a metric is defined via annotations, like this: @Metric(...,always=true), it should be snapshotted always, as defined by the "always" attribute description:

/**
 * @return true to create a metric snapshot even if unchanged.
 */
boolean always() default false;

However, it does not work that way. The problem can be reproduced with the following test:

public class TestBugDemo {
  @Metrics(name="record1", context="context1")
  static class MyMetrics1 {
    @Metric(value={"annotatedMetric1", "An integer gauge"}, always=true)
    MutableGaugeInt testMetric1;
    public MyMetrics1 registerWith(MetricsSystem ms) {
      return ms.register("annotated", "annotated", this);
    }
  }

  private static class MySink implements MetricsSink {
    private final String sinkName;
    public MySink(String name) { sinkName = name; }
    @Override public void init(SubsetConfiguration conf) { }
    @Override public void flush() { }
    @Override public void putMetrics(MetricsRecord record) {
      if (!"metricssystem".equals(record.context())) {
        for (AbstractMetric am: record.metrics()) {
          System.out.println("### METRIC: " + am.name() + " = " + am.value());
        }
      }
    }
  }

  private MetricsSystem ms;
  MyMetrics1 m1;

  @Before public void before() {
    ms = DefaultMetricsSystem.initialize();
    // register annotated source:
    m1 = new MyMetrics1().registerWith(ms);
    // register not-annotated source:
    final MetricsInfo fooInfo = Interns.info("non-annotated metric foo", "foo description");
    ms.register("not-annotated", "", new MetricsSource() {
      @Override public void getMetrics(MetricsCollector collector, boolean all) {
        collector
          .addRecord("testRecord")
          .addCounter(fooInfo, 88)
          .setContext("test1")
          .endRecord();
      }
    });
    ms.register("sink1", null, new MySink("sink1"));
  }

  @Test public void testAlways() {
    m1.testMetric1.set(5);
    System.out.println("First Publishing: ===");
    ms.publishMetricsNow();
    //m1.testMetric1.set(7);
    System.out.println("Second Publishing: ===");
    ms.publishMetricsNow();
  }
}

This test generates the following output:

First Publishing: ===
### METRIC: annotatedMetric1 = 5
### METRIC: non-annotated metric foo = 88
Second Publishing: ===
### METRIC: non-annotated metric foo = 88

That is, the metric annotatedMetric1 is absent in the 2nd snapshot. Once we uncomment the line //m1.testMetric1.set(7);, we observe the expected behavior:

First Publishing: ===
### METRIC: annotatedMetric1 = 5
### METRIC: non-annotated metric foo = 88
Second Publishing: ===
### METRIC: annotatedMetric1 = 7
### METRIC: non-annotated metric foo = 88

The expected behavior is that the metric annotatedMetric1 will be snapshotted even if it was not changed, because it is annotated with always=true.
{noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9269) o.a.h.metrics2: annotation @Metric(...,always=true) does not work as expected
[ https://issues.apache.org/jira/browse/HADOOP-9269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13567680#comment-13567680 ] Ivan A. Veselovsky commented on HADOOP-9269: afaik, the field metrics are created in method org.apache.hadoop.metrics2.lib.MutableMetricsFactory.newForField(Field, Metric, MetricsRegistry), and the value of @Metric#always is just ignored in case of MutableCounter/Gauge classes. Also, in case of MurableStat/Rate the always value is used as extended parameter, which has another meaning: produce extended stat (stdev, min/max etc.) if true.. It looks like there is a mess with this annotation parameter. o.a.h.metrics2: annotation @Metric(...,always=true) does not work as expected - Key: HADOOP-9269 URL: https://issues.apache.org/jira/browse/HADOOP-9269 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky {noformat}Metrics2: if a metric defined via annotations, like this @Metric(,always=true), it should be snapshotted always, as defined by the always attribute description: /** * @return true to create a metric snapshot even if unchanged. */ boolean always() default false; However, that does not work in that way. 
The problem can be reproduced with the following test: public class TestBugDemo { @Metrics(name=record1, context=context1) static class MyMetrics1 { @Metric(value={annotatedMetric1, An integer gauge},always=true) MutableGaugeInt testMetric1; public MyMetrics1 registerWith(MetricsSystem ms) { return ms.register(annotated, annotated, this); } } private static class MySink implements MetricsSink { private final String sinkName; public MySink(String name) { sinkName = name; } @Override public void init(SubsetConfiguration conf) { } @Override public void flush() { } @Override public void putMetrics(MetricsRecord record) { if (!metricssystem.equals(record.context())) { for (AbstractMetric am: record.metrics()) { System.out.println(### METRIC: + am.name() + = + am.value()); } } } } private MetricsSystem ms; MyMetrics1 m1; @Before public void before() { ms = DefaultMetricsSystem.initialize(); // register annotated source: m1 = new MyMetrics1().registerWith(ms); // register not-annotated source: final MetricsInfo fooInfo = Interns.info(non-annotated metric foo, foo descrption); ms.register(not-annotatad, , new MetricsSource() { @Override public void getMetrics(MetricsCollector collector, boolean all) { collector .addRecord(testRecord) .addCounter(fooInfo, 88) .setContext(test1) .endRecord(); } }); ms.register(sink1, null, new MySink(sink1)); } @Test public void testAlways() { m1.testMetric1.set(5); System.out.println(First Pubishing: ===); ms.publishMetricsNow(); //m1.testMetric1.set(7); System.out.println(Second Pubishing: ===); ms.publishMetricsNow(); } } This test generates the following output: First Pubishing: === ### METRIC: annotatedMetric1 = 5 ### METRIC: non-annotated metric foo = 88 Second Pubishing: === ### METRIC: non-annotated metric foo = 88 That is, the metric annotatedMetric1 is absent in the 2nd snapshot. 
Once we uncomment the line //m1.testMetric1.set(7);, we observe the expected behavior:

First Publishing: ===
### METRIC: annotatedMetric1 = 5
### METRIC: non-annotated metric foo = 88
Second Publishing: ===
### METRIC: annotatedMetric1 = 7
### METRIC: non-annotated metric foo = 88

The expected behavior is that the metric annotatedMetric1 will be snapshotted even if it was not changed, because it is annotated with always=true. {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
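The always=true contract described above can be modeled without Hadoop on the classpath. The sketch below is a hypothetical stand-in (Gauge is not Hadoop's MutableGaugeInt): a gauge flagged always is included in every snapshot, while an ordinary gauge appears only when it changed since the previous snapshot, which is the behavior the reporter expects.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the expected @Metric(always=true) contract.
class AlwaysSnapshotDemo {

    static class Gauge {
        final String name;
        final boolean always;  // models @Metric#always
        int value;
        boolean changed;       // set on every update, cleared on snapshot

        Gauge(String name, boolean always) {
            this.name = name;
            this.always = always;
        }

        void set(int v) { value = v; changed = true; }

        // Include the gauge if it changed, or unconditionally when always=true.
        void snapshot(List<String> out) {
            if (changed || always) {
                out.add(name + "=" + value);
                changed = false;
            }
        }
    }

    public static void main(String[] args) {
        Gauge plain = new Gauge("plainGauge", false);
        Gauge always = new Gauge("alwaysGauge", true);
        plain.set(5);
        always.set(5);

        List<String> first = new ArrayList<>();
        plain.snapshot(first);
        always.snapshot(first);
        System.out.println("first: " + first);   // both gauges present

        List<String> second = new ArrayList<>();
        plain.snapshot(second);
        always.snapshot(second);
        System.out.println("second: " + second); // only the always gauge
    }
}
```

This mirrors the bug report: without the always flag (or a fresh set() call), the plain gauge silently drops out of the second snapshot.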
[jira] [Resolved] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties
[ https://issues.apache.org/jira/browse/HADOOP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky resolved HADOOP-9256. Resolution: Duplicate Duplicate of YARN-361. A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties Key: HADOOP-9256 URL: https://issues.apache.org/jira/browse/HADOOP-9256 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky Newly added plugin VersionInfoMojo should calculate properties (like time, scm branch, etc.), and after that the resource plugin should make replacements in the following files: ./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties , that are read later in test run-time. But for some reason it does not do that. 
As a result, a bunch of tests are permanently failing because the code of these tests is verifying the corresponding property files for correctness:

org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault

Some of these
failures can be observed in Apache builds, e.g.: https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/ As far as I see the substitution does not happen because corresponding properties are set by the VersionInfoMojo plugin *after* the corresponding resource plugin task is executed. Workaround: manually change files ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties and set arbitrary reasonable non-${} string parameters as the values. After that the tests pass.
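The workaround above amounts to replacing the unexpanded ${...} placeholders in the two *-version-info.properties files with literal strings. The fragment below is only a hypothetical illustration; the key and placeholder names are assumptions, not taken from the actual files:

```properties
# Before the workaround: placeholders the resource plugin never expanded
# (hypothetical keys; check the actual *-version-info.properties files)
version=${pom.version}
branch=${version-info.scm.branch}

# After the workaround: arbitrary reasonable literal values, so the
# version-checking tests read something that is not a raw "${...}"
version=3.0.0-SNAPSHOT
branch=trunk
```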
[jira] [Updated] (HADOOP-9235) Avoid Clover instrumentation of classes in module hadoop-maven-plugins
[ https://issues.apache.org/jira/browse/HADOOP-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9235: --- Resolution: Duplicate Status: Resolved (was: Patch Available) Duplicate of HADOOP-9249, which suggests a better fix. Avoid Clover instrumentation of classes in module hadoop-maven-plugins - Key: HADOOP-9235 URL: https://issues.apache.org/jira/browse/HADOOP-9235 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9235-trunk.patch The module hadoop-maven-plugins was introduced by the fix for HADOOP-8924. After that fix, a full build with Clover instrumentation fails because Clover instruments all the modules, including the classes of hadoop-maven-plugins, which are executed by Maven without the Clover jar on the classpath. So, the following build sequence, executed in the root folder of the source tree, fails:

mvn clean install -DskipTests
mvn -e -X install -Pclover -DskipTests

... [ERROR] - [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info (version-info) on project hadoop-common: Execution version-info of goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info failed: A required class was missing while executing org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info: com_cenqua_clover/CoverageRecorder
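HADOOP-9235's approach (keep Clover from instrumenting this one module, so its classes never reference com_cenqua_clover/CoverageRecorder at build time) might look like the following fragment in the hadoop-maven-plugins pom.xml. This is a hypothetical sketch, not the attached patch, and HADOOP-9249 ultimately took a different route:

```xml
<!-- Hypothetical sketch for hadoop-maven-plugins/pom.xml: disable Clover
     for this module only, since Maven executes its classes without the
     Clover jar on the classpath. -->
<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <configuration>
    <skip>true</skip>
  </configuration>
</plugin>
```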
[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover
[ https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564158#comment-13564158 ] Ivan A. Veselovsky commented on HADOOP-9249: This seems to be a duplicate of HADOOP-9235. hadoop-maven-plugins version-info goal causes build failure when running with Clover Key: HADOOP-9249 URL: https://issues.apache.org/jira/browse/HADOOP-9249 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-9249.1.patch Running Maven with the -Pclover option for code coverage causes the build to fail because of not finding a Clover class while running hadoop-maven-plugins version-info. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls
[ https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564160#comment-13564160 ] Ivan A. Veselovsky commented on HADOOP-9247: Hi, Chris, the com_cenqua_clover/CoverageRecorder problem you mentioned above is addressed in HADOOP-9235. parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls - Key: HADOOP-9247 URL: https://issues.apache.org/jira/browse/HADOOP-9247 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Fix For: 3.0.0 Attachments: HADOOP-9247-trunk.patch The suggested parametrization is needed in order to be able to re-define these properties with -Dk=v maven options. For some reason the expressions declared in the Clover docs, like ${maven.clover.generateHtml} (see http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html), do not work that way. However, the parametrized properties are confirmed to work: e.g. -DcloverGenHtml=false switches off the HTML generation if <generateHtml>${cloverGenHtml}</generateHtml> is defined. The default values provided here exactly correspond to the Clover defaults, so the behavior is 100% backwards compatible.
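The parametrization being discussed can be sketched as a pom.xml fragment. This is an illustrative reconstruction, not the attached HADOOP-9247-trunk.patch; the property names follow the -DcloverGenHtml example from the comment, and the default values are assumed to mirror Clover's own:

```xml
<properties>
  <!-- Defaults assumed to mirror Clover's own, so behavior is unchanged
       unless overridden on the command line, e.g. mvn ... -DcloverGenHtml=false -->
  <cloverGenHtml>true</cloverGenHtml>
  <cloverGenXml>true</cloverGenXml>
  <cloverGenHistorical>false</cloverGenHistorical>
</properties>

<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <configuration>
    <generateHtml>${cloverGenHtml}</generateHtml>
    <generateXml>${cloverGenXml}</generateXml>
    <generateHistorical>${cloverGenHistorical}</generateHistorical>
  </configuration>
</plugin>
```

Routing the plugin settings through project properties is what makes them overridable with -Dk=v, since the documented maven.clover.* expressions reportedly did not take effect.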
[jira] [Commented] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls
[ https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564265#comment-13564265 ] Ivan A. Veselovsky commented on HADOOP-9247: Hi, Suresh, can you please also commit this patch to branch-2 and branch-0.23? thanks in advance. parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls - Key: HADOOP-9247 URL: https://issues.apache.org/jira/browse/HADOOP-9247 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Fix For: 3.0.0 Attachments: HADOOP-9247-trunk.patch The suggested parametrization is needed in order to be able to re-define these properties with -Dk=v maven options. For some reason the expressions declared in clover docs like ${maven.clover.generateHtml} (see http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not work in that way. However, the parametrized properties are confirmed to work: e.g. -DcloverGenHtml=false switches off the Html generation, if defined generateHtml${cloverGenHtml}/generateHtml. The default values provided here exactly correspond to Clover defaults, so the behavior is 100% backwards compatible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover
[ https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564445#comment-13564445 ] Ivan A. Veselovsky commented on HADOOP-9249: Chris, yes, I absolutely agree with you. This fix is much better. hadoop-maven-plugins version-info goal causes build failure when running with Clover Key: HADOOP-9249 URL: https://issues.apache.org/jira/browse/HADOOP-9249 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-9249.1.patch Running Maven with the -Pclover option for code coverage causes the build to fail because of not finding a Clover class while running hadoop-maven-plugins version-info. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties
Ivan A. Veselovsky created HADOOP-9256: -- Summary: A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties Key: HADOOP-9256 URL: https://issues.apache.org/jira/browse/HADOOP-9256 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky The newly added plugin VersionInfoMojo should calculate properties (like time, scm branch, etc.), and after that the resource plugin should make replacements in the following files:

./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties

which are read later at test run time. But for some reason it does not do that. As a result, a bunch of tests are permanently failing because the code of these tests is verifying the corresponding property files for correctness:

org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault

Some of these failures can be observed in Apache builds, e.g.: https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/ As far as I see, the substitution does not happen because the corresponding properties are set by the VersionInfoMojo plugin *after* the corresponding resource plugin task is executed. Workaround: manually change the files ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties and hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties and set arbitrary reasonable non-${} string parameters as the values. After that the tests pass.
[jira] [Created] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls
Ivan A. Veselovsky created HADOOP-9247: -- Summary: parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls Key: HADOOP-9247 URL: https://issues.apache.org/jira/browse/HADOOP-9247 Project: Hadoop Common Issue Type: Improvement Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor The suggested parametrization is needed in order to be able to re-define these properties with -Dk=v maven options. For some reason the expressions declared in the Clover docs, like ${maven.clover.generateHtml} (see http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html), do not work that way. However, the parametrized properties are confirmed to work: e.g. -DcloverGenHtml=false switches off the HTML generation if <generateHtml>${cloverGenHtml}</generateHtml> is defined. The default values provided here exactly correspond to the Clover defaults, so the behavior is 100% backwards compatible.
[jira] [Updated] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls
[ https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9247: --- Attachment: HADOOP-9247-trunk.patch the patch HADOOP-9247-trunk.patch is applicable to all 3 target branches: trunk, branch-2, branch-0.23 parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls - Key: HADOOP-9247 URL: https://issues.apache.org/jira/browse/HADOOP-9247 Project: Hadoop Common Issue Type: Improvement Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9247-trunk.patch The suggested parametrization is needed in order to be able to re-define these properties with -Dk=v maven options. For some reason the expressions declared in clover docs like ${maven.clover.generateHtml} (see http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not work in that way. However, the parametrized properties are confirmed to work: e.g. -DcloverGenHtml=false switches off the Html generation, if defined generateHtml${cloverGenHtml}/generateHtml. The default values provided here exactly correspond to Clover defaults, so the behavior is 100% backwards compatible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls
[ https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9247: --- Affects Version/s: 0.23.6 2.0.3-alpha 3.0.0 Status: Patch Available (was: Open) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls - Key: HADOOP-9247 URL: https://issues.apache.org/jira/browse/HADOOP-9247 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9247-trunk.patch The suggested parametrization is needed in order to be able to re-define these properties with -Dk=v maven options. For some reason the expressions declared in clover docs like ${maven.clover.generateHtml} (see http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not work in that way. However, the parametrized properties are confirmed to work: e.g. -DcloverGenHtml=false switches off the Html generation, if defined generateHtml${cloverGenHtml}/generateHtml. The default values provided here exactly correspond to Clover defaults, so the behavior is 100% backwards compatible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls
[ https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562752#comment-13562752 ] Ivan A. Veselovsky commented on HADOOP-9247: the fix is only for pom.xml, so it does not include any test modifications. parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls - Key: HADOOP-9247 URL: https://issues.apache.org/jira/browse/HADOOP-9247 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9247-trunk.patch The suggested parametrization is needed in order to be able to re-define these properties with -Dk=v maven options. For some reason the expressions declared in clover docs like ${maven.clover.generateHtml} (see http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not work in that way. However, the parametrized properties are confirmed to work: e.g. -DcloverGenHtml=false switches off the Html generation, if defined generateHtml${cloverGenHtml}/generateHtml. The default values provided here exactly correspond to Clover defaults, so the behavior is 100% backwards compatible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9235) Avoid Clover instrumentation of classes in module hadoop-maven-plugins
Ivan A. Veselovsky created HADOOP-9235: -- Summary: Avoid Clover instrumentation of classes in module hadoop-maven-plugins Key: HADOOP-9235 URL: https://issues.apache.org/jira/browse/HADOOP-9235 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky The module hadoop-maven-plugins was introduced by fix HADOOP-8924. After that fix the full build with Clover instrumentation fails because clover instruments all the modules, including classes from hadoop-maven-plugins, which are executed by maven without having the clover jar in the classpath. So, the following build sequence fails being executed in the root folder of the source tree: mvn clean install -DskipTests mvn -e -X install -Pclover -DskipTests ... [ERROR] - [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info (version-info) on project hadoop-common: Execution version-info of goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info failed: A required class was missing while executing org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info: com_cenqua_clover/CoverageRecorder -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9235) Avoid Clover instrumentation of classes in module hadoop-maven-plugins
[ https://issues.apache.org/jira/browse/HADOOP-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9235: --- Status: Patch Available (was: Open) Avoid Clover instrumentation of classes in module hadoop-maven-plugins - Key: HADOOP-9235 URL: https://issues.apache.org/jira/browse/HADOOP-9235 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9235-trunk.patch The module hadoop-maven-plugins was introduced by fix HADOOP-8924. After that fix the full build with Clover instrumentation fails because clover instruments all the modules, including classes from hadoop-maven-plugins, which are executed by maven without having the clover jar in the classpath. So, the following build sequence fails being executed in the root folder of the source tree: mvn clean install -DskipTests mvn -e -X install -Pclover -DskipTests ... [ERROR] - [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info (version-info) on project hadoop-common: Execution version-info of goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info failed: A required class was missing while executing org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info: com_cenqua_clover/CoverageRecorder -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9235) Avoid Clover instrumentation of classes in module hadoop-maven-plugins
[ https://issues.apache.org/jira/browse/HADOOP-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559712#comment-13559712 ] Ivan A. Veselovsky commented on HADOOP-9235: The patch does not need any unit tests: this is only configuration change. Avoid Clover instrumentation of classes in module hadoop-maven-plugins - Key: HADOOP-9235 URL: https://issues.apache.org/jira/browse/HADOOP-9235 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9235-trunk.patch The module hadoop-maven-plugins was introduced by fix HADOOP-8924. After that fix the full build with Clover instrumentation fails because clover instruments all the modules, including classes from hadoop-maven-plugins, which are executed by maven without having the clover jar in the classpath. So, the following build sequence fails being executed in the root folder of the source tree: mvn clean install -DskipTests mvn -e -X install -Pclover -DskipTests ... [ERROR] - [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info (version-info) on project hadoop-common: Execution version-info of goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info failed: A required class was missing while executing org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info: com_cenqua_clover/CoverageRecorder -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH
[ https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9205: --- Resolution: Invalid Status: Resolved (was: Patch Available) The described problem appears to be reproducible *only* on JDK 7 installations patched to enable privileged ports (< 1024) usage by non-root users via Linux capabilities. (The exact patching procedure looks like the following:

patchelf --set-rpath ${J7_HOME}/jre/lib/amd64/jli ${J7_HOME}/bin/java
setcap cap_net_bind_service=+epi ${J7_HOME}/bin/java
patchelf --set-rpath ${J7_HOME}/jre/lib/amd64/jli ${J7_HOME}/jre/bin/java
setcap cap_net_bind_service=+epi ${J7_HOME}/jre/bin/java

This patching is needed to run some security tests because they use ports below 1024, and there is no simple way to re-configure these ports to higher values.) So, the problem described in this issue appears to be a side effect of this patch. On a clean JDK 7 installed from scratch the problem is *not* reproducible, as both Thomas and Kihwal stated. The command to verify:

mvn clean test -Pnative -Dtest=org.apache.hadoop.util.TestNativeCodeLoader -Drequire.test.libhadoop=true

So, I'm closing this issue as invalid. Sorry for the mess, and many thanks for the provided information. Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH - Key: HADOOP-9205 URL: https://issues.apache.org/jira/browse/HADOOP-9205 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9205.patch Currently the path to native libraries is passed to unit tests via the environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not work for Java7, since Java7 ignores this environment variable.
So, to run the tests with native implementation on Java7 one needs to pass the paths to native libs via -Djava.library.path system property rather than the LD_LIBRARY_PATH env variable. The suggested patch fixes the problem via setting the paths to native libs using both LD_LIBRARY_PATH and -Djava.library.path property. This way the tests work equally on both Java6 and Java7.
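The dual mechanism described above (keep LD_LIBRARY_PATH for Java 6, add -Djava.library.path for Java 7) could be expressed as a surefire configuration along these lines. This is a sketch, not the attached HADOOP-9205.patch; ${native.lib.dir} is a placeholder for whatever property holds the native-library path:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Java 6 honors the inherited environment variable; Java 7 ignores it
         and reads the system property instead, so both are supplied. -->
    <environmentVariables>
      <LD_LIBRARY_PATH>${env.LD_LIBRARY_PATH}:${native.lib.dir}</LD_LIBRARY_PATH>
    </environmentVariables>
    <argLine>-Djava.library.path=${native.lib.dir}</argLine>
  </configuration>
</plugin>
```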
[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9078: --- Attachment: (was: HADOOP-9078--b.patch) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext Key: HADOOP-9078 URL: https://issues.apache.org/jira/browse/HADOOP-9078 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9078-branch-0.23.patch, HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9078: --- Attachment: HADOOP-9078--b.patch HADOOP-9078-branch-2--c.patch Remaking patch for branch-2 (version c): merge with incoming changes. enhance unit-test coverage of class org.apache.hadoop.fs.FileContext Key: HADOOP-9078 URL: https://issues.apache.org/jira/browse/HADOOP-9078 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH
[ https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553707#comment-13553707 ] Ivan A. Veselovsky commented on HADOOP-9205: Shortest way to reproduce the issue: Run test org.apache.hadoop.util.TestNativeCodeLoader with -Drequire.test.libhadoop=true and env variable LD_LIBRARY_PATH=.../hadoop-common/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib If you're running on Java 1.6, the test passes. If you're running on Java 1.7, the test fails. If we add parameter -Djava.library.path=.../hadoop-common/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib , the test passes on both 1.6 and 1.7. This is reproducible with Oracle's JDK jdk1.7.0_07, jdk1.7.0_10, but is *not* reproducible with jdk1.7.0_05, so Kihwall's observation regarding 1.7.0_05 is correct. Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH - Key: HADOOP-9205 URL: https://issues.apache.org/jira/browse/HADOOP-9205 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9205.patch Currently the path to native libraries is passed to unit tests via environment variable LD_LIBRARTY_PATH. This is okay for Java6, but does not work for Java7, since Java7 ignores this environment variable. So, to run the tests with native implementation on Java7 one needs to pass the paths to native libs via -Djava.library.path system property rather than the LD_LIBRARY_PATH env variable. The suggested patch fixes the problem via setting the paths to native libs using both LD_LIBRARY_PATH and -Djava.library.path property. This way the tests work equally on both Java6 and Java7. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
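The Java 7 behavior described above can be checked with a small standalone probe (a sketch for illustration only; the class name is made up and it is not part of the patch). The JVM resolves System.loadLibrary() calls against java.library.path, which on Java 7 is no longer seeded from LD_LIBRARY_PATH:

```java
// Sketch: print the native-library search path the JVM actually uses.
// Run e.g.: java -Djava.library.path=/path/to/native/libs LibPathProbe
public class LibPathProbe {
    public static void main(String[] args) {
        // System.loadLibrary() searches java.library.path; on Java 7 the
        // LD_LIBRARY_PATH environment variable is not consulted by the JVM
        // for this property, so it must be passed explicitly via -D.
        String jlp = System.getProperty("java.library.path");
        String ldp = System.getenv("LD_LIBRARY_PATH");
        System.out.println("java.library.path = " + jlp);
        System.out.println("LD_LIBRARY_PATH   = " + (ldp == null ? "(unset)" : ldp));
    }
}
```

Comparing the two printed values under Java 6 and Java 7 makes the difference visible: only on Java 6 does the property reflect the environment variable.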
[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH
[ https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553960#comment-13553960 ] Ivan A. Veselovsky commented on HADOOP-9205: Hi, Thomas, can you please provide more details on your environment: what OS did you use? I experimented on CentOS release 6.3 (Final) and Ubuntu precise (12.04.1 LTS). BTW, the problem with the missing symlink libhadoop.so -> libhadoop.so.1.0.0 can be avoided if you install the cmake utility of version >= 2.8. On CentOS systems this version is installed as a separate package named cmake28, and the corresponding executable is /usr/bin/cmake28. After creating a symlink cmake -> cmake28, the problem goes away. Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH - Key: HADOOP-9205 URL: https://issues.apache.org/jira/browse/HADOOP-9205 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9205.patch Currently the path to native libraries is passed to unit tests via the environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not work for Java7, since Java7 ignores this environment variable. So, to run the tests with the native implementation on Java7, one needs to pass the paths to the native libs via the -Djava.library.path system property rather than the LD_LIBRARY_PATH env variable. The suggested patch fixes the problem by setting the paths to the native libs using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the tests work equally on both Java6 and Java7. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them
[ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-8849: --- Attachment: HADOOP-8849-trunk--5.patch The patch HADOOP-8849-trunk--5.patch implements the suggested change: the methods granting permissions before deletion are extracted into a separate API. Separate tests are also provided for them. Note: the imports and some other code are deliberately arranged to avoid merge conflicts with the pending patch HADOOP-9063. FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them -- Key: HADOOP-8849 URL: https://issues.apache.org/jira/browse/HADOOP-8849 Project: Hadoop Common Issue Type: Improvement Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch 2 improvements are suggested for the implementation of the methods org.apache.hadoop.fs.FileUtil.fullyDelete(File) and org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File): 1) We should grant +rwx permissions to the target directories before trying to delete them. The mentioned methods fail to delete directories that don't have read or execute permissions. The actual problem appears when an hdfs-related test times out (with a short timeout like tens of seconds) and the forked test process is killed: some directories are left on disk that are not readable and/or executable. This prevents subsequent tests from being executed properly because these directories cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. So, it's recommended to grant read, write, and execute permissions to the directories whose content is to be deleted. 2) Generic reliability improvement: we shouldn't rely upon the File#delete() return value; use File#exists() instead.
FileUtil#fullyDelete() uses the return value of the method java.io.File#delete(), but this is not reliable because File#delete() returns true only if the file was deleted as a result of that particular #delete() invocation. E.g., in the following code

    if (f.exists()) {    // 1
      return f.delete(); // 2
    }

if the file f was deleted by another thread or process between calls 1 and 2, this fragment returns false even though the file f does not exist upon the method's return. So it is better to write

    if (f.exists()) {
      f.delete();
      return !f.exists();
    }

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
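Both improvements can be sketched as a small standalone helper (the class and method names here are hypothetical, chosen for illustration; the actual patch extracts a different API in FileUtil):

```java
import java.io.File;
import java.io.IOException;

public class DeleteSketch {
    // Improvement 1 (sketch): make the entry readable/writable/executable
    // so that directory contents can be listed and removed.
    static void grantPermissions(File f) {
        f.setReadable(true);
        f.setWritable(true);
        f.setExecutable(true);
    }

    // Improvement 2 (sketch): judge success by the file's absence, not by
    // delete()'s return value, so a deletion that raced with another
    // thread or process still counts as success.
    static boolean deleteAndVerify(File f) {
        if (!f.exists()) {
            return true; // already gone, possibly removed by someone else
        }
        grantPermissions(f);
        f.delete(); // return value deliberately ignored
        return !f.exists();
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("fully-delete", ".tmp");
        System.out.println(deleteAndVerify(tmp)); // file removed
        System.out.println(deleteAndVerify(tmp)); // already absent: still success
    }
}
```

The second call in main illustrates the race-tolerant semantics: deleting an already-absent file reports success, exactly the behavior the exists()-based check buys over the raw delete() return value.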
[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them
[ https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-8849: --- Affects Version/s: 0.23.6 2.0.3-alpha 3.0.0 FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them -- Key: HADOOP-8849 URL: https://issues.apache.org/jira/browse/HADOOP-8849 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch 2 improvements are suggested for the implementation of the methods org.apache.hadoop.fs.FileUtil.fullyDelete(File) and org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File): 1) We should grant +rwx permissions to the target directories before trying to delete them. The mentioned methods fail to delete directories that don't have read or execute permissions. The actual problem appears when an hdfs-related test times out (with a short timeout like tens of seconds) and the forked test process is killed: some directories are left on disk that are not readable and/or executable. This prevents subsequent tests from being executed properly because these directories cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. So, it's recommended to grant read, write, and execute permissions to the directories whose content is to be deleted. 2) Generic reliability improvement: we shouldn't rely upon the File#delete() return value; use File#exists() instead. FileUtil#fullyDelete() uses the return value of the method java.io.File#delete(), but this is not reliable because File#delete() returns true only if the file was deleted as a result of that particular #delete() invocation. E.g.,
in the following code

    if (f.exists()) {    // 1
      return f.delete(); // 2
    }

if the file f was deleted by another thread or process between calls 1 and 2, this fragment returns false even though the file f does not exist upon the method's return. So it is better to write

    if (f.exists()) {
      f.delete();
      return !f.exists();
    }

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
[ https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9063: --- Attachment: HADOOP-9063-trunk--c.patch HADOOP-9063-branch-0.23--c.patch The version c of the patches re-arranges some code to avoid merge conflicts with HADOOP-8849. enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil - Key: HADOOP-9063 URL: https://issues.apache.org/jira/browse/HADOOP-9063 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, HADOOP-9063-branch-0.23--c.patch, HADOOP-9063.patch, HADOOP-9063-trunk--c.patch Some methods of class org.apache.hadoop.fs.FileUtil are covered by unit-tests poorly or not covered at all. Enhance the coverage. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
[ https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13554164#comment-13554164 ] Ivan A. Veselovsky commented on HADOOP-9063: patch HADOOP-9063-trunk--c.patch is for trunk and branch-2. enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil - Key: HADOOP-9063 URL: https://issues.apache.org/jira/browse/HADOOP-9063 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Priority: Minor Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, HADOOP-9063-branch-0.23--c.patch, HADOOP-9063.patch, HADOOP-9063-trunk--c.patch Some methods of class org.apache.hadoop.fs.FileUtil are covered by unit-tests poorly or not covered at all. Enhance the coverage. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9200: --- Status: Patch Available (was: Open) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache Key: HADOOP-9200 URL: https://issues.apache.org/jira/browse/HADOOP-9200 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9200-trunk.patch The class org.apache.hadoop.security.NetgroupCache has poor unit-test coverage. Enhance it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9204) fix apacheds distribution download link URL
Ivan A. Veselovsky created HADOOP-9204: -- Summary: fix apacheds distribution download link URL Key: HADOOP-9204 URL: https://issues.apache.org/jira/browse/HADOOP-9204 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky The apacheds server is used in some security tests in the hadoop-common and hadoop-hdfs modules with the startKdc profile. The build script downloads the server, unpacks it, configures it, and runs it. The problem is that the URL used, http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz , no longer works (returns 404). The suggested patch parameterizes the URL, so that it can be set in a single place in the parent pom.xml, and sets it to a working value. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
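The parameterization could look roughly like the following pom.xml fragment (a sketch only: the property name apacheds.download.url and the replacement mirror URL are illustrative assumptions, not taken from the actual patch):

```xml
<!-- In the parent pom.xml: one place to change the apacheds mirror URL.
     Property name and URL are hypothetical examples. -->
<properties>
  <apacheds.download.url>http://archive.apache.org/dist/directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz</apacheds.download.url>
</properties>
```

The download step in the startKdc profile would then reference ${apacheds.download.url} instead of a hard-coded host, so a dead mirror can be swapped out with a single edit.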
[jira] [Updated] (HADOOP-9204) fix apacheds distribution download link URL
[ https://issues.apache.org/jira/browse/HADOOP-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9204: --- Attachment: HADOOP-9204-trunk.patch The patch is for the trunk branch only. fix apacheds distribution download link URL --- Key: HADOOP-9204 URL: https://issues.apache.org/jira/browse/HADOOP-9204 Project: Hadoop Common Issue Type: Bug Reporter: Ivan A. Veselovsky Assignee: Ivan A. Veselovsky Attachments: HADOOP-9204-trunk.patch The apacheds server is used in some security tests in the hadoop-common and hadoop-hdfs modules with the startKdc profile. The build script downloads the server, unpacks it, configures it, and runs it. The problem is that the URL used, http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz , no longer works (returns 404). The suggested patch parameterizes the URL, so that it can be set in a single place in the parent pom.xml, and sets it to a working value. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira