[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265151#comment-13265151 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

Eli, sorry for the late comment. I agree with the general direction of splitting the hflush/hsync feature from append. Perhaps these features should use two different flags. I have concerns with this change:
# I thought the proposal from HDFS-3120 was to add dfs.support.sync. I do not see that flag in this patch.
# There are installations where hsync/hflush is disabled, using dfs.support.append. That option should be preserved.
# dfs.support.broken.append - why add this and not delete the tests that are testing append functionality?

Enable sync by default and disable append
-----------------------------------------

                 Key: HADOOP-8230
                 URL: https://issues.apache.org/jira/browse/HADOOP-8230
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 1.0.0
            Reporter: Eli Collins
            Assignee: Eli Collins
             Fix For: 1.1.0
         Attachments: hadoop-8230.txt

Per HDFS-3120 for 1.x let's:
- Always enable the sync path, which is currently only enabled if dfs.support.append is set
- Remove the dfs.support.append configuration option. We'll keep the code paths though in case we ever fix append on branch-1, in which case we can add the config option back

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8331) Created patch that adds oracle support to DBInputFormat and solves a splitting duplication problem introduced with my last patch.
[ https://issues.apache.org/jira/browse/HADOOP-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265180#comment-13265180 ]

Suresh Srinivas commented on HADOOP-8331:
-----------------------------------------

BTW, the patch looks really huge. Is this the correct patch?

Created patch that adds oracle support to DBInputFormat and solves a splitting duplication problem introduced with my last patch.
----------------------------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-8331
                 URL: https://issues.apache.org/jira/browse/HADOOP-8331
             Project: Hadoop Common
          Issue Type: Improvement
          Components: io
    Affects Versions: 1.0.0
         Environment: Redhat x86_64 cluster
            Reporter: Joseph Doss
              Labels: package
             Fix For: 1.0.0, 1.0.2
         Attachments: hadoop-1.0.0-20120426-DBInputFormat-stopDuplicatingSplits.patch

This patch mainly resolves an overlap of records when splitting tasks in DBInputFormat, thereby removing duplication of records processed. Tested on 1.0.0 and 1.0.2.
[jira] [Commented] (HADOOP-8331) Created patch that adds oracle support to DBInputFormat and solves a splitting duplication problem introduced with my last patch.
[ https://issues.apache.org/jira/browse/HADOOP-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265178#comment-13265178 ]

Suresh Srinivas commented on HADOOP-8331:
-----------------------------------------

Which jira was the last patch from? Can you please link that jira to this one?
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266700#comment-13266700 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

bq. Wrt #2 personally I don't think we should allow people to disable durable sync as that can result in data loss for people running HBase. See HADOOP-8230 for more info. I'm open to having an option to disable durable sync if you think that use case is important.

There are installations where HBase is not used and sync was disabled. Now this patch has removed that option. When an installation upgrades to a release with this patch, sync is suddenly enabled and there is no way to disable it.

bq. (1) there are tests that are using append not to test append per se but for the side effects and we'd lose sync test coverage by removing those tests and (2) per the description we're keeping the append code path in case someone wants to fix the data loss issues in which case it makes sense to keep the test coverage as well.

For testing sync with this patch, since it is enabled by default, you do not need the flag, right?
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267584#comment-13267584 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

bq. Would such an installation be using the sync call?

No, from what I know. From what I understand, the intention of this change is to:
# Disable append, since 1.x has bugs in that implementation.
# Enable sync by default.

bq. Making sync actually work is a bug fix, it was a bug that we allowed people to call sync and unlike append there wasn't a flag to enable it that was disabled by default. Better to fix the default behavior (which allows you to sync).

The implementation earlier used dfs.support.append to support both durable sync and append. When this flag is off, a whole bunch of code related to sync functionality got turned off: how the blocks are stored, block reports, etc. Now with this change, this code can no longer be turned off. I agree with enabling sync by default. However, for folks who chose not to enable the related code and are not impacted by it, we need to add a flag to turn off that functionality.
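As a hedged illustration of the flags under discussion (not configuration shipped by any release), an hdfs-site.xml fragment might look like this; dfs.support.append is the historical branch-1 flag, while dfs.support.sync is the flag proposed in HDFS-3120 and, per the comments above, absent from this patch:

```xml
<!-- Hypothetical hdfs-site.xml fragment; flag semantics as discussed in this thread. -->
<configuration>
  <!-- branch-1 flag that historically gated both append and the durable-sync code paths -->
  <property>
    <name>dfs.support.append</name>
    <value>false</value>
  </property>
  <!-- separate durable-sync flag proposed in HDFS-3120; not present in this patch -->
  <property>
    <name>dfs.support.sync</name>
    <value>true</value>
  </property>
</configuration>
```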
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267819#comment-13267819 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

bq. We've had sync on by default in hundreds of our customer clusters for almost two years now and have yet to see a related data-loss event. The only bugs we've seen have been bugs where sync() wouldn't provide the correct semantics, but for installs which don't use sync, that doesn't matter.

That is great. Still, I think we should retain the ability to turn it off, because I want to continue running my installation that way and this patch removes that ability.
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269785#comment-13269785 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

bq. what is that use case?

I think I have explained it in the comments above. To repeat: I think we should retain the ability to turn it off, because I want to continue running my installation that way and this patch removes that ability.
[jira] [Updated] (HADOOP-8365) Provide ability to disable working sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-8365:
------------------------------------

    Priority: Blocker (was: Major)

Marking this as a blocker.

Provide ability to disable working sync
---------------------------------------

                 Key: HADOOP-8365
                 URL: https://issues.apache.org/jira/browse/HADOOP-8365
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 1.1.0
            Reporter: Eli Collins
            Priority: Blocker

Per HADOOP-8230 there's a request for a flag so sync can be disabled.
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269858#comment-13269858 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

bq. There may be a misunderstanding: the dfs.support.append flag never controlled whether sync was enabled.

dfs.support.append turned off some code paths. These code paths are not just related to append; they enable durable sync. See the patch, where code of the form "if support append then do x else do y" is changed to do x without any check. That is the behavior I want a user to be able to turn off with a flag.
[jira] [Updated] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader
[ https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-8366:
------------------------------------

    Affects Version/s: (was: HA Branch (HDFS-1623))
                       0.2.0
                       0.3.0

Use ProtoBuf for RpcResponseHeader
----------------------------------

                 Key: HADOOP-8366
                 URL: https://issues.apache.org/jira/browse/HADOOP-8366
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 0.2.0, 0.3.0
            Reporter: Sanjay Radia
            Assignee: Sanjay Radia
            Priority: Blocker
         Attachments: hadoop-8366-1.patch
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269920#comment-13269920 ]

Suresh Srinivas commented on HADOOP-8230:
-----------------------------------------

bq. Do you think there many users who'd want to do this Suresh?

There are several clusters that I support that do not use sync and currently run with append turned off.

bq. I'd think the number few and if there any still conscious this option even exists, they are probably suffering from the FUD that sync is buggy/broke. We should help them get over their misconception?

I agree that the code being enabled has been stable for some time, which is the main reason it was ported to 0.20.205. However, I would like to retain the existing behavior and not enable a change unnecessarily on these clusters. This avoids having to worry about, or spend time looking at, any bugs or changed behavior that might crop up. For these kinds of changes (see the several token-related changes that happened in 1.x), I have always advocated adding a flag so existing deployments can stay unaffected. I am asking for the same here. It is all the more important given this patch removed an option that existed to turn off new code.

bq. if you feel strongly that we should have a config option that let's people keep the previous/broken sync behavior go for it

The need for an option is a comment on the patch committed in this jira. Sorry I could not comment quickly enough, as this patch was committed with a short turnaround time. I think it should be addressed as a subsequent patch for this jira and not as a separate optional item. Alternatively, we could revert this change and rework it to add a flag.
[jira] [Updated] (HADOOP-6546) BloomMapFile can return false negatives
[ https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-6546:
------------------------------------

    Target Version/s: 1.1.0

BloomMapFile can return false negatives
---------------------------------------

                 Key: HADOOP-6546
                 URL: https://issues.apache.org/jira/browse/HADOOP-6546
             Project: Hadoop Common
          Issue Type: Bug
          Components: io
    Affects Versions: 0.20.1
            Reporter: Clark Jefcoat
            Assignee: Clark Jefcoat
             Fix For: 0.21.0
         Attachments: HADOOP-6546.patch

BloomMapFile can return false negatives when using keys of varying sizes. If the amount of data written by the write() method of your key class differs between instances of your key, your BloomMapFile may return false negatives.
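The varying-key-size failure mode can be sketched with a toy example. This is an illustrative guess at the mechanism, not the actual BloomMapFile code; the class, buffer, and method names are invented for the sketch. If a reused, growable serialization buffer is hashed in full rather than only up to its valid length, a short key written after a longer one hashes leftover bytes too, so the same logical key maps to different bloom-filter bits:

```java
import java.util.Arrays;

public class StaleBufferDemo {
    // A reused, growable output buffer, in the style of DataOutputBuffer.
    static byte[] buf = new byte[16];
    static int len;

    // Simulate a key's write() method serializing into the reused buffer.
    static void write(byte[] key) {
        System.arraycopy(key, 0, buf, 0, key.length);
        len = key.length;
    }

    // Buggy bloom-key derivation: hashes the whole backing array,
    // including stale bytes left over from a previous, longer key.
    static int hashWholeBuffer() {
        return Arrays.hashCode(buf);
    }

    // Fixed derivation: hashes only the valid bytes [0, len).
    static int hashValidBytes() {
        return Arrays.hashCode(Arrays.copyOf(buf, len));
    }

    public static void main(String[] args) {
        write(new byte[] {9});                 // short key into a clean buffer
        int cleanBuggy = hashWholeBuffer();
        int cleanFixed = hashValidBytes();

        write(new byte[] {1, 2, 3, 4});        // a longer key dirties the buffer
        write(new byte[] {9});                 // same short key again

        // The buggy hash differs for the same logical key, so the filter may be
        // probed with different bits than were set -> a false negative.
        System.out.println(cleanBuggy != hashWholeBuffer());  // true
        System.out.println(cleanFixed == hashValidBytes());   // true
    }
}
```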
[jira] [Updated] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues
[ https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-8371:
------------------------------------

    Description: See the next comment for details.

(was:
h1. Test Setup

All tests were done on a single-node cluster that runs the namenode, secondarynamenode, and datanode, all on one machine, running Ubuntu 12.04.
/usr/local/hadoop/ is a soft link to /usr/local/hadoop-0.20.203.0/
/usr/local/hadoop-1.0.1 contains the upgrade version.

h1. Version - 0.20.203.0

* Formatted name node.
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Copied a few test files into HDFS.
* Output from the fs -lsr / command:
{quote}
hduser@ruff790:/usr/local/hadoop/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 /test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 /test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* Executed hadoop dfsadmin -finalizeUpgrade (I do not think this is required, but I do not think it should matter either).
* Stopped DFS by executing stop-dfs.sh

h1. Version - 1.0.1

h2. Upgrade
* Tried starting DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh
* As expected, the namenode start failed due to a version mismatch.
{quote}
2012-05-08 08:22:38,166 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. java.io.IOException: File system image contains an old layout version -31. An upgrade to version -32 is required. Please restart NameNode with -upgrade option.
{quote}
* Ran /usr/local/hadoop-1.0.1/bin/stop-dfs.sh to stop the datanode and secondarynamenode.
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -upgrade
* Checked upgrade status by calling /usr/local/hadoop-1.0.1/bin/hadoop dfsadmin -upgradeProgress status
{quote}
Upgrade for version -32 has been completed. Upgrade is not finalized.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
#Tue May 08 08:25:51 EDT 2012
namespaceID=350250898
cTime=1336479951669
storageType=NAME_NODE
layoutVersion=-32
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS.
* Stopped DFS.

h2. Rollback
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -rollback
* As per the contents of hadoop-hduser-namenode-ruff790.log, the rollback seems to have succeeded.
{quote}
2012-05-08 08:37:41,799 INFO org.apache.hadoop.hdfs.server.common.Storage: Rolling back storage directory /usr/local/app/hadoop/tmp/dfs/name. new LV = -31; new CTime = 0
2012-05-08 08:37:41,801 INFO org.apache.hadoop.hdfs.server.common.Storage: Rollback of /usr/local/app/hadoop/tmp/dfs/name is complete.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:37:42 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS:
{quote}
hduser@ruff790:/usr/local/hadoop-1.0.1/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 /test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 /test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* However, at this point I could not browse the file system from the web UI. Then I realized that the datanode is not actually running. From the datanode log file, it seems it had shut down during the rollback process.
{quote}
2012-05-08 08:37:57,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Unregistered data node: 127.0.0.1:50010 at org.apache.hadoop.hdfs.server.namenode.NameNode.verifyRequest(NameNode.java:1077)
{quote}
* So I ran stop-dfs.sh to shut down the namenode and secondarynamenode.
* The next start-dfs.sh fails to start)
[jira] [Commented] (HADOOP-6546) BloomMapFile can return false negatives
[ https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270731#comment-13270731 ]

Suresh Srinivas commented on HADOOP-6546:
-----------------------------------------

I committed this patch to branch-1. It should be available in release 1.1.
[jira] [Updated] (HADOOP-6546) BloomMapFile can return false negatives
[ https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-6546:
------------------------------------

    Fix Version/s: 1.1.0
[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-8372:
------------------------------------

    Status: Patch Available (was: Open)

normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-8372
                 URL: https://issues.apache.org/jira/browse/HADOOP-8372
             Project: Hadoop Common
          Issue Type: Bug
          Components: io, util
    Affects Versions: 0.23.0, 1.0.0
            Reporter: Junping Du
            Assignee: Junping Du
         Attachments: HADOOP-8372.patch

A valid host name can start with a numeric value (refer to RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible in a production environment that users name their hadoop nodes 1hosta, 2hostb, etc. But normalizeHostName() will recognize such a hostname as an IP address and return it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping only contains their IPs (without hostnames).
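The misclassification can be illustrated with a toy check. This is hypothetical code, not the actual NetUtils implementation: a naive "starts with a digit" test treats 1hosta as an IP literal and skips DNS resolution, while a stricter IPv4-literal test (in the spirit of the JDK's IPAddressUtil#isIPv4LiteralAddress mentioned below) sends it on to be resolved:

```java
public class HostNameCheck {
    // Naive test resembling the reported bug: anything starting with a digit
    // is treated as an IP literal, so "1hosta" would never be resolved via DNS.
    static boolean looksLikeIpNaive(String name) {
        return !name.isEmpty() && Character.isDigit(name.charAt(0));
    }

    // Stricter IPv4-literal test: exactly four dot-separated decimal parts,
    // each in the range 0-255. Hostnames like "1hosta" fail this and fall
    // through to DNS resolution.
    static boolean isIpv4Literal(String name) {
        String[] parts = name.split("\\.", -1);
        if (parts.length != 4) {
            return false;
        }
        for (String part : parts) {
            if (part.isEmpty() || part.length() > 3) {
                return false;
            }
            for (int i = 0; i < part.length(); i++) {
                if (!Character.isDigit(part.charAt(i))) {
                    return false;
                }
            }
            if (Integer.parseInt(part) > 255) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeIpNaive("1hosta"));   // true: wrongly kept as-is
        System.out.println(isIpv4Literal("1hosta"));      // false: goes to DNS resolution
        System.out.println(isIpv4Literal("10.0.0.1"));    // true: returned directly
    }
}
```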
[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271540#comment-13271540 ]

Suresh Srinivas commented on HADOOP-8372:
-----------------------------------------

I was concerned about the performance implications of this change. However, in the Sun JDK, IPAddressUtil#isIPv4LiteralAddress() is called, which does a more complete check for an IP address before doing a lookup.

Please fix the tests. While at it, please indent the code in the test correctly. Also, optionally, you can reduce a line to {{return InetAddress.getByName(name).getHostAddress();}}
[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271561#comment-13271561 ]

Suresh Srinivas commented on HADOOP-8372:
-----------------------------------------

BTW, can you describe the test better, especially the cases 3w.org - xx.xx.xx.xx and UnknownHost - UnknownHost?
[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271592#comment-13271592 ] Suresh Srinivas commented on HADOOP-8372: - bq. 3w.org - xx.xx.xx.xx, it is a public website start with numeric that can be resolved by DNS. This could become an issue. But we could fix it later. Sorry I was not clear. What I meant in my previous comment was, you could add comments to make the test easier to understand. For example, you could method level comment to say {{ /** Test for {@link NetUtils#normalizeHostNames }}. Also you could add a comment saying, when ipaddress is normalized, same address is expected in return and for a resolvable hostname, ipaddress it resolved is expected in return. The reason why I am suggesting this is - our tests are poorly documented. When adding new features, lot more time goes into understanding tests and fixing them than implementing the feature itself. normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character Key: HADOOP-8372 URL: https://issues.apache.org/jira/browse/HADOOP-8372 Project: Hadoop Common Issue Type: Bug Components: io, util Affects Versions: 1.0.0, 0.23.0 Reporter: Junping Du Assignee: Junping Du Attachments: HADOOP-8372.patch A valid host name can start with numeric value (You can refer RFC952, RFC1123 or http://www.zytrax.com/books/dns/apa/names.html), so it is possible in a production environment, user name their hadoop nodes as: 1hosta, 2hostb, etc. But normalizeHostName() will recognise this hostname as IP address and return directly rather than resolving the real IP address. These nodes will be failed to get correct network topology if topology script/TableMapping only contains their IPs (without hostname). -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
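The numeric-hostname problem described above can be sketched in a few lines. This is an illustrative sketch only, not the actual org.apache.hadoop.net.NetUtils code: a naive "starts with a digit" check misclassifies legal hostnames such as "1hosta" as IP literals, so they are returned as-is instead of being resolved through DNS.

```java
// Illustrative sketch of the bug discussed above; NOT the actual NetUtils code.
public class HostNormalizer {

    // Buggy heuristic: anything starting with a digit is assumed to be an IP,
    // so "1hosta" is wrongly returned without DNS resolution.
    static boolean looksLikeIpNaive(String name) {
        return !name.isEmpty() && Character.isDigit(name.charAt(0));
    }

    // Safer check: only a full dotted quad of digits is treated as an IP literal.
    static boolean isIpv4Literal(String name) {
        return name.matches("\\d{1,3}(\\.\\d{1,3}){3}");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeIpNaive("1hosta"));  // true: DNS wrongly skipped
        System.out.println(isIpv4Literal("1hosta"));     // false: would be resolved
        System.out.println(isIpv4Literal("10.0.0.1"));   // true: kept as-is
    }
}
```

With the stricter check, hostnames like 1hosta fall through to normal DNS resolution, which is what the patch needs to achieve.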
[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271594#comment-13271594 ] Suresh Srinivas commented on HADOOP-8372: - bq. This could become an issue. But we could fix it later. By this I mean that if resolving the address using DNS fails for some reason, we could fix the test. So the code that you have added seems fine to me. normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character Key: HADOOP-8372 URL: https://issues.apache.org/jira/browse/HADOOP-8372 Project: Hadoop Common Issue Type: Bug Components: io, util Affects Versions: 1.0.0, 0.23.0 Reporter: Junping Du Assignee: Junping Du Attachments: HADOOP-8372.patch A valid host name can start with a numeric character (see RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible that in a production environment users name their Hadoop nodes as 1hosta, 2hostb, etc. But normalizeHostName() will recognise such a hostname as an IP address and return it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping only contains their IPs (without hostnames).
[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271632#comment-13271632 ] Suresh Srinivas commented on HADOOP-8372: - Junping, the patch looks good. Could you please remove the TODO comment. Also, can you please use two spaces for indenting instead of tabs in the tests. normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character Key: HADOOP-8372 URL: https://issues.apache.org/jira/browse/HADOOP-8372 Project: Hadoop Common Issue Type: Bug Components: io, util Affects Versions: 1.0.0, 0.23.0 Reporter: Junping Du Assignee: Junping Du Attachments: HADOOP-8372-v2.patch, HADOOP-8372.patch A valid host name can start with a numeric character (see RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible that in a production environment users name their Hadoop nodes as 1hosta, 2hostb, etc. But normalizeHostName() will recognise such a hostname as an IP address and return it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping only contains their IPs (without hostnames).
[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8372: Attachment: HADOOP-8372.patch Minor edits to the patch: # Removed unused imports in TestNetUtils.java (unrelated to the change from this patch) # Added missing } to {{Test for {@link NetUtils#normalizeHostNames}} normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character Key: HADOOP-8372 URL: https://issues.apache.org/jira/browse/HADOOP-8372 Project: Hadoop Common Issue Type: Bug Components: io, util Affects Versions: 1.0.0, 0.23.0 Reporter: Junping Du Assignee: Junping Du Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, HADOOP-8372.patch, HADOOP-8372.patch A valid host name can start with a numeric character (see RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible that in a production environment users name their Hadoop nodes as 1hosta, 2hostb, etc. But normalizeHostName() will recognise such a hostname as an IP address and return it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping only contains their IPs (without hostnames).
[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8372: Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch. Thank you Junping. normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character Key: HADOOP-8372 URL: https://issues.apache.org/jira/browse/HADOOP-8372 Project: Hadoop Common Issue Type: Bug Components: io, util Affects Versions: 1.0.0, 0.23.0 Reporter: Junping Du Assignee: Junping Du Fix For: 3.0.0 Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, HADOOP-8372.patch, HADOOP-8372.patch A valid host name can start with a numeric character (see RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible that in a production environment users name their Hadoop nodes as 1hosta, 2hostb, etc. But normalizeHostName() will recognise such a hostname as an IP address and return it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping only contains their IPs (without hostnames).
[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
[ https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8372: Fix Version/s: 2.0.0 I committed the patch to branch-2 as well. normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character Key: HADOOP-8372 URL: https://issues.apache.org/jira/browse/HADOOP-8372 Project: Hadoop Common Issue Type: Bug Components: io, util Affects Versions: 1.0.0, 0.23.0 Reporter: Junping Du Assignee: Junping Du Fix For: 2.0.0, 3.0.0 Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, HADOOP-8372.patch, HADOOP-8372.patch A valid host name can start with a numeric character (see RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible that in a production environment users name their Hadoop nodes as 1hosta, 2hostb, etc. But normalizeHostName() will recognise such a hostname as an IP address and return it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping only contains their IPs (without hostnames).
[jira] [Resolved] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues
[ https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HADOOP-8371. - Resolution: Not A Problem Assignee: Suresh Srinivas Rollback is not a problem. However, I created a related bug HDFS-3393 to track the issue where rollback was allowed on the newer release. Hadoop 1.0.1 release - DFS rollback issues -- Key: HADOOP-8371 URL: https://issues.apache.org/jira/browse/HADOOP-8371 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.0.1 Environment: All tests were done on a single node cluster, that runs namenode, secondarynamenode, datanode, all on one machine, running Ubuntu 12.04 Reporter: Giri Assignee: Suresh Srinivas Priority: Minor Labels: hdfs See the next comment for details.
[jira] [Commented] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader
[ https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13272928#comment-13272928 ] Suresh Srinivas commented on HADOOP-8366: - Comments: # Minor: remove the empty lines after int callId = response.getCallId(); # Minor: remove the empty line in Server#setupResponse before if (status == RpcStatus.SUCCESS) # RpcPayloadHeader.proto #* Please rename RpcStatus to RpcStatusProto. Also it would be nice to delete unnecessary lines. #* We should make both callId and status mandatory #* repsonse_: change to response Please remember to delete Status.java when you commit the code. +1 for the patch with these changes. Use ProtoBuf for RpcResponseHeader -- Key: HADOOP-8366 URL: https://issues.apache.org/jira/browse/HADOOP-8366 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.0 Reporter: Sanjay Radia Assignee: Sanjay Radia Priority: Blocker Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, hadoop-8366-3.patch
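Putting the naming and optionality comments together, the response header might look roughly like this. This is a sketch of the reviewed direction only, with illustrative field numbers and enum values; it is not the definition that was actually committed:

```proto
// Illustrative sketch mirroring the review feedback above -- not the committed file.
enum RpcStatusProto {   // renamed from RpcStatus, per review
  SUCCESS = 0;
  ERROR = 1;
  FATAL = 2;
}

message RpcResponseHeaderProto {
  required uint32 callId = 1;           // mandatory, per review
  required RpcStatusProto status = 2;   // mandatory, per review
}
```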
[jira] [Commented] (HADOOP-8367) Better document the declaringClassProtocolName in the rpc headers better
[ https://issues.apache.org/jira/browse/HADOOP-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13276902#comment-13276902 ] Suresh Srinivas commented on HADOOP-8367: - Minor comments: # ProtobufRpcEngine.java #* Typo: differnt #* The newly added comment is not very clear. Can you please add more information about what you mean by metaProtocols. Also the sentence does not read right. It might make sense to capture the same comments from hadoop_rpc.proto here. # hadoop_rpc.proto: some lines go beyond 80 characters. Also the last sentence in the newly added comment does not read right. Better document the declaringClassProtocolName in the rpc headers better Key: HADOOP-8367 URL: https://issues.apache.org/jira/browse/HADOOP-8367 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.0 Reporter: Sanjay Radia Assignee: Sanjay Radia Attachments: hadoop-8367-1.patch
[jira] [Created] (HADOOP-8402) Add support for generating pdf clover report to 1.1 release
Suresh Srinivas created HADOOP-8402: --- Summary: Add support for generating pdf clover report to 1.1 release Key: HADOOP-8402 URL: https://issues.apache.org/jira/browse/HADOOP-8402 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 1.0.0 Reporter: Suresh Srinivas Priority: Minor Add support for generating clover PDF report.
[jira] [Updated] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release
[ https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8402: Summary: Add support for generating pdf clover report in 1.1 release (was: Add support for generating pdf clover report to 1.1 release) Add support for generating pdf clover report in 1.1 release --- Key: HADOOP-8402 URL: https://issues.apache.org/jira/browse/HADOOP-8402 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 1.0.0 Reporter: Suresh Srinivas Priority: Minor Attachments: HADOOP-8402.txt Add support for generating clover PDF report.
[jira] [Updated] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release
[ https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8402: Attachment: HADOOP-8402.txt Add support for generating pdf clover report in 1.1 release --- Key: HADOOP-8402 URL: https://issues.apache.org/jira/browse/HADOOP-8402 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 1.0.0 Reporter: Suresh Srinivas Priority: Minor Attachments: HADOOP-8402.txt Add support for generating clover PDF report.
[jira] [Comment Edited] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release
[ https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13276938#comment-13276938 ] Suresh Srinivas edited comment on HADOOP-8402 at 5/16/12 5:59 PM: -- I manually tested and ensured that clover_coverage.pdf is generated with clover report summary. was (Author: sureshms): I manually tested and ensure that clover_coverage.pdf is generated with clover report summary. Add support for generating pdf clover report in 1.1 release --- Key: HADOOP-8402 URL: https://issues.apache.org/jira/browse/HADOOP-8402 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 1.0.0 Reporter: Suresh Srinivas Priority: Minor Attachments: HADOOP-8402.txt Add support for generating clover PDF report.
[jira] [Commented] (HADOOP-8409) Address Hadoop path related issues on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13287428#comment-13287428 ] Suresh Srinivas commented on HADOOP-8409: - bq. In line with this, I spent quite a bit of time thinking about pros and cons to having Path object support backslash VS not. Both approaches have legitimate pros and cons. Once I sum them up on my end, I'll reply back. Please also look at the issues raised in HADOOP-8139 and the reasons why we did not support Windows paths on HDFS. Address Hadoop path related issues on Windows - Key: HADOOP-8409 URL: https://issues.apache.org/jira/browse/HADOOP-8409 Project: Hadoop Common Issue Type: Bug Components: fs, test, util Affects Versions: 1.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Attachments: HADOOP-8409-branch-1-win.patch Original Estimate: 168h Remaining Estimate: 168h There are multiple places in prod and test code where Windows paths are not handled properly. From a high level this could be summarized with: 1. Windows paths are not necessarily valid DFS paths (while Unix paths are) 2. Windows paths are not necessarily valid URIs (while Unix paths are) #1 causes a number of tests to fail because they implicitly assume that local paths are valid DFS paths (by extracting the DFS test path from, for example, the test.build.data property) #2 causes issues when URIs are directly created on path strings passed in by the user
[jira] [Updated] (HADOOP-8485) Don't hardcode Apache Hadoop 0.23 in the docs
[ https://issues.apache.org/jira/browse/HADOOP-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8485: Summary: Don't hardcode Apache Hadoop 0.23 in the docs (was: Don't harcode Apache Hadoop 0.23 in the docs) Don't hardcode Apache Hadoop 0.23 in the docs --- Key: HADOOP-8485 URL: https://issues.apache.org/jira/browse/HADOOP-8485 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Minor Attachments: hadoop-8485.txt The docs currently hardcode the string Apache Hadoop 0.23 and hadoop-0.20.205 in the main page.
[jira] [Commented] (HADOOP-8510) Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp
[ https://issues.apache.org/jira/browse/HADOOP-8510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295392#comment-13295392 ] Suresh Srinivas commented on HADOOP-8510: - Good idea. bq. I am pretty confused by which version to patch Submit the patch for the trunk - svn repo: https://svn.apache.org/repos/asf/hadoop/common/trunk. The jsps to change are in: hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/ hadoop-mapreduce-project/src/webapps/job Once this is committed, we can put this change into the 1.1 release. Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp --- Key: HADOOP-8510 URL: https://issues.apache.org/jira/browse/HADOOP-8510 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.1 Reporter: Lewis John McGibbney Priority: Trivial A simple auto refresh switch would be nice from within the webapp. I am pretty confused by which version to patch, I've looked in trunk and find myself even more confused. If someone would be kind enough to point out where I can check out code and patch to include this issue then I'll happily submit the trivial patch.
[jira] [Commented] (HADOOP-8510) Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp
[ https://issues.apache.org/jira/browse/HADOOP-8510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295755#comment-13295755 ] Suresh Srinivas commented on HADOOP-8510: - bq. MAPREDUCE-3842 This is related to the new YARN MapReduce and unrelated to the files you wanted to change. It should be okay to change the jsps Lewis pointed out. Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp --- Key: HADOOP-8510 URL: https://issues.apache.org/jira/browse/HADOOP-8510 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.1 Reporter: Lewis John McGibbney Priority: Trivial A simple auto refresh switch would be nice from within the webapp. I am pretty confused by which version to patch, I've looked in trunk and find myself even more confused. If someone would be kind enough to point out where I can check out code and patch to include this issue then I'll happily submit the trivial patch.
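One minimal way to add such a switch to the jsps mentioned above would be a conditional meta-refresh tag. This is a sketch only; the refresh request parameter is hypothetical and this is not any committed patch:

```jsp
<%-- Illustrative sketch: emit a meta-refresh tag when the page is requested
     as e.g. dfshealth.jsp?refresh=30. The parameter name is hypothetical. --%>
<%
  String refresh = request.getParameter("refresh");
  if (refresh != null && refresh.matches("\\d+")) {
%>
  <meta http-equiv="refresh" content="<%= refresh %>">
<%
  }
%>
```

Validating the parameter against a digits-only pattern keeps user input from being echoed into the page unescaped.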
[jira] [Created] (HADOOP-8533) Remove Parallel Call in IPC
Suresh Srinivas created HADOOP-8533: --- Summary: Remove Parallel Call in IPC Key: HADOOP-8533 URL: https://issues.apache.org/jira/browse/HADOOP-8533 Project: Hadoop Common Issue Type: Bug Components: ipc Reporter: Suresh Srinivas Assignee: Suresh Srinivas Fix For: 3.0.0 From what I know, I do not think any one uses Parallel Call. I also think it is not tested very well.
[jira] [Commented] (HADOOP-8530) Potential deadlock in IPC
[ https://issues.apache.org/jira/browse/HADOOP-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13401663#comment-13401663 ] Suresh Srinivas commented on HADOOP-8530: - Parallel call is not used. So this may not be an important problem to fix. Created HADOOP-8533 to fix this issue. Potential deadlock in IPC - Key: HADOOP-8530 URL: https://issues.apache.org/jira/browse/HADOOP-8530 Project: Hadoop Common Issue Type: Bug Components: ipc Affects Versions: 1.0.3, 2.0.0-alpha Reporter: Tom White Attachments: 1_jcarder_result_0.dot.png This cycle (see attached image, and explanation here: http://www.jcarder.org/manual.html#analysis) was found with jcarder in branch-1 (affects trunk too).
[jira] [Commented] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability
[ https://issues.apache.org/jira/browse/HADOOP-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13402358#comment-13402358 ] Suresh Srinivas commented on HADOOP-8059: - Brandon, I was thinking of capturing information along the following lines: We should add some of the following comments: # InterfaceAudience #* All public classes must have an InterfaceAudience annotation. Public classes that are not marked with this annotation must be considered by default as InterfaceAudience#Private. #* External applications must only use classes that are marked InterfaceAudience#Public. Avoid using non-public classes, as these classes could be removed or changed in incompatible ways. #* Internal projects must only use classes that are marked InterfaceAudience#LimitedPrivate or InterfaceAudience#Public. #* Methods may have a different annotation that is more restrictive than the audience classification of the class. Example: a class might be InterfaceAudience#Public, but a method may be InterfaceAudience#LimitedPrivate # Interface stability #* All classes that are annotated with InterfaceAudience#Public or LimitedPrivate must have an InterfaceStability annotation. #* Classes that are InterfaceAudience#Private are to be considered unstable unless a different InterfaceStability annotation states otherwise. #* Incompatible changes must not be made to classes marked as stable. Add javadoc to InterfaceAudience and InterfaceStability --- Key: HADOOP-8059 URL: https://issues.apache.org/jira/browse/HADOOP-8059 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 0.24.0 Reporter: Suresh Srinivas Assignee: Brandon Li Attachments: HADOOP-8059.patch InterfaceAudience and InterfaceStability javadoc is incomplete. The details from HADOOP-5073.
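The guidelines above can be illustrated with a short example. The class and method names here are hypothetical; the annotations themselves are the real org.apache.hadoop.classification ones, so compiling this sketch requires the Hadoop annotations jar on the classpath:

```java
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Hypothetical class illustrating the annotation guidelines above.
// The class as a whole is public API and must not change incompatibly.
@InterfaceAudience.Public
@InterfaceStability.Stable
public class ExampleUtil {

  // A member may carry a more restrictive annotation than its class:
  // the class is Public, but this helper is only for the named internal
  // projects and may change without notice.
  @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
  @InterfaceStability.Unstable
  public static void internalHelper() {
    // implementation omitted
  }
}
```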
[jira] [Updated] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability
[ https://issues.apache.org/jira/browse/HADOOP-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8059: Affects Version/s: (was: 0.24.0) 3.0.0 2.0.0-alpha Status: Patch Available (was: Open) Add javadoc to InterfaceAudience and InterfaceStability --- Key: HADOOP-8059 URL: https://issues.apache.org/jira/browse/HADOOP-8059 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 2.0.0-alpha, 3.0.0 Reporter: Suresh Srinivas Assignee: Brandon Li Attachments: HADOOP-8059.patch, HADOOP-8059.patch InterfaceAudience and InterfaceStability javadoc is incomplete. The details from HADOOP-5073.
[jira] [Updated] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability
[ https://issues.apache.org/jira/browse/HADOOP-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8059: Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) +1 for the patch. I committed it. Thank you Brandon. Add javadoc to InterfaceAudience and InterfaceStability --- Key: HADOOP-8059 URL: https://issues.apache.org/jira/browse/HADOOP-8059 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 2.0.0-alpha, 3.0.0 Reporter: Suresh Srinivas Assignee: Brandon Li Fix For: 3.0.0 Attachments: HADOOP-8059.patch, HADOOP-8059.patch InterfaceAudience and InterfaceStability javadoc is incomplete. The details from HADOOP-5073.
[jira] [Commented] (HADOOP-8533) Remove Parallel Call in IPC
[ https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13405383#comment-13405383 ] Suresh Srinivas commented on HADOOP-8533: - +1 for the patch. Remove Parallel Call in IPC --- Key: HADOOP-8533 URL: https://issues.apache.org/jira/browse/HADOOP-8533 Project: Hadoop Common Issue Type: Bug Components: ipc Reporter: Suresh Srinivas Assignee: Brandon Li Fix For: 3.0.0 Attachments: HADOOP-8533.patch From what I know, I do not think any one uses Parallel Call. I also think it is not tested very well.
[jira] [Updated] (HADOOP-8533) Remove Parallel Call in IPC
[ https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8533: Description: From what I know, I do not think anyone uses Parallel Call. I also think it is not tested very well. (was: From what I know, I do not think any one uses Parallel Call. I also think it is not tested very well.) Affects Version/s: 3.0.0 1.0.0 2.0.0-alpha Issue Type: Improvement (was: Bug) Remove Parallel Call in IPC --- Key: HADOOP-8533 URL: https://issues.apache.org/jira/browse/HADOOP-8533 Project: Hadoop Common Issue Type: Improvement Components: ipc Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0 Reporter: Suresh Srinivas Assignee: Brandon Li Fix For: 3.0.0 Attachments: HADOOP-8533.patch From what I know, I do not think anyone uses Parallel Call. I also think it is not tested very well.
[jira] [Updated] (HADOOP-8533) Remove Parallel Call in IPC
[ https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8533: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch. Thank you Brandon. Remove Parallel Call in IPC --- Key: HADOOP-8533 URL: https://issues.apache.org/jira/browse/HADOOP-8533 Project: Hadoop Common Issue Type: Improvement Components: ipc Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0 Reporter: Suresh Srinivas Assignee: Brandon Li Fix For: 3.0.0 Attachments: HADOOP-8533.patch From what I know, I do not think anyone uses Parallel Call. I also think it is not tested very well.
[jira] [Updated] (HADOOP-8533) Remove Parallel Call in IPC
[ https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8533: Fix Version/s: 2.0.1-alpha Release Note: Merged the change to branch-2 Remove Parallel Call in IPC --- Key: HADOOP-8533 URL: https://issues.apache.org/jira/browse/HADOOP-8533 Project: Hadoop Common Issue Type: Improvement Components: ipc Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0 Reporter: Suresh Srinivas Assignee: Brandon Li Fix For: 2.0.1-alpha, 3.0.0 Attachments: HADOOP-8533.patch From what I know, I do not think anyone uses Parallel Call. I also think it is not tested very well.
[jira] [Updated] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods
[ https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8434: Attachment: HADOOP-8434-2.patch TestConfiguration currently has no tests for direct setter methods -- Key: HADOOP-8434 URL: https://issues.apache.org/jira/browse/HADOOP-8434 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Harsh J Assignee: madhukara phatak Labels: newbie Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, HADOOP-8434.patch Jan van der Lugt noticed this on HADOOP-8415. bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be better to add all of those at the same time. Would be good to have (coverage-wise first, regression-wise second) explicit tests for each of the setter methods, although other projects' tests do test this extensively. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods
[ https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8434: Issue Type: Test (was: Bug) TestConfiguration currently has no tests for direct setter methods -- Key: HADOOP-8434 URL: https://issues.apache.org/jira/browse/HADOOP-8434 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0 Reporter: Harsh J Assignee: madhukara phatak Labels: newbie Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, HADOOP-8434.patch Jan van der Lugt noticed this on HADOOP-8415. bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be better to add all of those at the same time. Would be good to have (coverage-wise first, regression-wise second) explicit tests for each of the setter methods, although other projects' tests do test this extensively. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods
[ https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8434: Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch. Thank you Madhukara for providing the patch. TestConfiguration currently has no tests for direct setter methods -- Key: HADOOP-8434 URL: https://issues.apache.org/jira/browse/HADOOP-8434 Project: Hadoop Common Issue Type: Test Affects Versions: 3.0.0 Reporter: Harsh J Assignee: madhukara phatak Labels: newbie Fix For: 3.0.0 Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, HADOOP-8434.patch Jan van der Lugt noticed this on HADOOP-8415. bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be better to add all of those at the same time. Would be good to have (coverage-wise first, regression-wise second) explicit tests for each of the setter methods, although other projects' tests do test this extensively. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
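The setter round-trips these new tests cover can be sketched with a minimal stand-in. MiniConf below is hypothetical; the real tests exercise setInt, setFloat, setBoolean, and friends on org.apache.hadoop.conf.Configuration:

```java
import java.util.Properties;

/** Hypothetical stand-in for Configuration, showing the typed
 *  setter/getter round-trips that explicit tests would verify. */
public class MiniConf {
    private final Properties props = new Properties();

    public void setInt(String name, int value) {
        props.setProperty(name, Integer.toString(value));
    }
    public int getInt(String name, int defaultValue) {
        String s = props.getProperty(name);
        return s == null ? defaultValue : Integer.parseInt(s);
    }
    public void setFloat(String name, float value) {
        props.setProperty(name, Float.toString(value));
    }
    public float getFloat(String name, float defaultValue) {
        String s = props.getProperty(name);
        return s == null ? defaultValue : Float.parseFloat(s);
    }
    public void setBoolean(String name, boolean value) {
        props.setProperty(name, Boolean.toString(value));
    }
    public boolean getBoolean(String name, boolean defaultValue) {
        String s = props.getProperty(name);
        return s == null ? defaultValue : Boolean.parseBoolean(s);
    }
}
```

Each test simply sets a value through the typed setter and asserts the typed getter returns it, plus a default-value case for a missing key.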
[jira] [Commented] (HADOOP-7818) DiskChecker#checkDir should fail if the directory is not executable
[ https://issues.apache.org/jira/browse/HADOOP-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406027#comment-13406027 ] Suresh Srinivas commented on HADOOP-7818: - Eli, since you have the context of this jira, can you please review and commit the patch, if you have time? If you are busy, I will commit the patch. DiskChecker#checkDir should fail if the directory is not executable --- Key: HADOOP-7818 URL: https://issues.apache.org/jira/browse/HADOOP-7818 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 0.20.205.0, 0.23.0, 0.24.0 Reporter: Eli Collins Assignee: madhukara phatak Priority: Minor Attachments: HADOOP-7818-1.patch, HADOOP-7818-2.patch, HADOOP-7818.patch DiskChecker#checkDir fails if a directory can't be created, read, or written but does not fail if the directory exists and is not executable. This causes subsequent code to think the directory is OK but later fail due to an inability to access the directory (eg see MAPREDUCE-2921). I propose checkDir fails if the directory is not executable. Looking at the uses, this should be fine, I think it was ignored because checkDir is often used to create directories and it creates executable directories. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value
[ https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8362: Attachment: HADOOP-8362.9.patch Minor changes to fix indentation issues. Madhukara, in future, please follow the coding guidelines as Colin had suggested. Improve exception message when Configuration.set() is called with a null key or value - Key: HADOOP-8362 URL: https://issues.apache.org/jira/browse/HADOOP-8362 Project: Hadoop Common Issue Type: Improvement Components: conf Affects Versions: 2.0.0-alpha Reporter: Todd Lipcon Assignee: madhukara phatak Priority: Trivial Labels: newbie Attachments: HADOOP-8362-1.patch, HADOOP-8362-2.patch, HADOOP-8362-3.patch, HADOOP-8362-4.patch, HADOOP-8362-5.patch, HADOOP-8362-6.patch, HADOOP-8362-7.patch, HADOOP-8362-8.patch, HADOOP-8362.9.patch, HADOOP-8362.patch Currently, calling Configuration.set(...) with a null value results in a NullPointerException within Properties.setProperty. We should check for null key/value and throw a better exception. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value
[ https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8362: Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed the patch. Thank you Madhukara. Improve exception message when Configuration.set() is called with a null key or value - Key: HADOOP-8362 URL: https://issues.apache.org/jira/browse/HADOOP-8362 Project: Hadoop Common Issue Type: Improvement Components: conf Affects Versions: 2.0.0-alpha Reporter: Todd Lipcon Assignee: madhukara phatak Priority: Trivial Labels: newbie Fix For: 3.0.0 Attachments: HADOOP-8362-1.patch, HADOOP-8362-2.patch, HADOOP-8362-3.patch, HADOOP-8362-4.patch, HADOOP-8362-5.patch, HADOOP-8362-6.patch, HADOOP-8362-7.patch, HADOOP-8362-8.patch, HADOOP-8362.9.patch, HADOOP-8362.patch Currently, calling Configuration.set(...) with a null value results in a NullPointerException within Properties.setProperty. We should check for null key/value and throw a better exception. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
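The null-key/value guard described above can be sketched as follows. ConfigGuard is a hypothetical stand-in for Configuration, and the exception messages are illustrative, not the committed wording:

```java
import java.util.Properties;

/** Sketch of the null check added in front of Properties.setProperty,
 *  so callers get a descriptive exception instead of a bare NPE. */
public class ConfigGuard {
    private final Properties props = new Properties();

    // Hypothetical stand-in for Configuration.set(String, String).
    public void set(String name, String value) {
        if (name == null) {
            throw new IllegalArgumentException("Property name must not be null");
        }
        if (value == null) {
            throw new IllegalArgumentException(
                "The value of property " + name + " must not be null");
        }
        props.setProperty(name, value);
    }

    public String get(String name) {
        return props.getProperty(name);
    }
}
```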
[jira] [Commented] (HADOOP-7818) DiskChecker#checkDir should fail if the directory is not executable
[ https://issues.apache.org/jira/browse/HADOOP-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406118#comment-13406118 ] Suresh Srinivas commented on HADOOP-7818: - Please follow the coding conventions - http://wiki.apache.org/hadoop/CodeReviewChecklist . I fixed these in the other patches. Some examples: # Please fix the indentation - {{if (!dir.canExecute)}} has an extra white space preceding it # Please rename _checkDirs to checkDirs # Please use a space before and after the + operator # The catch should be on the same line as the closing brace of the try block DiskChecker#checkDir should fail if the directory is not executable --- Key: HADOOP-7818 URL: https://issues.apache.org/jira/browse/HADOOP-7818 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 0.20.205.0, 0.23.0, 0.24.0 Reporter: Eli Collins Assignee: madhukara phatak Priority: Minor Attachments: HADOOP-7818-1.patch, HADOOP-7818-2.patch, HADOOP-7818.patch DiskChecker#checkDir fails if a directory can't be created, read, or written, but does not fail if the directory exists and is not executable. This causes subsequent code to think the directory is OK but later fail due to an inability to access the directory (e.g. see MAPREDUCE-2921). I propose checkDir fail if the directory is not executable. Looking at the uses, this should be fine; I think it was ignored because checkDir is often used to create directories, and it creates executable directories. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
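The proposed executable check can be sketched like this. This is a simplified stand-in for DiskChecker#checkDir, not the committed code; the real method lives in org.apache.hadoop.util.DiskChecker and does more than shown here:

```java
import java.io.File;
import java.io.IOException;

/** Sketch of checkDir with the executable check this issue proposes.
 *  Without the last check, a non-executable directory passes here but
 *  fails later when code tries to access entries inside it. */
public class DirCheck {
    public static void checkDir(File dir) throws IOException {
        if (!dir.isDirectory() && !dir.mkdirs()) {
            throw new IOException("Cannot create directory: " + dir);
        }
        if (!dir.canRead()) {
            throw new IOException("Directory is not readable: " + dir);
        }
        if (!dir.canWrite()) {
            throw new IOException("Directory is not writable: " + dir);
        }
        // The new check: fail fast on non-executable directories.
        if (!dir.canExecute()) {
            throw new IOException("Directory is not executable: " + dir);
        }
    }
}
```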
[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.
[ https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406123#comment-13406123 ] Suresh Srinivas commented on HADOOP-8552: - The username is in the log entries, right? Can you describe the problem in more detail? Conflict: Same security.log.file for multiple users. - Key: HADOOP-8552 URL: https://issues.apache.org/jira/browse/HADOOP-8552 Project: Hadoop Common Issue Type: Bug Components: conf, security Affects Versions: 1.0.3, 2.0.0-alpha Reporter: Karthik Kambatla In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. In the presence of multiple users, this can lead to a potential conflict. Adding the username to the log file name would avoid this scenario. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8564) Create a Windows native InputStream class to address datanode concurrent reading and writing issue
[ https://issues.apache.org/jira/browse/HADOOP-8564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13407491#comment-13407491 ] Suresh Srinivas commented on HADOOP-8564: - +1 for the second option. This will also allow adding future optimizations at the stream level on Windows, similar to the ones done for Linux. Create a Windows native InputStream class to address datanode concurrent reading and writing issue -- Key: HADOOP-8564 URL: https://issues.apache.org/jira/browse/HADOOP-8564 Project: Hadoop Common Issue Type: Bug Components: io Affects Versions: 1-win Reporter: Chuan Liu Assignee: Chuan Liu HDFS files are made up of blocks. First, let’s look at writing. When data is written to a datanode, an active or temporary file is created to receive packets. After the last packet for the block is received, we finalize the block. One step during finalization is to rename the block file to a new directory. The relevant code can be found via the call sequence: FSDataSet.finalizeBlockInternal - FSDir.addBlock. {code} if ( ! metaData.renameTo( newmeta ) || ! src.renameTo( dest ) ) { throw new IOException( "could not move files for " + b + " from tmp to " + dest.getAbsolutePath() ); } {code} Let’s then switch to reading. On HDFS, it is expected that the client can also read these unfinished blocks. So when read calls from the client reach the datanode, the datanode will open an input stream on the unfinished block file. The problem comes in when the file is opened for reading while the datanode receives the last packet from the client and tries to rename the finished block file. This operation will succeed on Linux, but not on Windows. The behavior can be modified on Windows to open the file with the FILE_SHARE_DELETE flag on, i.e. sharing the delete (including renaming) permission with other processes while opening the file. There is also a Java bug ([id 6357433|http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6357433]) reported a while back on this. 
However, since this behavior exists for Java on Windows since JDK 1.0, the Java developers do not want to break the backward compatibility on this behavior. Instead, a new file system API is proposed in JDK 7. As outlined in the [Java forum|http://www.java.net/node/645421] by the Java developer (kbr), there are three ways to fix the problem: # Use different mechanism in the application in dealing with files. # Create a new implementation of InputStream abstract class using Windows native code. # Patch JDK with a private patch that alters FileInputStream behavior. For the third option, it cannot fix the problem for users using Oracle JDK. We discussed some options for the first approach. For example one option is to use two phase renaming, i.e. first hardlink; then remove the old hardlink when read is finished. This option was thought to be rather pervasive. Another option discussed is to change the HDFS behavior on Windows by not allowing client reading unfinished blocks. However this behavior change is thought to be problematic and may affect other application build on top of HDFS. For all the reasons discussed above, we will use the second approach to address the problem. If there are better options to fix the problem, we would also like to hear about them. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
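As a side note on the options above: JDK 7's NIO.2 file API is generally understood to open files on Windows with FILE_SHARE_DELETE set, which is the same sharing mode the proposed native InputStream would request. A sketch of reading through that API follows; SharedRead is a hypothetical helper, not part of the patch, and branch-1's JDK 6 baseline is exactly why it is not an option there:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Opens a file via NIO.2 instead of FileInputStream. On Windows the
 *  underlying channel is opened with sharing modes that include delete,
 *  so a concurrent rename of the block file can still succeed. */
public class SharedRead {
    public static InputStream openShared(String file) throws IOException {
        Path p = Paths.get(file);
        return Files.newInputStream(p);
    }
}
```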
[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append
[ https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408193#comment-13408193 ] Suresh Srinivas commented on HADOOP-8230: - I had marked HADOOP-8365 as a blocker for 1.1.0. Since HADOOP-8365 has not been fixed yet for 1.1.0, I am -1 on this patch. If HADOOP-8365 gets fixed, I will remove my -1. Enable sync by default and disable append - Key: HADOOP-8230 URL: https://issues.apache.org/jira/browse/HADOOP-8230 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.0 Reporter: Eli Collins Assignee: Eli Collins Fix For: 1.1.0 Attachments: hadoop-8230.txt Per HDFS-3120 for 1.x let's: - Always enable the sync path, which is currently only enabled if dfs.support.append is set - Remove the dfs.support.append configuration option. We'll keep the code paths though in case we ever fix append on branch-1, in which case we can add the config option back -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8567) Backport conf servlet with dump running configuration to branch 1.x
[ https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408394#comment-13408394 ] Suresh Srinivas commented on HADOOP-8567: - +1 for the backport. This will be a very useful feature on the stable release. Backport conf servlet with dump running configuration to branch 1.x --- Key: HADOOP-8567 URL: https://issues.apache.org/jira/browse/HADOOP-8567 Project: Hadoop Common Issue Type: New Feature Components: conf Affects Versions: 1.0.3 Reporter: Junping Du Assignee: Junping Du Fix For: 0.21.1, 2.0.1-alpha HADOOP-6408 provides a conf servlet that can dump the running configuration, which greatly helps admins troubleshoot configuration issues. However, that patch only works on branches after 0.21 and should be backported to branch 1.x. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8365) Provide ability to disable working sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13409766#comment-13409766 ] Suresh Srinivas commented on HADOOP-8365: - Comments: # The check added to FSNamesystem.java also needs to be added to DataNode.java and FSDataset.java, where support for append was checked earlier. # I am not sure why you are calling it *broken* sync. Can you remove broken from the variable and configuration names? Provide ability to disable working sync --- Key: HADOOP-8365 URL: https://issues.apache.org/jira/browse/HADOOP-8365 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Eli Collins Assignee: Eli Collins Priority: Blocker Attachments: hadoop-8365.txt Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation as that was the behavior before HADOOP-8230. This config flag should default to false as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8579) Websites for HDFS and MapReduce both send users to video training resource which is non-public
[ https://issues.apache.org/jira/browse/HADOOP-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13409968#comment-13409968 ] Suresh Srinivas commented on HADOOP-8579: - Harsh, please ensure the link (if you are retaining it) follows the directions given in HADOOP-5754. Websites for HDFS and MapReduce both send users to video training resource which is non-public -- Key: HADOOP-8579 URL: https://issues.apache.org/jira/browse/HADOOP-8579 Project: Hadoop Common Issue Type: Bug Environment: website Reporter: David L. Willson Assignee: Harsh J Priority: Minor Original Estimate: 2h Remaining Estimate: 2h Main pages for HDFS and MapReduce send new user to unavailable training resource. These two pages: http://hadoop.apache.org/mapreduce/ http://hadoop.apache.org/hdfs/ Link to this page: http://vimeo.com/3584536 That page is not public, and not shared to all registered Vimeo users, and I see nothing indicating how to ask for access to the resource. Please make the vids public, or remove the link of disappointment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8579) Websites for HDFS and MapReduce both send users to video training resource which is non-public
[ https://issues.apache.org/jira/browse/HADOOP-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13409971#comment-13409971 ] Suresh Srinivas commented on HADOOP-8579: - BTW my vote is to remove that link altogether, since it is hard to make sure that the video adheres to the guidelines from HADOOP-5754. Websites for HDFS and MapReduce both send users to video training resource which is non-public -- Key: HADOOP-8579 URL: https://issues.apache.org/jira/browse/HADOOP-8579 Project: Hadoop Common Issue Type: Bug Environment: website Reporter: David L. Willson Assignee: Harsh J Priority: Minor Original Estimate: 2h Remaining Estimate: 2h Main pages for HDFS and MapReduce send new user to unavailable training resource. These two pages: http://hadoop.apache.org/mapreduce/ http://hadoop.apache.org/hdfs/ Link to this page: http://vimeo.com/3584536 That page is not public, and not shared to all registered Vimeo users, and I see nothing indicating how to ask for access to the resource. Please make the vids public, or remove the link of disappointment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8365) Provide ability to disable working sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13411248#comment-13411248 ] Suresh Srinivas commented on HADOOP-8365: - +1 for the patch. Provide ability to disable working sync --- Key: HADOOP-8365 URL: https://issues.apache.org/jira/browse/HADOOP-8365 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Eli Collins Assignee: Eli Collins Priority: Blocker Attachments: hadoop-8365.txt, hadoop-8365.txt Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation as that was the behavior before HADOOP-8230. This config flag should default to false as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-7753) Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class
[ https://issues.apache.org/jira/browse/HADOOP-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13412275#comment-13412275 ] Suresh Srinivas commented on HADOOP-7753: - Brandon, +1 for the change. What tests did you run? Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class Key: HADOOP-7753 URL: https://issues.apache.org/jira/browse/HADOOP-7753 Project: Hadoop Common Issue Type: Sub-task Components: io, native, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.23.0 Attachments: HADOOP-7753.branch-1.patch, HADOOP-7753.branch-1.patch, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also implements a ReadaheadPool class for future use from HDFS and MapReduce. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8365) Add flag to disable durable sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HADOOP-8365. - Resolution: Fixed Release Note: This patch enables durable sync by default. Installation where HBase was not used, that used to run without setting {{dfs.support.append}} or setting it to false in the configurate, must set {{dfs.durable.sync}} to false to preserve the previous semantics. Hadoop Flags: Incompatible change,Reviewed (was: Reviewed) Add flag to disable durable sync Key: HADOOP-8365 URL: https://issues.apache.org/jira/browse/HADOOP-8365 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Eli Collins Assignee: Eli Collins Priority: Blocker Fix For: 1.1.0 Attachments: hadoop-8365.txt, hadoop-8365.txt Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation as that was the behavior before HADOOP-8230. This config flag should default to false as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8365) Add flag to disable durable sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8365: Release Note: This patch enables durable sync by default. Installation where HBase was not used, that used to run without setting dfs.support.append or setting it to false explicitly in the configuration, must add a new flag dfs.durable.sync and set it to false to preserve the previous semantics. (was: This patch enables durable sync by default. Installation where HBase was not used, that used to run without setting {{dfs.support.append}} or setting it to false in the configurate, must set {{dfs.durable.sync}} to false to preserve the previous semantics.) Add flag to disable durable sync Key: HADOOP-8365 URL: https://issues.apache.org/jira/browse/HADOOP-8365 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Eli Collins Assignee: Eli Collins Priority: Blocker Fix For: 1.1.0 Attachments: hadoop-8365.txt, hadoop-8365.txt Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation as that was the behavior before HADOOP-8230. This config flag should default to false as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
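Per the release note above, an installation that needs the old semantics would add something like this to its configuration. This is a sketch; only the property name dfs.durable.sync and the value false come from the note, the file placement is an assumption:

```xml
<!-- hdfs-site.xml: restore the pre-1.1 behavior (durable sync disabled),
     for installations that previously ran without dfs.support.append. -->
<property>
  <name>dfs.durable.sync</name>
  <value>false</value>
</property>
```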
[jira] [Updated] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package
[ https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8593: Fix Version/s: 3.0.0 Status: Patch Available (was: Open) add the missed @Override to methods in Metric/Metric2 package -- Key: HADOOP-8593 URL: https://issues.apache.org/jira/browse/HADOOP-8593 Project: Hadoop Common Issue Type: Improvement Components: metrics Reporter: Brandon Li Assignee: Brandon Li Priority: Minor Fix For: 3.0.0 Attachments: HADOOP-8593.patch Adding @Override to the proper methods to take advantage of the compiler checking and make the code more readable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package
[ https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8593: Affects Version/s: 3.0.0 1.0.0 add the missed @Override to methods in Metric/Metric2 package -- Key: HADOOP-8593 URL: https://issues.apache.org/jira/browse/HADOOP-8593 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 1.0.0, 3.0.0 Reporter: Brandon Li Assignee: Brandon Li Priority: Minor Fix For: 3.0.0 Attachments: HADOOP-8593.patch Adding @Override to the proper methods to take advantage of the compiler checking and make the code more readable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package
[ https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8593: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch. Thank you Brandon. add the missed @Override to methods in Metric/Metric2 package -- Key: HADOOP-8593 URL: https://issues.apache.org/jira/browse/HADOOP-8593 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 1.0.0, 3.0.0 Reporter: Brandon Li Assignee: Brandon Li Priority: Minor Fix For: 3.0.0 Attachments: HADOOP-8593.patch Adding @Override to the proper methods to take advantage of the compiler checking and make the code more readable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
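A small illustration of why the missing annotations matter: with @Override present, the compiler rejects a method that no longer overrides anything, catching silent signature drift. The types here are hypothetical, not the actual Metric/Metric2 classes:

```java
/** Hypothetical interface standing in for a metrics-package type. */
interface MetricSource {
    String name();
}

public class Overrides implements MetricSource {
    @Override
    public String name() { return "jvm"; }  // compiles: a real override

    // A typo'd signature such as "public String nmae()" annotated with
    // @Override would be a compile-time error instead of a silent new method.
}
```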
[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.
[ https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13416598#comment-13416598 ] Suresh Srinivas commented on HADOOP-8552: - Alejandro, when committing incompatible changes, could you please add the change description in CHANGES.txt under the INCOMPATIBLE CHANGES section? Also, could you please add a release note describing what is incompatible here and how to work around it? Conflict: Same security.log.file for multiple users. - Key: HADOOP-8552 URL: https://issues.apache.org/jira/browse/HADOOP-8552 Project: Hadoop Common Issue Type: Bug Components: conf, security Affects Versions: 1.0.3, 2.0.0-alpha Reporter: Karthik Kambatla Assignee: Karthik Kambatla Fix For: 1.1.0, 2.0.1-alpha Attachments: HADOOP-8552_branch1.patch, HADOOP-8552_branch2.patch In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. In the presence of multiple users, this can lead to a potential conflict. Adding username to the log file would avoid this scenario. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
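For context, the kind of change being discussed looks roughly like this in log4j.properties. This is a sketch assuming the fix parameterizes the file name by user; the exact property value committed in the patches may differ:

```properties
# Per-user security audit log file, so concurrent users do not contend
# for the same SecurityAuth.audit file (user.name expansion is assumed
# to be resolved by the JVM system properties at startup).
hadoop.security.log.file=SecurityAuth-${user.name}.audit
```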
[jira] [Updated] (HADOOP-8269) Fix some javadoc warnings on branch-1
[ https://issues.apache.org/jira/browse/HADOOP-8269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8269: Priority: Trivial (was: Major) Fix some javadoc warnings on branch-1 - Key: HADOOP-8269 URL: https://issues.apache.org/jira/browse/HADOOP-8269 Project: Hadoop Common Issue Type: Bug Components: documentation Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Fix For: 1.1.0 Attachments: hadoop-8269.txt There are some javadoc warnings on branch-1, let's fix them. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.
[ https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13416620#comment-13416620 ] Suresh Srinivas commented on HADOOP-8552: - I also added this change in CHANGES.txt in branch 1.1. Conflict: Same security.log.file for multiple users. - Key: HADOOP-8552 URL: https://issues.apache.org/jira/browse/HADOOP-8552 Project: Hadoop Common Issue Type: Bug Components: conf, security Affects Versions: 1.0.3, 2.0.0-alpha Reporter: Karthik Kambatla Assignee: Karthik Kambatla Fix For: 1.1.0, 2.0.1-alpha Attachments: HADOOP-8552_branch1.patch, HADOOP-8552_branch2.patch In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. In the presence of multiple users, this can lead to a potential conflict. Adding username to the log file would avoid this scenario. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7753) Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class
[ https://issues.apache.org/jira/browse/HADOOP-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-7753: Fix Version/s: (was: 1.2.0) 1.1.0 Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class Key: HADOOP-7753 URL: https://issues.apache.org/jira/browse/HADOOP-7753 Project: Hadoop Common Issue Type: Sub-task Components: io, native, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 1.1.0, 0.23.0 Attachments: HADOOP-7753.branch-1.patch, HADOOP-7753.branch-1.patch, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also implements a ReadaheadPool class for future use from HDFS and MapReduce. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8605) TestReflectionUtils.testCacheDoesntLeak() can't illustrate ReflectionUtils don't generate memory leak
[ https://issues.apache.org/jira/browse/HADOOP-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8605: Priority: Minor (was: Major) TestReflectionUtils.testCacheDoesntLeak() can't illustrate ReflectionUtils don't generate memory leak - Key: HADOOP-8605 URL: https://issues.apache.org/jira/browse/HADOOP-8605 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 1.0.3 Reporter: Yang Jiandan Priority: Minor TestReflectionUtils.testCacheDoesntLeak() uses a different URLClassLoader to load TestReflectionUtils$LoadedInChild in a for loop: {code} int iterations=; for (int i = 0; i < iterations; i++) { URLClassLoader loader = new URLClassLoader(new URL[0], getClass().getClassLoader()); Class cl = Class.forName("org.apache.hadoop.util.TestReflectionUtils$LoadedInChild", false, loader); Object o = ReflectionUtils.newInstance(cl, null); assertEquals(cl, o.getClass()); } {code} but every iteration loads the same class, so ReflectionUtils.CONSTRUCTOR_CACHE only ever contains one entry. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
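The delegation behavior the reporter describes can be shown with nothing but JDK classes, a minimal self-contained sketch:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DelegationDemo {
    // Returns true when two distinct child loaders hand back the identical
    // Class object, because both delegate the lookup to the same parent.
    static boolean sameClass() throws Exception {
        ClassLoader parent = DelegationDemo.class.getClassLoader();
        URLClassLoader a = new URLClassLoader(new URL[0], parent);
        URLClassLoader b = new URLClassLoader(new URL[0], parent);
        Class<?> c1 = Class.forName("java.util.ArrayList", false, a);
        Class<?> c2 = Class.forName("java.util.ArrayList", false, b);
        return c1 == c2;
    }

    public static void main(String[] args) throws Exception {
        // Parent delegation means a cache keyed by Class only ever gains one
        // entry here -- the reporter's point about CONSTRUCTOR_CACHE.
        System.out.println(sameClass());
    }
}
```

To actually exercise the cache with distinct classes, the test would need child loaders that define the class themselves rather than delegate to a common parent.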
[jira] [Commented] (HADOOP-8607) Replace references to Dr Who in codebase with @BigDataBorat
[ https://issues.apache.org/jira/browse/HADOOP-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13417914#comment-13417914 ] Suresh Srinivas commented on HADOOP-8607: - @Sanjay you should put it in your TODO list - https://twitter.com/DEVOPS_BORAT/status/223556198866235393 Replace references to Dr Who in codebase with @BigDataBorat - Key: HADOOP-8607 URL: https://issues.apache.org/jira/browse/HADOOP-8607 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 1.0.3, 2.0.0-alpha Reporter: Steve Loughran Assignee: Sanjay Radia Priority: Minor Original Estimate: 0.5h Remaining Estimate: 0.5h People complain that having Dr Who in the code causes confusion and isn't appropriate in Hadoop now that it has matured. I propose that we replace this anonymous user ID with {{@BigDataBorat}}. This will # Increase brand awareness of @BigDataBorat and their central role in the Big Data ecosystem. # Drive traffic to twitter, and increase their revenue. As contributors to the Hadoop platform, this will fund further Hadoop development. Patching the code is straightforward; no easy tests, though we could monitor twitter followers to determine rollout of the patch in the field. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8607) Replace references to Dr Who in codebase with @BigDataBorat
[ https://issues.apache.org/jira/browse/HADOOP-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13417926#comment-13417926 ] Suresh Srinivas commented on HADOOP-8607: - Small suggestion - key name should be {{hadoop.borat.id}} Replace references to Dr Who in codebase with @BigDataBorat - Key: HADOOP-8607 URL: https://issues.apache.org/jira/browse/HADOOP-8607 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 1.0.3, 2.0.0-alpha Reporter: Steve Loughran Assignee: Sanjay Radia Priority: Minor Original Estimate: 0.5h Remaining Estimate: 0.5h People complain that having Dr Who in the code causes confusion and isn't appropriate in Hadoop now that it has matured. I propose that we replace this anonymous user ID with {{@BigDataBorat}}. This will # Increase brand awareness of @BigDataBorat and their central role in the Big Data ecosystem. # Drive traffic to twitter, and increase their revenue. As contributors to the Hadoop platform, this will fund further Hadoop development. Patching the code is straightforward; no easy tests, though we could monitor twitter followers to determine rollout of the patch in the field. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7446) Implement CRC32C native code using SSE4.2 instructions
[ https://issues.apache.org/jira/browse/HADOOP-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-7446: Component/s: performance Implement CRC32C native code using SSE4.2 instructions -- Key: HADOOP-7446 URL: https://issues.apache.org/jira/browse/HADOOP-7446 Project: Hadoop Common Issue Type: Improvement Components: native, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.23.0 Attachments: crc-pipeline-fix.txt, hadoop-7446.txt, hadoop-7446.txt, hadoop-7446.txt, pipelined-crc.patch.txt, pipelined_on_todds_patch.txt, pipelined_with_todds_patch.txt Once HADOOP-7445 is implemented, we can get further performance improvements by implementing CRC32C using the hardware support available in SSE4.2. This support should be dynamically enabled based on CPU feature flags, and of course should be ifdeffed properly so that it doesn't break the build on architectures/platforms where it's not available. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
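The same hardware path HADOOP-7446 adds natively is visible from plain Java on modern JDKs: `java.util.zip.CRC32C` (Java 9+) is intrinsified by HotSpot to use the SSE4.2 `crc32` instruction where the CPU supports it, with a software fallback elsewhere, mirroring the dynamic feature detection described above:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32C;

public class Crc32cDemo {
    static long checkValue() {
        // CRC-32C (Castagnoli polynomial), the checksum this JIRA accelerates.
        CRC32C crc = new CRC32C();
        crc.update("123456789".getBytes(StandardCharsets.US_ASCII));
        return crc.getValue();
    }

    public static void main(String[] args) {
        // 0xe3069283 is the standard CRC-32C check value for "123456789".
        System.out.println(Long.toHexString(checkValue()));
    }
}
```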
[jira] [Updated] (HADOOP-7333) Performance improvement in PureJavaCrc32
[ https://issues.apache.org/jira/browse/HADOOP-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-7333: Component/s: performance Performance improvement in PureJavaCrc32 Key: HADOOP-7333 URL: https://issues.apache.org/jira/browse/HADOOP-7333 Project: Hadoop Common Issue Type: Improvement Components: performance, util Affects Versions: 0.21.0 Environment: Linux x64 Reporter: Eric Caspole Assignee: Eric Caspole Priority: Minor Fix For: 0.23.0 Attachments: HADOOP-7333.patch, c7333_20110526.patch I would like to propose a small patch to org.apache.hadoop.util.PureJavaCrc32.update(byte[] b, int off, int len) Currently the method stores the intermediate result back into the data member crc. I noticed this method gets inlined into DataChecksum.update() and that method appears as one of the hotter methods in a simple hprof profile collected while running terasort and gridmix. If the code is modified to save the temporary result into a local and store the final result back into the data member just once, it results in slightly more efficient hotspot codegen. I tested this change using the org.apache.hadoop.util.TestPureJavaCrc32$PerformanceTest which is embedded in the existing unit test for this class, TestPureJavaCrc32, on a variety of linux x64 AMD and Intel multi-socket and multi-core systems I have available to test. The patch removes several stores of the intermediate result to memory, yielding a 0%-10% speedup in that benchmark. If you use a debug hotspot JVM with -XX:+PrintOptoAssembly, you can see the intermediate stores such as: 414 movq R9, [rsp + #24] # spill 419 movl [R9 + #12 (8-bit)], RDX # int ! Field PureJavaCrc32.crc 41d xorl R10, RDX # int The patch results in just one final store of the fully computed value. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
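A sketch of the optimization Eric describes (not the actual HADOOP-7333 patch): a table-driven CRC-32 that accumulates into a local variable and writes the field back exactly once, verified against `java.util.zip.CRC32`:

```java
import java.util.zip.CRC32;

public class LocalCrcDemo {
    private static final int[] TABLE = new int[256];
    static {
        // Standard reflected CRC-32 (IEEE) lookup table.
        for (int i = 0; i < 256; i++) {
            int c = i;
            for (int k = 0; k < 8; k++) {
                c = (c >>> 1) ^ ((c & 1) != 0 ? 0xEDB88320 : 0);
            }
            TABLE[i] = c;
        }
    }

    private int crc = 0xFFFFFFFF;   // field, as in PureJavaCrc32

    public void update(byte[] b, int off, int len) {
        // The optimization under discussion: read the field once into a local,
        // loop on the local (which the JIT can keep in a register), and write
        // the field back a single time instead of storing every iteration.
        int localCrc = crc;
        for (int i = off; i < off + len; i++) {
            localCrc = (localCrc >>> 8) ^ TABLE[(localCrc ^ b[i]) & 0xFF];
        }
        crc = localCrc;
    }

    public long getValue() { return (~crc) & 0xFFFFFFFFL; }

    public static void main(String[] args) {
        byte[] data = "hello crc".getBytes();
        LocalCrcDemo mine = new LocalCrcDemo();
        mine.update(data, 0, data.length);
        CRC32 ref = new CRC32();
        ref.update(data);
        System.out.println(mine.getValue() == ref.getValue());
    }
}
```

The behavioral contract is unchanged; only the store-to-memory pattern differs, which is why the benchmark shows a codegen-level win rather than an algorithmic one.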
[jira] [Updated] (HADOOP-6148) Implement a pure Java CRC32 calculator
[ https://issues.apache.org/jira/browse/HADOOP-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-6148: Component/s: performance Implement a pure Java CRC32 calculator -- Key: HADOOP-6148 URL: https://issues.apache.org/jira/browse/HADOOP-6148 Project: Hadoop Common Issue Type: Improvement Components: performance, util Reporter: Owen O'Malley Assignee: Scott Carey Fix For: 0.21.0 Attachments: PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32New.java, PureJavaCrc32NewInner.java, PureJavaCrc32NewLoop.java, TestCrc32Performance.java, TestCrc32Performance.java, TestCrc32Performance.java, TestCrc32Performance.java, TestPureJavaCrc32.java, benchmarks20090714.txt, benchmarks20090715.txt, crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt, hadoop-5598.txt, hadoop-5598.txt, hadoop-6148.txt, hadoop-6148.txt, hadoop-6148.txt, hdfs-297.txt We've seen a reducer writing 200MB to HDFS with replication = 1 spending a long time in crc calculation. In particular, it was spending 5 seconds in crc calculation out of a total of 6 for the write. I suspect that it is the java-jni border that is causing us grief. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-6166) Improve PureJavaCrc32
[ https://issues.apache.org/jira/browse/HADOOP-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-6166: Component/s: performance Improve PureJavaCrc32 - Key: HADOOP-6166 URL: https://issues.apache.org/jira/browse/HADOOP-6166 Project: Hadoop Common Issue Type: Improvement Components: performance, util Affects Versions: 0.21.0 Reporter: Tsz Wo (Nicholas), SZE Assignee: Tsz Wo (Nicholas), SZE Fix For: 0.21.0 Attachments: Rplots-laptop.pdf, Rplots-nehalem32.pdf, Rplots-nehalem64.pdf, Rplots.pdf, Rplots.pdf, Rplots.pdf, c6166_20090722.patch, c6166_20090722_benchmark_32VM.txt, c6166_20090722_benchmark_64VM.txt, c6166_20090727.patch, c6166_20090728.patch, c6166_20090810.patch, c6166_20090811.patch, c6166_20090819.patch, c6166_20090819review.patch, graph.r, graph.r Got some ideas to improve CRC32 calculation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8606) FileSystem.get may return the wrong filesystem
[ https://issues.apache.org/jira/browse/HADOOP-8606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13418544#comment-13418544 ] Suresh Srinivas commented on HADOOP-8606: - +1 for the patch. FileSystem.get may return the wrong filesystem -- Key: HADOOP-8606 URL: https://issues.apache.org/jira/browse/HADOOP-8606 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.0.0, 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Attachments: HADOOP-8606.branch-1.patch, HADOOP-8606.patch {{FileSystem.get(URI, conf)}} will return the default fs if the scheme is null, regardless of whether the authority is null too. This causes URIs of //authority/path to _always_ refer to /path on the default fs. To the user, this appears to work if the authority in the null-scheme URI matches the authority of the default fs. When the authorities don't match, the user is very surprised that the default fs is used. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
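The URI corner case behind HADOOP-8606 is easy to reproduce with `java.net.URI` alone (host and path below are made up):

```java
import java.net.URI;

public class SchemeLessUriDemo {
    public static void main(String[] args) {
        // A "//authority/path" URI has a null scheme but a non-null authority.
        // Per this bug, FileSystem.get looked only at the scheme, so the
        // authority below was silently dropped and the path resolved against
        // the default filesystem -- surprising when the authorities differ.
        URI u = URI.create("//otherhost:8020/user/data");
        System.out.println(u.getScheme());     // null
        System.out.println(u.getAuthority());  // otherhost:8020
        System.out.println(u.getPath());       // /user/data
    }
}
```

The fix therefore has to treat "scheme null, authority null" (use the default fs) differently from "scheme null, authority present" (the authority must be honored).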
[jira] [Updated] (HADOOP-8618) Windows build failing after 1.0.3 got merged into branch-1-win
[ https://issues.apache.org/jira/browse/HADOOP-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8618: Affects Version/s: 1-win Windows build failing after 1.0.3 got merged into branch-1-win -- Key: HADOOP-8618 URL: https://issues.apache.org/jira/browse/HADOOP-8618 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Bikas Saha Assignee: Bikas Saha Attachments: HADOOP-8618-1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8618) Windows build failing after 1.0.3 got merged into branch-1-win
[ https://issues.apache.org/jira/browse/HADOOP-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8618: Issue Type: Sub-task (was: Bug) Parent: HADOOP-8079 Windows build failing after 1.0.3 got merged into branch-1-win -- Key: HADOOP-8618 URL: https://issues.apache.org/jira/browse/HADOOP-8618 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1-win Reporter: Bikas Saha Assignee: Bikas Saha Attachments: HADOOP-8618-1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8618) Windows build failing after 1.0.3 got merged into branch-1-win
[ https://issues.apache.org/jira/browse/HADOOP-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13421784#comment-13421784 ] Suresh Srinivas commented on HADOOP-8618: - I ran native build on Linux. Both the targets you are invoking using antcall are run under the target compile-core-native. One minor comment - change the order of antcalls - do compile-core-classes first. +1 with that change. Windows build failing after 1.0.3 got merged into branch-1-win -- Key: HADOOP-8618 URL: https://issues.apache.org/jira/browse/HADOOP-8618 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1-win Reporter: Bikas Saha Assignee: Bikas Saha Attachments: HADOOP-8618-1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8618) Windows build failing after 1.0.3 got merged into branch-1-win
[ https://issues.apache.org/jira/browse/HADOOP-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HADOOP-8618. - Resolution: Fixed Hadoop Flags: Reviewed I committed the patch. Windows build failing after 1.0.3 got merged into branch-1-win -- Key: HADOOP-8618 URL: https://issues.apache.org/jira/browse/HADOOP-8618 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1-win Reporter: Bikas Saha Assignee: Bikas Saha Attachments: HADOOP-8618-1.patch, HADOOP-8618-2.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8618) Windows build failing after 1.0.3 got merged into branch-1-win
[ https://issues.apache.org/jira/browse/HADOOP-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8618: Fix Version/s: 1-win Windows build failing after 1.0.3 got merged into branch-1-win -- Key: HADOOP-8618 URL: https://issues.apache.org/jira/browse/HADOOP-8618 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1-win Reporter: Bikas Saha Assignee: Bikas Saha Fix For: 1-win Attachments: HADOOP-8618-1.patch, HADOOP-8618-2.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8365) Add flag to disable durable sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423257#comment-13423257 ] Suresh Srinivas commented on HADOOP-8365: - Look at HDFS-3731. 2.x upgrades do not handle this functionality well. For people who did not need durable sync, turning the feature on by default causes unnecessary upgrade issues. Add flag to disable durable sync Key: HADOOP-8365 URL: https://issues.apache.org/jira/browse/HADOOP-8365 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Eli Collins Assignee: Eli Collins Priority: Blocker Fix For: 1.1.0 Attachments: hadoop-8365.txt, hadoop-8365.txt Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation, as that was the behavior before HADOOP-8230. This config flag should default to false, as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8623) hadoop jar command should respect HADOOP_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8623: Resolution: Fixed Fix Version/s: 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for contributing the patch Steven. I have added you as a contributor to Hadoop common. You can now assign the jiras to yourself. I committed the patch to trunk. Will merge it into 2.x next. hadoop jar command should respect HADOOP_OPTS - Key: HADOOP-8623 URL: https://issues.apache.org/jira/browse/HADOOP-8623 Project: Hadoop Common Issue Type: Bug Components: scripts Affects Versions: 0.23.1, 2.0.0-alpha Reporter: Steven Willis Fix For: 3.0.0 Attachments: HADOOP-8623.patch The jar command to the hadoop script should use any set HADOOP_OPTS and HADOOP_CLIENT_OPTS environment variables like all the other commands. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8623) hadoop jar command should respect HADOOP_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8623: Priority: Minor (was: Major) Fix Version/s: 2.1.0-alpha Assignee: Steven Willis Issue Type: Improvement (was: Bug) hadoop jar command should respect HADOOP_OPTS - Key: HADOOP-8623 URL: https://issues.apache.org/jira/browse/HADOOP-8623 Project: Hadoop Common Issue Type: Improvement Components: scripts Affects Versions: 0.23.1, 2.0.0-alpha Reporter: Steven Willis Assignee: Steven Willis Priority: Minor Fix For: 2.1.0-alpha, 3.0.0 Attachments: HADOOP-8623.patch The jar command to the hadoop script should use any set HADOOP_OPTS and HADOOP_CLIENT_OPTS environment variables like all the other commands. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs
[ https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13429440#comment-13429440 ] Suresh Srinivas commented on HADOOP-8581: - Early comments. I prefer splitting this into separate patches instead of one single patch that Jenkins cannot use. # There are unnecessary whitespace changes (e.g. WebAppProxyServlet.java). Indentation in some places is incorrect as well (4 spaces instead of two). # core-site.xml - typo "SSL for for the HTTP". Can you please add a more detailed description for the new parameter added. # HttpServer.java - please do not turn the checked exception GeneralSecurityException into an RTE. Perhaps you could throw it as an IOException. # Add brief comments to TestSSLHttpServer.java # Not sure you needed to make getTaskLogsUrl() non-static add support for HTTPS to the web UIs Key: HADOOP-8581 URL: https://issues.apache.org/jira/browse/HADOOP-8581 Project: Hadoop Common Issue Type: New Feature Components: security Affects Versions: 2.0.0-alpha Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 2.1.0-alpha Attachments: HADOOP-8581.patch HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is hardcoded. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
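The review point about not converting GeneralSecurityException into a RuntimeException can be sketched as follows (the `initSsl` method is a hypothetical stand-in, not the patch's actual API):

```java
import java.io.IOException;
import java.security.GeneralSecurityException;

public class WrapCheckedDemo {
    // Hypothetical stand-in for SSL setup that throws a checked security error.
    static void initSsl(boolean fail) throws GeneralSecurityException {
        if (fail) throw new GeneralSecurityException("bad keystore");
    }

    // The suggestion: rather than converting the checked exception into an
    // unchecked RuntimeException, rethrow it as an IOException so callers
    // still see a checked failure with the original cause attached.
    static void start(boolean fail) throws IOException {
        try {
            initSsl(fail);
        } catch (GeneralSecurityException e) {
            throw new IOException("Failed to initialize SSL", e);
        }
    }

    public static void main(String[] args) {
        try {
            start(true);
        } catch (IOException e) {
            System.out.println(e.getCause().getClass().getSimpleName());
        }
    }
}
```

Wrapping as IOException keeps the failure visible in method signatures while preserving the root cause for diagnostics.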
[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs
[ https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13429532#comment-13429532 ] Suresh Srinivas commented on HADOOP-8581: - bq. Why can't Jenkins use it? Cross-project patches should work now. That is good! I was not aware of it. add support for HTTPS to the web UIs Key: HADOOP-8581 URL: https://issues.apache.org/jira/browse/HADOOP-8581 Project: Hadoop Common Issue Type: New Feature Components: security Affects Versions: 2.0.0-alpha Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Fix For: 2.1.0-alpha Attachments: HADOOP-8581.patch HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is hardcoded. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory
[ https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13429534#comment-13429534 ] Suresh Srinivas commented on HADOOP-8644: - Alejandro, can you please wait for Jenkins to +1 before committing the patch. In this case I do not see +1 for your updated patch. AuthenticatedURL should be able to use SSLFactory - Key: HADOOP-8644 URL: https://issues.apache.org/jira/browse/HADOOP-8644 Project: Hadoop Common Issue Type: New Feature Components: security Affects Versions: 2.2.0-alpha Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Priority: Critical Fix For: 2.2.0-alpha Attachments: HADOOP-8644.patch, HADOOP-8644.patch This is required to enable the use of HTTPS with SPNEGO using Hadoop configured keystores. This is required by HADOOP-8581. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory
[ https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13429606#comment-13429606 ] Suresh Srinivas commented on HADOOP-8644: - @Alejandro, sounds good :) AuthenticatedURL should be able to use SSLFactory - Key: HADOOP-8644 URL: https://issues.apache.org/jira/browse/HADOOP-8644 Project: Hadoop Common Issue Type: New Feature Components: security Affects Versions: 2.2.0-alpha Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Priority: Critical Fix For: 2.2.0-alpha Attachments: HADOOP-8644.patch, HADOOP-8644.patch This is required to enable the use of HTTPS with SPNEGO using Hadoop configured keystores. This is required by HADOOP-8581. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8441) Build bot timeout is too small
[ https://issues.apache.org/jira/browse/HADOOP-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8441: Priority: Minor (was: Blocker) Build bot timeout is too small -- Key: HADOOP-8441 URL: https://issues.apache.org/jira/browse/HADOOP-8441 Project: Hadoop Common Issue Type: Bug Reporter: Radim Kolar Priority: Minor Labels: build-failure, qa The QA build bot timeout is set too low. Builds fail to finish in time, and then no results are posted to JIRA. See for example https://builds.apache.org/job/PreCommit-HADOOP-Build/1040/console -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8468) Umbrella of enhancements to support different failure and locality topologies
[ https://issues.apache.org/jira/browse/HADOOP-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8468: Priority: Major (was: Critical) Issue Type: Improvement (was: Bug) Umbrella of enhancements to support different failure and locality topologies - Key: HADOOP-8468 URL: https://issues.apache.org/jira/browse/HADOOP-8468 Project: Hadoop Common Issue Type: Improvement Components: ha, io Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Junping Du Assignee: Junping Du Attachments: HADOOP-8468-total-v3.patch, HADOOP-8468-total.patch, Proposal for enchanced failure and locality topologies (revised-1.0).pdf, Proposal for enchanced failure and locality topologies.pdf The current hadoop network topology (described in some previous issues like Hadoop-692) works well in the classic three-tier network it was designed for. However, it does not take into account other failure models or changes in the infrastructure that can affect network bandwidth efficiency, such as virtualization. A virtualized platform has the following characteristics that shouldn't be ignored by hadoop topology when scheduling tasks, placing replicas, doing balancing, or fetching blocks for reading: 1. VMs on the same physical host are affected by the same hardware failure. In order to match the reliability of a physical deployment, replication of data across two virtual machines on the same host should be avoided. 2. The network between VMs on the same physical host has higher throughput and lower latency and does not consume any physical switch bandwidth. Thus, we propose to make the hadoop network topology extendable and introduce a new level in the hierarchical topology, a node group level, which maps well onto an infrastructure that is based on a virtualized environment. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues
[ https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431219#comment-13431219 ] Suresh Srinivas commented on HADOOP-8661: - Should we be fixing this in Hadoop, i.e., the content of the exception stack trace? It is also not clear why Oozie stores exceptions. Stack Trace in Exception.getMessage causing oozie DB to have issues --- Key: HADOOP-8661 URL: https://issues.apache.org/jira/browse/HADOOP-8661 Project: Hadoop Common Issue Type: Bug Components: ipc Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0 Reporter: Robert Joseph Evans Assignee: Robert Joseph Evans It looks like all exceptions produced by RemoteException include the full stack trace of the original exception in the message. This is causing issues for Oozie, which stores the message in its database, where it is getting very large. This appears to be a regression from the 1.0 behavior.
[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues
[ https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431276#comment-13431276 ] Suresh Srinivas commented on HADOOP-8661: - On 2.0, the stack trace may be a little different, since we have switched to the protobuf RPC engine. That change was made in HADOOP-6686, so any solution needs to consider some of the discussion from that jira.
[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues
[ https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431281#comment-13431281 ] Suresh Srinivas commented on HADOOP-8661: - Echoing what Jason said, it is a good idea to handle this in Oozie. Making assumptions about the length of an exception message and expecting it not to change in an upstream project is not a good idea. That said, we could shorten the message again, given that the thrown exception has its cause set via initCause(), from which the stack trace can be derived.
[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues
[ https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431344#comment-13431344 ] Suresh Srinivas commented on HADOOP-8661: - bq. I have spent a little time to write some code that can parse the stack trace and insert it back into the generated exception Not sure what you mean here. You could still shorten the message in this jira. The cause is already in the thrown exception, from which the stack trace can be obtained.
[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues
[ https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431460#comment-13431460 ] Suresh Srinivas commented on HADOOP-8661: - bq. But in 23, the message doesn't have the Exception class name (org.apache.hadoop.security.AccessControlException). We removed this redundant information from the message in HADOOP-6686. You can reconstruct the previous message format on the Oozie side with: {{exception.getClass().getName() + ": " + exception.getMessage()}}
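As a minimal sketch of the suggestion above, the pre-HADOOP-6686 message format can be rebuilt on the client side from any caught exception. A stand-in exception type is used here instead of Hadoop's AccessControlException so the sketch stays self-contained; the helper name is invented for illustration.

```java
public class MessageCompat {
    /**
     * Rebuilds the old "ClassName: message" format that 1.0-era clients saw,
     * from an exception whose message no longer carries the class name.
     */
    static String legacyMessage(Exception e) {
        return e.getClass().getName() + ": " + e.getMessage();
    }

    public static void main(String[] args) {
        // Stand-in for an exception deserialized from a RemoteException.
        Exception e = new IllegalStateException("Permission denied: user=alice");
        System.out.println(legacyMessage(e));
    }
}
```

Because the class name is always recoverable from the exception object itself, storing it in the message was redundant; this one-liner restores compatibility for consumers that persisted the old format.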
[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions
[ https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438297#comment-13438297 ] Suresh Srinivas commented on HADOOP-8711: - Comments: # The DFSUtil method seems unnecessary. # Rename terseException to terseExceptions; it should also be made volatile. # It would be good to add a unit test. To make that easier, consider organizing the code into an inner class, TerseExceptions, with methods such as add() and isTerse(). provide an option for IPC server users to avoid printing stack information for certain exceptions - Key: HADOOP-8711 URL: https://issues.apache.org/jira/browse/HADOOP-8711 Project: Hadoop Common Issue Type: Improvement Components: ipc Reporter: Brandon Li Assignee: Brandon Li Fix For: 3.0.0 Attachments: HADOOP-8711.patch, HADOOP-8711.patch Currently the server hard-codes that it does not print the exception stack for StandbyException. Similarly, other components may have their own exceptions for which the stack trace need not be saved in the log. One example is HDFS-3817.
[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions
[ https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8711: Attachment: (was: HADOOP-8711.patch)
[jira] [Commented] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions
[ https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438715#comment-13438715 ] Suresh Srinivas commented on HADOOP-8711: - A couple of comments: # Please add a brief javadoc for the ExceptionsHandler class, and make the class package-private instead of public. # Please do not make Server#exceptionsHandler public. Instead, add a method Server#addTerseExceptions().
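The review comments above sketch out the shape of the feature: a registry of "terse" exception class names that the IPC server consults before deciding whether to log a full stack trace. As a hedged illustration along those lines (the structure and field names here follow the review suggestions, not the committed patch), a volatile copy-on-write registry might look like:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Registry of exception classes that get a one-line log entry instead of a
// full stack trace. Reads are lock-free via a volatile reference to an
// immutable set; rare registrations copy-and-swap the set.
class ExceptionsHandler {
    private volatile Set<String> terseExceptions = Collections.emptySet();

    /** Registers exception classes whose stack traces should be suppressed. */
    synchronized void addTerseExceptions(Class<?>... exceptionClasses) {
        Set<String> copy = new HashSet<>(terseExceptions);
        for (Class<?> c : exceptionClasses) {
            copy.add(c.getName());
        }
        terseExceptions = Collections.unmodifiableSet(copy);
    }

    /** Lock-free check used on the logging hot path. */
    boolean isTerse(String exceptionClassName) {
        return terseExceptions.contains(exceptionClassName);
    }
}

public class TerseDemo {
    public static void main(String[] args) {
        ExceptionsHandler handler = new ExceptionsHandler();
        handler.addTerseExceptions(IllegalStateException.class);
        System.out.println(handler.isTerse("java.lang.IllegalStateException"));
        System.out.println(handler.isTerse("java.io.IOException"));
    }
}
```

Making the field volatile and swapping in an immutable set, rather than mutating a shared set, keeps the per-request isTerse() check free of locks, which matters on a server logging path.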
[jira] [Updated] (HADOOP-8711) provide an option for IPC server users to avoid printing stack information for certain exceptions
[ https://issues.apache.org/jira/browse/HADOOP-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8711: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch. Thank you Brandon.