[jira] [Updated] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9347: Status: Patch Available (was: Open) Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch, HBASE-9347_trunk.02.patch, HBASE-9347_trunk.03.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.05.patch, HBASE-9347_trunk.05.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
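For illustration, filter configuration of the kind the issue asks for might look roughly like the fragment below in hbase-site.xml. The property name and filter class names here are hypothetical, not taken from the attached patches:

```xml
<!-- Hypothetical sketch: property name and filter classes are illustrative only -->
<property>
  <name>hbase.rest.filter.classes</name>
  <value>org.example.AuditFilter,org.example.ThrottlingFilter</value>
  <description>Comma-separated list of servlet Filter classes applied to
  REST client requests, in the order listed.</description>
</property>
```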
[jira] [Updated] (HBASE-9364) Get request with multiple columns returns partial results
[ https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9364: Status: Open (was: Patch Available) Get request with multiple columns returns partial results - Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9364.00.patch, HBASE-9364.01.patch, hbase-9364_trunk.00.patch, HBASE-9364_trunk.01.patch, HBASE-9364_trunk.02.patch When a GET request is issued for a table row with multiple columns and one of the columns has an empty qualifier, like f1:, results for the empty qualifier are being ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
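The distinction the bug hinges on: a spec like f1: names family f1 with an explicit empty qualifier, which is a real column, whereas bare f1 means the whole family. A minimal stand-alone sketch of that parsing rule (not the actual HBase parser):

```java
// Sketch of column-spec parsing; NOT the actual HBase parser.
// "f1"   -> whole family (no qualifier)
// "f1:"  -> family plus an EMPTY qualifier, a real column that must not be dropped
// "f1:q" -> family plus qualifier "q"
public class ColumnSpec {
    public static String[] parse(String spec) {
        int idx = spec.indexOf(':');
        if (idx < 0) {
            return new String[] { spec };                // family only
        }
        // the qualifier may legitimately be the empty string
        return new String[] { spec.substring(0, idx), spec.substring(idx + 1) };
    }
}
```

Treating the empty-qualifier form as if it were the family-only form is exactly the kind of conflation that drops results for f1: .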
[jira] [Updated] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9375: Status: Open (was: Patch Available) [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch, HBASE-9375_trunk.01.patch In the hbase shell, when a user tries to get the value of a column, hbase returns only the latest value, but the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
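The shell-consistent behavior amounts to collapsing each column down to its newest cell. A stand-alone sketch of that collapse, assuming cells arrive sorted newest-first within each column (as HBase returns them); the qualifier/value pairs below are stand-ins for real Cells:

```java
import java.util.*;

public class LatestVersionOnly {
    // Keep only the first (i.e. newest) value seen per qualifier, assuming the
    // input is ordered newest-first within each column. Stand-in for Cell lists.
    public static Map<String, String> latestPerColumn(List<String[]> cells) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] cell : cells) {           // cell = { qualifier, value }
            latest.putIfAbsent(cell[0], cell[1]);
        }
        return latest;
    }
}
```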
[jira] [Updated] (HBASE-9364) Get request with multiple columns returns partial results
[ https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9364: Hadoop Flags: Reviewed Status: Patch Available (was: Open) Get request with multiple columns returns partial results - Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9364.00.patch, HBASE-9364.01.patch, hbase-9364_trunk.00.patch, HBASE-9364_trunk.01.patch, HBASE-9364_trunk.02.patch, HBASE-9364_trunk.02.patch When a GET request is issued for a table row with multiple columns and one of the columns has an empty qualifier, like f1:, results for the empty qualifier are being ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9364) Get request with multiple columns returns partial results
[ https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9364: Attachment: HBASE-9364_trunk.02.patch poking buildbot. Get request with multiple columns returns partial results - Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9364.00.patch, HBASE-9364.01.patch, hbase-9364_trunk.00.patch, HBASE-9364_trunk.01.patch, HBASE-9364_trunk.02.patch, HBASE-9364_trunk.02.patch When a GET request is issued for a table row with multiple columns and one of the columns has an empty qualifier, like f1:, results for the empty qualifier are being ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-9496) make sure HBase APIs are compatible between 0.94 and 0.96
[ https://issues.apache.org/jira/browse/HBASE-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765194#comment-13765194 ] Jonathan Hsieh edited comment on HBASE-9496 at 9/12/13 6:02 AM: Should having a 3rd party MR job that instantiates Results that don't come from HBase be encouraged? Would our Export job and whatever inputformat handler for the Import be sufficient and allow us to keep the Result ctors more hidden? In APIs, exposing less is safer -- exposing too much will make us sorry again in the future. :( was (Author: jmhsieh): Should having a 3rd party MR job that instantiates Results that don't come from HBase be encouraged? Would our Export job and whatever inputformat handler for the Import be sufficient and allow us to keep the Result ctors more hidden? make sure HBase APIs are compatible between 0.94 and 0.96 - Key: HBASE-9496 URL: https://issues.apache.org/jira/browse/HBASE-9496 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9496-v0-96.patch Follow-up for HBASE-9477. Some other methods are now different between 94 and 96 (Result::getColumnLatest, Put::get, anything that takes a collection of Cell, e.g. Result ctor, Mutation::setFamilyMap etc.). I am assuming things that accept Cell (Increment::add, Delete::addDeleteMarker) don't need to change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-9496) make sure HBase APIs are compatible between 0.94 and 0.96
[ https://issues.apache.org/jira/browse/HBASE-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765194#comment-13765194 ] Jonathan Hsieh edited comment on HBASE-9496 at 9/12/13 6:01 AM: Should having a 3rd party MR job that instantiates Results that don't come from HBase be encouraged? Would our Export job and whatever inputformat handler for the Import be sufficient and allow us to keep the Result ctors more hidden? was (Author: jmhsieh): Should having an MR job that instantiates Results that don't come from HBase be encouraged? Would our Export job and whatever inputformat handler for the Import be sufficient and allow us to keep it hidden? make sure HBase APIs are compatible between 0.94 and 0.96 - Key: HBASE-9496 URL: https://issues.apache.org/jira/browse/HBASE-9496 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9496-v0-96.patch Follow-up for HBASE-9477. Some other methods are now different between 94 and 96 (Result::getColumnLatest, Put::get, anything that takes a collection of Cell, e.g. Result ctor, Mutation::setFamilyMap etc.). I am assuming things that accept Cell (Increment::add, Delete::addDeleteMarker) don't need to change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9375: Attachment: HBASE-9375_trunk.01.patch seeking green lights. [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch, HBASE-9375_trunk.01.patch, HBASE-9375_trunk.01.patch In the hbase shell, when a user tries to get the value of a column, hbase returns only the latest value, but the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9496) make sure HBase APIs are compatible between 0.94 and 0.96
[ https://issues.apache.org/jira/browse/HBASE-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765194#comment-13765194 ] Jonathan Hsieh commented on HBASE-9496: --- Should having an MR job that instantiates Results that don't come from HBase be encouraged? Would our Export job and whatever inputformat handler for the Import be sufficient and allow us to keep it hidden? make sure HBase APIs are compatible between 0.94 and 0.96 - Key: HBASE-9496 URL: https://issues.apache.org/jira/browse/HBASE-9496 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9496-v0-96.patch Follow-up for HBASE-9477. Some other methods are now different between 94 and 96 (Result::getColumnLatest, Put::get, anything that takes a collection of Cell, e.g. Result ctor, Mutation::setFamilyMap etc.). I am assuming things that accept Cell (Increment::add, Delete::addDeleteMarker) don't need to change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9375: Hadoop Flags: Reviewed Status: Patch Available (was: Open) [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch, HBASE-9375_trunk.01.patch, HBASE-9375_trunk.01.patch In the hbase shell, when a user tries to get the value of a column, hbase returns only the latest value, but the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9495) Sanity check visibility and audience for hbase-client and hbase-common apis.
[ https://issues.apache.org/jira/browse/HBASE-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765201#comment-13765201 ] Jonathan Hsieh commented on HBASE-9495: --- bq. Should an abstract class be public/stable? I don't think it should be -- maybe some flavor of LimitedPrivate -- might be for coprocs. I think we expect users to use ResultScanner. (it's the only scanner in HTableInterface). I think I agree on all the others in Stack's list. Sanity check visibility and audience for hbase-client and hbase-common apis. --- Key: HBASE-9495 URL: https://issues.apache.org/jira/browse/HBASE-9495 Project: HBase Issue Type: Task Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Priority: Critical Fix For: 0.98.0, 0.96.0 This is a task to audit and enumerate places where hbase-common and hbase-client should narrow or widen the exposed user program supported api. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-9495) Sanity check visibility and audience for hbase-client and hbase-common apis.
[ https://issues.apache.org/jira/browse/HBASE-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765201#comment-13765201 ] Jonathan Hsieh edited comment on HBASE-9495 at 9/12/13 6:10 AM: bq. Should an abstract class be public/stable? I don't think this one should be (though others could be) -- for this one maybe some flavor of LimitedPrivate (might be for coprocs). I think we expect users to use ResultScanner. (it's the only scanner in HTableInterface). I think I agree on all the others in Stack's list. was (Author: jmhsieh): bq. Should an abstract class be public/stable? I don't think it should be -- maybe some flavor of LimitedPrivate -- might be for coprocs. I think we expect users to use ResultScanner. (it's the only scanner in HTableInterface). I think I agree on all the others in Stack's list. Sanity check visibility and audience for hbase-client and hbase-common apis. --- Key: HBASE-9495 URL: https://issues.apache.org/jira/browse/HBASE-9495 Project: HBase Issue Type: Task Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Priority: Critical Fix For: 0.98.0, 0.96.0 This is a task to audit and enumerate places where hbase-common and hbase-client should narrow or widen the exposed user program supported api. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-9517) Exclude Private elements from generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh reassigned HBASE-9517: - Assignee: Jonathan Hsieh Exclude Private elements from generated Javadoc --- Key: HBASE-9517 URL: https://issues.apache.org/jira/browse/HBASE-9517 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.96.0 We should generate two sets of javadoc a la HADOOP-6658 -- one for api users that excludes all InterfaceAudiencePrivate apis, and one for hbase core developers. Eventually when we tighten up the other modules we might add another for coproc developers, and other custom 3rd party pluggable elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9517) Exclude Private elements from generated Javadoc
Jonathan Hsieh created HBASE-9517: - Summary: Exclude Private elements from generated Javadoc Key: HBASE-9517 URL: https://issues.apache.org/jira/browse/HBASE-9517 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Fix For: 0.96.0 We should generate two sets of javadoc a la HADOOP-6658 -- one for api users that excludes all InterfaceAudiencePrivate apis, and one for hbase core developers. Eventually when we tighten up the other modules we might add another for coproc developers, and other custom 3rd party pluggable elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9517) Exclude Private elements from generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765211#comment-13765211 ] Jonathan Hsieh commented on HBASE-9517: --- This should be able to use hadoop 2.x's ExcludePrivateAnnotationsStandardDoclet.java found in hadoop-common. Exclude Private elements from generated Javadoc --- Key: HBASE-9517 URL: https://issues.apache.org/jira/browse/HBASE-9517 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.96.0 We should generate two sets of javadoc a la HADOOP-6658 -- one for api users that excludes all InterfaceAudiencePrivate apis, and one for hbase core developers. Eventually when we tighten up the other modules we might add another for coproc developers, and other custom 3rd party pluggable elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
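Assuming that doclet is pulled in via the hadoop-annotations artifact, wiring it into the maven-javadoc-plugin might look roughly like the fragment below; the artifact coordinates and version are illustrative, not taken from any attached patch:

```xml
<!-- Illustrative maven-javadoc-plugin wiring; version numbers are placeholders -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <doclet>org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet</doclet>
    <docletArtifact>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
      <version>2.1.0-beta</version>
    </docletArtifact>
    <useStandardDocletOptions>true</useStandardDocletOptions>
  </configuration>
</plugin>
```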
[jira] [Updated] (HBASE-9272) A simple parallel, unordered scanner
[ https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9272: - Attachment: 9272-0.94-v4.txt Parking another version. Back to a dedicated threadpool and handling thread interruption correctly. A simple parallel, unordered scanner Key: HBASE-9272 URL: https://issues.apache.org/jira/browse/HBASE-9272 Project: HBase Issue Type: New Feature Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Attachments: 9272-0.94.txt, 9272-0.94-v2.txt, 9272-0.94-v3.txt, 9272-0.94-v4.txt, ParallelClientScanner.java, ParallelClientScanner.java The contract of ClientScanner is to return rows in sort order. That limits the order in which regions can be scanned. I propose a simple ParallelScanner that does not have this requirement and queries regions in parallel, returning whatever gets returned first. This is generally useful for scans that filter a lot of data on the server, or in cases where the client can very quickly react to the returned data. I have a simple prototype (it doesn't do error handling right, and might be a bit heavy on the synchronization side - it uses a BlockingQueue to hand data between the client using the scanner and the threads doing the scanning, and it could potentially starve some scanners long enough to time out at the server). On the plus side, it's only about 130 lines of code. :) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
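The v4 approach described above (dedicated threadpool, BlockingQueue hand-off, explicit interrupt handling) can be sketched in plain Java. The region and row types below are stand-ins, not HBase classes, and this is a simplification of the attached ParallelClientScanner, not its actual code:

```java
import java.util.*;
import java.util.concurrent.*;

/**
 * Sketch of an unordered parallel scan: each "region" is scanned by its own
 * task on a dedicated pool, and rows reach the consumer through a
 * BlockingQueue in whatever order they arrive.
 */
public class ParallelUnorderedScanner implements AutoCloseable {
    // Per-producer end-of-stream sentinel; compared by identity, so a row whose
    // value happens to equal "DONE" is not mistaken for it.
    private static final String DONE = new String("DONE");
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    private final ExecutorService pool;
    private final int numRegions;

    public ParallelUnorderedScanner(List<List<String>> regions) {
        this.numRegions = regions.size();
        this.pool = Executors.newFixedThreadPool(Math.max(1, numRegions));
        for (List<String> region : regions) {
            pool.submit(() -> {
                try {
                    for (String row : region) {
                        queue.put(row);      // blocks if the consumer falls behind
                    }
                    queue.put(DONE);         // this producer is finished
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt(); // preserve interrupt status
                }
            });
        }
    }

    /** Drains every row from every region, in no particular order. */
    public List<String> drainAll() throws InterruptedException {
        List<String> out = new ArrayList<>();
        int finished = 0;
        while (finished < numRegions) {
            String row = queue.take();
            if (row == DONE) finished++; else out.add(row);
        }
        return out;
    }

    @Override public void close() { pool.shutdownNow(); }
}
```

The bounded queue is what makes the starvation concern in the description real: a slow consumer back-pressures every producer, which against a real server could hold a scanner open long enough to hit its lease timeout.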
[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value
[ https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765226#comment-13765226 ] Liang Xie commented on HBASE-9502: -- I changed the test case to: 1) make the block size smaller, so that each block holds at least 2 kvs; otherwise walkForwardInSingleRow will take effect :) 2) make seekTo return -2 (I just realized our internal faked-key generation algorithm seems more aggressive, so it's easy to repro; I'll file another jira for that, and after that patch you'll find that without this change the case will always fail) HStore.seekToScanner should handle magic value -- Key: HBASE-9502 URL: https://issues.apache.org/jira/browse/HBASE-9502 Project: HBase Issue Type: Bug Components: regionserver, Scanners Affects Versions: 0.98.0, 0.95.2, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9502.txt Due to the faked key, seekTo may return -2, and HStore.seekToScanner should handle this corner case. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9364) Get request with multiple columns returns partial results
[ https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765230#comment-13765230 ] Hadoop QA commented on HBASE-9364: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602739/HBASE-9364_trunk.02.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7175//console This message is automatically generated. 
Get request with multiple columns returns partial results - Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9364.00.patch, HBASE-9364.01.patch, hbase-9364_trunk.00.patch, HBASE-9364_trunk.01.patch, HBASE-9364_trunk.02.patch, HBASE-9364_trunk.02.patch When a GET request is issued for a table row with multiple columns and one of the columns has an empty qualifier, like f1:, results for the empty qualifier are being ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9517) Exclude Private elements from generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765233#comment-13765233 ] Jonathan Hsieh commented on HBASE-9517: --- Yay! Got it to work modifying the main javadoc build. Now to figure out how to get it to do multiple builds. This will be super useful for checking that we did the annotations reasonably. Exclude Private elements from generated Javadoc --- Key: HBASE-9517 URL: https://issues.apache.org/jira/browse/HBASE-9517 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.96.0 We should generate two sets of javadoc a la HADOOP-6658 -- one for api users that excludes all InterfaceAudiencePrivate apis, and one for hbase core developers. Eventually when we tighten up the other modules we might add another for coproc developers, and other custom 3rd party pluggable elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765231#comment-13765231 ] Hadoop QA commented on HBASE-9347: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602738/HBASE-9347_trunk.05.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 10 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7176//console This message is automatically generated. 
Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch, HBASE-9347_trunk.02.patch, HBASE-9347_trunk.03.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.05.patch, HBASE-9347_trunk.05.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9518) getFakedKey() improvement
Liang Xie created HBASE-9518: Summary: getFakedKey() improvement Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9518: - Description: make the faked-key generation algorithm more aggressive getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie make the faked-key generation algorithm more aggressive -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value
[ https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765241#comment-13765241 ] Liang Xie commented on HBASE-9502: -- see HBASE-9518 HStore.seekToScanner should handle magic value -- Key: HBASE-9502 URL: https://issues.apache.org/jira/browse/HBASE-9502 Project: HBase Issue Type: Bug Components: regionserver, Scanners Affects Versions: 0.98.0, 0.95.2, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9502.txt Due to the faked key, seekTo may return -2, and HStore.seekToScanner should handle this corner case. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions
[ https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765246#comment-13765246 ] Jeffrey Zhong commented on HBASE-9480: -- I reviewed the 1.2 patch. The newly introduced RegionAlreadyInTransition exception makes TestAssignmentManagerOnCluster flaky. It depends on whether the retry loop (once RegionAlreadyInTransition is raised) can hit the old code path. I think you can safely revert the code in HRegionServer, because the newly added code below resumes region transition after zk node deletion. The rest looks good to me, though I'm wondering if it's possible to move the following code inside unassign itself, immediately after {code}regionOffline(region);{code} so callers don't need to call the following explicitly. {code} + if (regionStates.isRegionOffline(region)) { +new ClosedRegionHandler(server, this, region).process(); + } {code} Regions are unexpectedly made offline in certain failure conditions --- Key: HBASE-9480 URL: https://issues.apache.org/jira/browse/HBASE-9480 Project: HBase Issue Type: Bug Reporter: Devaraj Das Assignee: Jimmy Xiang Priority: Blocker Fix For: 0.96.0 Attachments: 9480-1.txt, trunk-9480.patch, trunk-9480_v1.1.patch, trunk-9480_v1.2.patch Came across this issue (HBASE-9338 test): 1. Client issues a request to move a region from ServerA to ServerB 2. ServerA is compacting that region and doesn't close the region immediately. In fact, it takes a while to complete the request. 3. The master, in the meantime, sends another close request. 4. ServerA sends it a NotServingRegionException 5. Master handles the exception, deletes the znode, and invokes regionOffline for the said region. 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is deleted. The region is permanently offline. 
There are potentially other situations where, when a RegionServer is offline and the client asks to move a region off that server, the master makes the region offline. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
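The refactor suggested in the comment above — folding the "finish the close if the region ended up offline" step into unassign itself, so callers no longer invoke the handler explicitly — can be sketched with a toy state tracker. All names here are illustrative stand-ins, not the real AssignmentManager or ClosedRegionHandler:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (not HBase code) of moving the close-completion step
// inside unassign(), immediately after the region is marked offline.
public class UnassignSketch {
    private final Set<String> offline = new HashSet<>();
    private final Set<String> closed = new HashSet<>();

    void regionOffline(String region) { offline.add(region); }
    boolean isRegionOffline(String region) { return offline.contains(region); }
    boolean isClosed(String region) { return closed.contains(region); }

    /** Hypothetical unassign: mark offline, then run the close handler inline. */
    void unassign(String region) {
        regionOffline(region);
        if (isRegionOffline(region)) {
            // Stands in for: new ClosedRegionHandler(server, this, region).process();
            closed.add(region);
        }
    }

    public static void main(String[] args) {
        UnassignSketch am = new UnassignSketch();
        am.unassign("region-1");
        // The close completes inside unassign; no explicit caller step needed.
        System.out.println(am.isClosed("region-1")); // true
    }
}
```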
[jira] [Updated] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9518: - Assignee: Liang Xie Status: Patch Available (was: Open) getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9518.txt make generating faked key algo more aggressive -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9518: - Attachment: HBASE-9518.txt getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Attachments: HBASE-9518.txt make generating faked key algo more aggressive -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9486) NPE in HTable.close() with AsyncProcess
[ https://issues.apache.org/jira/browse/HBASE-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9486: --- Attachment: 9486.v1.patch NPE in HTable.close() with AsyncProcess --- Key: HBASE-9486 URL: https://issues.apache.org/jira/browse/HBASE-9486 Project: HBase Issue Type: Bug Components: Client Reporter: Enis Soztutar Assignee: Nicolas Liochon Fix For: 0.96.0 Attachments: 9486.v1.patch When running {code} hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey slowDeterministic {code} One task failed with the following stack trace: {code} 2013-09-10 01:56:03,115 WARN [htable-pool1-t134] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 3 ops on server02,60020,1378776046122 NOT resubmitting.region=IntegrationTestBigLinkedList,\xA6\x10\x9C\x85,1378776439065.766ab62aa30fa94c9014f09738698922., hostname=server02,60020,1378776046122, seqNum=16146143 2013-09-10 01:56:03,115 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 6 ops on server02,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\x9D\x95\xDB\xCB\xD5\xE2\xAD\x7F\xCB\x1D\xBCN~\xF2U,1378774537592.b2534e273feecba91db43496efa1cd12., hostname=server02,60020,1378775896233, seqNum=14890994 2013-09-10 01:56:03,655 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 9 ops on server01,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\xB8\x0B.\x8C\x12Px\x88\x10\xA4\x07\x9FJ\x97\xD0,1378775167749.7c0f1c17bc5f02e41e02939187304976., hostname=server01,60020,1378775896233, seqNum=15863492 2013-09-10 01:56:03,818 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.NullPointerException at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:289) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:234) at 
org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:894) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1275) at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1313) at org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.cleanup(IntegrationTestBigLinkedList.java:352) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:148) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) {code} Seems worth investigating. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765255#comment-13765255 ] Liang Xie commented on HBASE-9518: -- The TestCacheOnWrite case is updated because the index size is smaller after applying the more aggressive change. getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9518.txt make generating faked key algo more aggressive -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9486) NPE in HTable.close() with AsyncProcess
[ https://issues.apache.org/jira/browse/HBASE-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765256#comment-13765256 ] Nicolas Liochon commented on HBASE-9486: I found a bug: BatchErrors is not thread-safe but is used in an MT context. It could be the issue, but I haven't reproduced it, so I can't be sure. The patch should be harmless. I set the jira priority to critical as it should make it into 0.96, imho. NPE in HTable.close() with AsyncProcess --- Key: HBASE-9486 URL: https://issues.apache.org/jira/browse/HBASE-9486 Project: HBase Issue Type: Bug Components: Client Reporter: Enis Soztutar Assignee: Nicolas Liochon Fix For: 0.96.0 Attachments: 9486.v1.patch When running {code} hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey slowDeterministic {code} One task failed with the following stack trace: {code} 2013-09-10 01:56:03,115 WARN [htable-pool1-t134] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 3 ops on server02,60020,1378776046122 NOT resubmitting.region=IntegrationTestBigLinkedList,\xA6\x10\x9C\x85,1378776439065.766ab62aa30fa94c9014f09738698922., hostname=server02,60020,1378776046122, seqNum=16146143 2013-09-10 01:56:03,115 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 6 ops on server02,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\x9D\x95\xDB\xCB\xD5\xE2\xAD\x7F\xCB\x1D\xBCN~\xF2U,1378774537592.b2534e273feecba91db43496efa1cd12., hostname=server02,60020,1378775896233, seqNum=14890994 2013-09-10 01:56:03,655 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 9 ops on server01,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\xB8\x0B.\x8C\x12Px\x88\x10\xA4\x07\x9FJ\x97\xD0,1378775167749.7c0f1c17bc5f02e41e02939187304976., hostname=server01,60020,1378775896233, seqNum=15863492 2013-09-10 01:56:03,818 WARN [main] 
org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.NullPointerException at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:289) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:234) at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:894) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1275) at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1313) at org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.cleanup(IntegrationTestBigLinkedList.java:352) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:148) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) {code} Seems worth investigating. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
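The thread-safety fix described in the comment above can be illustrated with a minimal, self-contained collector. SafeBatchErrors is a hypothetical name for illustration, not the actual AsyncProcess.BatchErrors class:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the fix direction: an error collector shared by many
// htable-pool threads must synchronize its mutable list, otherwise concurrent
// add() calls can corrupt internal state (lost entries, or an NPE/AIOOBE
// surfacing later, e.g. during flushCommits on close).
public class SafeBatchErrors {
    private final List<Throwable> throwables =
        Collections.synchronizedList(new ArrayList<>());

    public void add(Throwable t) { throwables.add(t); }

    public int size() { return throwables.size(); }

    public static void main(String[] args) throws InterruptedException {
        SafeBatchErrors errors = new SafeBatchErrors();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                errors.add(new RuntimeException("op failed"));
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // With a plain ArrayList this count could be short, or an exception
        // could be thrown; the synchronized list always ends at 2000.
        System.out.println(errors.size()); // 2000
    }
}
```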
[jira] [Updated] (HBASE-9486) NPE in HTable.close() with AsyncProcess
[ https://issues.apache.org/jira/browse/HBASE-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9486: --- Status: Patch Available (was: Open) NPE in HTable.close() with AsyncProcess --- Key: HBASE-9486 URL: https://issues.apache.org/jira/browse/HBASE-9486 Project: HBase Issue Type: Bug Components: Client Reporter: Enis Soztutar Assignee: Nicolas Liochon Priority: Critical Fix For: 0.96.0 Attachments: 9486.v1.patch When running {code} hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey slowDeterministic {code} One task failed with the following stack trace: {code} 2013-09-10 01:56:03,115 WARN [htable-pool1-t134] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 3 ops on server02,60020,1378776046122 NOT resubmitting.region=IntegrationTestBigLinkedList,\xA6\x10\x9C\x85,1378776439065.766ab62aa30fa94c9014f09738698922., hostname=server02,60020,1378776046122, seqNum=16146143 2013-09-10 01:56:03,115 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 6 ops on server02,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\x9D\x95\xDB\xCB\xD5\xE2\xAD\x7F\xCB\x1D\xBCN~\xF2U,1378774537592.b2534e273feecba91db43496efa1cd12., hostname=server02,60020,1378775896233, seqNum=14890994 2013-09-10 01:56:03,655 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 9 ops on server01,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\xB8\x0B.\x8C\x12Px\x88\x10\xA4\x07\x9FJ\x97\xD0,1378775167749.7c0f1c17bc5f02e41e02939187304976., hostname=server01,60020,1378775896233, seqNum=15863492 2013-09-10 01:56:03,818 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.NullPointerException at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:289) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:234) 
at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:894) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1275) at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1313) at org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.cleanup(IntegrationTestBigLinkedList.java:352) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:148) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) {code} Seems worth investigating. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9486) NPE in HTable.close() with AsyncProcess
[ https://issues.apache.org/jira/browse/HBASE-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9486: --- Priority: Critical (was: Major) NPE in HTable.close() with AsyncProcess --- Key: HBASE-9486 URL: https://issues.apache.org/jira/browse/HBASE-9486 Project: HBase Issue Type: Bug Components: Client Reporter: Enis Soztutar Assignee: Nicolas Liochon Priority: Critical Fix For: 0.96.0 Attachments: 9486.v1.patch When running {code} hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey slowDeterministic {code} One task failed with the following stack trace: {code} 2013-09-10 01:56:03,115 WARN [htable-pool1-t134] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 3 ops on server02,60020,1378776046122 NOT resubmitting.region=IntegrationTestBigLinkedList,\xA6\x10\x9C\x85,1378776439065.766ab62aa30fa94c9014f09738698922., hostname=server02,60020,1378776046122, seqNum=16146143 2013-09-10 01:56:03,115 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 6 ops on server02,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\x9D\x95\xDB\xCB\xD5\xE2\xAD\x7F\xCB\x1D\xBCN~\xF2U,1378774537592.b2534e273feecba91db43496efa1cd12., hostname=server02,60020,1378775896233, seqNum=14890994 2013-09-10 01:56:03,655 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 9 ops on server01,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\xB8\x0B.\x8C\x12Px\x88\x10\xA4\x07\x9FJ\x97\xD0,1378775167749.7c0f1c17bc5f02e41e02939187304976., hostname=server01,60020,1378775896233, seqNum=15863492 2013-09-10 01:56:03,818 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.NullPointerException at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:289) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:234) at 
org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:894) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1275) at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1313) at org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.cleanup(IntegrationTestBigLinkedList.java:352) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:148) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) {code} Seems worth investigating. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()
Liang Xie created HBASE-9519: Summary: fix NPE in EncodedScannerV2.getFirstKeyInBlock() Key: HBASE-9519 URL: https://issues.apache.org/jira/browse/HBASE-9519 Project: HBase Issue Type: Improvement Components: HFile Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie We observed a reproducible NPE while scanning a special table under a special condition in our IntegratedTesting scenario; it was fixed by applying the attached patch. org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142) at org.apache.hadoop.hbase.io.HalfStoreFileReader.getLastKey(HalfStoreFileReader.java:267) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesKeyRangeFilter(StoreFile.java:1543) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:375) at 
org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:298) at org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:262) at org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:149) at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2122) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3460) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1645) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1635) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1610) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2377) ... 5 more -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
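The failure mode in the stack trace above — getFirstKeyInBlock dereferencing a block that was never read — suggests a missing null check. Below is a self-contained sketch of that guard; the method names and the ByteBuffer stand-in are assumptions for illustration, not the actual HFileReaderV2 code:

```java
import java.nio.ByteBuffer;

// Illustrative sketch: guard against an absent data block before extracting
// its first key, so callers such as seekBefore() see "no key" instead of a
// NullPointerException.
public class FirstKeySketch {
    /** Stand-in for reading an encoded data block; may legitimately return null. */
    static ByteBuffer readBlock(boolean present) {
        return present ? ByteBuffer.wrap(new byte[] {'k', '1'}) : null;
    }

    /** Hypothetical getFirstKeyInBlock with the missing null check added. */
    static byte[] getFirstKeyInBlock(boolean blockPresent) {
        ByteBuffer block = readBlock(blockPresent);
        if (block == null) {
            return null; // previously this path dereferenced block and threw NPE
        }
        byte[] key = new byte[block.remaining()];
        block.get(key);
        return key;
    }

    public static void main(String[] args) {
        // No block available: the caller gets null rather than an NPE.
        System.out.println(getFirstKeyInBlock(false)); // null
    }
}
```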
[jira] [Updated] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()
[ https://issues.apache.org/jira/browse/HBASE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9519: - Attachment: HBASE-9519.txt fix NPE in EncodedScannerV2.getFirstKeyInBlock() Key: HBASE-9519 URL: https://issues.apache.org/jira/browse/HBASE-9519 Project: HBase Issue Type: Improvement Components: HFile Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9519.txt We observed a reproducible NPE while scanning a special table under a special condition in our IntegratedTesting scenario; it was fixed by applying the attached patch. org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142) at org.apache.hadoop.hbase.io.HalfStoreFileReader.getLastKey(HalfStoreFileReader.java:267) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesKeyRangeFilter(StoreFile.java:1543) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:375) at org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:298) at org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:262) at org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:149) at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2122) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3460) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1645) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1635) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1610) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2377) ... 5 more -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()
[ https://issues.apache.org/jira/browse/HBASE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9519: - Status: Patch Available (was: Open) fix NPE in EncodedScannerV2.getFirstKeyInBlock() Key: HBASE-9519 URL: https://issues.apache.org/jira/browse/HBASE-9519 Project: HBase Issue Type: Improvement Components: HFile Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9519.txt We observed a reproducible NPE while scanning a special table under a special condition in our IntegratedTesting scenario; it was fixed by applying the attached patch. org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142) at org.apache.hadoop.hbase.io.HalfStoreFileReader.getLastKey(HalfStoreFileReader.java:267) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesKeyRangeFilter(StoreFile.java:1543) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:375) at org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:298) at org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:262) at org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:149) at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2122) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3460) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1645) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1635) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1610) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2377) ... 5 more -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9516) Mark hbase-common classes missing @InterfaceAudience annotation as Private
[ https://issues.apache.org/jira/browse/HBASE-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765276#comment-13765276 ] Hadoop QA commented on HBASE-9516: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602737/hbase-9516.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7177//console This message is automatically generated. Mark hbase-common classes missing @InterfaceAudience annotation as Private -- Key: HBASE-9516 URL: https://issues.apache.org/jira/browse/HBASE-9516 Project: HBase Issue Type: Sub-task Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.98.0, 0.96.0 Attachments: hbase-9516.patch Files from hbase-common missing InterfaceAudience jon@swoop:~/proj/hbase-trunk/hbase-common/src/main/java$ grep -R -L InterfaceAudience . 
./org/apache/hadoop/hbase/CellScannable.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java ./org/apache/hadoop/hbase/types/package-info.java ./org/apache/hadoop/hbase/codec/KeyValueCodec.java ./org/apache/hadoop/hbase/codec/CellCodec.java ./org/apache/hadoop/hbase/codec/CodecException.java ./org/apache/hadoop/hbase/codec/BaseEncoder.java ./org/apache/hadoop/hbase/codec/Codec.java ./org/apache/hadoop/hbase/codec/BaseDecoder.java ./org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java ./org/apache/hadoop/hbase/util/test/LoadTestKVGenerator.java ./org/apache/hadoop/hbase/util/test/LoadTestDataGenerator.java ./org/apache/hadoop/hbase/util/CollectionUtils.java ./org/apache/hadoop/hbase/util/DrainBarrier.java ./org/apache/hadoop/hbase/util/ReflectionUtils.java ./org/apache/hadoop/hbase/util/Triple.java ./org/apache/hadoop/hbase/util/IterableUtils.java ./org/apache/hadoop/hbase/util/ArrayUtils.java ./org/apache/hadoop/hbase/util/KeyLocker.java -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765278#comment-13765278 ] Hadoop QA commented on HBASE-9375: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602741/HBASE-9375_trunk.01.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7178//console This message is automatically generated. [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch, HBASE-9375_trunk.01.patch, HBASE-9375_trunk.01.patch In the hbase shell, when a user tries to get a value related to a column, hbase returns only the latest value. 
But using the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
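The consistent behavior the issue asks for can be sketched without any HBase dependency: the shell effectively applies maxVersions = 1 and returns only the newest cell for a column, while REST returns several versions. A minimal illustration of "keep only the latest version" (hypothetical data, plain Java, not the patch itself):

```java
import java.util.TreeMap;

public class LatestVersionDemo {
    // Keep only the newest version of a column, as the shell does.
    static String latest(TreeMap<Long, String> versionsByTimestamp) {
        return versionsByTimestamp.lastEntry().getValue();
    }

    public static void main(String[] args) {
        // several timestamped versions of one column, e.g. "f1:q"
        TreeMap<Long, String> versions = new TreeMap<>();
        versions.put(100L, "old");
        versions.put(200L, "newer");
        versions.put(300L, "latest");
        // REST currently returns all of these; the shell returns just this one:
        System.out.println(latest(versions)); // prints "latest"
    }
}
```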
[jira] [Created] (HBASE-9520) shortcut split asap while requested splitPoint equals with region's startKey
Liang Xie created HBASE-9520: Summary: shortcut split asap while requested splitPoint equals with region's startKey Key: HBASE-9520 URL: https://issues.apache.org/jira/browse/HBASE-9520 Project: HBase Issue Type: Improvement Components: Client Reporter: Liang Xie Assignee: Liang Xie Priority: Minor we can short-circuit this corner case entirely on the client side, without any traffic to the RS. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
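The proposed shortcut is simple to state: if the requested split point equals the region's start key, one daughter region would be empty, so the client can drop the request without contacting the region server. A hedged sketch (plain Java, hypothetical helper name):

```java
import java.util.Arrays;

public class SplitShortcutDemo {
    // If the requested split point equals the region's start key, the
    // split is a no-op, so skip the RPC entirely on the client side.
    static boolean shouldSkipSplit(byte[] splitPoint, byte[] regionStartKey) {
        return splitPoint != null && Arrays.equals(splitPoint, regionStartKey);
    }

    public static void main(String[] args) {
        byte[] startKey = {0x10, 0x20};
        System.out.println(shouldSkipSplit(new byte[] {0x10, 0x20}, startKey)); // true: skip
        System.out.println(shouldSkipSplit(new byte[] {0x30}, startKey));       // false: proceed
    }
}
```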
[jira] [Updated] (HBASE-9520) shortcut split asap while requested splitPoint equals with region's startKey
[ https://issues.apache.org/jira/browse/HBASE-9520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9520: - Attachment: HBASE-9520.txt shortcut split asap while requested splitPoint equals with region's startKey Key: HBASE-9520 URL: https://issues.apache.org/jira/browse/HBASE-9520 Project: HBase Issue Type: Improvement Components: Client Reporter: Liang Xie Assignee: Liang Xie Priority: Minor Attachments: HBASE-9520.txt we can short-circuit this corner case entirely on the client side, without any traffic to the RS. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9520) shortcut split asap while requested splitPoint equals with region's startKey
[ https://issues.apache.org/jira/browse/HBASE-9520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9520: - Status: Patch Available (was: Open) shortcut split asap while requested splitPoint equals with region's startKey Key: HBASE-9520 URL: https://issues.apache.org/jira/browse/HBASE-9520 Project: HBase Issue Type: Improvement Components: Client Reporter: Liang Xie Assignee: Liang Xie Priority: Minor Attachments: HBASE-9520.txt we can short-circuit this corner case entirely on the client side, without any traffic to the RS. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value
[ https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765287#comment-13765287 ] Liang Xie commented on HBASE-9502: -- [~stack] w/o this patch, we probably couldn't get the target row; e.g. in the updated case, without the change we'll get an NPE, which means we can't get any suitable row. HStore.seekToScanner should handle magic value -- Key: HBASE-9502 URL: https://issues.apache.org/jira/browse/HBASE-9502 Project: HBase Issue Type: Bug Components: regionserver, Scanners Affects Versions: 0.98.0, 0.95.2, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9502.txt due to a faked key, seekTo may return -2, and HStore.seekToScanner should handle this corner case. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
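The point can be made concrete with a small sketch: alongside the usual -1/0/1 results, seekTo can produce a magic value (-2 in this issue's faked-key case), and a caller that only handles the three standard results will misbehave. The constant name and dispatch below are illustrative, not the actual HBase code:

```java
public class SeekResultDemo {
    static final int FAKED_KEY_MAGIC = -2; // assumed sentinel, per the issue text

    // Interpret a seekTo()-style result; the -2 branch is the one
    // the issue says HStore.seekToScanner must learn to handle.
    static String interpret(int seekResult) {
        switch (seekResult) {
            case FAKED_KEY_MAGIC: return "positioned-via-faked-key";
            case -1:              return "before-first-key";
            case 0:               return "exact-match";
            default:              return "between-keys";
        }
    }

    public static void main(String[] args) {
        System.out.println(interpret(-2)); // previously unhandled: led to the NPE
        System.out.println(interpret(0));
    }
}
```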
[jira] [Commented] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765288#comment-13765288 ] Hadoop QA commented on HBASE-9518: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602750/HBASE-9518.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.io.TestHalfStoreFileReader Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7180//console This message is automatically generated. getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9518.txt make the faked-key generation algorithm more aggressive -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
Nicolas Liochon created HBASE-9521: -- Summary: clean clearBufferOnFail behavior and deprecate it Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one. - the javadoc about the writeBuffer being involved in the retry process may have been true 5 years ago, but it's wrong now. I'm pretty sure that most people don't know this. You need to go to the implementation to learn it, as the javadoc says the opposite. 
a side effect is that failed operations will be tried twice: - once in the standard process - once in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - change setAutoFlush(boolean) to make it change only the autoFlush, without activating clearBufferOnFail It won't change the interface, but it would change the behavior. I would like to put this into the next 0.96 release. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
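The coupling the issue complains about can be reproduced in isolation. The class below is a stand-in for HTable (not the real class), copying just the two setters from the quoted description; note how the one-argument overload silently flips both flags:

```java
public class AutoFlushDemo {
    boolean autoFlush = true;         // HTable default
    boolean clearBufferOnFail = true; // HTable default

    void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) {
        this.autoFlush = autoFlush;
        // clearBufferOnFail is forced to true whenever autoFlush is true,
        // so the second parameter only matters when autoFlush is false
        this.clearBufferOnFail = autoFlush || clearBufferOnFail;
    }

    void setAutoFlush(boolean autoFlush) {
        // the surprise: one call changes BOTH flags
        setAutoFlush(autoFlush, autoFlush);
    }

    public static void main(String[] args) {
        AutoFlushDemo t = new AutoFlushDemo();
        t.setAutoFlush(false);
        // both are now false, although the caller only asked about autoFlush
        System.out.println(t.autoFlush + " " + t.clearBufferOnFail);
    }
}
```

This is why failed operations end up retried again on close: the buffer was silently kept or cleared depending on a flag the caller never set explicitly.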
[jira] [Commented] (HBASE-9486) NPE in HTable.close() with AsyncProcess
[ https://issues.apache.org/jira/browse/HBASE-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765307#comment-13765307 ] Hadoop QA commented on HBASE-9486: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602751/9486.v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7179//console This message is automatically generated. 
NPE in HTable.close() with AsyncProcess --- Key: HBASE-9486 URL: https://issues.apache.org/jira/browse/HBASE-9486 Project: HBase Issue Type: Bug Components: Client Reporter: Enis Soztutar Assignee: Nicolas Liochon Priority: Critical Fix For: 0.96.0 Attachments: 9486.v1.patch When running {code} hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey slowDeterministic {code} One task failed with the following stack trace: {code} 2013-09-10 01:56:03,115 WARN [htable-pool1-t134] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 3 ops on server02,60020,1378776046122 NOT resubmitting.region=IntegrationTestBigLinkedList,\xA6\x10\x9C\x85,1378776439065.766ab62aa30fa94c9014f09738698922., hostname=server02,60020,1378776046122, seqNum=16146143 2013-09-10 01:56:03,115 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 6 ops on server02,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\x9D\x95\xDB\xCB\xD5\xE2\xAD\x7F\xCB\x1D\xBCN~\xF2U,1378774537592.b2534e273feecba91db43496efa1cd12., hostname=server02,60020,1378775896233, seqNum=14890994 2013-09-10 01:56:03,655 WARN [htable-pool1-t119] org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 9 ops on server01,60020,1378775896233 NOT resubmitting.region=IntegrationTestBigLinkedList,\xB8\x0B.\x8C\x12Px\x88\x10\xA4\x07\x9FJ\x97\xD0,1378775167749.7c0f1c17bc5f02e41e02939187304976., hostname=server01,60020,1378775896233, seqNum=15863492 2013-09-10 01:56:03,818 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.NullPointerException at
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Patch Available (was: Open) clean clearBufferOnFail behavior and deprecate it - Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one, without being told so. 
- a side effect is that failed operations will be tried twice: - once in the standard process - once in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - deprecate setAutoFlush(boolean), to make things clear about what we're doing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Description: The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one, without being told so. - a side effect is that failed operations will be tried twice: - once in the standard process - once in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - deprecate setAutoFlush(boolean), to make things clear about what we're doing. was: The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. 
By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one. - the javadoc about the writeBuffer being involved in the retry process may have been true 5 years ago, but it's wrong now. I'm pretty sure that most people don't know this. You need to go to the implementation to learn it, as the javadoc says the opposite. a side effect is that failed operations will be tried twice: - once in the standard process - once in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - change setAutoFlush(boolean) to make it change only the autoFlush, without activating clearBufferOnFail It won't change the interface, but it would change the behavior. I would like to put this into the next 0.96 release. 
clean clearBufferOnFail behavior and deprecate it - Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier.
[jira] [Commented] (HBASE-9504) Backport HBASE-1212 to 0.94
[ https://issues.apache.org/jira/browse/HBASE-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765332#comment-13765332 ] Jean-Marc Spaggiari commented on HBASE-9504: Results : Tests run: 1380, Failures: 0, Errors: 0, Skipped: 13 [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 52:39.539s [INFO] Finished at: Wed Sep 11 22:16:00 EDT 2013 [INFO] Final Memory: 24M/371M [INFO] Backport HBASE-1212 to 0.94 --- Key: HBASE-9504 URL: https://issues.apache.org/jira/browse/HBASE-9504 Project: HBase Issue Type: Bug Affects Versions: 0.94.11 Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Attachments: HBASE-9504-v0-0.94.patch HBASE-1212: merge tool expects regions all have different sequence ids We need to backport this small change into 0.94. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()
[ https://issues.apache.org/jira/browse/HBASE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765338#comment-13765338 ] Hadoop QA commented on HBASE-9519: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602753/HBASE-9519.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7181//console This message is automatically generated. fix NPE in EncodedScannerV2.getFirstKeyInBlock() Key: HBASE-9519 URL: https://issues.apache.org/jira/browse/HBASE-9519 Project: HBase Issue Type: Improvement Components: HFile Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9519.txt we observed a reproducible NPE while scanning a special table under a special condition in our IntegratedTesting scenario; it was fixed by applying the attached patch. 
org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142) at
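The fix pattern implied by the trace is a null guard in getFirstKeyInBlock-style code: seekBefore can land on a position that yields no block buffer, and dereferencing it unconditionally is what throws. A plain-Java sketch of that guard (names and key extraction hypothetical, no HBase classes):

```java
import java.nio.ByteBuffer;

public class FirstKeyGuardDemo {
    // Return null for "no earlier key" instead of throwing an NPE when the
    // block buffer is missing, as can happen via HalfStoreFileReader.seekBefore.
    static ByteBuffer firstKeyInBlock(ByteBuffer block) {
        if (block == null) {
            return null; // caller treats this as "no suitable key exists"
        }
        ByteBuffer dup = block.duplicate();
        dup.limit(Math.min(4, dup.limit())); // pretend the first key is the leading bytes
        return dup.slice();
    }

    public static void main(String[] args) {
        System.out.println(firstKeyInBlock(null)); // null, not a NullPointerException
        System.out.println(firstKeyInBlock(ByteBuffer.wrap("rowkey".getBytes())).remaining());
    }
}
```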
[jira] [Commented] (HBASE-9520) shortcut split asap while requested splitPoint equals with region's startKey
[ https://issues.apache.org/jira/browse/HBASE-9520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765339#comment-13765339 ] Hadoop QA commented on HBASE-9520: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602756/HBASE-9520.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7182//console This message is automatically generated. shortcut split asap while requested splitPoint equals with region's startKey Key: HBASE-9520 URL: https://issues.apache.org/jira/browse/HBASE-9520 Project: HBase Issue Type: Improvement Components: Client Reporter: Liang Xie Assignee: Liang Xie Priority: Minor Attachments: HBASE-9520.txt we can short-circuit this corner case entirely on the client side, without any traffic to the RS. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9328) Table web UI is corrupted sometimes
[ https://issues.apache.org/jira/browse/HBASE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765349#comment-13765349 ] Jean-Marc Spaggiari commented on HBASE-9328: Tested HBASE-9328-v2-0.94.patch locally:
[INFO] BUILD SUCCESS
[INFO] Total time: 22:10.565s
[INFO] Finished at: Thu Sep 12 07:27:35 EDT 2013
[INFO] Final Memory: 66M/528M
Table web UI is corrupted sometimes -- Key: HBASE-9328 URL: https://issues.apache.org/jira/browse/HBASE-9328 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.95.2, 0.94.11 Reporter: Jimmy Xiang Assignee: Jean-Marc Spaggiari Labels: web-ui Attachments: HBASE-9328-v0-trunk.patch, HBASE-9328-v1-trunk.patch, HBASE-9328-v2-0.94.patch, HBASE-9328-v2-trunk.patch, HBASE-9328-v3-trunk.patch, HBASE-9328-v4-trunk.patch, HBASE-9328-v4-trunk.patch, HBASE-9328-v5-trunk.patch, table.png The web UI page source is like below:
{noformat}
<h2>Table Attributes</h2>
<table class="table table-striped">
  <tr>
    <th>Attribute Name</th>
    <th>Value</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>Enabled</td>
    <td>true</td>
    <td>Is the table enabled</td>
  </tr>
  <tr>
    <td>Compaction</td>
    <td>
      <p><hr/></p>
{noformat}
Not sure if it is an HBase issue, or a network/browser issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-9518: - Attachment: HBASE-9518-v2.txt It seems to depend on another small change; I just ported the related code from our internal codebase. Let's try another QA run for v2. getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9518.txt, HBASE-9518-v2.txt Make the faked-key generation algorithm more aggressive. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
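For readers unfamiliar with getFakedKey(): a "faked key" is a shortened separator placed in a block index, where any key strictly greater than the last key of the previous block and no greater than the first key of the next block works, so shorter separators mean a smaller index. The sketch below illustrates the idea with a simple byte-wise truncation; it is an illustration of the technique, not the HBase implementation:

```java
public class FakedKey {
    // Compute a short separator k with left < k <= right for two sorted byte[] keys:
    // keep the shared prefix plus the first differing byte of the right key.
    // Illustrative sketch only, not HBase's getFakedKey().
    static byte[] fakedKey(byte[] left, byte[] right) {
        int i = 0;
        int min = Math.min(left.length, right.length);
        while (i < min && left[i] == right[i]) {
            i++;
        }
        // i is the first index where the keys differ (or left ran out).
        int len = Math.min(i + 1, right.length);
        byte[] faked = new byte[len];
        System.arraycopy(right, 0, faked, 0, len);
        return faked;
    }

    public static void main(String[] args) {
        // "abc" vs "axyz": shared prefix "a", so "ax" is a valid, shorter separator.
        System.out.println(new String(fakedKey("abc".getBytes(), "axyz".getBytes()))); // ax
    }
}
```

A "more aggressive" algorithm, as the issue title suggests, would truncate further in more cases, trading a little comparison work for smaller index blocks.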
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Attachment: 9521.v1.patch clean clearBufferOnFail behavior and deprecate it - Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9521.v1.patch
The behavior with clearBufferOnFail is very fishy.
{code}
/**
 * When you turn {@link #autoFlush} off, you should also consider the
 * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put}
 * requests will be retried on failure until successful. However, this can
 * pollute the writeBuffer and slow down batching performance. Additionally,
 * you may want to issue a number of Put requests and call
 * {@link #flushCommits()} as a barrier. In both use cases, consider setting
 * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()}
 * has been called, regardless of success.
 *
 * @param autoFlush Whether or not to enable 'auto-flush'.
 * @param clearBufferOnFail Whether to keep Put failures in the writeBuffer
 * @see #flushCommits
 */
public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) {
  this.autoFlush = autoFlush;
  this.clearBufferOnFail = autoFlush || clearBufferOnFail;
}
{code}
{code}
public void setAutoFlush(boolean autoFlush) {
  setAutoFlush(autoFlush, autoFlush);
}
{code}
So by default, an HTable has:
- autoFlush == true
- clearBufferOnFail == true
BUT, if you call setAutoFlush(false), you have:
- autoFlush == false
- clearBufferOnFail == false
So:
- you're setting two parameters instead of only one, without being told so.
- a side effect is that failed operations will be retried twice: once in the standard process, and once when the table is closed, as we're flushing the buffer again.
I would like to:
- deprecate clearBufferOnFail.
- deprecate setAutoFlush(boolean), to make things clear about what we're doing.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
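The coupling the issue complains about can be reproduced in isolation. The sketch below copies the two quoted setters into a standalone class (it mimics the HTable logic shown above, it is not HTable itself) to show that the one-argument call silently flips both flags:

```java
public class AutoFlushDemo {
    // Standalone mimic of the HTable flags quoted in the issue.
    boolean autoFlush = true;        // default in HTable
    boolean clearBufferOnFail = true; // default in HTable

    void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) {
        this.autoFlush = autoFlush;
        // The surprising line: while autoFlush is true, clearBufferOnFail is
        // forced to true regardless of the argument.
        this.clearBufferOnFail = autoFlush || clearBufferOnFail;
    }

    void setAutoFlush(boolean autoFlush) {
        // One-argument form changes BOTH flags without saying so.
        setAutoFlush(autoFlush, autoFlush);
    }

    public static void main(String[] args) {
        AutoFlushDemo t = new AutoFlushDemo();
        t.setAutoFlush(false);
        // A single call flipped both flags, exactly the hidden side effect described.
        System.out.println(t.autoFlush + " " + t.clearBufferOnFail); // false false
    }
}
```

This is why deprecating setAutoFlush(boolean) makes the contract honest: callers who want buffered writes with retained failures have to say so explicitly via the two-argument form.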
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Open (was: Patch Available)
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765405#comment-13765405 ] Hadoop QA commented on HBASE-9521: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602775/9521.v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 48 new or modified tests. {color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7186//console This message is automatically generated.
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Open (was: Patch Available)
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Patch Available (was: Open) Attachments: 9521.v1.patch, 9521.v1.patch
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Attachment: 9521.v1.patch
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Open (was: Patch Available)
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Attachment: 9521.v2.patch
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765409#comment-13765409 ] Hadoop QA commented on HBASE-9521: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602776/9521.v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 48 new or modified tests. {color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7187//console This message is automatically generated.
[jira] [Commented] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765428#comment-13765428 ] Hadoop QA commented on HBASE-9521: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602784/9521.v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 48 new or modified tests. {color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7188//console This message is automatically generated.
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Patch Available (was: Open) clean clearBufferOnFail behavior and deprecate it - Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9521.v1.patch, 9521.v1.patch, 9521.v2.patch, 9521.v3.patch The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one, without being told so. 
- a side effect is that failed operations will be tried twice: - one in the standard process - one in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - deprecate setAutoFlush(boolean), to make things clear about what we're doing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Attachment: 9521.v3.patch clean clearBufferOnFail behavior and deprecate it - Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9521.v1.patch, 9521.v1.patch, 9521.v2.patch, 9521.v3.patch The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one, without being told so. 
- a side effect is that failed operations will be tried twice: - one in the standard process - one in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - deprecate setAutoFlush(boolean), to make things clear about what we're doing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9521) clean clearBufferOnFail behavior and deprecate it
[ https://issues.apache.org/jira/browse/HBASE-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9521: --- Status: Open (was: Patch Available) clean clearBufferOnFail behavior and deprecate it - Key: HBASE-9521 URL: https://issues.apache.org/jira/browse/HBASE-9521 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9521.v1.patch, 9521.v1.patch, 9521.v2.patch, 9521.v3.patch The behavior with clearBufferOnFail is very fishy. {code} /** * When you turn {@link #autoFlush} off, you should also consider the * {@link #clearBufferOnFail} option. By default, asynchronous {@link Put} * requests will be retried on failure until successful. However, this can * pollute the writeBuffer and slow down batching performance. Additionally, * you may want to issue a number of Put requests and call * {@link #flushCommits()} as a barrier. In both use cases, consider setting * clearBufferOnFail to true to erase the buffer after {@link #flushCommits()} * has been called, regardless of success. * * @param autoFlush * Whether or not to enable 'auto-flush'. * @param clearBufferOnFail * Whether to keep Put failures in the writeBuffer * @see #flushCommits */ public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { this.autoFlush = autoFlush; this.clearBufferOnFail = autoFlush || clearBufferOnFail; } {code} {code} public void setAutoFlush(boolean autoFlush) { setAutoFlush(autoFlush, autoFlush); } {code} So by default, an HTable has - autoflush == true - clearBufferOnFail == true BUT, if you call setAutoFlush(false), you have - autoflush == false - clearBufferOnFail == false So: - you're setting two parameters instead of only one, without being told so. 
- a side effect is that failed operations will be tried twice: - one in the standard process - one in the table close, as we're flushing the buffer again I would like to: - deprecate clearBufferOnFail. - deprecate setAutoFlush(boolean), to make things clear about what we're doing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9375: Resolution: Fixed Fix Version/s: 0.98.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks Vandana. [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Fix For: 0.98.0 Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch, HBASE-9375_trunk.01.patch, HBASE-9375_trunk.01.patch In the hbase shell, when a user tries to get a value related to a column, hbase returns only the latest value. But using the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765468#comment-13765468 ] Ted Yu commented on HBASE-9518: --- {code} + System.out.println("midkey: " + midKV + " or: " + Bytes.toStringBinary(midkey)); + System.out.println("beforeMidKey: " + beforeMidKey); {code} Can you replace the above with LOG ? getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9518.txt, HBASE-9518-v2.txt Make the faked-key generation algorithm more aggressive. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9364) Get request with multiple columns returns partial results
[ https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765462#comment-13765462 ] Nick Dimiduk commented on HBASE-9364: - Trunk has shifted. [~avandana] would you mind rebasing? Thanks. Get request with multiple columns returns partial results - Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9364.00.patch, HBASE-9364.01.patch, hbase-9364_trunk.00.patch, HBASE-9364_trunk.01.patch, HBASE-9364_trunk.02.patch, HBASE-9364_trunk.02.patch When a GET request is issued for a table row with multiple columns and a column has an empty qualifier (like f1:), the results for the empty qualifier are ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
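The bug pattern described here — a column spec such as f1: carrying an empty (but valid) qualifier that downstream code silently drops — can be illustrated with a self-contained sketch. ColumnSpec and dropEmpty are illustrative names for this digest, not the actual REST code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative column-spec parser: "f1:" means family f1 with an EMPTY
// qualifier, which is still a valid column and must not be silently skipped.
public class ColumnSpec {
    final String family;
    final String qualifier; // may be empty, never null

    ColumnSpec(String spec) {
        int colon = spec.indexOf(':');
        family = colon < 0 ? spec : spec.substring(0, colon);
        qualifier = colon < 0 ? "" : spec.substring(colon + 1);
    }

    // Buggy filtering step of the kind the issue describes: it drops
    // empty qualifiers, so results for "f1:" vanish from the response.
    static List<ColumnSpec> dropEmpty(List<ColumnSpec> in) {
        List<ColumnSpec> out = new ArrayList<>();
        for (ColumnSpec c : in) {
            if (!c.qualifier.isEmpty()) out.add(c);
        }
        return out;
    }

    public static void main(String[] args) {
        ColumnSpec c = new ColumnSpec("f1:");
        System.out.println(c.family + " / [" + c.qualifier + "]"); // f1 / []
    }
}
```

The fix is to treat an empty qualifier as "the column with the empty qualifier", not as "no qualifier requested".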
[jira] [Updated] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9347: Resolution: Fixed Fix Version/s: 0.98.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks Vandana. Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Fix For: 0.98.0 Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch, HBASE-9347_trunk.02.patch, HBASE-9347_trunk.03.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.05.patch, HBASE-9347_trunk.05.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9518) getFakedKey() improvement
[ https://issues.apache.org/jira/browse/HBASE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765459#comment-13765459 ] Hadoop QA commented on HBASE-9518: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602773/HBASE-9518-v2.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7185//console This message is automatically generated. getFakedKey() improvement - Key: HBASE-9518 URL: https://issues.apache.org/jira/browse/HBASE-9518 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9518.txt, HBASE-9518-v2.txt make generating faked key algo more aggressive -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9462) HBaseAdmin#isTableEnabled() should throw exception for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9462: -- Attachment: 9462-0.96-v4.patch Patch for 0.96, running test suite now. HBaseAdmin#isTableEnabled() should throw exception for non-existent table - Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0, 0.96.0, 0.94.13 Attachments: 9462-0.94.txt, 9462-0.96-v4.patch, 9462.patch, 9462-trunk.txt, 9462-trunk-v2.txt, 9462-v2.patch, 9462-v3.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
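The fix pattern for HBASE-9462 — check existence first and throw, instead of answering a question about a table that is not there — can be sketched with a toy catalog. ToyAdmin and its nested exception are illustrative stand-ins for HBaseAdmin and HBase's TableNotFoundException:

```java
import java.util.HashMap;
import java.util.Map;

// Toy admin catalog illustrating the fix: isTableEnabled should fail loudly
// for an unknown table instead of returning true. Not the real HBaseAdmin.
public class ToyAdmin {
    static class TableNotFoundException extends RuntimeException {
        TableNotFoundException(String name) { super("Table not found: " + name); }
    }

    private final Map<String, Boolean> enabledByName = new HashMap<>();

    void createTable(String name)  { enabledByName.put(name, true); }
    void disableTable(String name) { enabledByName.put(name, false); }

    boolean isTableEnabled(String name) {
        Boolean enabled = enabledByName.get(name);
        if (enabled == null) {
            throw new TableNotFoundException(name); // was: silently returned true
        }
        return enabled;
    }

    public static void main(String[] args) {
        ToyAdmin admin = new ToyAdmin();
        admin.createTable("t1");
        System.out.println(admin.isTableEnabled("t1")); // true
    }
}
```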
[jira] [Commented] (HBASE-9101) Addendum to pluggable RpcScheduler
[ https://issues.apache.org/jira/browse/HBASE-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765585#comment-13765585 ] Hadoop QA commented on HBASE-9101: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602799/hbase-9101-v5.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 26 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7191//console This message is automatically generated. Addendum to pluggable RpcScheduler -- Key: HBASE-9101 URL: https://issues.apache.org/jira/browse/HBASE-9101 Project: HBase Issue Type: Improvement Components: IPC/RPC Reporter: Chao Shi Assignee: Chao Shi Fix For: 0.98.0 Attachments: hbase-9101.patch, hbase-9101-v2.patch, hbase-9101-v3.patch, hbase-9101-v4.patch, hbase-9101-v5.patch This patch fixes the review comments from [~stack] and a small fix: - Make RpcScheduler fully pluggable. One can write his/her own implementation and add it to classpath and specify it by config hbase.region.server.rpc.scheduler.factory.class. 
- Add unit tests and fix that RpcScheduler.stop is not called (discovered by tests) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()
[ https://issues.apache.org/jira/browse/HBASE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765587#comment-13765587 ] Ted Yu commented on HBASE-9519: --- There seems to be code duplication with updateCurrentBlock(). Can you extract the common code ? What encoding was used that led to the NPE ? Thanks fix NPE in EncodedScannerV2.getFirstKeyInBlock() Key: HBASE-9519 URL: https://issues.apache.org/jira/browse/HBASE-9519 Project: HBase Issue Type: Improvement Components: HFile Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9519.txt we observed a reproducible NPE while scanning a special table under a special condition in our IntegratedTesting scenario; it was fixed by applying the attached patch. org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159) at 
org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142) at org.apache.hadoop.hbase.io.HalfStoreFileReader.getLastKey(HalfStoreFileReader.java:267) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesKeyRangeFilter(StoreFile.java:1543) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:375) at org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:298) at org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:262) at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:149) at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2122) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3460) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1645) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1635) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1610) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2377) ... 5 more -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9516) Mark hbase-common classes missing @InterfaceAudience annotation as Private
[ https://issues.apache.org/jira/browse/HBASE-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765590#comment-13765590 ] stack commented on HBASE-9516: -- +1 I thought there was a convention that no audience annotation implied private? Mark hbase-common classes missing @InterfaceAudience annotation as Private -- Key: HBASE-9516 URL: https://issues.apache.org/jira/browse/HBASE-9516 Project: HBase Issue Type: Sub-task Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.98.0, 0.96.0 Attachments: hbase-9516.patch Files from hbase-common missing InterfaceAudience jon@swoop:~/proj/hbase-trunk/hbase-common/src/main/java$ grep -R -L InterfaceAudience . ./org/apache/hadoop/hbase/CellScannable.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java ./org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java ./org/apache/hadoop/hbase/types/package-info.java ./org/apache/hadoop/hbase/codec/KeyValueCodec.java ./org/apache/hadoop/hbase/codec/CellCodec.java ./org/apache/hadoop/hbase/codec/CodecException.java ./org/apache/hadoop/hbase/codec/BaseEncoder.java ./org/apache/hadoop/hbase/codec/Codec.java ./org/apache/hadoop/hbase/codec/BaseDecoder.java ./org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java ./org/apache/hadoop/hbase/util/test/LoadTestKVGenerator.java ./org/apache/hadoop/hbase/util/test/LoadTestDataGenerator.java ./org/apache/hadoop/hbase/util/CollectionUtils.java ./org/apache/hadoop/hbase/util/DrainBarrier.java ./org/apache/hadoop/hbase/util/ReflectionUtils.java ./org/apache/hadoop/hbase/util/Triple.java ./org/apache/hadoop/hbase/util/IterableUtils.java ./org/apache/hadoop/hbase/util/ArrayUtils.java ./org/apache/hadoop/hbase/util/KeyLocker.java -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9508) Restore some API mistakenly removed in client, mapred*, and common
[ https://issues.apache.org/jira/browse/HBASE-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765594#comment-13765594 ] stack commented on HBASE-9508: -- bq. The equivalent of removing them would be to just not commit the deprecated methods to trunk right? Yes. And then there is the suggestion that since 0.98 is just after 0.96, that we leave the deprecated in till the release after. And after removal, we'd then have getTable which returns a TableName and no getTableName method. Would make users wonder. At least if there is a getTableName still in place, there'd be no API hole (and a pointer to getTable). Restore some API mistakenly removed in client, mapred*, and common -- Key: HBASE-9508 URL: https://issues.apache.org/jira/browse/HBASE-9508 Project: HBase Issue Type: Bug Components: Usability Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9508.txt, 9508v2.txt Here is contrib to the API compatibility story. I went over Aleks' compatibility report and restored removed or overriden methods and constructors, stuff that was in 0.94 non-deprecated and removed in 0.96. This patch is not comprehensive because some removals cannot be restored as in those that used take Writables (more on this later from Jon). The changes included here are mostly restore of methods that took a table name as a byte array replaced by a TableName object. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
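The compatibility approach stack outlines — keep the removed accessor as a deprecated delegate so there is no API hole and its Javadoc points at the replacement — roughly looks like this. TableHandle and its String-based getTable are illustrative simplifications, not the real HBase signatures (where getTable returns a TableName):

```java
// Illustrative compatibility shim: the old accessor survives one release as a
// deprecated delegate, so callers get a pointer to the replacement instead of
// an API hole. Names mirror the discussion; this is not the real HBase code.
public class TableHandle {
    private final String tableName;

    public TableHandle(String tableName) { this.tableName = tableName; }

    /** New-style accessor (stands in for the TableName-returning getTable). */
    public String getTable() { return tableName; }

    /** @deprecated use {@link #getTable()} instead */
    @Deprecated
    public byte[] getTableName() {
        return getTable().getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        TableHandle h = new TableHandle("t1");
        System.out.println(h.getTableName().length); // 2
    }
}
```

Removing getTableName later leaves only getTable; until then, the deprecated delegate documents the migration path.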
[jira] [Commented] (HBASE-9515) Intermittent TestZKSecretWatcher#testKeyUpdate failure
[ https://issues.apache.org/jira/browse/HBASE-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765543#comment-13765543 ] Hadoop QA commented on HBASE-9515: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602731/9515-v1.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7190//console This message is automatically generated. 
Intermittent TestZKSecretWatcher#testKeyUpdate failure -- Key: HBASE-9515 URL: https://issues.apache.org/jira/browse/HBASE-9515 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Attachments: 9515-v1.txt From https://builds.apache.org/job/hbase-0.96-hadoop2/19/testReport/org.apache.hadoop.hbase.security.token/TestZKSecretWatcher/testKeyUpdate/ : {code} java.lang.AssertionError: expected null, but was:<AuthenticationKey[ id=2, expiration=9223372036854775807 ]> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotNull(Assert.java:664) at org.junit.Assert.assertNull(Assert.java:646) at org.junit.Assert.assertNull(Assert.java:656) at org.apache.hadoop.hbase.security.token.TestZKSecretWatcher.testKeyUpdate(TestZKSecretWatcher.java:149) {code} It failed here: {code} // verify that the expired key has been removed assertNull(KEY_SLAVE.getKey(key1.getKeyId())); {code} Normally key1 should be removed by AuthenticationTokenSecretManager#removeKey(): {code} allKeys.remove(keyId); {code} A search in the test output for 'Removing key ' yielded nothing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
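A common remedy for this kind of intermittent failure — asserting on state that another thread or process updates asynchronously — is to poll the condition with a timeout instead of asserting once. A generic helper in that spirit (not the actual test code, and not necessarily the fix in 9515-v1.txt):

```java
import java.util.function.BooleanSupplier;

// Generic "wait until condition holds or timeout expires" helper, the usual
// cure for asserting on asynchronously-updated state in a test.
public class WaitFor {
    static boolean waitFor(BooleanSupplier condition, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) return false;
            try {
                Thread.sleep(5); // back off briefly before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return condition.getAsBoolean();
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(waitFor(() -> true, 100)); // true
    }
}
```

The test would then wait for `KEY_SLAVE.getKey(...) == null` up to a deadline rather than asserting immediately after the leader removes the key.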
[jira] [Commented] (HBASE-9492) hdfs-site.xml is not excluded from the it-test jar
[ https://issues.apache.org/jira/browse/HBASE-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765656#comment-13765656 ] Hudson commented on HBASE-9492: --- FAILURE: Integrated in HBase-TRUNK #4496 (See [https://builds.apache.org/job/HBase-TRUNK/4496/]) HBASE-9492 hdfs-site.xml is not excluded from the it-test jar (mbertozzi: rev 1522600) * /hbase/trunk/pom.xml hdfs-site.xml is not excluded from the it-test jar -- Key: HBASE-9492 URL: https://issues.apache.org/jira/browse/HBASE-9492 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.98.0, 0.96.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9492-v0.patch If hbase-it-tests.jar is in the classpath before the hadoop conf dir, the user's hdfs-site.xml is ignored. A fix was already done with HBASE-8510, but that exclude was applied only to hbase-server. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765654#comment-13765654 ] Hudson commented on HBASE-9347: --- FAILURE: Integrated in HBase-TRUNK #4496 (See [https://builds.apache.org/job/HBase-TRUNK/4496/]) HBASE-9347 Support for enabling servlet filters for REST service (Vandana Ayyalasomayajula) (ndimiduk: rev 1522586) * /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/Constants.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Fix For: 0.98.0 Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch, HBASE-9347_trunk.02.patch, HBASE-9347_trunk.03.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.04.patch, HBASE-9347_trunk.05.patch, HBASE-9347_trunk.05.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765655#comment-13765655 ] Hudson commented on HBASE-9375: --- FAILURE: Integrated in HBase-TRUNK #4496 (See [https://builds.apache.org/job/HBase-TRUNK/4496/]) HBASE-9375 [REST] Querying row data gives all the available versions of a column (Vandana Ayyalasomayajula) (ndimiduk: rev 1522590) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Fix For: 0.98.0 Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch, HBASE-9375_trunk.01.patch, HBASE-9375_trunk.01.patch In the hbase shell, when a user tries to get a value related to a column, hbase returns only the latest value. But using the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9496) make sure HBase APIs are compatible between 0.94 and 0.96
[ https://issues.apache.org/jira/browse/HBASE-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765635#comment-13765635 ] Sergey Shelukhin commented on HBASE-9496: - Hive code, at least in 0.11 as far as I can see, creates Result with KVs in ResultWritable. Other places create empty Results. My point is that if there's any usage of these APIs, breaking them creates tons of pain for people using them - they have to create an HBase shim and everything that comes with that, 2 builds (4 with Hadoop 1/2), etc. Whereas for us it's not so hard to not break the APIs. make sure HBase APIs are compatible between 0.94 and 0.96 - Key: HBASE-9496 URL: https://issues.apache.org/jira/browse/HBASE-9496 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9496-v0-96.patch Follow-up for HBASE-9477. Some other methods are now different between 94 and 96 (Result::getColumnLatest, Put::get, anything that takes a collection of Cell, e.g. the Result ctor, Mutation::setFamilyMap, etc.). I am assuming things that accept Cell (Increment::add, Delete::addDeleteMarker) don't need to change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9517) Include only InterfaceAudiencePublic elements in generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-9517: -- Summary: Include only InterfaceAudiencePublic elements in generated Javadoc (was: Exclude Private elements from generated Javadoc) Include only InterfaceAudiencePublic elements in generated Javadoc -- Key: HBASE-9517 URL: https://issues.apache.org/jira/browse/HBASE-9517 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.96.0 We should generate two sets of javadoc a la HADOOP-6658 -- one for api users that excludes all InterfaceAudiencePrivate apis, and one for hbase core developers. Eventually when we tighten up the other modules we might add another for coproc developers, and other custom 3rd party pluggable elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765740#comment-13765740 ] Francis Liu commented on HBASE-9510: Since FSTableDescriptors.get() returns null for a non-existent table, shouldn't we be consistent and do the same? At least for the non-RPC APIs I believe it should return null to be consistent. Regarding the listTablesByNS API, I think it should throw non-existent namespace exceptions; otherwise it seems to be masking an error. On a slightly related note, can you change the delimiter used by tableNameFoo to use the constant? Seems I missed that :-). It's in createDoubleTest. Namespace operations should throw clean exceptions -- Key: HBASE-9510 URL: https://issues.apache.org/jira/browse/HBASE-9510 Project: HBase Issue Type: Bug Components: master Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: hbase-9510_v1.patch, hbase-9510_v2.patch Some of the namespace operations do not throw clean exceptions mimicking the table exceptions (TableNotFoundException, etc). 
For example: {code} hbase(main):007:0 describe_namespace 'non_existing_namespace' ERROR: java.io.IOException at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) at org.apache.hadoop.hbase.ipc.RpcServer$CallRunner.run(RpcServer.java:1816) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:165) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$0(SimpleRpcScheduler.java:161) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:113) at java.lang.Thread.run(Thread.java:680) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toProtoNamespaceDescriptor(ProtobufUtil.java:2138) at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3029) at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32904) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2079) ... 5 more {code} We can clean up the exceptions thrown from ns commands. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
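The cleanup the report asks for can be sketched as below. This is a hedged illustration, not HMaster's actual code: the NamespaceRegistry class and its NamespaceNotFoundException are hypothetical stand-ins showing the null check that would replace the NullPointerException during protobuf conversion.

```java
import java.util.HashMap;
import java.util.Map;

public class NamespaceRegistry {
    // Hypothetical exception type standing in for a clean, table-style
    // exception (analogous to TableNotFoundException).
    public static class NamespaceNotFoundException extends RuntimeException {
        public NamespaceNotFoundException(String name) {
            super("Namespace does not exist: " + name);
        }
    }

    private final Map<String, String> descriptors = new HashMap<>();

    public void create(String name, String descriptor) {
        descriptors.put(name, descriptor);
    }

    public String getNamespaceDescriptor(String name) {
        String desc = descriptors.get(name);
        if (desc == null) {
            // Fail here with a descriptive exception rather than passing
            // null along and triggering an NPE deeper in the RPC path.
            throw new NamespaceNotFoundException(name);
        }
        return desc;
    }
}
```

With this shape, the shell's describe_namespace on a missing namespace would surface a specific error message instead of a raw java.io.IOException wrapping an NPE.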
[jira] [Commented] (HBASE-9492) hdfs-site.xml is not excluded from the it-test jar
[ https://issues.apache.org/jira/browse/HBASE-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765727#comment-13765727 ] Hudson commented on HBASE-9492: --- SUCCESS: Integrated in hbase-0.96 #38 (See [https://builds.apache.org/job/hbase-0.96/38/]) HBASE-9492 hdfs-site.xml is not excluded from the it-test jar (mbertozzi: rev 1522599) * /hbase/branches/0.96/pom.xml hdfs-site.xml is not excluded from the it-test jar -- Key: HBASE-9492 URL: https://issues.apache.org/jira/browse/HBASE-9492 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.98.0, 0.96.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9492-v0.patch If hbase-it-tests.jar is on the classpath ahead of the Hadoop conf directory, the user's hdfs-site.xml is ignored. A fix was already made in HBASE-8510, but that exclude was applied only to hbase-server. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions
[ https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765703#comment-13765703 ] Jimmy Xiang commented on HBASE-9480: I am looking at the flaky test and will fix it soon. bq. I think you can safely revert the code in HRegionServer because the newly added following code resumes region transition after zk node deletion I was really worried about double-assignment. That's why I want to make sure the region is closed before we start to assign it somewhere else. I think it is the right thing to differentiate "not serving" from "still closing". We can fix new issues caused by this, right? bq. I'm wondering if it's possible that you can move the following code inside unassign itself immediately after I thought about this too. The reason I didn't do that is that sometimes we don't want to re-assign the region right away. For example, inside handleRegion when an unexpected RS_ZK_REGION_OPENED is received. Regions are unexpectedly made offline in certain failure conditions --- Key: HBASE-9480 URL: https://issues.apache.org/jira/browse/HBASE-9480 Project: HBase Issue Type: Bug Reporter: Devaraj Das Assignee: Jimmy Xiang Priority: Blocker Fix For: 0.96.0 Attachments: 9480-1.txt, trunk-9480.patch, trunk-9480_v1.1.patch, trunk-9480_v1.2.patch Came across this issue (HBASE-9338 test): 1. Client issues a request to move a region from ServerA to ServerB 2. ServerA is compacting that region and doesn't close the region immediately. In fact, it takes a while to complete the request. 3. The master in the meantime sends another close request. 4. ServerA sends it a NotServingRegionException 5. Master handles the exception, deletes the znode, and invokes regionOffline for the said region. 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is deleted. The region is permanently offline. 
There are potentially other situations where when a RegionServer is offline and the client asks for a region move off from that server, the master makes the region offline. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9462) HBaseAdmin#isTableEnabled() should throw exception for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9462: -- Status: Open (was: Patch Available) HBaseAdmin#isTableEnabled() should throw exception for non-existent table - Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0, 0.96.0, 0.94.13 Attachments: 9462-0.94.txt, 9462-0.96-v4.patch, 9462.patch, 9462-trunk.txt, 9462-trunk-v2.txt, 9462-v2.patch, 9462-v3.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
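The fix direction can be sketched as follows. This is an illustrative model (a map of table states, not HBaseAdmin itself; the class names are assumptions): existence is checked before the enabled flag is consulted, so a non-existent table throws instead of silently reading as enabled.

```java
import java.util.HashMap;
import java.util.Map;

public class TableStateModel {
    // Stand-in for HBase's TableNotFoundException.
    public static class TableNotFoundException extends RuntimeException {
        public TableNotFoundException(String table) {
            super("Table not found: " + table);
        }
    }

    // name -> enabled flag; an absent key means the table does not exist.
    private final Map<String, Boolean> tables = new HashMap<>();

    public void createTable(String name, boolean enabled) {
        tables.put(name, enabled);
    }

    public boolean isTableEnabled(String name) {
        Boolean enabled = tables.get(name);
        if (enabled == null) {
            // The bug: this path previously answered "true"; checking
            // existence first lets the caller fail loudly instead.
            throw new TableNotFoundException(name);
        }
        return enabled;
    }
}
```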
[jira] [Updated] (HBASE-9101) Addendum to pluggable RpcScheduler
[ https://issues.apache.org/jira/browse/HBASE-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Shi updated HBASE-9101: Attachment: hbase-9101-v5.patch @stack, I understood what you meant. I did it a little bit differently: I extracted an interface QosFunction (in the ipc package) and renamed the original one to QosFunctionImpl. Please feel free to hack it based on my patch if you like. Addendum to pluggable RpcScheduler -- Key: HBASE-9101 URL: https://issues.apache.org/jira/browse/HBASE-9101 Project: HBase Issue Type: Improvement Components: IPC/RPC Reporter: Chao Shi Assignee: Chao Shi Fix For: 0.98.0 Attachments: hbase-9101.patch, hbase-9101-v2.patch, hbase-9101-v3.patch, hbase-9101-v4.patch, hbase-9101-v5.patch This patch addresses the review comments from [~stack] and includes a small fix: - Make RpcScheduler fully pluggable. One can write his/her own implementation, add it to the classpath, and specify it via the config hbase.region.server.rpc.scheduler.factory.class. - Add unit tests and fix that RpcScheduler.stop was not called (discovered by the tests) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9522) Allow region opening even if creation of some HFile Readers creation fail.
[ https://issues.apache.org/jira/browse/HBASE-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-9522: -- Affects Version/s: 0.96.0 0.95.2 0.94.11 Fix Version/s: 0.96.0 0.94.12 Allow region opening even if creation of some HFile Readers creation fail. -- Key: HBASE-9522 URL: https://issues.apache.org/jira/browse/HBASE-9522 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2, 0.94.11, 0.96.0 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Minor Fix For: 0.94.12, 0.96.0 In some scenarios, when I am sure that the Reader creation while region opening would fail, it would be better if the region still opens with a warning. This would ensure that at least the data that is available can be read instead of failing the region assignment. Agreed that this should be based on a configuration. If you feel this is OK, I can provide a patch. The patch would just collect a list of store files for which the reader creation failed and log those files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9522) Allow region opening even if creation of some HFile Readers creation fail.
ramkrishna.s.vasudevan created HBASE-9522: - Summary: Allow region opening even if creation of some HFile Readers creation fail. Key: HBASE-9522 URL: https://issues.apache.org/jira/browse/HBASE-9522 Project: HBase Issue Type: Improvement Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Minor In some scenarios, when I am sure that the Reader creation while region opening would fail, it would be better if the region still opens with a warning. This would ensure that at least the data that is available can be read instead of failing the region assignment. Agreed that this should be based on a configuration. If you feel this is OK, I can provide a patch. The patch would just collect a list of store files for which the reader creation failed and log those files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
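The proposal can be sketched as below. ReaderFactory and the tolerateFailures flag are hypothetical stand-ins; the real patch would presumably gate this on a configuration property and log a warning per failed file.

```java
import java.util.ArrayList;
import java.util.List;

public class TolerantRegionOpen {
    // Hypothetical hook standing in for HFile reader creation.
    public interface ReaderFactory {
        void open(String storeFile) throws Exception;
    }

    // Try to open a reader for each store file. With tolerateFailures set,
    // collect and return the files whose readers failed (so they can be
    // logged) instead of failing the whole region open.
    public static List<String> openStores(List<String> storeFiles,
                                          ReaderFactory factory,
                                          boolean tolerateFailures) throws Exception {
        List<String> failed = new ArrayList<>();
        for (String file : storeFiles) {
            try {
                factory.open(file);
            } catch (Exception e) {
                if (!tolerateFailures) {
                    throw e; // current behavior: region assignment fails
                }
                failed.add(file); // proposed behavior: record and continue
            }
        }
        return failed;
    }
}
```

The trade-off is availability versus completeness: the region serves whatever data has readable store files, and the returned list makes the skipped files visible for follow-up.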
[jira] [Updated] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()
[ https://issues.apache.org/jira/browse/HBASE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9519: -- Issue Type: Bug (was: Improvement) fix NPE in EncodedScannerV2.getFirstKeyInBlock() Key: HBASE-9519 URL: https://issues.apache.org/jira/browse/HBASE-9519 Project: HBase Issue Type: Bug Components: HFile Affects Versions: 0.98.0, 0.96.1 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBASE-9519.txt We observed a reproducible NPE while scanning a special table under a special condition in our IntegratedTesting scenario; it was fixed by applying the attached patch. org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159) at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142) at org.apache.hadoop.hbase.io.HalfStoreFileReader.getLastKey(HalfStoreFileReader.java:267) at 
org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesKeyRangeFilter(StoreFile.java:1543) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:375) at org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:298) at org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:262) at org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:149) at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2122) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3460) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1645) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1635) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1610) at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2377) ... 5 more -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9522) Allow region opening even if creation of some HFile Readers fail.
[ https://issues.apache.org/jira/browse/HBASE-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-9522: -- Summary: Allow region opening even if creation of some HFile Readers fail. (was: Allow region opening even if creation of some HFile Readers creation fail.) Allow region opening even if creation of some HFile Readers fail. - Key: HBASE-9522 URL: https://issues.apache.org/jira/browse/HBASE-9522 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2, 0.94.11, 0.96.0 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Minor Fix For: 0.94.12, 0.96.0 In some scenarios, when I am sure that the Reader creation while region opening would fail, it would be better if the region still opens with a warning. This would ensure that at least the data that is available can be read instead of failing the region assignment. Agreed that this should be based on a configuration. If you feel this is OK, I can provide a patch. The patch would just collect a list of store files for which the reader creation failed and log those files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9462) HBaseAdmin#isTableEnabled() should throw exception for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765563#comment-13765563 ] Ted Yu commented on HBASE-9462: --- 0.96 test suite passed: {code} [INFO] HBase . SUCCESS [1.794s] [INFO] HBase - Common SUCCESS [18.220s] [INFO] HBase - Protocol .. SUCCESS [0.588s] [INFO] HBase - Client SUCCESS [18.824s] [INFO] HBase - Hadoop Compatibility .. SUCCESS [5.102s] [INFO] HBase - Hadoop Two Compatibility .. SUCCESS [1.763s] [INFO] HBase - Prefix Tree ... SUCCESS [2.706s] [INFO] HBase - Server SUCCESS [51:37.547s] [INFO] HBase - Integration Tests . SUCCESS [1.192s] [INFO] HBase - Examples .. SUCCESS [0.957s] [INFO] HBase - Assembly .. SUCCESS [0.931s] [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 52:30.228s [INFO] Finished at: Thu Sep 12 15:56:40 UTC 2013 [INFO] Final Memory: 47M/631M {code} HBaseAdmin#isTableEnabled() should throw exception for non-existent table - Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0, 0.96.0, 0.94.13 Attachments: 9462-0.94.txt, 9462-0.96-v4.patch, 9462.patch, 9462-trunk.txt, 9462-trunk-v2.txt, 9462-v2.patch, 9462-v3.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions
[ https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9480: --- Status: Open (was: Patch Available) Regions are unexpectedly made offline in certain failure conditions --- Key: HBASE-9480 URL: https://issues.apache.org/jira/browse/HBASE-9480 Project: HBase Issue Type: Bug Reporter: Devaraj Das Assignee: Jimmy Xiang Priority: Blocker Fix For: 0.96.0 Attachments: 9480-1.txt, trunk-9480.patch, trunk-9480_v1.1.patch, trunk-9480_v1.2.patch Came across this issue (HBASE-9338 test): 1. Client issues a request to move a region from ServerA to ServerB 2. ServerA is compacting that region and doesn't close region immediately. In fact, it takes a while to complete the request. 3. The master in the meantime, sends another close request. 4. ServerA sends it a NotServingRegionException 5. Master handles the exception, deletes the znode, and invokes regionOffline for the said region. 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is deleted. The region is permanently offline. There are potentially other situations where when a RegionServer is offline and the client asks for a region move off from that server, the master makes the region offline. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8496) Implement tags and the internals of how a tag should look like
[ https://issues.apache.org/jira/browse/HBASE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765700#comment-13765700 ] ramkrishna.s.vasudevan commented on HBASE-8496: --- @Ted Thanks for the reviews. Once your review is done will update the patch. Implement tags and the internals of how a tag should look like -- Key: HBASE-8496 URL: https://issues.apache.org/jira/browse/HBASE-8496 Project: HBase Issue Type: New Feature Affects Versions: 0.98.0, 0.95.2 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Critical Attachments: Comparison.pdf, HBASE-8496_2.patch, HBASE-8496.patch, Tag design.pdf, Tag design_updated.pdf, Tag_In_KV_Buffer_For_reference.patch The intent of this JIRA comes from HBASE-7897. This would help us to decide on the structure and format of how the tags should look like. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0
[ https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765614#comment-13765614 ] Hadoop QA commented on HBASE-9338: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602803/HBASE-9338-1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7192//console This message is automatically generated. Test Big Linked List fails on Hadoop 2.1.0 -- Key: HBASE-9338 URL: https://issues.apache.org/jira/browse/HBASE-9338 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9338-0.patch, HBASE-9338-1.patch, HBASE-9338-TESTING-2.patch, HBASE-9338-TESTING-3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9517) Include only InterfaceAudiencePublic elements in generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765731#comment-13765731 ] Jonathan Hsieh commented on HBASE-9517: --- This isn't perfect (there are still quite a few empty packages in the javadoc) but it is now much easier to see what is and isn't exposed. Include only InterfaceAudiencePublic elements in generated Javadoc -- Key: HBASE-9517 URL: https://issues.apache.org/jira/browse/HBASE-9517 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.98.0, 0.96.0 Attachments: hbase-9517.patch We should generate two sets of javadoc a la HADOOP-6658 -- one for api users that excludes all InterfaceAudiencePrivate apis, and one for hbase core developers. Eventually when we tighten up the other modules we might add another for coproc developers, and other custom 3rd party pluggable elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9517) Include only InterfaceAudiencePublic elements in generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13765801#comment-13765801 ] Hadoop QA commented on HBASE-9517: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602842/hbase-9517.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+0 tests included{color}. The patch appears to be a documentation patch that doesn't require tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7194//console
This message is automatically generated.
[jira] [Commented] (HBASE-9517) Include only InterfaceAudiencePublic elements in generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765715#comment-13765715 ] Jonathan Hsieh commented on HBASE-9517: --- Also, instead of excluding private elements, I use the IncludePublicAnnotationsStandardDoclet. This makes any classes that are not properly annotated default to private.
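To illustrate the comment above, here is a sketch of how such a doclet can be wired into a Maven build. This is a hypothetical fragment, not the attached patch: the plugin coordinates and the `${hadoop.version}` property are assumptions, though IncludePublicAnnotationsStandardDoclet itself ships in hadoop-annotations (from HADOOP-6658).

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- Only classes annotated @InterfaceAudience.Public are documented;
         anything unannotated defaults to private and is excluded. -->
    <doclet>org.apache.hadoop.classification.tools.IncludePublicAnnotationsStandardDoclet</doclet>
    <docletArtifact>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
      <version>${hadoop.version}</version>
    </docletArtifact>
    <useStandardDocletOptions>true</useStandardDocletOptions>
  </configuration>
</plugin>
```

Because the doclet wraps the standard one, the usual javadoc options (links, grouping, etc.) still apply to the reduced, user-facing API set.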
[jira] [Updated] (HBASE-9517) Include only InterfaceAudiencePublic elements in generated Javadoc
[ https://issues.apache.org/jira/browse/HBASE-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-9517: -- Fix Version/s: 0.98.0
[jira] [Updated] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9343: Attachment: HBASE-9343_trunk.02.patch rebased on open source Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added, and all the scan parameters are specified as query params. The scan parameters are:
startrow - the start row for the scan.
endrow - the end row for the scan.
columns - the columns to scan.
starttime, endtime - to retrieve only columns within a specific range of version timestamps; both start and end time must be specified.
maxversions - to limit the number of versions of each column to be returned.
batchsize - to limit the maximum number of values returned for each call to next().
limit - the number of rows to return in the scan operation.
More on the start row, end row, and limit parameters:
1. If start row, end row, and limit are not specified, the whole table will be scanned.
2. If start row and limit (say N) are specified, the scan will return N rows starting from the specified start row.
3. If only the limit parameter is specified, the scan will return N rows from the start of the table.
4. If limit and end row are specified, the scan will return N rows from the start of the table up to the end row. If the end row is reached before N rows (say M, with M < N), then M rows will be returned to the user.
5. If start row, end row, and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
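The start row / end row / limit rules above can be sketched as a small model. This is a hypothetical helper for illustration only, not HBase code; it assumes row keys are sorted strings, with the usual HBase convention of an inclusive start row and exclusive end row.

```python
def scan_rows(table_rows, startrow=None, endrow=None, limit=None):
    """Model of the stateless-scan row-selection rules described above.

    table_rows: a sorted list of row keys (stands in for the table).
    startrow is inclusive, endrow is exclusive; limit caps the result size.
    """
    selected = []
    for row in table_rows:
        if startrow is not None and row < startrow:
            continue  # not yet at the start row
        if endrow is not None and row >= endrow:
            break     # rule 4/5: end row reached before the limit
        selected.append(row)
        if limit is not None and len(selected) == limit:
            break     # rule 2/3: limit reached first
    return selected

rows = ["r1", "r2", "r3", "r4", "r5"]
scan_rows(rows)                                   # rule 1: whole table
scan_rows(rows, startrow="r2", limit=2)           # rule 2: N rows from start row
scan_rows(rows, endrow="r3", limit=5)             # rule 4: end row wins, M < N rows
```

Whichever of the end row or the limit is hit first terminates the scan, which is exactly the M-versus-N behavior spelled out in rules 4 and 5.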
[jira] [Commented] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0
[ https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13765836#comment-13765836 ] Elliott Clark commented on HBASE-9338: -- So the patch changes the slow deterministic monkey. It changes the lists of actions so that MoveRandomRegionOfTableAction and MoveRegionsOfTableAction can't run in parallel with region servers being killed. Then it adds extra sleep in those actions so that there's even more buffer. Test Big Linked List fails on Hadoop 2.1.0 -- Key: HBASE-9338 URL: https://issues.apache.org/jira/browse/HBASE-9338 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9338-0.patch, HBASE-9338-1.patch, HBASE-9338-TESTING-2.patch, HBASE-9338-TESTING-3.patch
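The scheduling constraint described in the comment above can be sketched abstractly: region-move actions and server-kill actions are made mutually exclusive, and the move actions sleep a little afterwards for extra buffer. This is a hypothetical Python model of that idea, not the actual ChaosMonkey code; the class and method names are invented for illustration.

```python
import threading
import time

class ActionScheduler:
    """Model of the monkey tweak: move actions and kill actions share a
    lock, so a region move can never overlap a region-server kill."""

    def __init__(self):
        self._lock = threading.Lock()
        self.log = []  # ("start"|"end", action_name) events, in order

    def run_move_action(self, name, pause=0.01):
        with self._lock:               # excluded while a kill is running
            self.log.append(("start", name))
            time.sleep(pause)          # extra sleep for additional buffer
            self.log.append(("end", name))

    def run_kill_action(self, name):
        with self._lock:               # excluded while a move is running
            self.log.append(("start", name))
            self.log.append(("end", name))
```

Even if a move and a kill are fired from concurrent threads, the lock forces the log to contain two complete, non-interleaved start/end pairs, which is the property the patch is after.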