[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly
[ https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13254301#comment-13254301 ]

Ferdy Galema commented on HBASE-2214:
-------------------------------------

Ok, thanks for your comments. I just submitted a new patch to the review board: https://reviews.apache.org/r/4726/ What's the protocol here: can I add 'hbase' to the reviewers group, or should I just add individual reviewers?

Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly
-------------------------------------------------------------------------------------

                Key: HBASE-2214
                URL: https://issues.apache.org/jira/browse/HBASE-2214
            Project: HBase
         Issue Type: New Feature
           Reporter: stack
           Assignee: Ferdy Galema
            Fix For: 0.94.1
        Attachments: HBASE-2214-0.94.txt, HBASE-2214_with_broken_TestShell.txt

The notion of setting a size to return, rather than a row count, to specify how much a scanner should return in each cycle was raised over in HBASE-1996. It is a good one: it makes HBase requests regular in size even though the data under them may vary. HBASE-1996 was committed, but that patch was constrained by the need to leave the RPC interface unchanged. This issue is about doing HBASE-1996 for 0.21 in a clean, unconstrained way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
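The size-based batching idea from the description can be sketched in plain Java. This is an illustrative model, not the actual HBase scanner code: `RowBatcher`, `rowsForBatch`, and the byte budget are assumed names. The point is that the batch boundary is decided by accumulated bytes rather than a fixed row count, so a batch of fat rows stays as small as a batch of thin rows.

```java
// Hypothetical sketch: a scanner batch ends once its accumulated byte
// size crosses a budget, instead of after a fixed number of rows.
class RowBatcher {
    // Returns how many of the given rows fit in one batch of at most
    // maxBatchBytes, always taking at least one row so progress is made
    // even when a single row exceeds the budget.
    static int rowsForBatch(long[] rowSizes, long maxBatchBytes) {
        long total = 0;
        int count = 0;
        for (long size : rowSizes) {
            if (count > 0 && total + size > maxBatchBytes) {
                break; // budget exhausted; remaining rows go in the next batch
            }
            total += size;
            count++;
        }
        return count;
    }
}
```

With a 100-byte budget, four 40-byte rows yield a batch of two rows, while two 500-byte rows yield a batch of one: the response size stays roughly constant regardless of row fatness.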
[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly
[ https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13254330#comment-13254330 ]

Ferdy Galema commented on HBASE-2214:
-------------------------------------

Secondly, I'd like the uploaded diff to be compared against the 0.94 branch. How can I specify this? It seems it is always compared against trunk.
[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly
[ https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13254338#comment-13254338 ]

Ferdy Galema commented on HBASE-2214:
-------------------------------------

Sure, I will do that. Some patch segments do not apply to trunk, so I will first create a patch for trunk.
[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly
[ https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13254339#comment-13254339 ]

Ferdy Galema commented on HBASE-2214:
-------------------------------------

(Never mind the latest comment. It seems that with a fuzz factor it does apply.)
[jira] [Commented] (HBASE-5607) Implement scanner caching throttling to prevent too big responses
[ https://issues.apache.org/jira/browse/HBASE-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240412#comment-13240412 ]

Ferdy Galema commented on HBASE-5607:
-------------------------------------

Ok, sure. I'm currently not into HBase development at all, but I'm willing to give it a shot.

Implement scanner caching throttling to prevent too big responses
-----------------------------------------------------------------

                Key: HBASE-5607
                URL: https://issues.apache.org/jira/browse/HBASE-5607
            Project: HBase
         Issue Type: Improvement
           Reporter: Ferdy Galema

When a misconfigured client retrieves fat rows with the scanner caching value set too high, there is a big chance the regionserver cannot handle the response buffers. (See the log excerpt below.) Also see the mailing list thread: http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/24819

This issue is for tracking a solution that throttles the scanner caching value when the response buffers get too big. A few possible solutions:

a) If a response is (repeatedly) over 100MB (configurable), reduce the scanner caching to half its size. (In either the server or the client.)
b) Introduce a property that defines a fixed (target) response size, instead of defining the number of rows to cache.
2012-03-20 07:57:40,092 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020, responseTooLarge for: next(4438820558358059204, 1000) from 172.23.122.15:50218: Size: 105.0m
2012-03-20 07:57:53,226 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020, responseTooLarge for: next(-7429189123174849941, 1000) from 172.23.122.15:50218: Size: 214.4m
2012-03-20 07:57:57,839 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020, responseTooLarge for: next(-7429189123174849941, 1000) from 172.23.122.15:50218: Size: 103.2m
2012-03-20 07:57:59,442 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020, responseTooLarge for: next(-7429189123174849941, 1000) from 172.23.122.15:50218: Size: 101.8m
2012-03-20 07:58:20,025 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020, responseTooLarge for: next(9033159548564260857, 1000) from 172.23.122.15:50218: Size: 107.2m
2012-03-20 07:58:27,273 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020, responseTooLarge for: next(9033159548564260857, 1000) from 172.23.122.15:50218: Size: 100.1m
2012-03-20 07:58:52,783 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020, responseTooLarge for: next(-8611621895979000997, 1000) from 172.23.122.15:50218: Size: 101.7m
2012-03-20 07:59:02,541 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020, responseTooLarge for: next(-511305750191148153, 1000) from 172.23.122.15:50218: Size: 120.9m
2012-03-20 07:59:25,346 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020, responseTooLarge for: next(1570572538285935733, 1000) from 172.23.122.15:50218: Size: 107.8m
2012-03-20 07:59:46,805 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020, responseTooLarge for: next(-727080724379055435, 1000) from 172.23.122.15:50218: Size: 102.7m
2012-03-20 08:00:00,138 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020, responseTooLarge for: next(-3701270248575643714, 1000) from 172.23.122.15:50218: Size: 122.1m
2012-03-20 08:00:21,232 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 157.5m
2012-03-20 08:00:23,199 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 160.7m
2012-03-20 08:00:28,174 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 160.8m
2012-03-20 08:00:32,643 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 182.4m
2012-03-20 08:00:36,826 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 237.2m
2012-03-20 08:00:40,850 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 212.7m
2012-03-20 08:00:44,736 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020, responseTooLarge for: next(5831907615409186602, 1000) from 172.23.122.15:50218: Size: 232.9m
2012-03-20 08:00:49,471 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020,
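Option (a) from the description above could be sketched in plain Java. This is an illustrative model only: `CachingThrottle`, `adjust`, and the 100MB constant are assumed names, not part of HBase. Each oversized response halves the effective caching value until responses fit.

```java
// Hypothetical sketch of option (a): if a response exceeds a configurable
// cap, halve the effective scanner-caching value for subsequent next() calls.
class CachingThrottle {
    static final long MAX_RESPONSE_BYTES = 100L * 1024 * 1024; // 100MB cap, configurable

    private int caching;

    CachingThrottle(int initialCaching) {
        this.caching = initialCaching;
    }

    // Called after each next() response; shrinks the caching value when
    // the response was too large, never dropping below one row.
    int adjust(long lastResponseBytes) {
        if (lastResponseBytes > MAX_RESPONSE_BYTES) {
            caching = Math.max(1, caching / 2);
        }
        return caching;
    }
}
```

Starting from the caching value of 1000 seen in the `next(..., 1000)` calls in the log above, the first oversized response would drop it to 500 rows and the second to 250, converging on a caching value whose responses stay under the cap.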
[jira] [Commented] (HBASE-5607) Implement scanner caching throttling to prevent too big responses
[ https://issues.apache.org/jira/browse/HBASE-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239784#comment-13239784 ]

Ferdy Galema commented on HBASE-5607:
-------------------------------------

I agree that HBASE-2214 makes this issue considerably less important, perhaps even obsolete. It depends on how 2214 will be implemented. If it becomes a default setting, then this issue does not have to be fixed; a user disabling the response cap obviously has good reasons to do so. However, if a user has to set it explicitly for it to take effect, then they might not think about it and simply set a caching value based on the number of rows, possibly causing too large a response. Throttling would help in this case. If you think the latter is not a real problem, then this issue can be closed.
[jira] [Commented] (HBASE-5607) Implement scanner caching throttling to prevent too big responses
[ https://issues.apache.org/jira/browse/HBASE-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13234407#comment-13234407 ]

Ferdy Galema commented on HBASE-5607:
-------------------------------------

HBASE-2214 is a follow-up on HBASE-1996. Let's keep HBASE-2214 for option b. This issue is for creating an option that throttles a scan regardless of how scanner caching is configured (either as a number of rows or as a byte size).