[jira] [Resolved] (HBASE-24705) MetaFixer#fixHoles() does not include the case for read replicas (i.e, replica regions are not created)
[ https://issues.apache.org/jira/browse/HBASE-24705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Huaxiang Sun resolved HBASE-24705.
----------------------------------
    Fix Version/s: 2.4.0
                   2.3.1
                   3.0.0-alpha-1
       Resolution: Fixed

> MetaFixer#fixHoles() does not include the case for read replicas (i.e,
> replica regions are not created)
> ----------------------------------------------------------------------
>
>                 Key: HBASE-24705
>                 URL: https://issues.apache.org/jira/browse/HBASE-24705
>             Project: HBase
>          Issue Type: Bug
>          Components: read replicas
>    Affects Versions: 2.3.0
>            Reporter: Huaxiang Sun
>            Assignee: Huaxiang Sun
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
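Per the summary, the fix makes MetaFixer#fixHoles() create region entries for every configured replica when plugging a hole, not just the primary. A minimal toy model of that idea (the Region type and regionsForHole helper here are hypothetical illustrations, not the actual MetaFixer API):

```java
// Toy model: when plugging a meta hole for a table with region
// replication > 1, one region entry per replica must be created,
// not only the primary (replicaId 0).
import java.util.ArrayList;
import java.util.List;

public class HoleFixer {
  static class Region {
    final String name;
    final int replicaId;
    Region(String name, int replicaId) {
      this.name = name;
      this.replicaId = replicaId;
    }
  }

  // Build the primary plus (regionReplication - 1) replica entries.
  public static List<Region> regionsForHole(String name, int regionReplication) {
    List<Region> out = new ArrayList<>();
    for (int replicaId = 0; replicaId < regionReplication; replicaId++) {
      out.add(new Region(name, replicaId));
    }
    return out;
  }

  public static void main(String[] args) {
    // With region replication = 3, the hole needs the primary plus 2 replicas.
    System.out.println(regionsForHole("t1,aaa,bbb", 3).size()); // prints 3
  }
}
```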
[jira] [Created] (HBASE-24744) enable_table_replication command granting permissions on table automatically for the user
Dhanalakshmi Periyalwar created HBASE-24744:
--------------------------------------------
             Summary: enable_table_replication command granting permissions on table automatically for the user
                 Key: HBASE-24744
                 URL: https://issues.apache.org/jira/browse/HBASE-24744
             Project: HBase
          Issue Type: Bug
          Components: acl, security
    Affects Versions: 2.1.0
            Reporter: Dhanalakshmi Periyalwar

When enabling replication for a user table as the hbase user with the "enable_table_replication" command, a permission is granted automatically to the hbase user and gets listed in hbase:acl. The same behaviour applies to other users too.

Steps to reproduce:

hbase(main):001:0> whoami
dhana (auth:SIMPLE)
    groups: dhana
Took 0.0214 seconds
hbase(main):002:0> list
TABLE
0 row(s)
Took 0.4268 seconds
=> []
hbase(main):003:0> create 'mytab','f1'
Created table mytab
Took 0.7834 seconds
=> Hbase::Table - mytab
hbase(main):004:0> describe 'mytab'
Table mytab is ENABLED
mytab
COLUMN FAMILIES DESCRIPTION
{NAME => 'f1', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
1 row(s)
Took 0.1319 seconds
hbase(main):005:0> scan 'hbase:acl'
ROW         COLUMN+CELL
 hbase:acl  column=l:dhana, timestamp=1593669605273, value=RWXCA
 mytab      column=l:dhana, timestamp=1593673200269, value=RWXCA
2 row(s)
Took 0.0969 seconds
hbase(main):006:0> exit

hbase(main):001:0> whoami
hbase (auth:SIMPLE)
    groups: hbase
Took 0.0271 seconds
hbase(main):002:0> scan 'hbase:acl'
ROW         COLUMN+CELL
 hbase:acl  column=l:dhana, timestamp=1593669605273, value=RWXCA
 mytab      column=l:dhana, timestamp=1593673200269, value=RWXCA
2 row(s)
Took 0.5223 seconds
hbase(main):003:0> enable_table_replication 'mytab'
The replication of table 'mytab' successfully enabled
Took 16.0711 seconds
hbase(main):004:0> scan 'hbase:acl'
ROW         COLUMN+CELL
 hbase:acl  column=l:dhana, timestamp=1593669605273, value=RWXCA
 mytab      column=l:dhana, timestamp=1593673200269, value=RWXCA
 mytab      column=l:hbase, timestamp=1593673390976, value=RWXCA   <--- automatic grant
2 row(s)
Took 0.0089 seconds
[jira] [Created] (HBASE-24743) Reject to add a peer which replicate to itself earlier
Guanghao Zhang created HBASE-24743:
-----------------------------------
             Summary: Reject to add a peer which replicate to itself earlier
                 Key: HBASE-24743
                 URL: https://issues.apache.org/jira/browse/HBASE-24743
             Project: HBase
          Issue Type: Improvement
            Reporter: Guanghao Zhang

Currently there is one check in the ReplicationSource#initialize method:

{code:java}
// In rare case, zookeeper setting may be messed up. That leads to the incorrect
// peerClusterId value, which is the same as the source clusterId
if (clusterId.equals(peerClusterId) && !replicationEndpoint.canReplicateToSameCluster()) {
  this.terminate("ClusterId " + clusterId + " is replicating to itself: peerClusterId "
      + peerClusterId + " which is not allowed by ReplicationEndpoint:"
      + replicationEndpoint.getClass().getName(), null, false);
  this.manager.removeSource(this);
  return;
}
{code}

This check should be moved to AddPeerProcedure's precheck, so the invalid peer is rejected earlier.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
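Hoisted into a precheck, the validation might look like the following self-contained sketch (the class and method names are hypothetical, not the actual AddPeerProcedure API; the condition mirrors the ReplicationSource check quoted above):

```java
// Hypothetical precheck sketch: reject a self-replicating peer before any
// replication source is ever started, instead of terminating the source later.
public class PeerPrecheck {

  // Returns null if the peer is acceptable, or a rejection message otherwise.
  public static String validatePeer(String localClusterId, String peerClusterId,
      boolean endpointCanReplicateToSameCluster) {
    if (localClusterId.equals(peerClusterId) && !endpointCanReplicateToSameCluster) {
      return "ClusterId " + localClusterId + " would replicate to itself: peerClusterId "
          + peerClusterId;
    }
    return null; // peer is fine
  }

  public static void main(String[] args) {
    // Same cluster id on both sides, endpoint disallows it: rejected.
    System.out.println(validatePeer("cluster-a", "cluster-a", false));
    // Different clusters: accepted (null).
    System.out.println(validatePeer("cluster-a", "cluster-b", false));
  }
}
```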
Re: HBase 2 slower than HBase 1?
This is a good finding, nice work! I added a comment on HBASE-24742 that
mentions HBASE-24637 on the off chance they are related, although I suspect
more changes are implicated by the 2.x regression.

On Tue, Jul 14, 2020 at 5:53 PM Bharath Vissapragada wrote:
> FYI, we filed this today https://issues.apache.org/jira/browse/HBASE-24742.
> We ran into a similar regression when upgrading from 1.3 based branch to
> 1.6 based branch. After some profiling and code analysis we narrowed down
> the code paths.
Re: HBase 2 slower than HBase 1?
I went out on vacation (and am still out) before tracking this down. If you
are waiting for me to make more progress with HBASE-24637, I can do that in
a couple of weeks. Anyone is welcome to step in sooner.

On Tue, Jul 14, 2020 at 11:38 AM Josh Elser wrote:
> Wow. Great stuff, Andrew!
>
> Thank you for compiling and posting it all here. I can only imagine how
> time-consuming this was.
Re: HBase 2 slower than HBase 1?
FYI, we filed this today https://issues.apache.org/jira/browse/HBASE-24742.
We ran into a similar regression when upgrading from 1.3 based branch to
1.6 based branch. After some profiling and code analysis we narrowed down
the code paths.

On Tue, Jul 14, 2020 at 11:38 AM Josh Elser wrote:
> Wow. Great stuff, Andrew!
>
> Thank you for compiling and posting it all here. I can only imagine how
> time-consuming this was.
[jira] [Created] (HBASE-24742) Improve performance of SKIP vs SEEK logic
Lars Hofhansl created HBASE-24742:
----------------------------------
             Summary: Improve performance of SKIP vs SEEK logic
                 Key: HBASE-24742
                 URL: https://issues.apache.org/jira/browse/HBASE-24742
             Project: HBase
          Issue Type: Bug
            Reporter: Lars Hofhansl

In our testing of HBase 1.3 against the current tip of branch-1 we saw a 30% slowdown in scanning scenarios. We tracked it back to HBASE-17958 and HBASE-19863. Both add comparisons to one of the tightest loops HBase has. [~bharathv]
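For readers unfamiliar with the SKIP vs SEEK trade-off the issue title refers to, here is a toy cost model, not HBase code: skipping advances the scanner one cell at a time, while a (re)seek pays a fixed repositioning cost but jumps straight to the target. The cost constants below are invented for illustration; the point is that this decision sits in the scan hot loop, so each extra comparison added to it is paid per cell.

```java
// Toy model of the SKIP vs SEEK decision made per cell during a scan.
// SKIP_COST_PER_CELL and SEEK_FIXED_COST are assumed relative costs.
public class SkipVsSeek {

  static final double SKIP_COST_PER_CELL = 1.0; // assumed
  static final double SEEK_FIXED_COST = 25.0;   // assumed

  // Choose SKIP when stepping over the intervening cells is cheaper
  // than one reseek of the store scanner.
  public static String choose(int cellsToPassOver) {
    double skipCost = cellsToPassOver * SKIP_COST_PER_CELL;
    return skipCost <= SEEK_FIXED_COST ? "SKIP" : "SEEK";
  }

  public static void main(String[] args) {
    System.out.println(choose(3));    // few intervening cells: SKIP wins
    System.out.println(choose(1000)); // many intervening cells: SEEK wins
  }
}
```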
[jira] [Created] (HBASE-24741) Preserve mvn site output in precommit jobs
Nick Dimiduk created HBASE-24741:
---------------------------------
             Summary: Preserve mvn site output in precommit jobs
                 Key: HBASE-24741
                 URL: https://issues.apache.org/jira/browse/HBASE-24741
             Project: HBase
          Issue Type: Task
          Components: build
            Reporter: Nick Dimiduk

It would be nice to see the result of site changes in PRs. This probably balloons the size of archived builds, but we don't (usually) keep PR builds around very long.
[jira] [Created] (HBASE-24740) Enable journal logging for HBase snapshot operation
Sandeep Guggilam created HBASE-24740:
-------------------------------------
             Summary: Enable journal logging for HBase snapshot operation
                 Key: HBASE-24740
                 URL: https://issues.apache.org/jira/browse/HBASE-24740
             Project: HBase
          Issue Type: Improvement
            Reporter: Sandeep Guggilam
            Assignee: Sandeep Guggilam

The HBase snapshot operation contains multiple steps: the actual snapshot creation, a consolidation phase (reading region manifests from HDFS), and a verifier phase (validating the consolidated manifest against the actual number of regions for the table).

Sometimes the operation takes a long time in one of these phases, and we don't know which one unless we happen to capture a thread dump at that very moment. Journal logging would give us more insight into the time taken by each phase.
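The kind of journal the issue asks for can be sketched as follows. This is a self-contained stand-in, not HBase's actual journal/MonitoredTask API: record a timestamp as each phase completes, then log the per-phase durations.

```java
// Minimal phase journal sketch: record when each phase of a multi-step
// operation finishes, so per-phase durations can be logged afterwards.
import java.util.ArrayList;
import java.util.List;

public class Journal {
  private final List<String> phases = new ArrayList<>();
  private final List<Long> stamps = new ArrayList<>();
  private final long start = System.nanoTime();

  public void record(String phase) {
    phases.add(phase);
    stamps.add(System.nanoTime());
  }

  // One line per phase, with the time spent since the previous phase ended.
  public List<String> summary() {
    List<String> out = new ArrayList<>();
    long prev = start;
    for (int i = 0; i < phases.size(); i++) {
      long ms = (stamps.get(i) - prev) / 1_000_000;
      out.add(phases.get(i) + " took " + ms + " ms");
      prev = stamps.get(i);
    }
    return out;
  }

  public static void main(String[] args) {
    Journal j = new Journal();
    j.record("snapshot creation");
    j.record("consolidation");
    j.record("verification");
    j.summary().forEach(System.out::println);
  }
}
```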
Re: HBase 2 slower than HBase 1?
Wow. Great stuff, Andrew!

Thank you for compiling and posting it all here. I can only imagine how time-consuming this was.

On 6/26/20 1:57 PM, Andrew Purtell wrote:

Hey Anoop, I opened https://issues.apache.org/jira/browse/HBASE-24637 and attached the patches and script used to make the comparison.

On Fri, Jun 26, 2020 at 2:33 AM Anoop John wrote:

Great investigation Andy. Do you know any Jiras which made changes in SQM? Would be great if you can attach your patch which tracks the scan flow. If we have a Jira for this issue, can you pls attach?

Anoop

On Fri, Jun 26, 2020 at 1:56 AM Andrew Purtell wrote:

Related, I think I found a bug in branch-1 where we don’t heartbeat in the filter all case until we switch store files, so scanning a very large store file might time out with client defaults. Remarking on this here so I don’t forget to follow up.

On Jun 25, 2020, at 12:27 PM, Andrew Purtell wrote:

I repeated this test with pe --filterAll and the results were revealing, at least for this case. I also patched in a thread-local hash map of atomic counters that I could update from code paths in SQM, StoreScanner, HFileReader*, and HFileBlock. Because an RPC is processed by a single handler thread I could update counters and accumulate micro-timings via System#nanoTime() per RPC and dump them out of CallRunner in some new trace logging. I spent a couple of days making sure the instrumentation was placed equivalently in both 1.6 and 2.2 code bases and was producing consistent results. I can provide these patches upon request.

Again, test tables with one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per row. After loading the table I made a snapshot and cloned the snapshot for testing, for both 1.6 and 2.2, so both versions were tested using the exact same data files on HDFS. I also used the 1.6 version of PE for both, so the only change is on the server (1.6 vs 2.2 masters and regionservers).

It appears a refactor to ScanQueryMatcher and friends has disabled the ability of filters to provide SKIP hints, which prevents us from bypassing version checking (so some extra cost in SQM), and appears to disable an optimization that avoids reseeking, leading to a serious and proportional regression in reseek activity and time spent in that code path. So for queries that use filters, there can be a substantial regression.

Other test cases that did not use filters did not show a regression.

A test case where I used ROW_INDEX_V1 encoding showed an expected modest proportional regression in seeking time, due to the fact it is optimized for point queries and not optimized for the full table scan case.

I will come back here when I understand this better.

Here are the results for the pe --filterAll case. Each cell is the 1.6.0 count / the 2.2.5 count, with 2.2.5 as a percentage of 1.6.0; the higher rpcs on 2.2.5 reflect better heartbeating.

                      c1                c5                 c10                c20                c50                c100
rpcs                  1 / 2 (200%)      2 / 6 (300%)       2 / 10 (500%)      3 / 17 (567%)      4 / 37 (925%)      8 / 72 (900%)      (better heartbeating)
block_reads           11507 / 11508     57255 / 57257      114471 / 114474    230372 / 230377    578292 / 578298    1157955 / 1157963  (all 100%)
block_unpacks         11507 / 11508     57255 / 57257      114471 / 114474    230372 / 230377    578292 / 578298    1157955 / 1157963  (all 100%)
seeker_next           1000 / 1000       5000 / 5000        1 / 1              2 / 2              5 / 5              10 / 10            (all 100%)
store_next            1000 / 9988268    5000 / 49940082    1 / 99879401       2 / 199766539      5 / 499414653      10 / 998836518     (100%)
store_reseek          1 / 11733 (!)     2 / 59924 (!)      8 / 120607 (!)     6 / 233467 (!)     10 / 585357 (!)    8 / 1163490 (!)
cells_matched         2000 / 2000       6000 / 6000        11000 / 11000      21000 / 21000      51000 / 51000      101000 / 101000    (all 100%)
column_hint_include   1000 / 1000       5000 / 5000        1 / 1              2 / 2              5 / 5              10 / 10            (all 100%)
filter_hint_skip      1000 / 1000       5000 / 5000        1 / 1              2 / 2              5 / 5              10 / 10            (all 100%)
sqm_hint_done         999 / 999
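The per-RPC instrumentation described above can be sketched with a thread-local counter map. This is a simplified stand-in for the patches mentioned, not the actual code: because each RPC is handled by a single thread, counters and nano-timings can be updated from deep code paths without synchronization and dumped when the call completes.

```java
// Simplified sketch of per-RPC, per-thread counters and micro-timings.
import java.util.LinkedHashMap;
import java.util.Map;

public class RpcCounters {
  private static final ThreadLocal<Map<String, Long>> COUNTERS =
      ThreadLocal.withInitial(LinkedHashMap::new);

  public static void increment(String name, long delta) {
    COUNTERS.get().merge(name, delta, Long::sum);
  }

  // Time a code path and accumulate the elapsed nanoseconds under `name`.
  public static void timed(String name, Runnable work) {
    long start = System.nanoTime();
    work.run();
    increment(name + "_nanos", System.nanoTime() - start);
  }

  // Called at the end of the RPC (e.g. from the call dispatcher) to emit
  // the accumulated counters and reset them for the next call.
  public static Map<String, Long> dumpAndReset() {
    Map<String, Long> snapshot = new LinkedHashMap<>(COUNTERS.get());
    COUNTERS.get().clear();
    return snapshot;
  }

  public static void main(String[] args) {
    increment("store_reseek", 1);
    increment("store_reseek", 1);
    timed("seeker_next", () -> { /* scan work happens here */ });
    System.out.println(dumpAndReset());
  }
}
```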
[jira] [Resolved] (HBASE-24720) Meta replicas not cleaned when disabled
[ https://issues.apache.org/jira/browse/HBASE-24720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-24720.
-----------------------------------
    Fix Version/s: 2.2.6
                   2.4.0
                   2.3.1
                   3.0.0-alpha-1
       Resolution: Fixed

Thanks for the commit [~bszabolcs]! Pushed to branch-2.2+.

> Meta replicas not cleaned when disabled
> ---------------------------------------
>
>                 Key: HBASE-24720
>                 URL: https://issues.apache.org/jira/browse/HBASE-24720
>             Project: HBase
>          Issue Type: Bug
>          Components: read replicas
>    Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.2.5
>            Reporter: Szabolcs Bukros
>            Assignee: Szabolcs Bukros
>            Priority: Minor
>             Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6
>
> The assignMetaReplicas method works kinda like this:
> {code:java}
> void assignMetaReplicas(){
>   if (numReplicas <= 1) return;
>   // create if needed then assign meta replicas
>   unassignExcessMetaReplica(numReplicas);
> }
> {code}
> The unassignExcessMetaReplica method is the one that gets rid of the replicas we no longer need: it closes them and deletes their zNodes. Unfortunately this only happens if we decreased the replica number. If we disabled replicas by setting the replica count to 1, assignMetaReplicas returns immediately without cleaning up the no-longer-needed replicas, leaving them lingering around.
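The control-flow bug described above can be modeled in a few lines. This is a toy model, not the HBase code: the fix requires that excess-replica cleanup runs even when replicas are disabled (count set back to 1), not only when the count merely decreased.

```java
// Toy model of the fixed flow: always unassign replicas beyond the
// configured count before the early return for the "disabled" case.
import java.util.ArrayList;
import java.util.List;

public class MetaReplicas {
  final List<Integer> openReplicaIds = new ArrayList<>();

  void open(int count) {
    for (int id = 0; id < count; id++) openReplicaIds.add(id);
  }

  void assignMetaReplicas(int numReplicas) {
    openReplicaIds.removeIf(id -> id >= numReplicas); // cleanup excess first
    if (numReplicas <= 1) return;                     // nothing more to assign
    for (int id = 1; id < numReplicas; id++) {
      if (!openReplicaIds.contains(id)) openReplicaIds.add(id);
    }
  }

  public static void main(String[] args) {
    MetaReplicas m = new MetaReplicas();
    m.open(3);               // replicas 0, 1, 2 open
    m.assignMetaReplicas(1); // replicas disabled
    System.out.println(m.openReplicaIds); // prints [0]
  }
}
```

With the buggy ordering (early return before cleanup), disabling replicas would leave ids 1 and 2 open.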
[jira] [Resolved] (HBASE-24566) Add 2.3.0 to the downloads page
[ https://issues.apache.org/jira/browse/HBASE-24566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dimiduk resolved HBASE-24566.
----------------------------------
    Resolution: Fixed

> Add 2.3.0 to the downloads page
> -------------------------------
>
>                 Key: HBASE-24566
>                 URL: https://issues.apache.org/jira/browse/HBASE-24566
>             Project: HBase
>          Issue Type: Sub-task
>          Components: community
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>            Priority: Major
>             Fix For: 3.0.0-alpha-1
>
> Once release bits are finalized, add reference to downloads page.
[jira] [Resolved] (HBASE-24487) Add 2.3 Documentation to the website
[ https://issues.apache.org/jira/browse/HBASE-24487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dimiduk resolved HBASE-24487.
----------------------------------
    Resolution: Fixed

Changes applied to both hbase.git and hbase-site.git. Thanks for the reviews [~busbey], [~vjasani].

> Add 2.3 Documentation to the website
> ------------------------------------
>
>                 Key: HBASE-24487
>                 URL: https://issues.apache.org/jira/browse/HBASE-24487
>             Project: HBase
>          Issue Type: Sub-task
>          Components: community, documentation
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>            Priority: Major
>             Fix For: 3.0.0-alpha-1
>
[jira] [Created] (HBASE-24739) [Build] branch-1's build seems broken because of pylint
Reid Chan created HBASE-24739:
------------------------------
             Summary: [Build] branch-1's build seems broken because of pylint
                 Key: HBASE-24739
                 URL: https://issues.apache.org/jira/browse/HBASE-24739
             Project: HBase
          Issue Type: Bug
            Reporter: Reid Chan
            Assignee: Reid Chan
[jira] [Created] (HBASE-24738) [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
Pankaj Kumar created HBASE-24738:
---------------------------------
             Summary: [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
                 Key: HBASE-24738
                 URL: https://issues.apache.org/jira/browse/HBASE-24738
             Project: HBase
          Issue Type: Bug
          Components: shell
            Reporter: Pankaj Kumar
            Assignee: Pankaj Kumar

The HBase shell command "processlist" fails with "ERROR: Unexpected end of file from server" when HBase SSL is enabled. The code below has been commented out since HBASE-4368:
https://github.com/apache/hbase/blob/8076eafb187ce32d4a78aef482b6218d85a985ac/hbase-shell/src/main/ruby/hbase/taskmonitor.rb#L84
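An "Unexpected end of file from server" error is what a plain-HTTP client typically reports when it talks to an HTTPS port, so one plausible shape of a fix is choosing the URL scheme from the SSL setting instead of hard-coding http. The sketch below is a hypothetical illustration of that idea, not the actual taskmonitor.rb fix (the helper name and /rs-status path are assumptions):

```java
// Hypothetical sketch: build the info-server URL with the scheme chosen
// from the SSL setting, rather than a hard-coded "http://" prefix.
public class InfoServerUrl {
  public static String build(String host, int port, boolean sslEnabled) {
    String scheme = sslEnabled ? "https" : "http";
    return scheme + "://" + host + ":" + port + "/rs-status";
  }

  public static void main(String[] args) {
    System.out.println(build("rs1.example.com", 16030, false));
    System.out.println(build("rs1.example.com", 16030, true));
  }
}
```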