[jira] [Updated] (HBASE-26010) Backport HBASE-25703 and HBASE-26002 to branch-2.3
[ https://issues.apache.org/jira/browse/HBASE-26010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Toshihiro Suzuki updated HBASE-26010:
-------------------------------------
    Fix Version/s: 2.3.6

> Backport HBASE-25703 and HBASE-26002 to branch-2.3
> --------------------------------------------------
>
>                 Key: HBASE-26010
>                 URL: https://issues.apache.org/jira/browse/HBASE-26010
>             Project: HBase
>          Issue Type: Improvement
>          Components: backport
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Major
>             Fix For: 2.3.6
>
> Backport HBASE-25703 "Support conditional update in MultiRowMutationEndpoint" and HBASE-26002 "MultiRowMutationEndpoint should return the result of the conditional update" to branch-2.3.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (HBASE-26009) Backport HBASE-25766 "Introduce RegionSplitRestriction that restricts the pattern of the split point" to branch-2.3
[ https://issues.apache.org/jira/browse/HBASE-26009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Toshihiro Suzuki updated HBASE-26009:
-------------------------------------
    Fix Version/s: 2.3.6

> Backport HBASE-25766 "Introduce RegionSplitRestriction that restricts the pattern of the split point" to branch-2.3
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-26009
>                 URL: https://issues.apache.org/jira/browse/HBASE-26009
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Major
>             Fix For: 2.3.6
>
> Backport the parent issue to branch-2.3.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (HBASE-26010) Backport HBASE-25703 and HBASE-26002 to branch-2.3
[ https://issues.apache.org/jira/browse/HBASE-26010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Toshihiro Suzuki updated HBASE-26010:
-------------------------------------
    Issue Type: Improvement  (was: Bug)

> Backport HBASE-25703 and HBASE-26002 to branch-2.3
> --------------------------------------------------
>
>                 Key: HBASE-26010
>                 URL: https://issues.apache.org/jira/browse/HBASE-26010
>             Project: HBase
>          Issue Type: Improvement
>          Components: backport
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Major
>
> Backport HBASE-25703 "Support conditional update in MultiRowMutationEndpoint" and HBASE-26002 "MultiRowMutationEndpoint should return the result of the conditional update" to branch-2.3.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (HBASE-26010) Backport HBASE-25703 and HBASE-26002 to branch-2.3
Toshihiro Suzuki created HBASE-26010:
----------------------------------------

             Summary: Backport HBASE-25703 and HBASE-26002 to branch-2.3
                 Key: HBASE-26010
                 URL: https://issues.apache.org/jira/browse/HBASE-26010
             Project: HBase
          Issue Type: Bug
          Components: backport
            Reporter: Toshihiro Suzuki
            Assignee: Toshihiro Suzuki

Backport HBASE-25703 "Support conditional update in MultiRowMutationEndpoint" and HBASE-26002 "MultiRowMutationEndpoint should return the result of the conditional update" to branch-2.3.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (HBASE-26009) Backport HBASE-25766 "Introduce RegionSplitRestriction that restricts the pattern of the split point" to branch-2.3
Toshihiro Suzuki created HBASE-26009:
----------------------------------------

             Summary: Backport HBASE-25766 "Introduce RegionSplitRestriction that restricts the pattern of the split point" to branch-2.3
                 Key: HBASE-26009
                 URL: https://issues.apache.org/jira/browse/HBASE-26009
             Project: HBase
          Issue Type: Sub-task
            Reporter: Toshihiro Suzuki
            Assignee: Toshihiro Suzuki

Backport the parent issue to branch-2.3.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
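For context on what is being backported: RegionSplitRestriction (HBASE-25766) lets an operator restrict where a region split point may land. As a hedged sketch of how the feature is enabled per table, the configuration keys and shell usage below are my understanding from the parent issue and may not match the shipped version exactly; verify against the HBase reference guide before relying on them.

{noformat}
# hbase shell: restrict split points to the first 2 bytes of the row key
# (property names assumed from HBASE-25766)
create 'tbl', 'cf', CONFIGURATION => {
  'hbase.regionserver.region.split_restriction.type' => 'KeyPrefix',
  'hbase.regionserver.region.split_restriction.prefix_length' => '2'
}
{noformat}

A DelimitedKeyPrefix type reportedly exists as well, taking a delimiter instead of a prefix length, so that rows sharing a logical key prefix always stay in the same region after a split.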
[jira] [Commented] (HBASE-22657) HBase : STUCK Region-In-Transition
[ https://issues.apache.org/jira/browse/HBASE-22657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364655#comment-17364655 ]

kangTwang commented on HBASE-22657:
-----------------------------------

Hi, I am also hitting this problem. Has it been solved?

> HBase : STUCK Region-In-Transition
> ----------------------------------
>
>                 Key: HBASE-22657
>                 URL: https://issues.apache.org/jira/browse/HBASE-22657
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: oktay tuncay
>            Priority: Critical
>
> When we check the number of regions in transition on Ambari, it shows that 1 transition is waiting (it is more than 1 in another cluster).
> Also, when we check the table with the command "hbase hbck -details *table_name*", the status looks INCONSISTENT:
> There are 0 overlap groups with 0 overlapping regions
> ERROR: Found inconsistency in table *Table_Name*
> Summary:
> Table hbase:meta is okay.
> Number of regions: 1
> Deployed on: hostname1:port, hostname2:port, hostname3:port, hostname4:port
> Table *Table_Name* is okay.
> Number of regions: 39
> Deployed on: hostname1:port, hostname2:port, hostname3:port, hostname4:port
> 2 inconsistencies detected.
> Status: *INCONSISTENT*
> When I checked the log files, I saw the following warning message:
> 2019-06-09T07:14:15.179+02:00 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=CLOSING, location=*hostname*,*port*,1558699727048, table=*table_name*, region=c67dd5d8bcd174cc2001695c31475ab1
> According to this message, region c67dd5d8bcd174cc2001695c31475ab1 was being assigned to *host*, but the operation was stuck.
> We stopped the RS process on *host* and force-assigned the region to another RS that was running:
> *hbase(main):001:0> assign 'c67dd5d8bcd174cc2001695c31475ab1'*
> After that operation, the INCONSISTENT status was gone, and we re-started the RS on the host.
> One of the reasons a region gets stuck in transition is that, when it is being moved across regionservers, it is unassigned from the source regionserver but never assigned to another regionserver.
> I think the code below is responsible for that process (note the isStuck() check is commented out):
> private void handleRegionOverStuckWarningThreshold(final RegionInfo regionInfo) {
>   final RegionStateNode regionNode = regionStates.getRegionStateNode(regionInfo);
>   //if (regionNode.isStuck()) {
>   LOG.warn("STUCK Region-In-Transition {}", regionNode);
> }
> It seems one potential way to unstick the region is to send a close request to the region server. The close may be blocked because another Procedure holds the exclusive lock and is not letting go.
> My question is: what is the root cause of this problem? HBase should be able to fix a Region-In-Transition issue on its own.
> We can fix this problem manually, but some customers do not have this knowledge, so I think HBase needs to recover by itself.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Comment Edited] (HBASE-25975) Row commit sequencer
[ https://issues.apache.org/jira/browse/HBASE-25975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364618#comment-17364618 ]

Andrew Kyle Purtell edited comment on HBASE-25975 at 6/17/21, 2:16 AM:
-----------------------------------------------------------------------

Latest code with a better microbenchmark, measuring exactly the time to call region.batchMutate() at each iteration. Times are per op, measured in nanos, converted to milliseconds for printing.

"0% contention case" -- All row keys in submitted requests are unique, so they should never overlap in the same clock tick. Differences in these values from the baseline represent a combination of system variance and the additional overheads introduced by the patch.

"100% contention case" -- All requests have the same duplicate set of row keys, so they should always overlap in the same clock tick. You can clearly see the application of the constraint in the increases of MAX time, as expected.

Baseline:
{noformat}
 1 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0561 p99=0.0561, p999=0.0561 max=0.3001
 2 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0552 p99=0.0564, p999=0.0564 max=1.0601
 4 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0686 p99=0.0692, p999=0.0692 max=0.9828
 8 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0792 p99=0.0813, p999=0.0813 max=1.3901
16 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1197 p99=0.1296, p999=0.1297 max=1.9159
32 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1318 p99=0.1414, p999=0.1415 max=4.8060
 1 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0608 p99=0.0608, p999=0.0609 max=0.3793
 2 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0533 p99=0.0559, p999=0.0559 max=0.4041
 4 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0603 p99=0.0612, p999=0.0612 max=0.4097
 8 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1045 p99=0.1121, p999=0.1121 max=0.8509
16 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1334 p99=0.1422, p999=0.1426 max=1.2274
32 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1301 p99=0.1399, p999=0.1400 max=2.9175
 1 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1789 p99=0.1789, p999=0.1789 max=0.4747
 2 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1709 p99=0.1742, p999=0.1743 max=0.6043
 4 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1820 p99=0.1897, p999=0.1898 max=2.5493
 8 threads  100 non-contended rows 100 iterations, ms/op: p50=0.2809 p99=0.2856, p999=0.2857 max=4.1059
16 threads  100 non-contended rows 100 iterations, ms/op: p50=0.4268 p99=0.4393, p999=0.4394 max=5.5858
32 threads  100 non-contended rows 100 iterations, ms/op: p50=0.6382 p99=0.7335, p999=0.7338 max=16.4132
 1 threads 1000 non-contended rows 100 iterations, ms/op: p50=1.5447 p99=1.5447, p999=1.5448 max=2.4460
 2 threads 1000 non-contended rows 100 iterations, ms/op: p50=1.6133 p99=1.6489, p999=1.6493 max=10.4490
 4 threads 1000 non-contended rows 100 iterations, ms/op: p50=2.1818 p99=2.2960, p999=2.2995 max=23.4926
 8 threads 1000 non-contended rows 100 iterations, ms/op: p50=2.3715 p99=2.4573, p999=2.4615 max=33.7535
16 threads 1000 non-contended rows 100 iterations, ms/op: p50=4.3571 p99=4.4556, p999=4.4600 max=111.8534
32 threads 1000 non-contended rows 100 iterations, ms/op: p50=4.4922 p99=5.4098, p999=5.4190 max=225.4612
{noformat}

With RowCommitSequencer, 0% contention case:
{noformat}
 1 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1207 p99=0.1207, p999=0.1209 max=1.8813
 2 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1048 p99=0.1102, p999=0.1103 max=2.1397
 4 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1166 p99=0.1312, p999=0.1315 max=1.7879
 8 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1331 p99=0.1440, p999=0.1443 max=3.7075
16 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1464 p99=0.1697, p999=0.1704 max=1.9381
32 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1386 p99=0.1524, p999=0.1525 max=2.3399
 1 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0690 p99=0.0690, p999=0.0691 max=1.5870
 2 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1142 p99=0.1170, p999=0.1171 max=1.7043
 4 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0944 p99=0.0959, p999=0.0959 max=1.7438
 8 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1086 p99=0., p999=0. max=1.9832
16 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1259 p99=0.1396, p999=0.1398 max=1.7491
32 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1155 p99=0.1261, p999=0.1264 max=4.7331
 1 threads  100
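The reporting format in the figures above ("measured in nanos, converted to milliseconds for printing", with nearest-rank percentiles) can be sketched as follows. This is an illustrative stand-in, not the actual benchmark harness from the patch; the class and method names are invented:

```java
import java.util.Arrays;
import java.util.Locale;

// Illustrative percentile reporting: samples are collected in nanoseconds
// and converted to milliseconds only at print time, as in the comment above.
public class LatencyReport {
    // Value at quantile q (0 < q <= 1) using the nearest-rank method
    // on a sorted copy of the samples.
    static long percentileNs(long[] samplesNs, double q) {
        long[] sorted = samplesNs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(q * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    // Formats one "p50=... p99=..., p999=... max=..." line in milliseconds.
    static String format(long[] samplesNs) {
        return String.format(Locale.ROOT, "p50=%.4f p99=%.4f, p999=%.4f max=%.4f",
            percentileNs(samplesNs, 0.50) / 1_000_000.0,
            percentileNs(samplesNs, 0.99) / 1_000_000.0,
            percentileNs(samplesNs, 0.999) / 1_000_000.0,
            percentileNs(samplesNs, 1.00) / 1_000_000.0);
    }
}
```

Reporting percentiles per op rather than aggregate throughput is what makes the MAX column useful here: the contention penalty shows up in the tail, not the median.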
[jira] [Commented] (HBASE-25975) Row commit sequencer
[ https://issues.apache.org/jira/browse/HBASE-25975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364624#comment-17364624 ]

Andrew Kyle Purtell commented on HBASE-25975:
---------------------------------------------

Performance is good enough for the first pass. The next step is to get all unit tests passing.

> Row commit sequencer
> --------------------
>
>                 Key: HBASE-25975
>                 URL: https://issues.apache.org/jira/browse/HBASE-25975
>             Project: HBase
>          Issue Type: Sub-task
>          Components: regionserver
>            Reporter: Andrew Kyle Purtell
>            Assignee: Andrew Kyle Purtell
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.5.0
>
>         Attachments: HBASE-25975-c4cf83ce.pdf
>
> Use a row commit sequencer in HRegion to ensure that only operations that mutate disjoint sets of rows are able to commit within the same clock tick. This maintains the invariant that more than one mutation to a given row will never be committed in the same clock tick.
> Callers will first acquire row locks for the row(s) the pending mutation will mutate. Then they will use RowCommitSequencer.getRowSequence to ensure that the set of rows about to be mutated does not overlap with those for any other pending mutations in the current clock tick. If an overlap is identified, getRowSequence will yield and loop until there is no longer an overlap and the caller's pending mutation can succeed.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
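The scheme described in the issue (disjoint row sets may commit in the same clock tick; overlapping ones must wait for the next tick) can be sketched as follows. This is a simplified illustration of the idea, not the RowCommitSequencer from the attached patch; everything here besides the getRowSequence name is invented:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified sketch of the row commit sequencing idea described above.
// Invariant: no two reservations within the same clock tick share a row key.
public class RowCommitSequencer {
    private long currentTick = -1;
    private final Set<String> rowsInTick = new HashSet<>();

    // Waits until none of the given rows are already reserved in the current
    // clock tick, then reserves them and returns the tick to use as the
    // commit timestamp. Callers are assumed to already hold the row locks.
    public synchronized long getRowSequence(Set<String> rows)
            throws InterruptedException {
        while (true) {
            long now = System.currentTimeMillis();
            if (now != currentTick) {
                // New tick: reservations from the previous tick no longer conflict.
                currentTick = now;
                rowsInTick.clear();
            }
            boolean overlap = false;
            for (String row : rows) {
                if (rowsInTick.contains(row)) {
                    overlap = true;
                    break;
                }
            }
            if (!overlap) {
                rowsInTick.addAll(rows);
                return currentTick;
            }
            // Overlap with a pending mutation in this tick: yield briefly and
            // retry; the loop exits once the clock advances.
            wait(1);
        }
    }
}
```

Usage: a caller passes the row keys of its pending mutation and uses the returned tick as the mutation timestamp, which is what preserves the "at most one committed mutation per row per tick" invariant.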
[jira] [Commented] (HBASE-25975) Row commit sequencer
[ https://issues.apache.org/jira/browse/HBASE-25975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364618#comment-17364618 ]

Andrew Kyle Purtell commented on HBASE-25975:
---------------------------------------------

Latest code with a better microbenchmark, measuring exactly the time to call region.batchMutate() at each iteration. Times are per op, measured in nanos, converted to milliseconds for printing.

Baseline:
{noformat}
 1 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0561 p99=0.0561, p999=0.0561 max=0.3001
 2 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0552 p99=0.0564, p999=0.0564 max=1.0601
 4 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0686 p99=0.0692, p999=0.0692 max=0.9828
 8 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0792 p99=0.0813, p999=0.0813 max=1.3901
16 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1197 p99=0.1296, p999=0.1297 max=1.9159
32 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1318 p99=0.1414, p999=0.1415 max=4.8060
 1 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0608 p99=0.0608, p999=0.0609 max=0.3793
 2 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0533 p99=0.0559, p999=0.0559 max=0.4041
 4 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0603 p99=0.0612, p999=0.0612 max=0.4097
 8 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1045 p99=0.1121, p999=0.1121 max=0.8509
16 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1334 p99=0.1422, p999=0.1426 max=1.2274
32 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1301 p99=0.1399, p999=0.1400 max=2.9175
 1 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1789 p99=0.1789, p999=0.1789 max=0.4747
 2 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1709 p99=0.1742, p999=0.1743 max=0.6043
 4 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1820 p99=0.1897, p999=0.1898 max=2.5493
 8 threads  100 non-contended rows 100 iterations, ms/op: p50=0.2809 p99=0.2856, p999=0.2857 max=4.1059
16 threads  100 non-contended rows 100 iterations, ms/op: p50=0.4268 p99=0.4393, p999=0.4394 max=5.5858
32 threads  100 non-contended rows 100 iterations, ms/op: p50=0.6382 p99=0.7335, p999=0.7338 max=16.4132
 1 threads 1000 non-contended rows 100 iterations, ms/op: p50=1.5447 p99=1.5447, p999=1.5448 max=2.4460
 2 threads 1000 non-contended rows 100 iterations, ms/op: p50=1.6133 p99=1.6489, p999=1.6493 max=10.4490
 4 threads 1000 non-contended rows 100 iterations, ms/op: p50=2.1818 p99=2.2960, p999=2.2995 max=23.4926
 8 threads 1000 non-contended rows 100 iterations, ms/op: p50=2.3715 p99=2.4573, p999=2.4615 max=33.7535
16 threads 1000 non-contended rows 100 iterations, ms/op: p50=4.3571 p99=4.4556, p999=4.4600 max=111.8534
32 threads 1000 non-contended rows 100 iterations, ms/op: p50=4.4922 p99=5.4098, p999=5.4190 max=225.4612
{noformat}

With RowCommitSequencer, 0% contention case:
{noformat}
 1 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1108 p99=0.1108, p999=0.1108 max=3.3199
 2 threads    1 non-contended rows 100 iterations, ms/op: p50=0.0902 p99=0.1213, p999=0.1216 max=1.7926
 4 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1287 p99=0.1346, p999=0.1347 max=1.8199
 8 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1200 p99=0.1365, p999=0.1367 max=1.9109
16 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1298 p99=0.1352, p999=0.1352 max=1.9318
32 threads    1 non-contended rows 100 iterations, ms/op: p50=0.1315 p99=0.1522, p999=0.1525 max=2.6332
 1 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0595 p99=0.0595, p999=0.0596 max=1.6384
 2 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0684 p99=0.0692, p999=0.0692 max=1.4759
 4 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0918 p99=0.0973, p999=0.0974 max=1.6548
 8 threads   10 non-contended rows 100 iterations, ms/op: p50=0.0929 p99=0.1042, p999=0.1043 max=1.7982
16 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1206 p99=0.1313, p999=0.1313 max=1.7856
32 threads   10 non-contended rows 100 iterations, ms/op: p50=0.1198 p99=0.1400, p999=0.1400 max=4.4632
 1 threads  100 non-contended rows 100 iterations, ms/op: p50=0.2106 p99=0.2106, p999=0.2106 max=2.2026
 2 threads  100 non-contended rows 100 iterations, ms/op: p50=0.1910 p99=0.2283, p999=0.2287 max=2.0957
 4 threads  100 non-contended rows 100 iterations, ms/op: p50=0.2774 p99=0.2824, p999=0.2824 max=3.6852
 8 threads  100 non-contended rows 100 iterations, ms/op: p50=0.3029 p99=0.3219, p999=0.3224 max=4.0838
16 threads  100 non-contended rows 100 iterations, ms/op: p50=0.3813 p99=0.4126, p999=0.4129 max=7.7517
32 threads  100
[jira] [Issue Comment Deleted] (HBASE-25975) Row commit sequencer
[ https://issues.apache.org/jira/browse/HBASE-25975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Kyle Purtell updated HBASE-25975: Comment: was deleted (was: The microbenchmark is going to be very helpful. Right now I have it hacked into TestHRegion but will move it out. See this gist: [link|https://gist.github.com/apurtell/eb1122f74b0a9f0305f0c8c575b2fc21] As of [c6d2a11b|https://github.com/apache/hbase/pull/3360/commits/c6d2a11b], performance is better, especially as the number of rows processed in the previous tick increases, simply by allocating a new CSLS for the next tick rather than clear()ing it. Etc.
{noformat}
 1 threads    1 non-contended rows 100 iterations in  248262978 ns ( 2.482629 ms/op)
 2 threads    1 non-contended rows 100 iterations in  119657896 ns ( 1.196578 ms/op)
 4 threads    1 non-contended rows 100 iterations in  117589133 ns ( 1.175891 ms/op)
 8 threads    1 non-contended rows 100 iterations in  127482269 ns ( 1.274822 ms/op)
16 threads    1 non-contended rows 100 iterations in  120375922 ns ( 1.203759 ms/op)
32 threads    1 non-contended rows 100 iterations in  117154493 ns ( 1.171544 ms/op)
 1 threads   10 non-contended rows 100 iterations in  123248732 ns ( 1.232487 ms/op)
 2 threads   10 non-contended rows 100 iterations in  122647177 ns ( 1.226471 ms/op)
 4 threads   10 non-contended rows 100 iterations in  127126968 ns ( 1.271269 ms/op)
 8 threads   10 non-contended rows 100 iterations in  133759033 ns ( 1.337590 ms/op)
16 threads   10 non-contended rows 100 iterations in  133973857 ns ( 1.339738 ms/op)
32 threads   10 non-contended rows 100 iterations in  126716770 ns ( 1.267167 ms/op)
 1 threads  100 non-contended rows 100 iterations in  127032261 ns ( 1.270322 ms/op)
 2 threads  100 non-contended rows 100 iterations in  128259658 ns ( 1.282596 ms/op)
 4 threads  100 non-contended rows 100 iterations in  120013005 ns ( 1.200130 ms/op)
 8 threads  100 non-contended rows 100 iterations in  126168665 ns ( 1.261686 ms/op)
16 threads  100 non-contended rows 100 iterations in  138842281 ns ( 1.388422 ms/op)
32 threads  100 non-contended rows 100 iterations in  266622073 ns ( 2.666220 ms/op)
 1 threads 1000 non-contended rows 100 iterations in  224824016 ns ( 2.248240 ms/op)
 2 threads 1000 non-contended rows 100 iterations in  276253087 ns ( 2.762530 ms/op)
 4 threads 1000 non-contended rows 100 iterations in  373552155 ns ( 3.735521 ms/op)
 8 threads 1000 non-contended rows 100 iterations in  622022490 ns ( 6.220224 ms/op)
16 threads 1000 non-contended rows 100 iterations in 1289010748 ns (12.890107 ms/op)
32 threads 1000 non-contended rows 100 iterations in 2449270127 ns (24.492701 ms/op)
 1 threads    1 contended rows     100 iterations in  119867953 ns ( 1.198679 ms/op)
 2 threads    1 contended rows     100 iterations in  225605406 ns ( 2.256054 ms/op)
 4 threads    1 contended rows     100 iterations in  427749326 ns ( 4.277493 ms/op)
 8 threads    1 contended rows     100 iterations in  776111781 ns ( 7.761117 ms/op)
16 threads    1 contended rows     100 iterations in 1638138512 ns (16.381385 ms/op)
32 threads    1 contended rows     100 iterations in 3221263267 ns (32.212632 ms/op)
 1 threads   10 contended rows     100 iterations in  122263470 ns ( 1.222634 ms/op)
 2 threads   10 contended rows     100 iterations in  225890471 ns ( 2.258904 ms/op)
 4 threads   10 contended rows     100 iterations in  423801468 ns ( 4.238014 ms/op)
 8 threads   10 contended rows     100 iterations in  819573522 ns ( 8.195735 ms/op)
16 threads   10 contended rows     100 iterations in 1604154859 ns (16.041548 ms/op)
32 threads   10 contended rows     100 iterations in 3127778875 ns (31.277788 ms/op)
 1 threads  100 contended rows     100 iterations in  116046683 ns ( 1.160466 ms/op)
 2 threads  100 contended rows     100 iterations in  215477979 ns ( 2.154779 ms/op)
 4 threads  100 contended rows     100 iterations in  411627258 ns ( 4.116272 ms/op)
 8 threads  100 contended rows     100 iterations in  806653481 ns ( 8.066534 ms/op)
16 threads  100 contended rows     100 iterations in 1600262862 ns (16.002628 ms/op)
32 threads  100 contended rows     100 iterations in 3179850096 ns (31.798500 ms/op)
 1 threads 1000 contended rows     100 iterations in  231174490 ns ( 2.311744 ms/op)
 2 threads 1000 contended rows     100 iterations in  294631204 ns ( 2.946312 ms/op)
 4 threads 1000 contended rows     100 iterations in  513858509 ns ( 5.138585 ms/op)
 8 threads 1000 contended rows     100 iterations in  886817867 ns ( 8.868178 ms/op)
16 threads 1000 contended rows     100 iterations in 1745257920 ns (17.452579 ms/op)
32 threads 1000 contended rows     100 iterations in 3404472773 ns (34.044727 ms/op)
{noformat}
) > Row commit sequencer > > > Key: HBASE-25975 > URL:
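The deleted comment above attributes the improvement to allocating a new CSLS (ConcurrentSkipListSet) for the next tick rather than clear()ing the old one. A minimal sketch of that pattern follows; the class and method names here are illustrative, not taken from the actual HBASE-25975 patch.

```java
import java.util.NavigableSet;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the "fresh set per tick" idea: clear() on a ConcurrentSkipListSet
// removes entries one at a time under contention, so at tick rollover we
// instead publish a brand-new empty set and let the old one be collected.
public class TickRowSet {
    private final AtomicReference<NavigableSet<String>> current =
        new AtomicReference<>(new ConcurrentSkipListSet<>());

    // Writers record row keys against whatever set is current for this tick.
    public boolean add(String row) {
        return current.get().add(row);
    }

    // Tick rollover: swap in a new set instead of clearing; the returned
    // set holds the rows from the completed tick.
    public NavigableSet<String> roll() {
        return current.getAndSet(new ConcurrentSkipListSet<>());
    }
}
```

The swap is a single atomic reference exchange regardless of how many rows the previous tick accumulated, which matches the observation that the win grows with the row count per tick.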
[jira] [Comment Edited] (HBASE-25998) Revisit synchronization in SyncFuture
[ https://issues.apache.org/jira/browse/HBASE-25998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364520#comment-17364520 ] Bharath Vissapragada edited comment on HBASE-25998 at 6/16/21, 9:21 PM: Thanks [~apurtell] for trying out the patch (and for the review). One interesting behavior here is that this big throughput difference is only obvious for the async WAL implementation. It is not clear to me why; perhaps there is a lot more contention in that implementation for some reason. I repeated the same set of tests with the branch-1/master based FSHLog, and there the patch performs only slightly better (a few single-digit percentage points). This behavior was also confirmed in the YCSB runs on branch-1 (on a 3 node containerized EC2 cluster).
Without patch: branch-1/FSHLog (10M ingest only)
{noformat}
[OVERALL], RunTime(ms), 199938
[OVERALL], Throughput(ops/sec), 50015.50480649001
[TOTAL_GCS_PS_Scavenge], Count, 293
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 1222
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.611189468735308
[TOTAL_GCS_PS_MarkSweep], Count, 1
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 34
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.017005271634206603
[TOTAL_GCs], Count, 294
[TOTAL_GC_TIME], Time(ms), 1256
[TOTAL_GC_TIME_%], Time(%), 0.6281947403695145
[CLEANUP], Operations, 512
[CLEANUP], AverageLatency(us), 41.0234375
[CLEANUP], MinLatency(us), 0
[CLEANUP], MaxLatency(us), 18527
[CLEANUP], 95thPercentileLatency(us), 13
[CLEANUP], 99thPercentileLatency(us), 37
[INSERT], Operations, 1000
[INSERT], AverageLatency(us), 5085.9494093
[INSERT], MinLatency(us), 1499
[INSERT], MaxLatency(us), 220927
[INSERT], 95thPercentileLatency(us), 6511
[INSERT], 99thPercentileLatency(us), 16655
[INSERT], Return=OK, 1000
{noformat}
With patch: branch-1/FSHLog (10M ingest only)
{noformat}
[OVERALL], RunTime(ms), 195064
[OVERALL], Throughput(ops/sec), 51265.2257720543
[TOTAL_GCS_PS_Scavenge], Count, 284
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 1184
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.6069802731411229
[TOTAL_GCS_PS_MarkSweep], Count, 1
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 33
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.01691752450477792
[TOTAL_GCs], Count, 285
[TOTAL_GC_TIME], Time(ms), 1217
[TOTAL_GC_TIME_%], Time(%), 0.6238977976459008
[CLEANUP], Operations, 512
[CLEANUP], AverageLatency(us), 45.783203125
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 20591
[CLEANUP], 95thPercentileLatency(us), 14
[CLEANUP], 99thPercentileLatency(us), 37
[INSERT], Operations, 1000
[INSERT], AverageLatency(us), 4958.6662675
[INSERT], MinLatency(us), 1380
[INSERT], MaxLatency(us), 295935
[INSERT], 95thPercentileLatency(us), 6335
[INSERT], 99thPercentileLatency(us), 19071
[INSERT], Return=OK, 1000
{noformat}
Unfortunately, the tooling I have does not yet support branch-2/master, so I cannot repeat this YCSB run for the async WAL implementation, but if the WALPE runs are any indication, there should be a good enough throughput improvement.
was (Author: bharathv): Thanks [~apurtell] for trying out the patch (and review). One interesting behavior here is that this big throughput difference is only obvious for Async WAL implementation, not clear to me why, perhaps there is a lot more contention in that implementation for some reason. I repeated the same set of tests in branch-1/master based FSHLog and the patch only performs slightly better (few single digit % points). This behavior was also confirmed in the YCSB runs on branch-1 (on a 3 node containerized EC2 cluster).
[jira] [Commented] (HBASE-25998) Revisit synchronization in SyncFuture
[ https://issues.apache.org/jira/browse/HBASE-25998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364520#comment-17364520 ] Bharath Vissapragada commented on HBASE-25998: -- Thanks [~apurtell] for trying out the patch (and for the review). One interesting behavior here is that this big throughput difference is only obvious for the async WAL implementation. It is not clear to me why; perhaps there is a lot more contention in that implementation for some reason. I repeated the same set of tests with the branch-1/master based FSHLog, and there the patch performs only slightly better (a few single-digit percentage points). This behavior was also confirmed in the YCSB runs on branch-1 (on a 3 node containerized EC2 cluster).
Without patch: branch-1/FSHLog (10M ingest only)
{noformat}
[OVERALL], RunTime(ms), 199938
[OVERALL], Throughput(ops/sec), 50015.50480649001
[TOTAL_GCS_PS_Scavenge], Count, 293
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 1222
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.611189468735308
[TOTAL_GCS_PS_MarkSweep], Count, 1
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 34
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.017005271634206603
[TOTAL_GCs], Count, 294
[TOTAL_GC_TIME], Time(ms), 1256
[TOTAL_GC_TIME_%], Time(%), 0.6281947403695145
[CLEANUP], Operations, 512
[CLEANUP], AverageLatency(us), 41.0234375
[CLEANUP], MinLatency(us), 0
[CLEANUP], MaxLatency(us), 18527
[CLEANUP], 95thPercentileLatency(us), 13
[CLEANUP], 99thPercentileLatency(us), 37
[INSERT], Operations, 1000
[INSERT], AverageLatency(us), 5085.9494093
[INSERT], MinLatency(us), 1499
[INSERT], MaxLatency(us), 220927
[INSERT], 95thPercentileLatency(us), 6511
[INSERT], 99thPercentileLatency(us), 16655
[INSERT], Return=OK, 1000
{noformat}
With patch: branch-1/FSHLog (10M ingest only)
{noformat}
[OVERALL], RunTime(ms), 195064
[OVERALL], Throughput(ops/sec), 51265.2257720543
[TOTAL_GCS_PS_Scavenge], Count, 284
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 1184
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.6069802731411229
[TOTAL_GCS_PS_MarkSweep], Count, 1
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 33
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.01691752450477792
[TOTAL_GCs], Count, 285
[TOTAL_GC_TIME], Time(ms), 1217
[TOTAL_GC_TIME_%], Time(%), 0.6238977976459008
[CLEANUP], Operations, 512
[CLEANUP], AverageLatency(us), 45.783203125
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 20591
[CLEANUP], 95thPercentileLatency(us), 14
[CLEANUP], 99thPercentileLatency(us), 37
[INSERT], Operations, 1000
[INSERT], AverageLatency(us), 4958.6662675
[INSERT], MinLatency(us), 1380
[INSERT], MaxLatency(us), 295935
[INSERT], 95thPercentileLatency(us), 6335
[INSERT], 99thPercentileLatency(us), 19071
[INSERT], Return=OK, 1000
{noformat}
Unfortunately, the tooling I have does not yet support branch-2/master, so I cannot repeat this YCSB run for the async WAL implementation, but if the WALPE runs are any indication, there should be a good enough throughput improvement. > Revisit synchronization in SyncFuture > - > > Key: HBASE-25998 > URL: https://issues.apache.org/jira/browse/HBASE-25998 > Project: HBase > Issue Type: Improvement > Components: Performance, regionserver, wal >Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Major > Attachments: monitor-overhead-1.png, monitor-overhead-2.png > > > While working on HBASE-25984, I noticed some weird frames in the flame graphs > around monitor entry exit consuming a lot of CPU cycles (see attached > images). Noticed that the synchronization there is too coarse grained and > sometimes unnecessary. I did a simple patch that switched to a reentrant lock > based synchronization with condition variable rather than a busy wait and > that showed 70-80% increased throughput in WAL PE. Seems too good to be > true... (more details in the comments). -- This message was sent by Atlassian Jira (v8.3.4#803005)
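The HBASE-25998 description above says the patch replaced coarse synchronization and a busy wait with a reentrant lock plus a condition variable. A minimal, self-contained sketch of that pattern follows; this is not the actual SyncFuture code, and the class name and txid scheme are illustrative only.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Waiters park on a Condition instead of spinning; the sync thread
// advances the highest-durable txid and wakes all waiters at once.
public class SyncPoint {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition synced = lock.newCondition();
    private long syncedTxid = -1; // highest txid known to be durable

    // Called by the sync thread once txid (and everything below it)
    // has been flushed.
    public void markSynced(long txid) {
        lock.lock();
        try {
            if (txid > syncedTxid) {
                syncedTxid = txid;
                synced.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }

    // Called by handler threads: blocks (no busy wait) until the
    // requested txid is durable; the guard loop also handles
    // spurious wakeups, as Condition.await requires.
    public long await(long txid) throws InterruptedException {
        lock.lock();
        try {
            while (syncedTxid < txid) {
                synced.await();
            }
            return syncedTxid;
        } finally {
            lock.unlock();
        }
    }
}
```

Compared with a busy wait, threads blocked in await() consume no CPU, and compared with coarse synchronized blocks, the lock is held only for the brief check-and-update, which is consistent with the reduced monitor-entry overhead described in the ticket.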
[jira] [Commented] (HBASE-26000) Optimize the display of ZK dump in the master web UI
[ https://issues.apache.org/jira/browse/HBASE-26000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364446#comment-17364446 ] Hudson commented on HBASE-26000: Results for branch branch-2 [build #278 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/278/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/278/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/278/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/278/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/278/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} -- Something went wrong with this stage, [check relevant console output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/278//console]. 
> Optimize the display of ZK dump in the master web UI > > > Key: HBASE-26000 > URL: https://issues.apache.org/jira/browse/HBASE-26000 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.4 >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: 1-replica-after.jpg, 1-replica-before.jpg, > 3-replica-after.jpg, 3-replica-before.jpg > > > Optimize the display of ZK dump in the master web UI. > h3. *Before:* > _*hbase:meta with 1 replica:*_ > !1-replica-before.jpg|width=667,height=215! > _*hbase:meta with 3 replica:*_ > !3-replica-before.jpg|width=658,height=187! > h3. *After:* > _*hbase:meta with 1 replica:*_ > !1-replica-after.jpg|width=648,height=229! > _*hbase:meta with 3 replica:*_ > !3-replica-after.jpg|width=656,height=254! > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26000) Optimize the display of ZK dump in the master web UI
[ https://issues.apache.org/jira/browse/HBASE-26000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364408#comment-17364408 ] Hudson commented on HBASE-26000: Results for branch branch-2.3 [build #238 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/238/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/238/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/238/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/238/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/238/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimize the display of ZK dump in the master web UI > > > Key: HBASE-26000 > URL: https://issues.apache.org/jira/browse/HBASE-26000 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.4 >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: 1-replica-after.jpg, 1-replica-before.jpg, > 3-replica-after.jpg, 3-replica-before.jpg > > > Optimize the display of ZK dump in the master web UI. > h3. *Before:* > _*hbase:meta with 1 replica:*_ > !1-replica-before.jpg|width=667,height=215! 
> _*hbase:meta with 3 replica:*_ > !3-replica-before.jpg|width=658,height=187! > h3. *After:* > _*hbase:meta with 1 replica:*_ > !1-replica-after.jpg|width=648,height=229! > _*hbase:meta with 3 replica:*_ > !3-replica-after.jpg|width=656,height=254! > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25997) NettyRpcFrameDecoder decode request header wrong when handleTooBigRequest
[ https://issues.apache.org/jira/browse/HBASE-25997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364295#comment-17364295 ] Hudson commented on HBASE-25997: Results for branch master [build #324 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > NettyRpcFrameDecoder decode request header wrong when handleTooBigRequest > -- > > Key: HBASE-25997 > URL: https://issues.apache.org/jira/browse/HBASE-25997 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 2.4.4 >Reporter: Lijin Bin >Assignee: Lijin Bin >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.5 > > > The client writes a big request to the server, the server decodes the request > incorrectly, and so the client does not get a RequestTooBigException as expected. > {code} > 2021-06-11 18:57:27,340 INFO [RS-EventLoopGroup-1-20] ipc.NettyRpcServer: > org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: > Protocol message tag had invalid wire type.
> at org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:111)
> at org.apache.hbase.thirdparty.com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:519)
> at org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3.parseUnknownField(GeneratedMessageV3.java:298)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$RequestHeader.<init>(RPCProtos.java:5958)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$RequestHeader.<init>(RPCProtos.java:5916)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$RequestHeader$1.parsePartialFrom(RPCProtos.java:7249)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$RequestHeader$1.parsePartialFrom(RPCProtos.java:7244)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$RequestHeader$Builder.mergeFrom(RPCProtos.java:6679)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$RequestHeader$Builder.mergeFrom(RPCProtos.java:6482)
> at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:420)
> at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:317)
> at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.mergeFrom(ProtobufUtil.java:2716)
> at org.apache.hadoop.hbase.ipc.NettyRpcFrameDecoder.getHeader(NettyRpcFrameDecoder.java:174)
> at org.apache.hadoop.hbase.ipc.NettyRpcFrameDecoder.handleTooBigRequest(NettyRpcFrameDecoder.java:126)
> at org.apache.hadoop.hbase.ipc.NettyRpcFrameDecoder.decode(NettyRpcFrameDecoder.java:65)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:405)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:372)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
> at
[jira] [Commented] (HBASE-26000) Optimize the display of ZK dump in the master web UI
[ https://issues.apache.org/jira/browse/HBASE-26000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364296#comment-17364296 ] Hudson commented on HBASE-26000: Results for branch master [build #324 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/324/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Optimize the display of ZK dump in the master web UI > > > Key: HBASE-26000 > URL: https://issues.apache.org/jira/browse/HBASE-26000 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.4 >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: 1-replica-after.jpg, 1-replica-before.jpg, > 3-replica-after.jpg, 3-replica-before.jpg > > > Optimize the display of ZK dump in the master web UI. > h3. *Before:* > _*hbase:meta with 1 replica:*_ > !1-replica-before.jpg|width=667,height=215! > _*hbase:meta with 3 replica:*_ > !3-replica-before.jpg|width=658,height=187! > h3. *After:* > _*hbase:meta with 1 replica:*_ > !1-replica-after.jpg|width=648,height=229! > _*hbase:meta with 3 replica:*_ > !3-replica-after.jpg|width=656,height=254! 
> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26008) Fix typo in AsyncConnectionImpl
[ https://issues.apache.org/jira/browse/HBASE-26008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yulin Niu resolved HBASE-26008. --- Resolution: Fixed > Fix typo in AsyncConnectionImpl > --- > > Key: HBASE-26008 > URL: https://issues.apache.org/jira/browse/HBASE-26008 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Yulin Niu >Assignee: Yulin Niu >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26008) Fix typo in AsyncConnectionImpl
[ https://issues.apache.org/jira/browse/HBASE-26008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364236#comment-17364236 ] Yulin Niu commented on HBASE-26008: --- Pushed to master. Thanks [~zhangduo] for the review. > Fix typo in AsyncConnectionImpl > --- > > Key: HBASE-26008 > URL: https://issues.apache.org/jira/browse/HBASE-26008 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Yulin Niu >Assignee: Yulin Niu >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3391: HBASE-26008 Fix typo in AsyncConnectionImpl
Apache-HBase commented on pull request #3391: URL: https://github.com/apache/hbase/pull/3391#issuecomment-862237947 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 41s | master passed | | +1 :green_heart: | compile | 0m 35s | master passed | | +1 :green_heart: | shadedjars | 8m 43s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 28s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 46s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed | | +1 :green_heart: | javac | 0m 29s | the patch passed | | +1 :green_heart: | shadedjars | 8m 33s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 27s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 36s | hbase-client in the patch passed. 
| | | | 32m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3391/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3391 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 6f94103fc864 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 555f8b461f | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3391/1/testReport/ | | Max. process+thread count | 297 (vs. ulimit of 3) | | modules | C: hbase-client U: hbase-client | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3391/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3378: HBASE-25968 Request compact to compaction server
Apache-HBase commented on pull request #3378: URL: https://github.com/apache/hbase/pull/3378#issuecomment-862223592 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 8s | HBASE-25714 passed | | +1 :green_heart: | compile | 2m 42s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 7m 45s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 21s | HBASE-25714 passed | | -0 :warning: | patch | 9m 41s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 12s | the patch passed | | +1 :green_heart: | compile | 2m 41s | the patch passed | | +1 :green_heart: | javac | 2m 41s | the patch passed | | +1 :green_heart: | shadedjars | 7m 45s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 21s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 0s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 16s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 149m 1s | hbase-server in the patch passed. 
| | | | 187m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3378 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux abdcfa1b3776 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 6afca943ea | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/testReport/ | | Max. process+thread count | 3705 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3378: HBASE-25968 Request compact to compaction server
Apache-HBase commented on pull request #3378: URL: https://github.com/apache/hbase/pull/3378#issuecomment-86037 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 37s | HBASE-25714 passed | | +1 :green_heart: | compile | 2m 12s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 7m 46s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 14s | HBASE-25714 passed | | -0 :warning: | patch | 9m 33s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 35s | the patch passed | | +1 :green_heart: | compile | 2m 14s | the patch passed | | +1 :green_heart: | javac | 2m 14s | the patch passed | | +1 :green_heart: | shadedjars | 7m 39s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 47s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 13s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 149m 53s | hbase-server in the patch passed. 
| | | | 184m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3378 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 39f019141322 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 6afca943ea | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/testReport/ | | Max. process+thread count | 4107 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-26008) Fix typo in AsyncConnectionImpl
[ https://issues.apache.org/jira/browse/HBASE-26008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26008: -- Component/s: Client > Fix typo in AsyncConnectionImpl > --- > > Key: HBASE-26008 > URL: https://issues.apache.org/jira/browse/HBASE-26008 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Yulin Niu >Assignee: Yulin Niu >Priority: Minor >
[GitHub] [hbase] nyl3532016 opened a new pull request #3391: HBASE-26008 Fix typo in AsyncConnectionImpl
nyl3532016 opened a new pull request #3391: URL: https://github.com/apache/hbase/pull/3391
[jira] [Created] (HBASE-26008) Fix typo in AsyncConnectionImpl
Yulin Niu created HBASE-26008: - Summary: Fix typo in AsyncConnectionImpl Key: HBASE-26008 URL: https://issues.apache.org/jira/browse/HBASE-26008 Project: HBase Issue Type: Improvement Reporter: Yulin Niu Assignee: Yulin Niu
[GitHub] [hbase] Reidddddd commented on pull request #3385: HBASE-26001 When turn on access control, the cell level TTL of Increment and Append operations is invalid
Reidddddd commented on pull request #3385: URL: https://github.com/apache/hbase/pull/3385#issuecomment-862199002 Please also pay attention to the checkstyle warnings.
[GitHub] [hbase] Apache-HBase commented on pull request #3390: HBASE-25976 Implement a master based ReplicationTracker
Apache-HBase commented on pull request #3390: URL: https://github.com/apache/hbase/pull/3390#issuecomment-862186013 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 22s | master passed | | +1 :green_heart: | compile | 1m 22s | master passed | | +1 :green_heart: | shadedjars | 8m 25s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 52s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 37s | the patch passed | | +1 :green_heart: | compile | 1m 18s | the patch passed | | +1 :green_heart: | javac | 1m 18s | the patch passed | | +1 :green_heart: | shadedjars | 8m 16s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 50s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 31s | hbase-replication in the patch passed. | | -1 :x: | unit | 153m 33s | hbase-server in the patch failed. 
| | | | 186m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3390 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a05e85cbb5ac 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 555f8b461f | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/testReport/ | | Max. process+thread count | 4101 (vs. ulimit of 3) | | modules | C: hbase-replication hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidddddd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652491718 ## File path: hbase-thrift/src/main/resources/hbase-webapps/thrift/thrift.jsp ## @@ -93,11 +98,6 @@ String framed = conf.get("hbase.regionserver.thrift.framed", "false"); Value Description - Review comment: why remove?
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652490713 ## File path: hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift ## @@ -296,6 +318,187 @@ enum TCompareOp { NO_OP = 6 } +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.regionserver.BloomType + */ +enum TBloomFilterType { +/** + * Bloomfilters disabled + */ + NONE = 0, + /** + * Bloom enabled with Table row as Key + */ + ROW = 1, + /** + * Bloom enabled with Table row column (family+qualifier) as Key + */ + ROWCOL = 2, + /** + * Bloom enabled with Table row prefix as Key, specify the length of the prefix + */ + ROWPREFIX_FIXED_LENGTH = 3, +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.io.compress.Algorithm + */ +enum TCompressionAlgorithm { + LZO = 0, + GZ = 1, + NONE = 2, + SNAPPY = 3, + LZ4 = 4, + BZIP2 = 5, + ZSTD = 6 +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.io.encoding.DataBlockEncoding + */ +enum TDataBlockEncoding { +/** Disable data block encoding. */ + NONE = 0, + // id 1 is reserved for the BITSET algorithm to be added later + PREFIX = 2, + DIFF = 3, + FAST_DIFF = 4, + // id 5 is reserved for the COPY_KEY algorithm for benchmarking + // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), + // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), + ROW_INDEX_V1 = 7 +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.KeepDeletedCells + */ +enum TKeepDeletedCells { + /** Deleted Cells are not retained. */ + FALSE = 0, + /** + * Deleted Cells are retained until they are removed by other means + * such TTL or VERSIONS. + * If no TTL is specified or no new versions of delete cells are + * written, they are retained forever. + */ + TRUE = 1, + /** + * Deleted Cells are retained until the delete marker expires due to TTL. 
+ * This is useful when TTL is combined with MIN_VERSIONS and one + * wants to keep a minimum number of versions around but at the same + * time remove deleted cells after the TTL. + */ + TTL = 2 +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.TableName + */ +struct TTableName { + /** namespace name */ + 1: optional binary ns + /** tablename */ + 2: required binary qualifier +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.client.ColumnFamilyDescriptor + */ +struct TColumnFamilyDescriptor { + 1: required binary name + 2: optional map<binary, binary> attributes + 3: optional map<string, string> configuration + 4: optional i32 blockSize + 5: optional TBloomFilterType bloomnFilterType + 6: optional TCompressionAlgorithm compressionType + 7: optional i16 dfsReplication + 8: optional TDataBlockEncoding dataBlockEncoding + 9: optional TKeepDeletedCells keepDeletedCells + 10: optional i32 maxVersions + 11: optional i32 minVersions + 12: optional i32 scope + 13: optional i32 timeToLive + 14: optional bool blockCacheEnabled + 15: optional bool cacheBloomsOnWrite + 16: optional bool cacheDataOnWrite + 17: optional bool cacheIndexesOnWrite + 18: optional bool compressTags + 19: optional bool evictBlocksOnClose + 20: optional bool inMemory + +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.client.TableDescriptor + */ +struct TTableDescriptor { + 1: required TTableName tableName + 2: optional list<TColumnFamilyDescriptor> columns + 3: optional map<binary, binary> attributes + 4: optional TDurability durability +} + +/** + * Thrift wrapper around + * org.apache.hadoop.hbase.NamespaceDescriptor + */ +struct TNamespaceDescriptor { +1: required string name +2: optional map<string, string> configuration +} + +enum TLogType { + SLOW_LOG = 1, + LARGE_LOG = 2 +} + +enum TFilterByOperator { + AND, Review comment: ?
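The Thrift definitions quoted in this review are wrappers that mirror server-side Java enums, so the handler needs an explicit conversion in each direction. A minimal sketch of such a mapping follows; both enums are declared locally here purely for illustration (the real HBase code converts the generated `TBloomFilterType` to `org.apache.hadoop.hbase.regionserver.BloomType`):

```java
// Illustration of the Thrift-wrapper pattern from the review above:
// a Thrift-style enum and the server-side enum it mirrors, with explicit
// conversion both ways. Both enums are toy local declarations, not the
// generated HBase classes.
enum TBloomFilterType { NONE, ROW, ROWCOL, ROWPREFIX_FIXED_LENGTH }
enum BloomType { NONE, ROW, ROWCOL, ROWPREFIX_FIXED_LENGTH }

final class BloomTypeConverter {
    static BloomType fromThrift(TBloomFilterType t) {
        switch (t) {
            case ROW: return BloomType.ROW;
            case ROWCOL: return BloomType.ROWCOL;
            case ROWPREFIX_FIXED_LENGTH: return BloomType.ROWPREFIX_FIXED_LENGTH;
            default: return BloomType.NONE; // unknown values degrade to NONE
        }
    }

    static TBloomFilterType toThrift(BloomType b) {
        // The names line up one-to-one, so valueOf works; an explicit
        // switch would be equally valid and more robust to renames.
        return TBloomFilterType.valueOf(b.name());
    }
}
```

The explicit switch in `fromThrift` is the safer direction: it gives unknown wire values a defined fallback instead of throwing.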
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidddddd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652488577 ## File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java ## @@ -1,40 +1,49 @@ /* - * Copyright The Apache Software Foundation Review comment: We should avoid changing the header
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652487758 ## File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java ## @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.thrift; + +import java.io.Closeable; +import java.io.IOException; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.util.StringUtils; + +/** + * Run ThriftServer with passed arguments. Access the exception thrown after we complete run -- if + * an exception thrown -- via {@link #getRunException()}}. Call close to shutdown this Runner + * and hosted {@link ThriftServer}. 
+ */ +class ThriftServerRunner extends Thread implements Closeable { + private static final Log LOG = LogFactory.getLog(ThriftServerRunner.class); + Exception exception = null; + private final ThriftServer thriftServer; + private final String [] args; + + ThriftServerRunner(ThriftServer thriftServer, String [] args) { +this.thriftServer = thriftServer; +this.args = args; +LOG.info(String.format("thriftServer=%s, args=%s", getThriftServer(), + StringUtils.join(" ", args))); + } + + ThriftServer getThriftServer() { +return this.thriftServer; + } + + /** + * @return Empty unless {@link #run()} threw an exception; if it did, access it here. + */ + Exception getRunException() { +return this.exception; + } + + @Override public void run() { +try { + this.thriftServer.run(this.args); +} catch (Exception e) { + LOG.error("Run threw an exception", e); + this.exception = e; +} + } + + @Override public void close() throws IOException { Review comment: ditto -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652487568 ## File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java ## @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.thrift; + +import java.io.Closeable; +import java.io.IOException; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.util.StringUtils; + +/** + * Run ThriftServer with passed arguments. Access the exception thrown after we complete run -- if + * an exception thrown -- via {@link #getRunException()}}. Call close to shutdown this Runner + * and hosted {@link ThriftServer}. 
+ */ +class ThriftServerRunner extends Thread implements Closeable { + private static final Log LOG = LogFactory.getLog(ThriftServerRunner.class); + Exception exception = null; + private final ThriftServer thriftServer; + private final String [] args; + + ThriftServerRunner(ThriftServer thriftServer, String [] args) { +this.thriftServer = thriftServer; +this.args = args; +LOG.info(String.format("thriftServer=%s, args=%s", getThriftServer(), + StringUtils.join(" ", args))); + } + + ThriftServer getThriftServer() { +return this.thriftServer; + } + + /** + * @return Empty unless {@link #run()} threw an exception; if it did, access it here. + */ + Exception getRunException() { +return this.exception; + } + + @Override public void run() { Review comment: newline between override and public
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652485054 ## File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java ## @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.thrift; + +import java.io.Closeable; +import java.io.IOException; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.util.StringUtils; + +/** + * Run ThriftServer with passed arguments. Access the exception thrown after we complete run -- if + * an exception thrown -- via {@link #getRunException()}}. Call close to shutdown this Runner + * and hosted {@link ThriftServer}. + */ +class ThriftServerRunner extends Thread implements Closeable { Review comment: public? missing annotation? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
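The ThriftServerRunner class under review follows a common pattern: run a task on a helper thread, capture any exception it throws, and expose that exception to the caller after the thread finishes. A stdlib-only sketch of the pattern (illustrative names, not the HBase class):

```java
// Sketch of the "runner" pattern discussed in the review above: a thread
// that runs a task, records any exception instead of swallowing it, and
// exposes it after join(). Not the actual HBase ThriftServerRunner.
class TaskRunner extends Thread implements AutoCloseable {
    private final Runnable task;
    private volatile Exception exception; // null unless run() threw

    TaskRunner(Runnable task) {
        this.task = task;
    }

    /** Returns null unless {@link #run()} threw; meaningful after close(). */
    Exception getRunException() {
        return exception;
    }

    @Override
    public void run() {
        try {
            task.run();
        } catch (Exception e) {
            exception = e; // keep for the caller to inspect
        }
    }

    @Override
    public void close() throws InterruptedException {
        // The real runner would also ask the hosted server to stop first;
        // here we only wait for the task to finish.
        join();
    }
}
```

The volatile field matters: the exception is written on the runner thread and read on the caller's thread after `close()`, so the field needs a safe publication point (here, volatile plus the happens-before edge of `join()`).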
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidddddd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652483439 ## File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java ## @@ -97,8 +93,13 @@ import static org.apache.hadoop.hbase.thrift2.ThriftUtilities.scanFromThrift; import static org.apache.hadoop.hbase.thrift2.ThriftUtilities.incrementFromThrift; import static org.apache.hadoop.hbase.thrift2.ThriftUtilities.deleteFromThrift; -import static org.junit.Assert.*; import static java.nio.ByteBuffer.wrap; +import static org.junit.Assert.assertArrayEquals; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; Review comment: Are the import orders correct? It looks like they should be org.* then java.*
[GitHub] [hbase] Reidddddd commented on a change in pull request #2941: HBASE-21674:Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1
Reidddddd commented on a change in pull request #2941: URL: https://github.com/apache/hbase/pull/2941#discussion_r652482730 ## File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java ## @@ -1465,37 +1465,6 @@ public void testCheckAndMutate() throws Exception { assertTColumnValueEqual(columnValueB, result.getColumnValues().get(1)); } - @Test - public void testConsistency() throws Exception { Review comment: Why is this method deleted?
[GitHub] [hbase] Apache-HBase commented on pull request #3390: HBASE-25976 Implement a master based ReplicationTracker
Apache-HBase commented on pull request #3390: URL: https://github.com/apache/hbase/pull/3390#issuecomment-862175345 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 21s | master passed | | +1 :green_heart: | compile | 1m 31s | master passed | | +1 :green_heart: | shadedjars | 8m 13s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 0s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 15s | the patch passed | | +1 :green_heart: | compile | 1m 33s | the patch passed | | +1 :green_heart: | javac | 1m 33s | the patch passed | | +1 :green_heart: | shadedjars | 8m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 33s | hbase-replication in the patch passed. | | +1 :green_heart: | unit | 137m 5s | hbase-server in the patch passed. 
| | | | 170m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3390 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 709ef0108c6a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 555f8b461f | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/testReport/ | | Max. process+thread count | 4027 (vs. ulimit of 3) | | modules | C: hbase-replication hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3385: HBASE-26001 When turn on access control, the cell level TTL of Increment and Append operations is invalid
Apache-HBase commented on pull request #3385: URL: https://github.com/apache/hbase/pull/3385#issuecomment-862162568 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 35s | master passed | | +1 :green_heart: | compile | 3m 9s | master passed | | +1 :green_heart: | checkstyle | 1m 1s | master passed | | +1 :green_heart: | spotbugs | 2m 2s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 33s | the patch passed | | +1 :green_heart: | compile | 3m 9s | the patch passed | | +1 :green_heart: | javac | 3m 9s | the patch passed | | -0 :warning: | checkstyle | 1m 2s | hbase-server: The patch generated 11 new + 20 unchanged - 0 fixed = 31 total (was 20) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 53s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. 
| | | | 45m 47s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3385 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 3177967c17f0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 555f8b461f | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 95 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/4/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Reidddddd commented on a change in pull request #3385: HBASE-26001 When turn on access control, the cell level TTL of Increment and Append operations is invalid
Reidd commented on a change in pull request #3385: URL: https://github.com/apache/hbase/pull/3385#discussion_r652458898 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestPostIncrementAndAppendBeforeWAL.java ## @@ -161,6 +173,80 @@ public void testChangeCellWithNotExistColumnFamily() throws Exception { } } + @Test + public void testIncrementTTLWithACLTag() throws Exception { +TableName tableName = TableName.valueOf(name.getMethodName()); +createTableWithCoprocessor(tableName, ChangeCellWithACLTagObserver.class.getName()); +try (Table table = connection.getTable(tableName)) { + // Increment without TTL + Increment firstIncrement = new Increment(ROW).addColumn(CF1_BYTES, CQ1, 1).setACL(new HashMap<>()); + Result result = table.increment(firstIncrement); + assertEquals(1, result.size()); + assertEquals(1, Bytes.toLong(result.getValue(CF1_BYTES, CQ1))); + + // Check if the new cell can be read + Get get = new Get(ROW).addColumn(CF1_BYTES, CQ1); + result = table.get(get); + assertEquals(1, result.size()); + assertEquals(1, Bytes.toLong(result.getValue(CF1_BYTES, CQ1))); + + // Increment with TTL + Increment secondIncrement = new Increment(ROW).addColumn(CF1_BYTES, CQ1, 1).setTTL(1000) +.setACL(new HashMap<>()); + result = table.increment(secondIncrement); + + // We should get value 2 here + assertEquals(1, result.size()); + assertEquals(2, Bytes.toLong(result.getValue(CF1_BYTES, CQ1))); + + // Wait 2s to let the second increment expire + Thread.sleep(2000); + get = new Get(ROW).addColumn(CF1_BYTES, CQ1); + result = table.get(get); + + // The value should revert to 1 + assertEquals(1, result.size()); + assertEquals(1, Bytes.toLong(result.getValue(CF1_BYTES, CQ1))); +} + } + + @Test + public void testAppendTTLWithACLTag() throws Exception { +TableName tableName = TableName.valueOf(name.getMethodName()); +createTableWithCoprocessor(tableName, ChangeCellWithACLTagObserver.class.getName()); +try (Table table = connection.getTable(tableName)) { + 
// Append without TTL + Append firstAppend = new Append(ROW).addColumn(CF1_BYTES, CQ2, VALUE).setACL(new HashMap<>()); Review comment: Is it possible we add a non-null empty acls? then after the 2nd increment, we verify whether the result has the carried-forward acls? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
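The test quoted in this review relies on HBase's cell-versioning semantics: an Increment with a TTL writes a new cell version that expires on its own, after which reads fall back to the previous version, so the counter appears to "revert". A toy model of that behaviour (plain Java with an injectable clock, not HBase code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of cell-level TTL semantics: every write is a new version,
// expired versions are skipped on read, so the visible value reverts.
// Not HBase code -- only an illustration of what the quoted test asserts.
class TtlCellStore {
    private static final class Version {
        final long value;
        final long writeTimeMs;
        final long ttlMs; // Long.MAX_VALUE means "no TTL"

        Version(long value, long writeTimeMs, long ttlMs) {
            this.value = value;
            this.writeTimeMs = writeTimeMs;
            this.ttlMs = ttlMs;
        }

        boolean live(long nowMs) {
            return nowMs - writeTimeMs < ttlMs;
        }
    }

    private final Deque<Version> versions = new ArrayDeque<>(); // newest first

    /** Increment: read the newest live version, write a new version with the given TTL. */
    void increment(long delta, long ttlMs, long nowMs) {
        versions.addFirst(new Version(read(nowMs) + delta, nowMs, ttlMs));
    }

    /** Read the newest non-expired version, or 0 if none is live. */
    long read(long nowMs) {
        for (Version v : versions) {
            if (v.live(nowMs)) {
                return v.value;
            }
        }
        return 0;
    }
}
```

Walking through the scenario from testIncrementTTLWithACLTag: increment by 1 with no TTL, then by 1 with a 1000 ms TTL (read gives 2); once the second version's TTL elapses, a read skips it and returns 1 again. The bug under fix was that the access-control tag handling dropped the TTL tag, so the second version never expired.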
[GitHub] [hbase] Reidddddd commented on a change in pull request #3385: HBASE-26001 When turn on access control, the cell level TTL of Increment and Append operations is invalid
Reidddddd commented on a change in pull request #3385: URL: https://github.com/apache/hbase/pull/3385#discussion_r652457713 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestPostIncrementAndAppendBeforeWAL.java ## @@ -232,4 +318,25 @@ private Cell newCellWithNotExistColumnFamily(Cell cell) { .collect(Collectors.toList()); } } -} Review comment: new line?
[GitHub] [hbase] Apache-HBase commented on pull request #3378: HBASE-25968 Request compact to compaction server
Apache-HBase commented on pull request #3378:
URL: https://github.com/apache/hbase/pull/3378#issuecomment-862146614

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 2m 8s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 0s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ HBASE-25714 Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 45s | HBASE-25714 passed |
| +1 :green_heart: | compile | 6m 50s | HBASE-25714 passed |
| +1 :green_heart: | checkstyle | 1m 55s | HBASE-25714 passed |
| +1 :green_heart: | spotbugs | 7m 21s | HBASE-25714 passed |
| -0 :warning: | patch | 2m 34s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 0s | the patch passed |
| +1 :green_heart: | compile | 5m 47s | the patch passed |
| +1 :green_heart: | cc | 5m 47s | the patch passed |
| +1 :green_heart: | javac | 5m 47s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 50s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 19m 45s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | hbaseprotoc | 2m 3s | the patch passed |
| +1 :green_heart: | spotbugs | 7m 58s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 74m 12s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3378 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool |
| uname | Linux af7c6ed922e7 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-25714 / 6afca943ea |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 86 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/6/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3373: HBASE-25980 Master table.jsp pointed at meta throws 500 when no all r…
Apache-HBase commented on pull request #3373:
URL: https://github.com/apache/hbase/pull/3373#issuecomment-862109028

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 17s | Docker mode activated. |
| -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 44s | branch-2 passed |
| +1 :green_heart: | javadoc | 0m 45s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 32s | the patch passed |
| +1 :green_heart: | javadoc | 0m 42s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 207m 48s | hbase-server in the patch passed. |
| | | 221m 34s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3373/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3373 |
| Optional Tests | javac javadoc unit |
| uname | Linux e073c3b547cb 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / a238e79e0f |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3373/2/testReport/ |
| Max. process+thread count | 2450 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3373/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3385: HBASE-26001 When turn on access control, the cell level TTL of Increment and Append operations is invalid
Apache-HBase commented on pull request #3385:
URL: https://github.com/apache/hbase/pull/3385#issuecomment-862100592

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 5s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 1s | master passed |
| +1 :green_heart: | compile | 1m 5s | master passed |
| +1 :green_heart: | shadedjars | 9m 0s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 35s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 6s | the patch passed |
| +1 :green_heart: | compile | 1m 3s | the patch passed |
| +1 :green_heart: | javac | 1m 3s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 59s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 37s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 211m 40s | hbase-server in the patch passed. |
| | | 243m 50s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3385 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 65fd7ef9dc4a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 555f8b461f |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/3/testReport/ |
| Max. process+thread count | 3699 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3390: HBASE-25976 Implement a master based ReplicationTracker
Apache-HBase commented on pull request #3390:
URL: https://github.com/apache/hbase/pull/3390#issuecomment-862098596

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 48s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 18s | master passed |
| +1 :green_heart: | compile | 3m 48s | master passed |
| +1 :green_heart: | checkstyle | 1m 21s | master passed |
| +1 :green_heart: | spotbugs | 2m 46s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 7s | the patch passed |
| +1 :green_heart: | compile | 3m 51s | the patch passed |
| +1 :green_heart: | javac | 0m 25s | hbase-replication generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) |
| +1 :green_heart: | javac | 3m 26s | hbase-server in the patch passed. |
| +1 :green_heart: | checkstyle | 1m 19s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 :green_heart: | hadoopcheck | 20m 35s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | spotbugs | 3m 10s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 56m 32s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3390 |
| Optional Tests | dupname asflicense javac hadoopcheck xml compile spotbugs hbaseanti checkstyle |
| uname | Linux a74bfc79e918 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 555f8b461f |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 86 (vs. ulimit of 3) |
| modules | C: hbase-replication hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3390/3/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3378: HBASE-25968 Request compact to compaction server
Apache-HBase commented on pull request #3378:
URL: https://github.com/apache/hbase/pull/3378#issuecomment-862096283

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
| -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ HBASE-25714 Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 38s | HBASE-25714 passed |
| +1 :green_heart: | compile | 2m 13s | HBASE-25714 passed |
| +1 :green_heart: | shadedjars | 7m 44s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 14s | HBASE-25714 passed |
| -0 :warning: | patch | 9m 31s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 40s | the patch passed |
| +1 :green_heart: | compile | 2m 13s | the patch passed |
| +1 :green_heart: | javac | 2m 13s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 44s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 12s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 48s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 12s | hbase-client in the patch passed. |
| -1 :x: | unit | 149m 59s | hbase-server in the patch failed. |
| | | 185m 23s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3378 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 9aef09cf0943 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-25714 / 6afca943ea |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/testReport/ |
| Max. process+thread count | 5102 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3378: HBASE-25968 Request compact to compaction server
Apache-HBase commented on pull request #3378:
URL: https://github.com/apache/hbase/pull/3378#issuecomment-862095896

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 5s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ HBASE-25714 Compile Tests _ |
| +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 9s | HBASE-25714 passed |
| +1 :green_heart: | compile | 2m 39s | HBASE-25714 passed |
| +1 :green_heart: | shadedjars | 7m 41s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 19s | HBASE-25714 passed |
| -0 :warning: | patch | 9m 35s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 17s | the patch passed |
| +1 :green_heart: | compile | 2m 42s | the patch passed |
| +1 :green_heart: | javac | 2m 42s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 46s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 20s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 0s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 17s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 146m 12s | hbase-server in the patch passed. |
| | | 184m 44s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3378 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux a1d22d043b8c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-25714 / 6afca943ea |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/testReport/ |
| Max. process+thread count | 3935 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3378/5/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3385: HBASE-26001 When turn on access control, the cell level TTL of Increment and Append operations is invalid
Apache-HBase commented on pull request #3385:
URL: https://github.com/apache/hbase/pull/3385#issuecomment-862070355

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 24s | master passed |
| +1 :green_heart: | compile | 1m 14s | master passed |
| +1 :green_heart: | shadedjars | 8m 38s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 52s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 52s | the patch passed |
| +1 :green_heart: | compile | 1m 36s | the patch passed |
| +1 :green_heart: | javac | 1m 36s | the patch passed |
| +1 :green_heart: | shadedjars | 11m 29s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 53s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 155m 43s | hbase-server in the patch passed. |
| | | 193m 5s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3385 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux a8b8ca0a56c5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 555f8b461f |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/3/testReport/ |
| Max. process+thread count | 3950 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3385/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.