[GitHub] [hbase] Apache-HBase commented on pull request #2731: HBASE-25349 [Flakey Tests] branch-2 TestRefreshRecoveredReplication.t…
Apache-HBase commented on pull request #2731:
URL: https://github.com/apache/hbase/pull/2731#issuecomment-737033253

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 20s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| | | | _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 1s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 16s | branch-2 passed |
| +1 :green_heart: | spotbugs | 2m 6s | branch-2 passed |
| | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 34s | the patch passed |
| -0 :warning: | checkstyle | 1m 14s | hbase-server: The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 12m 54s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 15s | the patch passed |
| | | | _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | 36m 45s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2731/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2731 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux d746a0b5bbed 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 745a3a9ab7 |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2731/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2731/1/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242089#comment-17242089 ]

Xiaolin Ha commented on HBASE-25302:
------------------------------------

Hi [~zhangduo], [~stack], [~zghao], [~anoop.hbase], could you help review this issue? We have put the idea into practice on our production clusters. The detailed design doc is in the issue description, and before/after-split images are attached to show the result. If the idea looks OK, I can contribute the full implementation to the community; our company would be very happy to do so. Thank you.

> Fast split regions with stripe store engine
> -------------------------------------------
>
>                 Key: HBASE-25302
>                 URL: https://issues.apache.org/jira/browse/HBASE-25302
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Xiaolin Ha
>            Assignee: Xiaolin Ha
>            Priority: Major
>         Attachments: after-split.png, before-split.png
>
>
> In our company, MeiTuan, we ran into a problem: the split switch of a production cluster with more than 1 regions was turned off by mistake, and more than 1 regions grew larger than 600GB before we could do anything to split them. Because the R+W QPS is higher than 8, splitting all the large regions at once under the current splitting method would require a lot of IO and CPU resources, especially since every region needs to split more than once.
> Fortunately, we use stripe compaction on most of our production clusters. Following the design docs in https://issues.apache.org/jira/browse/HBASE-7667, we implemented a fast region split method based on the idea of HFileLink, plus an hfile movement method between regions of the same table. This idea was actually mentioned in HBASE-7667: `region splits become marvelously simple (if we could move files between regions, no references would be needed)`.
> This issue covers the fast split method.
> Details are in the doc:
> [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing]
> It is simple and efficient; we have implemented all the ideas described in the design doc and use them on our production clusters. A region of about 600G can be split into 8 regions of about 75G each in roughly five minutes, with less than 5G of total rewrite (all in L0) in the whole process, while a normal continuous split needs 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripes' worth), because rebuilding HFileLinks into stripes may insert some files into L0.
> I will attach two images of a RS before and after splitting all its regions with this method.
>
> We are willing to contribute the code to the community. This idea applies not only to the stripe store engine but also to the default store engine, and it can also greatly benefit region merges. If anyone is interested in this issue, please let me know. Thanks.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
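The 600G*3=1800G figure for the normal continuous split follows from repeated halving: reaching 8 daughters from one region takes three rounds of splits, and each round's reference-file compactions rewrite roughly the whole data set once. A quick sketch of that arithmetic (the "each round rewrites everything" model is our reading of the figures above, not something spelled out in the issue):

```python
import math

def continuous_split_rewrite_gb(region_gb: float, target_regions: int) -> float:
    """GB rewritten when splitting by repeated halving, assuming each
    round of splits rewrites roughly the full data set once."""
    rounds = math.ceil(math.log2(target_regions))  # 8 daughters -> 3 rounds
    return region_gb * rounds

# One 600G region -> 8 daughters of 600/8 = 75G each.
print(600 / 8)                               # 75.0
print(continuous_split_rewrite_gb(600, 8))   # 1800.0
```

Link-based splitting, by contrast, only rewrites what ends up in L0 (under 5G here), which is why the whole split finishes in minutes.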
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaolin Ha updated HBASE-25302:
-------------------------------
    Description: In our company, MeiTuan, we ran into a problem: the split switch of a production cluster with more than 1 regions was turned off by mistake, and more than 1 regions grew larger than 600GB before we could do anything to split them. Because the R+W QPS is higher than 8, splitting all the large regions at once under the current splitting method would require a lot of IO and CPU resources, especially since every region needs to split more than once. Fortunately, we use stripe compaction on most of our production clusters. Following the design docs in https://issues.apache.org/jira/browse/HBASE-7667, we implemented a fast region split method based on the idea of HFileLink, plus an hfile movement method between regions of the same table. This idea was actually mentioned in HBASE-7667: `region splits become marvelously simple (if we could move files between regions, no references would be needed)`. This issue covers the fast split method. Details are in the doc: [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing] It is simple and efficient; we have implemented all the ideas described in the design doc and use them on our production clusters. A region of about 600G can be split into 8 regions of about 75G each in roughly five minutes, with less than 5G of total rewrite (all in L0) in the whole process, while a normal continuous split needs 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripes' worth), because rebuilding HFileLinks into stripes may insert some files into L0. I will attach two images of a RS before and after splitting all its regions with this method. We are willing to contribute the code to the community. This idea applies not only to the stripe store engine but also to the default store engine, and it can also greatly benefit region merges. If anyone is interested in this issue, please let me know. Thanks.
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaolin Ha updated HBASE-25302:
-------------------------------
    Attachment: after-split.png
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaolin Ha updated HBASE-25302:
-------------------------------
    Attachment: before-split.png
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaolin Ha updated HBASE-25302:
-------------------------------
    Description: In our company, MeiTuan, we ran into a problem: the split switch of a production cluster with more than 1 regions was turned off by mistake, and more than 1 regions grew larger than 600GB before we could do anything to split them. Because the R+W QPS is higher than 8, splitting all the large regions at once under the current splitting method would require a lot of IO and CPU resources, especially since every region needs to split more than once. Fortunately, we use stripe compaction on most of our production clusters. Following the design docs in https://issues.apache.org/jira/browse/HBASE-7667, we implemented a fast region split method based on the idea of HFileLink, plus an hfile movement method between regions of the same table. This idea was actually mentioned in HBASE-7667: `region splits become marvelously simple (if we could move files between regions, no references would be needed)`. This issue covers the fast split method. Details are in the doc: [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing] It is simple and efficient; we have implemented all the ideas described in the design doc and use them on our production clusters. A region of about 600G can be split into 8 regions of about 75G each in roughly five minutes, with less than 5G of total rewrite (all in L0) in the whole process, while a normal continuous split needs 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripes' worth), because rebuilding HFileLinks into stripes may insert some files into L0. I will attach two images of a RS before and after splitting all its regions with this method. If anyone is interested in this issue, please let me know. Thanks.
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaolin Ha updated HBASE-25302:
-------------------------------
    Description: In our company, MeiTuan, we ran into a problem: the split switch of a production cluster with more than 1 regions was turned off by mistake, and more than 1 regions grew larger than 600GB before we could do anything to split them. Because the R+W QPS is higher than 8, splitting all the large regions at once under the current splitting method would require a lot of IO, especially since every region needs to split more than once. Fortunately, we use stripe compaction on most of our production clusters. Following the design docs in https://issues.apache.org/jira/browse/HBASE-7667, we implemented a fast region split method based on the idea of HFileLink, plus an hfile movement method between regions of the same table. This idea was actually mentioned in HBASE-7667: `region splits become marvelously simple (if we could move files between regions, no references would be needed)`. This issue covers the fast split method. Details are in the doc: [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing] It is simple and efficient; we have implemented all the ideas described in the design doc and use them on our production clusters. A region of about 600G can be split into 8 regions of about 75G each in roughly five minutes, with less than 5G of total rewrite (all in L0) in the whole process, while a normal continuous split needs 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripes' worth), because rebuilding HFileLinks into stripes may insert some files into L0. I will attach two images of a RS before and after splitting all its regions with this method. If anyone is interested in this issue, please let me know. Thanks.

was:
We have implemented a fast continuous split region method using HFileLink, depending on the stripe store file manager. It is simple and efficient; we have implemented all the ideas described in the design doc and use them on our production clusters. A region of about 600G can be split into 8 regions of about 75G each in roughly five minutes, with less than 5G of total rewrite (all in L0) in the whole process, while a normal continuous split needs 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripes' worth), because rebuilding HFileLinks into stripes may insert some files into L0. Details are in the doc: [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing] If anyone is interested in this issue, please let me know. Thanks.
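The HBASE-7667 observation quoted in this issue — that splits become simple if files can move between regions — is the core of the approach. The toy model below (illustrative Python with made-up names, not HBase's actual HFileLink API) contrasts a link-based split, whose cost is metadata-sized, with a rewrite-based split, which copies every byte:

```python
# Conceptual sketch (not HBase code) of why link-based splits rewrite
# almost nothing: a daughter region gets a link pointing at the parent's
# HFile instead of a rewritten copy.
from dataclasses import dataclass, field

@dataclass
class HFile:
    name: str
    size_gb: float

@dataclass
class Region:
    files: list = field(default_factory=list)   # HFiles owned by the region
    links: list = field(default_factory=list)   # links into other regions' files

def split_with_links(parent, n):
    """Split into n daughters by linking; returns (daughters, GB rewritten)."""
    daughters = [Region(links=list(parent.files)) for _ in range(n)]
    return daughters, 0.0   # no data rewritten, only link metadata created

def split_by_rewrite(parent, n):
    """Classic split: each daughter's share of the data gets rewritten."""
    total = sum(f.size_gb for f in parent.files)
    daughters = [Region(files=[HFile(f"part-{i}", total / n)]) for i in range(n)]
    return daughters, total

# A ~600G region held as two 300G stripes (sizes are illustrative).
parent = Region(files=[HFile("stripe-0", 300.0), HFile("stripe-1", 300.0)])
print(split_with_links(parent, 8)[1])   # 0.0 GB rewritten
print(split_by_rewrite(parent, 8)[1])   # 600.0 GB rewritten
```

In the real stripe engine the stripe boundaries already partition the key space, which is what lets daughters adopt whole stripe files; only L0 files, which span the key space, may need rewriting — consistent with the "less than 5G (all are L0)" figure in the issue.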
[GitHub] [hbase] saintstack opened a new pull request #2731: HBASE-25349 [Flakey Tests] branch-2 TestRefreshRecoveredReplication.t…
saintstack opened a new pull request #2731:
URL: https://github.com/apache/hbase/pull/2731

…estReplicationRefreshSource:141 Waiting timed out after [60,000] msec

Start the check for recovered queue presence earlier.
[GitHub] [hbase] Apache9 commented on pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …
Apache9 commented on pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#issuecomment-737011410

> sorry for the delayed response.
>
> > Is clumsy operator deleting the meta location znode by mistake a valid failure mode ?
>
> no, this is a special case that we have been supporting, where the HBase cluster freshly restarts on top of only flushed HFiles and does not come with WAL or ZK. and we admitted that it's a bit different from the community standpoint that WAL and ZK must both pre-exist when the master and/or RSs start on existing HFiles to resume the states left by any procedures.

Yes, this is not a typical scenario in the open source version of HBase, so I do not think adding the special logic to the open source version is a good idea. In the future, new developers who do not know this background may change the code again and cause problems.

> > What about adding an extra step before assign where we wait, asking the Master a question about the cluster state, such as whether any of the RSs that are checking in have Regions on them; i.e. if Regions are already assigned, if it is an already 'up' cluster? Would that help?
>
> having an extra step to check whether RSs have anything assigned may help, but I don't know if we can do that before the server manager finds any region server online.
>
> > You fellows don't want to have to run a script beforehand? ZK is up and just put an empty location up, or ask Master or hbck2 to do it for you?
>
> I think HBCK/HBCK2 performs online repairing; there are a few concerns we have:
>
> 1. if the master is not up and running, then we cannot proceed
> 2. even if the master is up, repairing hundreds or thousands of regions implies a long scanning time, which IMO we can save by just reloading from the existing meta.
> 3. additional steps/scripts to start an HBase cluster in the mentioned cloud use case seem like a manual/semi-automated step that we don't find a good fit to hold and maintain.

I'm fine with adding a new command in HBCK2 to do these fix-ups before starting a cluster. Personally I do not think HBCK2 'must' put all the fix logic on the master side. But anyway, since the repo is called hbase-operator-tools, I think we are free to create a new sub-module to hold new scripts. Though for now it only happens on AWS, I think we could abstract it as a general scenario where we want to start an HBase cluster on HFiles only.

> Personally, it's fine to me to throw an exception as Duo suggested, and on our side we need to find a way to continue when we see this exception. Then we can improve it in the future when we need to completely get rid of the extra step in hbck.
>
> So, for this PR, if we don't hear any other critical suggestion, maybe I will leave it "close" as unresolved, do you guys agree ?

This is a scenario for HBase on cloud, especially for AWS, so if you guys want to close it as unresolved, others will not have any strong opinion to object :) Take it easy.
[GitHub] [hbase] Apache9 commented on pull request #2730: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
Apache9 commented on pull request #2730:
URL: https://github.com/apache/hbase/pull/2730#issuecomment-737003245

Closed as the problem in HBASE-25320 is gone.
[GitHub] [hbase] Apache9 closed pull request #2730: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
Apache9 closed pull request #2730:
URL: https://github.com/apache/hbase/pull/2730
[GitHub] [hbase] Apache-HBase commented on pull request #2730: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
Apache-HBase commented on pull request #2730:
URL: https://github.com/apache/hbase/pull/2730#issuecomment-736986270

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 38s | Docker mode activated. |
| -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
| | | | _ Prechecks _ |
| | | | _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 47s | branch-2 passed |
| +1 :green_heart: | compile | 2m 14s | branch-2 passed |
| +1 :green_heart: | shadedjars | 5m 59s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 19s | branch-2 passed |
| | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 13s | the patch passed |
| +1 :green_heart: | compile | 2m 12s | the patch passed |
| +1 :green_heart: | javac | 2m 12s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 0s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 14s | the patch passed |
| | | | _ Other Tests _ |
| -1 :x: | unit | 258m 22s | root in the patch failed. |
| | | 289m 24s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2730/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2730 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux c911345acd65 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 78f30ff496 |
| Default Java | AdoptOpenJDK-1.8.0_232-b09 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2730/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-root.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2730/1/testReport/ |
| Max. process+thread count | 4659 (vs. ulimit of 12500) |
| modules | C: . U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2730/1/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HBASE-25349) [Flakey Tests] branch-2 TestRefreshRecoveredReplication.testReplicationRefreshSource:141 Waiting timed out after [60,000] msec
[ https://issues.apache.org/jira/browse/HBASE-25349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-25349: -- Component/s: flakies Description: This one fails for me most times when running locally. I added some debug (see the PR). In the test we look for the old replication source to be non-zero long after it has been processed. The test gets stuck waiting. Here is where the table becomes available, after which we start expecting the old source replication queue to be non-zero: {{2020-12-01 20:02:52,400 INFO [Listener at localhost/56996] regionserver.TestRefreshRecoveredReplication(136): Available testReplicationRefreshSource}} But here the replication source has already been removed before we get to the 'available' line above: {{2020-12-01 20:02:50,768 INFO [ReplicationExecutor-0.replicationSource,2-kalashnikov.attlocal.net,56950,1606881738045.replicationSource.shipperkalashnikov.attlocal.net%2C56950%2C1606881738045,2-kalashnikov.attlocal.net,56950,1606881738045] regionserver.ReplicationSourceManager(463): Done with the recovered queue 2-kalashnikov.attlocal.net,56950,1606881738045}} was: We look for old replication source to be non-zero long after it has been processed. The test gets stuck waiting. See here is where the table becomes available, after which we start expecting the old source replication queue to be non-zero. {{ 2020-12-01 20:02:52,400 INFO [Listener at localhost/56996] regionserver.TestRefreshRecoveredReplication(136): Available testReplicationRefreshSource}} But see here where the replication source has been removed before we get to the 'available' line above: 2020-12-01 20:02:50,768 INFO [ReplicationExecutor-0.replicationSource,2-kalashnikov.attlocal.net,56950,1606881738045.replicationSource.shipperkalashnikov.attlocal.net%2C56950%2C1606881738045,2-kalashnikov.attlocal.net,56950,1606881738045] regionserver.ReplicationSourceManager(463): Done with the recovered queue 2-kalashnikov.attlocal.net,56950,1606881738045 Summary: [Flakey Tests] branch-2 TestRefreshRecoveredReplication.testReplicationRefreshSource:141 Waiting timed out after [60,000] msec (was: [fl) > [Flakey Tests] branch-2 > TestRefreshRecoveredReplication.testReplicationRefreshSource:141 Waiting > timed out after [60,000] msec > -- > > Key: HBASE-25349 > URL: https://issues.apache.org/jira/browse/HBASE-25349 > Project: HBase > Issue Type: Bug > Components: flakies >Reporter: Michael Stack >Priority: Major > > This one fails for me most times when running locally. > I added some debug (see the PR). > In the test we look for the old replication source to be non-zero long after it > has been processed. The test gets stuck waiting. > Here is where the table becomes available, after which we start expecting > the old source replication queue to be non-zero: > {{2020-12-01 20:02:52,400 INFO [Listener at localhost/56996] > regionserver.TestRefreshRecoveredReplication(136): Available > testReplicationRefreshSource}} > But here the replication source has already been removed before we get to > the 'available' line above: > {{2020-12-01 20:02:50,768 INFO > [ReplicationExecutor-0.replicationSource,2-kalashnikov.attlocal.net,56950,1606881738045.replicationSource.shipperkalashnikov.attlocal.net%2C56950%2C1606881738045,2-kalashnikov.attlocal.net,56950,1606881738045] > regionserver.ReplicationSourceManager(463): Done with the recovered queue > 2-kalashnikov.attlocal.net,56950,1606881738045}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
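The race described in the issue above can be sketched independently of HBase: if the recovered queue is drained before the waiter starts polling, a wait for a non-zero queue count can only time out. The sketch below is a minimal, hypothetical model (the `recoveredQueueSize` counter and `waitFor` helper are stand-ins, not HBase APIs); the robust pattern it illustrates is to assert on a state that cannot regress (a latch) rather than on a transient counter.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class RecoveredQueueRace {
    // Hypothetical stand-in for the recovered replication queue size.
    static final AtomicInteger recoveredQueueSize = new AtomicInteger(1);
    // Robust alternative: record that the queue was processed. A latch only
    // moves from "not counted down" to "counted down", so a late waiter
    // cannot miss the event.
    static final CountDownLatch queueProcessed = new CountDownLatch(1);

    public static void main(String[] args) throws Exception {
        // "ReplicationExecutor": drains the recovered queue right away,
        // i.e. before the test's wait loop ever observes a non-zero size.
        Thread executor = new Thread(() -> {
            recoveredQueueSize.decrementAndGet(); // queue drained
            queueProcessed.countDown();
        });
        executor.start();
        executor.join();

        // Flaky assertion: polls for a non-zero size that is already gone.
        boolean sawNonZero = waitFor(200, () -> recoveredQueueSize.get() > 0);
        System.out.println("sawNonZero=" + sawNonZero); // false: the timeout

        // Robust assertion: the latch still reports that processing happened.
        boolean processed = queueProcessed.await(200, TimeUnit.MILLISECONDS);
        System.out.println("processed=" + processed); // true
    }

    // Simplified analogue of a Waiter.waitFor-style polling helper.
    static boolean waitFor(long timeoutMs, BooleanSupplier cond) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) return true;
            Thread.sleep(10);
        }
        return false;
    }
}
```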
[jira] [Created] (HBASE-25349) [fl
Michael Stack created HBASE-25349: - Summary: [fl Key: HBASE-25349 URL: https://issues.apache.org/jira/browse/HBASE-25349 Project: HBase Issue Type: Bug Reporter: Michael Stack We look for old replication source to be non-zero long after it has been processed. The test gets stuck waiting. See here is where the table becomes available, after which we start expecting the old source replication queue to be non-zero. {{ 2020-12-01 20:02:52,400 INFO [Listener at localhost/56996] regionserver.TestRefreshRecoveredReplication(136): Available testReplicationRefreshSource}} But see here where the replication source has been removed before we get to the 'available' line above: 2020-12-01 20:02:50,768 INFO [ReplicationExecutor-0.replicationSource,2-kalashnikov.attlocal.net,56950,1606881738045.replicationSource.shipperkalashnikov.attlocal.net%2C56950%2C1606881738045,2-kalashnikov.attlocal.net,56950,1606881738045] regionserver.ReplicationSourceManager(463): Done with the recovered queue 2-kalashnikov. attlocal.net,56950,1606881738045 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25308) [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous
[ https://issues.apache.org/jira/browse/HBASE-25308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242019#comment-17242019 ] Reid Chan commented on HBASE-25308: --- Checkstyle aside, there's a *COMPILATION ERROR* > [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous > - > > Key: HBASE-25308 > URL: https://issues.apache.org/jira/browse/HBASE-25308 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 1.7.0 > > > We are again having classpath versioning issues related to Guava in our > branch-1 based application. > Hadoop 3, HBase 2, Phoenix 5, and other projects deal with Guava > cross-version incompatibilities, as they manifest on a combined classpath > with other components, via shading. > I propose to do a global search and replace of all direct uses of Guava in > our branch-1 code base and refer to Guava as provided in hbase-thirdparty's > hbase-shaded-miscellaneous. This will protect HBase branch-1 from Guava > cross-version vagaries just like the same technique protects branch-2 and > branch-2 based releases. > There are a couple of Public or LimitedPrivate interfaces that incorporate > Guava's HostAndPort and Service that will be indirectly impacted. We are > about to release a new minor branch-1 version, 1.7.0, and this would be a > great opportunity to introduce this kind of change in a manner consistent > with semantic versioning and our compatibility policies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25308) [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous
[ https://issues.apache.org/jira/browse/HBASE-25308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242018#comment-17242018 ] Reid Chan commented on HBASE-25308: --- Do we need to take care of those (x)? Or can we just merge directly? > [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous > - > > Key: HBASE-25308 > URL: https://issues.apache.org/jira/browse/HBASE-25308 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 1.7.0 > > > We are again having classpath versioning issues related to Guava in our > branch-1 based application. > Hadoop 3, HBase 2, Phoenix 5, and other projects deal with Guava > cross-version incompatibilities, as they manifest on a combined classpath > with other components, via shading. > I propose to do a global search and replace of all direct uses of Guava in > our branch-1 code base and refer to Guava as provided in hbase-thirdparty's > hbase-shaded-miscellaneous. This will protect HBase branch-1 from Guava > cross-version vagaries just like the same technique protects branch-2 and > branch-2 based releases. > There are a couple of Public or LimitedPrivate interfaces that incorporate > Guava's HostAndPort and Service that will be indirectly impacted. We are > about to release a new minor branch-1 version, 1.7.0, and this would be a > great opportunity to introduce this kind of change in a manner consistent > with semantic versioning and our compatibility policies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25308) [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous
[ https://issues.apache.org/jira/browse/HBASE-25308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242015#comment-17242015 ] Andrew Kyle Purtell commented on HBASE-25308: - All the patch does is change imports to use hbase-thirdparty, rearrange import order to conform to our checkstyle rules, and make fixups for Guava changes. This is already polish. I struggle to understand what more needs polishing :-) > [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous > - > > Key: HBASE-25308 > URL: https://issues.apache.org/jira/browse/HBASE-25308 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 1.7.0 > > > We are again having classpath versioning issues related to Guava in our > branch-1 based application. > Hadoop 3, HBase 2, Phoenix 5, and other projects deal with Guava > cross-version incompatibilities, as they manifest on a combined classpath > with other components, via shading. > I propose to do a global search and replace of all direct uses of Guava in > our branch-1 code base and refer to Guava as provided in hbase-thirdparty's > hbase-shaded-miscellaneous. This will protect HBase branch-1 from Guava > cross-version vagaries just like the same technique protects branch-2 and > branch-2 based releases. > There are a couple of Public or LimitedPrivate interfaces that incorporate > Guava's HostAndPort and Service that will be indirectly impacted. We are > about to release a new minor branch-1 version, 1.7.0, and this would be a > great opportunity to introduce this kind of change in a manner consistent > with semantic versioning and our compatibility policies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25308) [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous
[ https://issues.apache.org/jira/browse/HBASE-25308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242012#comment-17242012 ] Andrew Kyle Purtell commented on HBASE-25308: - Can you clarify? What needs to be polished? > [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous > - > > Key: HBASE-25308 > URL: https://issues.apache.org/jira/browse/HBASE-25308 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 1.7.0 > > > We are again having classpath versioning issues related to Guava in our > branch-1 based application. > Hadoop 3, HBase 2, Phoenix 5, and other projects deal with Guava > cross-version incompatibilities, as they manifest on a combined classpath > with other components, via shading. > I propose to do a global search and replace of all direct uses of Guava in > our branch-1 code base and refer to Guava as provided in hbase-thirdparty's > hbase-shaded-miscellaneous. This will protect HBase branch-1 from Guava > cross-version vagaries just like the same technique protects branch-2 and > branch-2 based releases. > There are a couple of Public or LimitedPrivate interfaces that incorporate > Guava's HostAndPort and Service that will be indirectly impacted. We are > about to release a new minor branch-1 version, 1.7.0, and this would be a > great opportunity to introduce this kind of change in a manner consistent > with semantic versioning and our compatibility policies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25308) [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous
[ https://issues.apache.org/jira/browse/HBASE-25308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242010#comment-17242010 ] Reid Chan commented on HBASE-25308: --- Looked at the PR; a lot of the code needs to be polished. > [branch-1] Consume Guava from hbase-thirdparty hbase-shaded-miscellaneous > - > > Key: HBASE-25308 > URL: https://issues.apache.org/jira/browse/HBASE-25308 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 1.7.0 > > > We are again having classpath versioning issues related to Guava in our > branch-1 based application. > Hadoop 3, HBase 2, Phoenix 5, and other projects deal with Guava > cross-version incompatibilities, as they manifest on a combined classpath > with other components, via shading. > I propose to do a global search and replace of all direct uses of Guava in > our branch-1 code base and refer to Guava as provided in hbase-thirdparty's > hbase-shaded-miscellaneous. This will protect HBase branch-1 from Guava > cross-version vagaries just like the same technique protects branch-2 and > branch-2 based releases. > There are a couple of Public or LimitedPrivate interfaces that incorporate > Guava's HostAndPort and Service that will be indirectly impacted. We are > about to release a new minor branch-1 version, 1.7.0, and this would be a > great opportunity to introduce this kind of change in a manner consistent > with semantic versioning and our compatibility policies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25348) [hbase-thirdparty] 3.4.2 release
[ https://issues.apache.org/jira/browse/HBASE-25348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25348. --- Resolution: Won't Fix Resolving as 'Won't Fix' for now... Not needed after all, not at the moment. See HBASE-25320. > [hbase-thirdparty] 3.4.2 release > > > Key: HBASE-25348 > URL: https://issues.apache.org/jira/browse/HBASE-25348 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > Fix For: hbase-thirdparty-3.4.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25320. --- Resolution: Fixed Re-resolve. Mis-signal on my part. All seems to be good. Thanks for checking [~zhangduo] > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241975#comment-17241975 ] Michael Stack commented on HBASE-25320: --- Thanks for checking. I am unable to reproduce it any more. I lost my context because of a forced restart, but the below was how I used to reproduce it. I can't make it happen anymore; my context must have been polluted somehow. Let me revert the revert of this patch on branch-2 and re-resolve. {{$ java -version}} {{java version "1.8.0_191"}} {{Java(TM) SE Runtime Environment (build 1.8.0_191-b12)}} {{Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)}} Check out branch-2 without this patch. Full build. Run the test: "$ mvn test -Dtest=TestRefreshRecoveredReplication". It fails for me, as it happens: {{[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 101.314 s <<< FAILURE! - in org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication}} {{[ERROR] org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.testReplicationRefreshSource Time elapsed: 76.808 s <<< FAILURE!}} {{java.lang.AssertionError: Waiting timed out after [60,000] msec}} {{ at org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.testReplicationRefreshSource(TestRefreshRecoveredReplication.java:138)}} {{[INFO]}} {{[INFO] Results:}} {{[INFO]}} {{[ERROR] Failures:}} {{[ERROR] TestRefreshRecoveredReplication.testReplicationRefreshSource:138 Waiting timed out after [60,000] msec}} But it fails in the test itself, with no mention of 'NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;'. Now, add the updated hbase-thirdparty (git revert 78f30ff496c94f90ced6328c632272dce3582ab4), rebuild with jdk8, and rerun the test. 
The Master then fails to start during test setup, complaining NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
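The NoSuchMethodError above is the classic symptom of bytecode compiled against the Java 9+ java.nio API running on a JDK 8 runtime: since Java 9, `ByteBuffer.position(int)` has a covariant override returning `ByteBuffer`, so javac bakes the descriptor `(I)Ljava/nio/ByteBuffer;` into the call site unless the build uses `--release 8`. A defensive source-level pattern used by projects that must run on both is to call these methods through the `Buffer` supertype, whose `(I)Ljava/nio/Buffer;` descriptor exists on every JDK. A minimal sketch (the `rewindTo` helper is hypothetical, for illustration only):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class BufferCompat {
    // Calling position/limit/flip through the Buffer supertype pins the
    // pre-Java-9 method descriptor into the bytecode, so a class compiled
    // on JDK 9+ still links on a JDK 8 runtime.
    static ByteBuffer rewindTo(ByteBuffer buf, int pos) {
        ((Buffer) buf).position(pos); // descriptor: (I)Ljava/nio/Buffer;
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put((byte) 42);                 // position is now 1
        rewindTo(buf, 0);                   // safe on JDK 8 and JDK 9+
        System.out.println(buf.position()); // prints 0
        System.out.println(buf.get());      // prints 42
    }
}
```

The build-level alternative, fixing the compiler flags rather than the source, is to pass `--release 8` (JDK 9+) instead of `-source 8 -target 8`, which links against the JDK 8 class-file API and emits the old descriptors automatically.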
[GitHub] [hbase] Apache-HBase commented on pull request #2730: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
Apache-HBase commented on pull request #2730: URL: https://github.com/apache/hbase/pull/2730#issuecomment-736902837 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 11s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | ||| _ Patch Compile Tests _ | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 13m 2s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 17s | The patch does not generate ASF License warnings. | | | | 22m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2730/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2730 | | Optional Tests | dupname asflicense hadoopcheck xml | | uname | Linux 930bfcc1c571 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 78f30ff496 | | Max. process+thread count | 63 (vs. ulimit of 12500) | | modules | C: . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2730/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241954#comment-17241954 ] Duo Zhang commented on HBASE-25320: --- And I tried locally to run the large UTs in the hbase-server module with 3.4.1. This is my java env. {noformat} openjdk version "1.8.0_252" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_252-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.252-b09, mixed mode) {noformat} At least the first UT, TestFlushSnapshotFromClient, passed. I checked the output: it started a mini cluster and used rpc to communicate with the cluster, no problem. So could you please provide more details on your environment and how you got this error? Thanks. > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241953#comment-17241953 ] Duo Zhang commented on HBASE-25320: --- Opened a PR against branch-2 to see if it works. > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 opened a new pull request #2730: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
Apache9 opened a new pull request #2730: URL: https://github.com/apache/hbase/pull/2730 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241947#comment-17241947 ] Michael Stack commented on HBASE-25320: --- {quote}I just uses the create release script, I’m fine with a new release but let’s confirm the root problem first again? {quote} Maybe you built outside of docker so you picked up jdk11 in your env? I confirmed the issue to my satisfaction. If you have a chance, perhaps confirm you see the same thing (but you're busy... so no prob). > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241937#comment-17241937 ] Duo Zhang commented on HBASE-25320: --- Then why could the pre-commit jdk8 run succeed? Strange. I just used the create-release script. I'm fine with a new release, but let's first confirm the root problem again? Thanks. > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25348) [hbase-thirdparty] 3.4.2 release
[ https://issues.apache.org/jira/browse/HBASE-25348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-25348: -- Fix Version/s: (was: 0.16.0) hbase-thirdparty-3.4.2 > [hbase-thirdparty] 3.4.2 release > > > Key: HBASE-25348 > URL: https://issues.apache.org/jira/browse/HBASE-25348 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > Fix For: hbase-thirdparty-3.4.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25348) [hbase-thirdparty] 3.4.2 release
[ https://issues.apache.org/jira/browse/HBASE-25348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-25348: -- Fix Version/s: hbase-thirdparty-3.4.2 > [hbase-thirdparty] 3.4.2 release > > > Key: HBASE-25348 > URL: https://issues.apache.org/jira/browse/HBASE-25348 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > Fix For: hbase-thirdparty-3.4.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25348) [hbase-thirdparty] 3.4.2 release
[ https://issues.apache.org/jira/browse/HBASE-25348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241859#comment-17241859 ] Michael Stack commented on HBASE-25348: --- Need to make a 3.4.2 release version in Jira at least. > [hbase-thirdparty] 3.4.2 release > > > Key: HBASE-25348 > URL: https://issues.apache.org/jira/browse/HBASE-25348 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25348) [hbase-thirdparty] 3.4.2 release
Michael Stack created HBASE-25348: - Summary: [hbase-thirdparty] 3.4.2 release Key: HBASE-25348 URL: https://issues.apache.org/jira/browse/HBASE-25348 Project: HBase Issue Type: Bug Reporter: Michael Stack -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241847#comment-17241847 ] Caroline edited comment on HBASE-25032 at 12/1/20, 8:41 PM: [~anoop.hbase] [~apurtell] Had some discussion with [~sandeep.guggilam] about possible fixes for this issue: # Move `reportForDuty()` after replication setup in `HRegionServer.java`. I think it would be moving [this line|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1546] out of the `handleReportForDutyResponse()` method and above the `reportForDuty()` line [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1033]. # Postpone adding the regionserver to master's online servers list until the regionserver's `online` flag has been set to true (i.e. all of the regionserver's initialization steps have completed). I believe that would be replacing [this line|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L280] with a thread or thread pool executor which asynchronously polls regionserver info (call [ServerManager.isServerReachable()|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L998]), and only calls `ServerManager.checkAndRecordNewServer()` after a response is received. We could create a new single thread pool executor every time `ServerManager.regionServerStartup()` is called, use the `MASTER_SERVER_OPERATIONS` service thread, or create a new executor service/thread pool/something else with configured x number of threads for this kind of task. Any thoughts on how we should configure the thread pool here? 
# Do not force region state to offline in the bulk assign method [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java#L1762]. I haven't investigated the implications of this. was (Author: caroliney14): [~anoop.hbase] [~apurtell] Had some discussion with [~sandeep.guggilam] about possible fixes for this issue: # Move `reportForDuty()` after replication setup in `HRegionServer.java`. I think it would be moving [this line|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1546] out of the `handleReportForDutyResponse()` method and above the `reportForDuty()` line [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1033]. # Postpone adding the regionserver to master's online servers list until the regionserver's `online` flag has been set to true (i.e. all of the regionserver's initialization steps have completed). I believe that would be replacing [this line|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L280] with a thread or thread pool executor which asynchronously polls regionserver info (call [ServerManager.isServerReachable()|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L998]), and only calls `checkAndRecordNewServer()` after a response is received. We could create a new single thread pool executor every time `regionServerStartup()` is called, use the `MASTER_SERVER_OPERATIONS` service thread, or create a new executor service/thread pool/something else with configured x number of threads for this kind of task. Any thoughts on how we should configure the thread pool here? 
# Do not force region state to offline in the bulk assign method [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java#L1762]. I haven't investigated the implications of this. > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline >Priority: Major > > As part of RS start up, RS reports for duty to Master . Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS > Once Master acknowledges the reportForDuty and sends back the response, RS > does a bunch of stuff like initializing replication sources etc before > becoming online. However, sometimes there could be an issue with initializing > replication
[jira] [Commented] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241847#comment-17241847 ] Caroline commented on HBASE-25032: -- [~anoop.hbase] [~apurtell] Had some discussion with [~sandeep.guggilam] about possible fixes for this issue: # Move `reportForDuty()` after replication setup in `HRegionServer.java`. I think it would be moving [this line|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1546] out of the `handleReportForDutyResponse()` method and above the `reportForDuty()` line [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1033]. # Postpone adding the regionserver to master's online servers list until the regionserver's `online` flag has been set to true (i.e. all of the regionserver's initialization steps have completed). I believe that would be replacing [this line|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L280] with a thread or thread pool executor which asynchronously polls regionserver info (call [ServerManager.isServerReachable()|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L998]), and only calls `checkAndRecordNewServer()` after a response is received. We could create a new single thread pool executor every time `regionServerStartup()` is called, use the `MASTER_SERVER_OPERATIONS` service thread, or create a new executor service/thread pool/something else with configured x number of threads for this kind of task. Any thoughts on how we should configure the thread pool here? 
3. Do not force region state to offline in the bulk assign method [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java#L1762]. I haven't investigated the implications of this. > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline >Priority: Major > > As part of RS start up, RS reports for duty to Master. Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS > Once Master acknowledges the reportForDuty and sends back the response, RS > does a bunch of stuff like initializing replication sources etc before > becoming online. However, sometimes there could be an issue with initializing > replication sources when it is unable to connect to peer clusters because of > some kerberos configuration and there would be a delay of around 20 mins in > becoming online. > > Since master considers it online, it tries to assign regions, which fails > with ServerNotRunningYet exception, then the master tries to unassign, which > again fails with the same exception leading the region to FAILED_CLOSE state. > > It would be good to have a check to see if the RS is ready to accept the > assignment requests before adding it to online servers list which would > account for any such delays as described above -- This message was sent by Atlassian Jira (v8.3.4#803005)
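Option 2 above can be sketched as follows. This is a hypothetical illustration, not the actual ServerManager code: the `ServerProbe` interface stands in for `ServerManager.isServerReachable()` / `checkAndRecordNewServer()`, and the retry count and delay are placeholders for whatever polling policy is chosen.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of fix idea 2: the master defers recording a region
// server as online until the server answers a reachability probe.
public class DeferredServerRegistration {

  public interface ServerProbe {
    boolean isServerReachable(String serverName);   // stand-in for ServerManager.isServerReachable()
    void checkAndRecordNewServer(String serverName); // stand-in for ServerManager.checkAndRecordNewServer()
  }

  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  /** Called from regionServerStartup(): poll until the RS is ready, then record it. */
  public void registerWhenReachable(ServerProbe probe, String serverName) {
    pool.execute(() -> {
      for (int attempt = 0; attempt < 10; attempt++) { // illustrative retry budget
        if (probe.isServerReachable(serverName)) {
          // Only now does the server enter the master's online list.
          probe.checkAndRecordNewServer(serverName);
          return;
        }
        try {
          TimeUnit.MILLISECONDS.sleep(100); // illustrative poll interval
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
      // Budget exhausted: the server is never recorded. A real implementation
      // would log and/or keep retrying with backoff.
    });
  }

  public boolean shutdownAndWait(long millis) throws InterruptedException {
    pool.shutdown();
    return pool.awaitTermination(millis, TimeUnit.MILLISECONDS);
  }
}
```

A single-thread executor per `regionServerStartup()` call (as floated above) keeps registrations independent; a shared bounded pool would cap thread count but could delay registration during mass restarts.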
[jira] [Assigned] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caroline reassigned HBASE-25032: Assignee: Caroline (was: Sandeep Guggilam) > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline >Priority: Major > > As part of RS start up, RS reports for duty to Master . Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS > Once Master acknowledges the reportForDuty and sends back the response, RS > does a bunch of stuff like initializing replication sources etc before > becoming online. However, sometimes there could be an issue with initializing > replication sources when it is unable to connect to peer clusters because of > some kerberos configuration and there would be a delay of around 20 mins in > becoming online. > > Since master considers it online, it tries to assign regions and which fails > with ServerNotRunningYet exception, then the master tries to unassign which > again fails with the same exception leading the region to FAILED_CLOSE state. > > It would be good to have a check to see if the RS is ready to accept the > assignment requests before adding it to online servers list which would > account for any such delays as described above -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25347) [hbase-thirdparty] Add max java version enforcer rule
Michael Stack created HBASE-25347: - Summary: [hbase-thirdparty] Add max java version enforcer rule Key: HBASE-25347 URL: https://issues.apache.org/jira/browse/HBASE-25347 Project: HBase Issue Type: Bug Reporter: Michael Stack See HBASE-25320 where a release of hbase-thirdparty failed when used in a jdk8 build. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] taklwu edited a comment on pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …
taklwu edited a comment on pull request #2237: URL: https://github.com/apache/hbase/pull/2237#issuecomment-736772662 Sorry for the delayed response.
> Is clumsy operator deleting the meta location znode by mistake a valid failure mode ?
No, this is a special case that we have been supporting, where the HBase cluster freshly restarts on top of only flushed HFiles and comes with neither WAL nor ZK. We admit it's a bit different from the community standpoint that WAL and ZK must both pre-exist when the master and/or RSs start on existing HFiles, to resume the state left by any procedures.
> What about adding extra step before assign where we wait asking Master a question about the cluster state such as if any of the RSs that are checking in have Regions on them; i.e. if Regions already assigned, if an already 'up' cluster? Would that help?
Having an extra step to check whether any RSs have regions assigned may help, but I don't know if we can do that before the server manager finds any region server online.
> You fellows don't want to have to run a script beforehand? ZK is up and just put an empty location up or ask Master or hbck2 to do it for you?
I think HBCK/HBCK2 performs online repair, and there are a few concerns we have: 1. if the master is not up and running, we cannot proceed; 2. even if the master is up, repairing hundreds or thousands of regions implies a long scanning time, which IMO we can save by just reloading from the existing meta; 3. an additional step/script to start an HBase cluster in the mentioned cloud use case seems a manual/semi-automated step that we don't find a good fit to hold and maintain. Personally, I'm fine with throwing an exception as Duo suggested; on our side we would need to find a way to continue when we see this exception, and improve it in the future when we need to completely get rid of the extra hbck step.
So, for this PR, if we don't hear any other critical suggestions, maybe I will leave it closed as unresolved. Do you all agree? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-connectors] LucaCanali commented on pull request #75: [HBASE-25326] Allow running and building hbase-connectors with Apache Spark 3.0
LucaCanali commented on pull request #75: URL: https://github.com/apache/hbase-connectors/pull/75#issuecomment-736768563 > Spark2 still works? Yes, Spark 2 still works. BTW, it is worth mentioning that there will have to be separate releases of the spark connector for each supported Scala version (notably for Scala 2.11 and 2.12) and that Spark 3.0 only supports Scala 2.12. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Reopened] (HBASE-25343) Avoid the failed meta replica region temporarily in Load Balance mode
[ https://issues.apache.org/jira/browse/HBASE-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun reopened HBASE-25343: -- Reopen to reflect the new scope. > Avoid the failed meta replica region temporarily in Load Balance mode > - > > Key: HBASE-25343 > URL: https://issues.apache.org/jira/browse/HBASE-25343 > Project: HBase > Issue Type: Sub-task > Components: meta replicas >Affects Versions: 2.4.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > Fix For: 2.4.1 > > > This is a follow-up enhancement with Stack, Duo. With the newly introduced > meta replica LoadBalance mode, if there is something wrong with one of meta > replica regions, the current logic is that it keeps trying until the meta > replica region is onlined again or it reports error, i.e, there is no HA at > LoadBalance mode. HA can be implemented if it reports timeout with one meta > replica region and tries another meta replica region. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25343) Avoid the failed meta replica region temporarily in Load Balance mode
[ https://issues.apache.org/jira/browse/HBASE-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun updated HBASE-25343: - Summary: Avoid the failed meta replica region temporarily in Load Balance mode (was: Add HA support on top of Load Balance mode) > Avoid the failed meta replica region temporarily in Load Balance mode > - > > Key: HBASE-25343 > URL: https://issues.apache.org/jira/browse/HBASE-25343 > Project: HBase > Issue Type: Sub-task > Components: meta replicas >Affects Versions: 2.4.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > Fix For: 2.4.1 > > > This is a follow-up enhancement with Stack, Duo. With the newly introduced > meta replica LoadBalance mode, if there is something wrong with one of meta > replica regions, the current logic is that it keeps trying until the meta > replica region is onlined again or it reports error, i.e, there is no HA at > LoadBalance mode. HA can be implemented if it reports timeout with one meta > replica region and tries another meta replica region. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25343) Add HA support on top of Load Balance mode
[ https://issues.apache.org/jira/browse/HBASE-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241809#comment-17241809 ] Huaxiang Sun commented on HBASE-25343: -- I think it is a good idea, let me change the title of Jira and reopen it. > Add HA support on top of Load Balance mode > -- > > Key: HBASE-25343 > URL: https://issues.apache.org/jira/browse/HBASE-25343 > Project: HBase > Issue Type: Sub-task > Components: meta replicas >Affects Versions: 2.4.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > Fix For: 2.4.1 > > > This is a follow-up enhancement with Stack, Duo. With the newly introduced > meta replica LoadBalance mode, if there is something wrong with one of meta > replica regions, the current logic is that it keeps trying until the meta > replica region is onlined again or it reports error, i.e, there is no HA at > LoadBalance mode. HA can be implemented if it reports timeout with one meta > replica region and tries another meta replica region. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25345) [Flakey Tests] branch-2 TestReadReplicas#testVerifySecondaryAbilityToReadWithOnFiles
[ https://issues.apache.org/jira/browse/HBASE-25345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241778#comment-17241778 ] Hudson commented on HBASE-25345: Results for branch branch-2.2 [build #130 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Flakey Tests] branch-2 > TestReadReplicas#testVerifySecondaryAbilityToReadWithOnFiles > > > Key: HBASE-25345 > URL: https://issues.apache.org/jira/browse/HBASE-25345 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > This test fails for me locally every time. Looking at TestReadReplicas, a few > do a reset of the cluster after some configuration. Let me just break these > out to be their own test. When I do this, the test that fails for me starts > passing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25332) one NPE
[ https://issues.apache.org/jira/browse/HBASE-25332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241779#comment-17241779 ] Hudson commented on HBASE-25332: Results for branch branch-2.2 [build #130 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/130//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > one NPE > --- > > Key: HBASE-25332 > URL: https://issues.apache.org/jira/browse/HBASE-25332 > Project: HBase > Issue Type: Bug >Reporter: lujie >Assignee: lujie >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > * getData can return null at > > [https://github.com/apache/hbase/blob/1726160839368df14602da1618e3538955b25f74/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L615] > or > > [https://github.com/apache/hbase/blob/1726160839368df14602da1618e3538955b25f74/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L619] > all its callers have null checks except at > > [https://github.com/apache/hbase/blob/1726160839368df14602da1618e3538955b25f74/hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L467] > We should add null checks for potential NPEs. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
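The null guard the reporter describes can be sketched as below. The names here are illustrative stand-ins for `ZKUtil.getData()` and its caller in `RSGroupInfoManagerImpl`, not the real HBase code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the proposed fix, under the assumption that a
// getData()-style call may return null (e.g. the znode is absent).
public class NullSafeZkRead {

  /** Stand-in for ZKUtil.getData(): null means the znode does not exist. */
  static byte[] getData(String znodePath) {
    return "/hbase/rsgroup/default".equals(znodePath) ? new byte[] {1, 2, 3} : null;
  }

  /** Caller with the missing null check added before the data is used. */
  static List<Byte> readGroupData(String znodePath) {
    byte[] data = getData(znodePath);
    if (data == null) {
      // Without this guard, iterating (or deserializing) below would NPE.
      return Collections.emptyList();
    }
    List<Byte> out = new ArrayList<>();
    for (byte b : data) {
      out.add(b);
    }
    return out;
  }
}
```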
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241774#comment-17241774 ] Michael Stack commented on HBASE-25320: --- [~zhangduo] I think I need to make a new release so I can republish the hbase-thirdparty binary as 3.4.2, with the only difference being that it was built w/ jdk8. I can do it (you are busy this morning, I believe). > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2663: HBASE-24637 - Reseek regression related to filter SKIP hinting
Apache-HBase commented on pull request #2663: URL: https://github.com/apache/hbase/pull/2663#issuecomment-736741144 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | master passed | | +1 :green_heart: | compile | 1m 13s | master passed | | +1 :green_heart: | shadedjars | 7m 20s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 45s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 38s | the patch passed | | +1 :green_heart: | compile | 1m 13s | the patch passed | | +1 :green_heart: | javac | 1m 13s | the patch passed | | +1 :green_heart: | shadedjars | 7m 26s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 200m 49s | hbase-server in the patch failed. 
| | | | 231m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2663 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux d82f6b69e43a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8938b7a678 | | Default Java | AdoptOpenJDK-11.0.6+10 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/testReport/ | | Max. process+thread count | 3520 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241757#comment-17241757 ] Michael Stack commented on HBASE-25320: --- Looking at the thirdparty jar, it looks right as regards the compile target: `javap -v ./org/apache/hbase/thirdparty/com/google/protobuf/Enum.class | grep major` prints `major version: 51`. Building the thirdparty w/ the rel/3.4.1 tag using jdk8, I don't get the above error anymore. Let me look at updating the published binary. > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
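The javap check above generalizes: a class file's major version maps directly to the Java release it targets (the major version minus 44 gives the release number for Java 5 onward, e.g. 51 is Java 7 and 52 is Java 8). A small shell sketch; the class path in the comment is the one from the message above.

```shell
# Inspect the bytecode target of a class, as in the comment above:
#   javap -v ./org/apache/hbase/thirdparty/com/google/protobuf/Enum.class | grep 'major version'
# prints e.g. "major version: 51".

# Map a class-file major version to the Java release it targets
# (major 49 => Java 5, 51 => Java 7, 52 => Java 8, 55 => Java 11).
class_major_to_java() {
  echo $(( $1 - 44 ))
}
```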
[jira] [Commented] (HBASE-25345) [Flakey Tests] branch-2 TestReadReplicas#testVerifySecondaryAbilityToReadWithOnFiles
[ https://issues.apache.org/jira/browse/HBASE-25345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241752#comment-17241752 ] Hudson commented on HBASE-25345: Results for branch branch-2 [build #117 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Flakey Tests] branch-2 > TestReadReplicas#testVerifySecondaryAbilityToReadWithOnFiles > > > Key: HBASE-25345 > URL: https://issues.apache.org/jira/browse/HBASE-25345 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > This test fails for me locally every time. Looking at TestReadReplicas, a few > do a reset of the cluster after some configuration. Let me just break these > out to be their own test. When I do this, the test that fails for me starts > passing. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25127) Enhance PerformanceEvaluation to profile meta replica performance.
[ https://issues.apache.org/jira/browse/HBASE-25127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241751#comment-17241751 ] Hudson commented on HBASE-25127: Results for branch branch-2 [build #117 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Enhance PerformanceEvaluation to profile meta replica performance. > -- > > Key: HBASE-25127 > URL: https://issues.apache.org/jira/browse/HBASE-25127 > Project: HBase > Issue Type: Sub-task >Reporter: Huaxiang Sun >Assignee: Clara Xiong >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > Attachments: Screen Shot 2020-11-13 at 5.30.11 PM.png > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25307) ThreadLocal pooling leads to NullPointerException
[ https://issues.apache.org/jira/browse/HBASE-25307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241749#comment-17241749 ] Hudson commented on HBASE-25307: Results for branch branch-2 [build #117 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > ThreadLocal pooling leads to NullPointerException > - > > Key: HBASE-25307 > URL: https://issues.apache.org/jira/browse/HBASE-25307 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 3.0.0-alpha-1 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > We got NPE after setting {{hbase.client.ipc.pool.type}} to {{thread-local}}: > {noformat} > 20/11/18 01:53:04 ERROR yarn.ApplicationMaster: User class threw exception: > java.lang.NullPointerException > java.lang.NullPointerException > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.close(AbstractRpcClient.java:496) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.close(ConnectionImplementation.java:1944) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.close(TableInputFormatBase.java:660) > {noformat} > The root cause of the issue is probably at > {{PoolMap.ThreadLocalPool.values()}}: > {code:java} > public Collection values() { > List values = new ArrayList<>(); > values.add(get()); > return values; > } > {code} > It adds {{null}} into the collection if the current thread does not have any > resources which leads to NPE later. > I traced the usages of values() and it should return every resource, not just > that one which is attached to the caller thread. -- This message was sent by Atlassian Jira (v8.3.4#803005)
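The bug in the quoted `values()` can be sketched with the guard below. This is an illustrative stand-in, not the actual PoolMap patch, and it only removes the null element; as the reporter notes, a complete fix would also make `values()` return every thread's resource, which requires tracking resources outside the `ThreadLocal`.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Illustrative stand-in for PoolMap.ThreadLocalPool showing only the null
// guard: a thread that never allocated a resource must not contribute a
// null element to values(), which is what led to the NPE in close().
public class ThreadLocalPoolSketch<R> {

  private final ThreadLocal<R> resources = new ThreadLocal<>();

  public void put(R resource) {
    resources.set(resource);
  }

  public R get() {
    return resources.get();
  }

  public Collection<R> values() {
    List<R> values = new ArrayList<>();
    R r = get();
    if (r != null) { // the quoted code added get() unconditionally
      values.add(r);
    }
    return values;
  }
}
```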
[jira] [Commented] (HBASE-25339) Method parameter and member variable are duplicated in checkSplittable() of SplitTableRegionProcedure
[ https://issues.apache.org/jira/browse/HBASE-25339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241750#comment-17241750 ] Hudson commented on HBASE-25339: Results for branch branch-2 [build #117 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Method parameter and member variable are duplicated in checkSplittable() of > SplitTableRegionProcedure > - > > Key: HBASE-25339 > URL: https://issues.apache.org/jira/browse/HBASE-25339 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Zhuoyue Huang >Assignee: Zhuoyue Huang >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > We pass 'this.bestSplitRow' as the 'splitRow' parameter of checkSplittable() > {code:java} > private void checkSplittable(final MasterProcedureEnv env, > final RegionInfo regionToSplit, final byte[] splitRow) > {code} > But this private method could use 'bestSplitRow' directly -- This message was sent by Atlassian Jira (v8.3.4#803005)
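The redundancy is easier to see in a stripped-down sketch (hypothetical class and method names for illustration, not the actual SplitTableRegionProcedure code): every caller hands the field's own value back in as a parameter, when the private method could read the field directly.

```java
// Hypothetical sketch of the HBASE-25339 cleanup; names are illustrative.
class SplitSketch {
  private final byte[] bestSplitRow = {1, 2, 3};

  // Before: the parameter merely duplicates the member variable,
  // since the only caller passes this.bestSplitRow.
  int checkSplittableBefore(byte[] splitRow) {
    return splitRow.length;
  }

  // After: the private method reads the field directly; one fewer
  // value to keep in sync between caller and callee.
  int checkSplittableAfter() {
    return bestSplitRow.length;
  }
}
```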
[GitHub] [hbase] virajjasani commented on pull request #2707: [HBASE-25328] Add builder method to create Tags.
virajjasani commented on pull request #2707: URL: https://github.com/apache/hbase/pull/2707#issuecomment-736726353 Sure, let's wait for around half a day, and unless there are any concerns, I plan to merge the PR. Thanks This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1941: HBASE-24157 Destination RSgroup aware export snapshot
Apache-HBase commented on pull request #1941: URL: https://github.com/apache/hbase/pull/1941#issuecomment-736723642 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 55s | master passed | | +1 :green_heart: | compile | 0m 39s | master passed | | +1 :green_heart: | shadedjars | 8m 30s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 40s | the patch passed | | +1 :green_heart: | compile | 0m 36s | the patch passed | | +1 :green_heart: | javac | 0m 36s | the patch passed | | +1 :green_heart: | shadedjars | 9m 18s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 17m 17s | hbase-mapreduce in the patch passed. 
| | | | 51m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1941 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 5cc835f20e09 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e40c626ae1 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/testReport/ | | Max. process+thread count | 2913 (vs. ulimit of 3) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2706: [HBASE-25246] Backup/Restore hbase cell tags.
Apache-HBase commented on pull request #2706: URL: https://github.com/apache/hbase/pull/2706#issuecomment-736717705 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 19s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 49s | master passed | | +1 :green_heart: | compile | 0m 48s | master passed | | +1 :green_heart: | shadedjars | 7m 11s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 39s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 51s | the patch passed | | +1 :green_heart: | compile | 0m 49s | the patch passed | | +1 :green_heart: | javac | 0m 49s | the patch passed | | +1 :green_heart: | shadedjars | 7m 8s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 13s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 13m 19s | hbase-mapreduce in the patch passed. 
| | | | 42m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2706 | | JIRA Issue | HBASE-25246 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 87309a15e4d6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e40c626ae1 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/testReport/ | | Max. process+thread count | 2952 (vs. ulimit of 3) | | modules | C: hbase-client hbase-mapreduce U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2663: HBASE-24637 - Reseek regression related to filter SKIP hinting
Apache-HBase commented on pull request #2663: URL: https://github.com/apache/hbase/pull/2663#issuecomment-736716359 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 46s | master passed | | +1 :green_heart: | compile | 1m 4s | master passed | | +1 :green_heart: | shadedjars | 7m 59s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 45s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 18s | the patch passed | | +1 :green_heart: | compile | 1m 10s | the patch passed | | +1 :green_heart: | javac | 1m 10s | the patch passed | | +1 :green_heart: | shadedjars | 8m 17s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 153m 3s | hbase-server in the patch failed. 
| | | | 186m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2663 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ea042ec9a344 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8938b7a678 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/testReport/ | | Max. process+thread count | 3866 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2706: [HBASE-25246] Backup/Restore hbase cell tags.
Apache-HBase commented on pull request #2706: URL: https://github.com/apache/hbase/pull/2706#issuecomment-736715888 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 37s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 22s | master passed | | +1 :green_heart: | compile | 1m 1s | master passed | | +1 :green_heart: | shadedjars | 6m 48s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 47s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 2s | the patch passed | | +1 :green_heart: | compile | 0m 56s | the patch passed | | +1 :green_heart: | javac | 0m 56s | the patch passed | | +1 :green_heart: | shadedjars | 6m 37s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 46s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 7s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 10m 13s | hbase-mapreduce in the patch passed. 
| | | | 39m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2706 | | JIRA Issue | HBASE-25246 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c4ee05457ef0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e40c626ae1 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/testReport/ | | Max. process+thread count | 3731 (vs. ulimit of 3) | | modules | C: hbase-client hbase-mapreduce U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2706: [HBASE-25246] Backup/Restore hbase cell tags.
Apache-HBase commented on pull request #2706: URL: https://github.com/apache/hbase/pull/2706#issuecomment-736715570 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 3s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 29s | master passed | | +1 :green_heart: | checkstyle | 0m 50s | master passed | | +1 :green_heart: | spotbugs | 1m 37s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 28s | the patch passed | | +1 :green_heart: | checkstyle | 0m 50s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 4s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 1m 57s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 24s | The patch does not generate ASF License warnings. 
| | | | 39m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2706 | | JIRA Issue | HBASE-25246 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 6a01756e7ca5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e40c626ae1 | | Max. process+thread count | 94 (vs. ulimit of 3) | | modules | C: hbase-client hbase-mapreduce U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2706/4/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1941: HBASE-24157 Destination RSgroup aware export snapshot
Apache-HBase commented on pull request #1941: URL: https://github.com/apache/hbase/pull/1941#issuecomment-736714122 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 50s | master passed | | +1 :green_heart: | checkstyle | 0m 20s | master passed | | +1 :green_heart: | spotbugs | 0m 45s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 26s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hbase-mapreduce: The patch generated 12 new + 14 unchanged - 0 fixed = 26 total (was 14) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 9s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. 
| | | | 34m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1941 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 4c59486c252f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e40c626ae1 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-mapreduce.txt | | Max. process+thread count | 94 (vs. ulimit of 3) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1941: HBASE-24157 Destination RSgroup aware export snapshot
Apache-HBase commented on pull request #1941: URL: https://github.com/apache/hbase/pull/1941#issuecomment-736713853 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 47s | master passed | | +1 :green_heart: | compile | 0m 26s | master passed | | +1 :green_heart: | shadedjars | 6m 42s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 21s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 34s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | shadedjars | 6m 43s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 18s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 10m 4s | hbase-mapreduce in the patch passed. 
| | | | 34m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1941 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux af03672b2fbb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e40c626ae1 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/testReport/ | | Max. process+thread count | 4463 (vs. ulimit of 3) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ramkrish86 commented on pull request #2663: HBASE-24637 - Reseek regression related to filter SKIP hinting
ramkrish86 commented on pull request #2663: URL: https://github.com/apache/hbase/pull/2663#issuecomment-736707457 Thanks @apurtell - I will update the patch with test results and perf numbers. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Reopened] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack reopened HBASE-25320: --- I think third-party was compiled w/ jdk11? I get this with it in place. Reopening and reverting till we figure it out. {noformat}
2020-12-01 09:31:08,107 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=63923] helpers.MarkerIgnoringBase(143): Master server abort: loaded coprocessors are: []
2020-12-01 09:31:08,108 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=63923] helpers.MarkerIgnoringBase(159): * ABORTING master kalashnikov.attlocal.net,63923,1606843863480: Number of failed RpcServer handler runs exceeded threshhold 0.5; reason: java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
	at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$HeapNioEncoder.flush(CodedOutputStream.java:1546)
	at org.apache.hadoop.hbase.ipc.ServerCall.writeToCOS(ServerCall.java:376)
	at org.apache.hadoop.hbase.ipc.ServerCall.createHeaderAndMessageBytes(ServerCall.java:383)
	at org.apache.hadoop.hbase.ipc.ServerCall.createHeaderAndMessageBytes(ServerCall.java:361)
	at org.apache.hadoop.hbase.ipc.ServerCall.setResponse(ServerCall.java:262)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:167)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
*
java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
	at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$HeapNioEncoder.flush(CodedOutputStream.java:1546)
	at org.apache.hadoop.hbase.ipc.ServerCall.writeToCOS(ServerCall.java:376)
	at org.apache.hadoop.hbase.ipc.ServerCall.createHeaderAndMessageBytes(ServerCall.java:383)
	at org.apache.hadoop.hbase.ipc.ServerCall.createHeaderAndMessageBytes(ServerCall.java:361)
	at org.apache.hadoop.hbase.ipc.ServerCall.setResponse(ServerCall.java:262)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:167)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
{noformat} > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
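For context on why compiling hbase-thirdparty with JDK 11 can produce this exact error on a JDK 8 runtime: since Java 9, `ByteBuffer` overrides `position(int)` with a covariant `ByteBuffer` return type, so bytecode compiled against JDK 9+ class files links to the descriptor `java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;`, which does not exist in JDK 8. A minimal standalone demonstration (hypothetical class name, not HBase code):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class PositionDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        // On JDK 9+, ByteBuffer.position(int) returns ByteBuffer; a call site
        // compiled there references the descriptor
        // java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;, which JDK 8
        // lacks, hence the NoSuchMethodError in the stack trace above.
        // Casting to Buffer first (or compiling with `javac --release 8`)
        // pins the JDK 8-compatible descriptor on java.nio.Buffer instead.
        ((Buffer) buf).position(4);
        System.out.println(buf.position()); // prints 4
    }
}
```

This is why cross-compiling with `-source/-target` alone is not enough: those flags change the bytecode version but still compile against the newer JDK's class library, whereas `--release` compiles against the older API signatures as well.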
[GitHub] [hbase] shahrs87 commented on pull request #2707: [HBASE-25328] Add builder method to create Tags.
shahrs87 commented on pull request #2707: URL: https://github.com/apache/hbase/pull/2707#issuecomment-736695442 @virajjasani Thank you for the review. Could you please merge the PR? I can provide a patch for branch-2 and branch-1 after that if it doesn't apply cleanly. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] shahrs87 commented on pull request #2706: [HBASE-25246] Backup/Restore hbase cell tags.
shahrs87 commented on pull request #2706: URL: https://github.com/apache/hbase/pull/2706#issuecomment-736692035 @virajjasani Thank you for the review. Addressed your feedback in the latest commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25307) ThreadLocal pooling leads to NullPointerException
[ https://issues.apache.org/jira/browse/HBASE-25307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241698#comment-17241698 ] Hudson commented on HBASE-25307: Results for branch branch-2.3 [build #116 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/116/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/116/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/116/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/116/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/116/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > ThreadLocal pooling leads to NullPointerException > - > > Key: HBASE-25307 > URL: https://issues.apache.org/jira/browse/HBASE-25307 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 3.0.0-alpha-1 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > We got an NPE after setting {{hbase.client.ipc.pool.type}} to {{thread-local}}: > {noformat} > 20/11/18 01:53:04 ERROR yarn.ApplicationMaster: User class threw exception: > java.lang.NullPointerException > java.lang.NullPointerException > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.close(AbstractRpcClient.java:496) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.close(ConnectionImplementation.java:1944) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.close(TableInputFormatBase.java:660) > {noformat} > The root cause of the issue is probably at > {{PoolMap.ThreadLocalPool.values()}}: > {code:java} > public Collection<R> values() { > List<R> values = new ArrayList<>(); > values.add(get()); > return values; > } > {code} > It adds {{null}} to the collection if the current thread does not have any > resources, which leads to an NPE later. > I traced the usages of values(): it should return every resource, not just > the one attached to the caller thread. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2729: Remove the use of term segregate
Apache-HBase commented on pull request #2729: URL: https://github.com/apache/hbase/pull/2729#issuecomment-736672823 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 38s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 44s | master passed | | +1 :green_heart: | compile | 1m 22s | master passed | | +1 :green_heart: | shadedjars | 9m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 54s | master passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 2m 52s | root in the patch failed. | | -1 :x: | compile | 0m 59s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 59s | hbase-server in the patch failed. | | -1 :x: | shadedjars | 5m 55s | patch has 16 errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 53s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 0m 55s | hbase-server in the patch failed. 
| | | | 31m 21s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2729 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3e3b68c50bdd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8938b7a678 | | Default Java | AdoptOpenJDK-11.0.6+10 | | mvninstall | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt | | compile | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt | | shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/testReport/ | | Max. process+thread count | 87 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2729: Remove the use of term segregate
Apache-HBase commented on pull request #2729: URL: https://github.com/apache/hbase/pull/2729#issuecomment-736668855 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 7s | master passed | | +1 :green_heart: | compile | 0m 59s | master passed | | +1 :green_heart: | shadedjars | 7m 8s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | master passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 1m 59s | root in the patch failed. | | -1 :x: | compile | 0m 41s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 41s | hbase-server in the patch failed. | | -1 :x: | shadedjars | 5m 19s | patch has 16 errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 0m 42s | hbase-server in the patch failed. 
| | | | 24m 42s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2729 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f2718eda3e99 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8938b7a678 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | mvninstall | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk8-hadoop3-check/output/patch-mvninstall-root.txt | | compile | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-server.txt | | shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/testReport/ | | Max. process+thread count | 78 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hbase] Apache-HBase commented on pull request #2729: Remove the use of term segregate
Apache-HBase commented on pull request #2729: URL: https://github.com/apache/hbase/pull/2729#issuecomment-73872 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 15s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 10s | master passed | | +1 :green_heart: | checkstyle | 1m 15s | master passed | | +1 :green_heart: | spotbugs | 2m 24s | master passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 2m 2s | root in the patch failed. | | -0 :warning: | checkstyle | 1m 14s | hbase-server: The patch generated 1 new + 2 unchanged - 1 fixed = 3 total (was 3) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | -1 :x: | hadoopcheck | 2m 20s | The patch causes 16 errors with Hadoop v3.1.2. | | -1 :x: | hadoopcheck | 4m 38s | The patch causes 16 errors with Hadoop v3.2.1. | | -1 :x: | hadoopcheck | 6m 59s | The patch causes 16 errors with Hadoop v3.3.0. | | -1 :x: | spotbugs | 0m 33s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. 
| | | | 21m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2729 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 8eaab211387f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8938b7a678 | | mvninstall | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/patch-mvninstall-root.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | hadoopcheck | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/patch-javac-3.1.2.txt | | hadoopcheck | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/patch-javac-3.2.1.txt | | hadoopcheck | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/patch-javac-3.3.0.txt | | spotbugs | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/artifact/yetus-general-check/output/patch-spotbugs-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2729/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. 
[jira] [Resolved] (HBASE-25320) Upgrade hbase-thirdparty dependency to 3.4.1
[ https://issues.apache.org/jira/browse/HBASE-25320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25320. --- Hadoop Flags: Reviewed Resolution: Fixed Merged to branch-2 and master. Thanks for the patch [~zhangduo] > Upgrade hbase-thirdparty dependency to 3.4.1 > > > Key: HBASE-25320 > URL: https://issues.apache.org/jira/browse/HBASE-25320 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack merged pull request #2693: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
saintstack merged pull request #2693: URL: https://github.com/apache/hbase/pull/2693
[GitHub] [hbase] saintstack commented on pull request #2693: HBASE-25320 Upgrade hbase-thirdparty dependency to 3.4.1
saintstack commented on pull request #2693: URL: https://github.com/apache/hbase/pull/2693#issuecomment-736655133 Tests in backup are failing, but I think these are long-time flakies.
[GitHub] [hbase] sethsawant opened a new pull request #2729: Remove the use of term segregate
sethsawant opened a new pull request #2729: URL: https://github.com/apache/hbase/pull/2729 Remove the use of the term segregate from the codebase due to its racial connotations. It is unnecessary and can be replaced easily with a more neutral term.
[GitHub] [hbase] Apache-HBase commented on pull request #2699: HBASE-25287 Forgetting to unbuffer streams results in many CLOSE_WAIT…
Apache-HBase commented on pull request #2699: URL: https://github.com/apache/hbase/pull/2699#issuecomment-736640341 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 13s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 43s | master passed | | +1 :green_heart: | compile | 1m 12s | master passed | | +1 :green_heart: | shadedjars | 7m 24s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 36s | the patch passed | | +1 :green_heart: | compile | 1m 12s | the patch passed | | +1 :green_heart: | javac | 1m 12s | the patch passed | | +1 :green_heart: | shadedjars | 7m 27s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 197m 11s | hbase-server in the patch passed. 
| | | | 228m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2699 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0d8916bb50d9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/testReport/ | | Max. process+thread count | 3275 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2663: HBASE-24637 - Reseek regression related to filter SKIP hinting
Apache-HBase commented on pull request #2663: URL: https://github.com/apache/hbase/pull/2663#issuecomment-736625412 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 10s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 56s | master passed | | +1 :green_heart: | checkstyle | 1m 13s | master passed | | +1 :green_heart: | spotbugs | 2m 5s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 48s | the patch passed | | +1 :green_heart: | checkstyle | 1m 12s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 57s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 18s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. | | | | 42m 35s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2663 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux b49c54dc5b1a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8938b7a678 | | Max. process+thread count | 84 (vs. 
ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2663/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaolin Ha updated HBASE-25302: --- Attachment: (was: Fast continuous split regions with stripe store engine.pdf) > Fast split regions with stripe store engine > --- > > Key: HBASE-25302 > URL: https://issues.apache.org/jira/browse/HBASE-25302 > Project: HBase > Issue Type: Improvement > Reporter: Xiaolin Ha > Assignee: Xiaolin Ha > Priority: Major > > We have implemented a fast continuous region split method using HFileLink, depending on the stripe store file manager. > It is very simple and efficient; we have implemented all the ideas described in the design doc and use it on our production clusters. A region of about 600G can be split into 75G*8 regions in about five minutes, with less than 5G of total rewrite (all in L0) over the whole process, while a normal continuous split needs to rewrite 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripe sizes), because rebuilding HFileLinks into stripes may insert some files into L0. > Details are in the doc: > [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing] > If anyone is interested in this issue, please let me know. Thanks.
[jira] [Updated] (HBASE-25302) Fast split regions with stripe store engine
[ https://issues.apache.org/jira/browse/HBASE-25302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaolin Ha updated HBASE-25302: --- Summary: Fast split regions with stripe store engine (was: Fast continuous split regions with stripe store engine) > Fast split regions with stripe store engine > --- > > Key: HBASE-25302 > URL: https://issues.apache.org/jira/browse/HBASE-25302 > Project: HBase > Issue Type: Improvement > Reporter: Xiaolin Ha > Assignee: Xiaolin Ha > Priority: Major > Attachments: Fast continuous split regions with stripe store engine.pdf > > > We have implemented a fast continuous region split method using HFileLink, depending on the stripe store file manager. > It is very simple and efficient; we have implemented all the ideas described in the design doc and use it on our production clusters. A region of about 600G can be split into 75G*8 regions in about five minutes, with less than 5G of total rewrite (all in L0) over the whole process, while a normal continuous split needs to rewrite 600G*3=1800G. If movement is used for same-table HFileLinks, the rewritten size is less than 50G (two stripe sizes), because rebuilding HFileLinks into stripes may insert some files into L0. > Details are in the doc: > [https://docs.google.com/document/d/1hzBMdEFCckw18RE-kQQCe2ArW0MXhmLiiqyqpngItBM/edit?usp=sharing] > If anyone is interested in this issue, please let me know. Thanks.
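The split-size figures quoted in the description can be sanity-checked with quick arithmetic. A minimal sketch (units are GB; the numbers are taken from the issue text, not measured here):

```java
// Sanity check of the rewrite-amplification figures quoted in HBASE-25302's
// description. Units: GB; all numbers come from the issue text.
public class SplitRewriteMath {
    // The eight 75G child regions together cover the 600G parent.
    static int childrenTotalGb() {
        return 75 * 8;
    }

    // The issue states a normal continuous split rewrites about 3x the region size.
    static int continuousSplitRewriteGb() {
        return 600 * 3;
    }

    public static void main(String[] args) {
        System.out.println(childrenTotalGb());          // 600
        System.out.println(continuousSplitRewriteGb()); // 1800
    }
}
```

So the claimed under-5G rewrite is roughly a 360x reduction versus the ~1800G a normal continuous split would rewrite, per the issue's own figures.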
[GitHub] [hbase] yuqi1129 edited a comment on pull request #2728: HBASE-25334 TestRSGroupsFallback.testFallback is flaky
yuqi1129 edited a comment on pull request #2728: URL: https://github.com/apache/hbase/pull/2728#issuecomment-736616992 @sunhelly Yes, that is exactly the cause. In HBASE-25282, I used the ServerCrashProcedure to judge whether the Region Server is being processed. Sorry to introduce such a problem.
[GitHub] [hbase] yuqi1129 commented on pull request #2728: HBASE-25334 TestRSGroupsFallback.testFallback is flaky
yuqi1129 commented on pull request #2728: URL: https://github.com/apache/hbase/pull/2728#issuecomment-736616992 @sunhelly Yes, that is exactly the cause. In HBASE-25282, I used the ServerCrashProcedure to judge whether the Region Server is being processed. Sorry to introduce such a problem.
[jira] [Commented] (HBASE-25334) TestRSGroupsFallback.testFallback is flaky
[ https://issues.apache.org/jira/browse/HBASE-25334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241604#comment-17241604 ] yuqi commented on HBASE-25334: -- [~zhangduo] Sorry to introduce this problem; I will figure out the cause and fix it as soon as possible. > TestRSGroupsFallback.testFallback is flaky > -- > > Key: HBASE-25334 > URL: https://issues.apache.org/jira/browse/HBASE-25334 > Project: HBase > Issue Type: Test > Reporter: Xiaolin Ha > Assignee: Xiaolin Ha > Priority: Major > > As in the CI test results of PR [https://github.com/apache/hbase/pull/2699]; the failed UTs are at > [https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt] > > This unit test checks that all table regions are assigned after balance, and then asserts on the RS group of the regions. > But balance() uses async move and throttles region moves, sleeping until all the table regions are moved to their RSGroup. > If the waiting time is not longer than the region movement duration, the assertion will fail.
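The usual fix for this class of flakiness is to poll for the post-balance condition with a deadline instead of asserting right after the asynchronous move starts. A minimal sketch of that pattern; `waitFor` and the simulated condition below are illustrative stand-ins, not the actual HBase test utility:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch: poll a condition with a deadline so an assertion only
// fires after async region moves have had time to complete.
public class WaitForBalance {
    // Re-check `condition` every intervalMs until it holds or timeoutMs elapses.
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier condition) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return condition.getAsBoolean(); // one last check at the deadline
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Simulated async move that "completes" after roughly 200 ms; the
        // generous 2 s budget makes the wait robust to scheduling jitter.
        boolean ok = waitFor(2000, 20, () -> System.currentTimeMillis() - start >= 200);
        System.out.println(ok); // true
    }
}
```

The key design point is that the timeout bounds how long the test may take, while the poll interval keeps the assertion from racing the throttled, sleeping mover described in the issue.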
[GitHub] [hbase] pankaj72981 commented on pull request #2675: HBASE-25277 postScannerFilterRow impacts Scan performance a lot in HBase 2.x
pankaj72981 commented on pull request #2675: URL: https://github.com/apache/hbase/pull/2675#issuecomment-736604176 > Test issue anyway related? Test failures are not relevant.
[GitHub] [hbase] Apache-HBase commented on pull request #2699: HBASE-25287 Forgetting to unbuffer streams results in many CLOSE_WAIT…
Apache-HBase commented on pull request #2699: URL: https://github.com/apache/hbase/pull/2699#issuecomment-736598158 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 23s | master passed | | +1 :green_heart: | compile | 0m 56s | master passed | | +1 :green_heart: | shadedjars | 6m 33s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 35s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 29s | the patch passed | | +1 :green_heart: | compile | 0m 55s | the patch passed | | +1 :green_heart: | javac | 0m 55s | the patch passed | | +1 :green_heart: | shadedjars | 6m 33s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 139m 37s | hbase-server in the patch failed. 
| | | | 165m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2699 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 784d3bb4c1a1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/testReport/ | | Max. process+thread count | 4357 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-25332) one NPE
[ https://issues.apache.org/jira/browse/HBASE-25332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani resolved HBASE-25332. -- Fix Version/s: 2.3.4 2.2.7 2.4.0 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Thanks for the contribution [~xiaoheipangzi]. > one NPE > --- > > Key: HBASE-25332 > URL: https://issues.apache.org/jira/browse/HBASE-25332 > Project: HBase > Issue Type: Bug > Reporter: lujie > Assignee: lujie > Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > * getData can return null at > [https://github.com/apache/hbase/blob/1726160839368df14602da1618e3538955b25f74/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L615] > or > [https://github.com/apache/hbase/blob/1726160839368df14602da1618e3538955b25f74/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L619] > All its callers have null checks except at > [https://github.com/apache/hbase/blob/1726160839368df14602da1618e3538955b25f74/hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L467] > We should add null checks for potential NPEs.
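The fix pattern here is a plain null check at the call site before the fetched data is used. A hypothetical sketch (the class and method names below are illustrative stand-ins, not the actual ZKUtil or RSGroupInfoManagerImpl code):

```java
// Hypothetical sketch of the HBASE-25332 fix pattern: a getData-style call may
// return null (e.g. when the znode is absent), so the caller must check for
// null before dereferencing instead of letting an NPE propagate.
public class NullSafeZkRead {
    // Stand-in for ZKUtil.getData, which may return null.
    static byte[] getData(boolean znodeExists) {
        return znodeExists ? new byte[] {1, 2, 3} : null;
    }

    // Caller with the null check the patch adds: falls back to a safe
    // default (here, length 0) rather than throwing a NullPointerException.
    static int readLength(boolean znodeExists) {
        byte[] data = getData(znodeExists);
        if (data == null) { // the missing check that caused the NPE
            return 0;
        }
        return data.length;
    }

    public static void main(String[] args) {
        System.out.println(readLength(true));  // 3
        System.out.println(readLength(false)); // 0, instead of an NPE
    }
}
```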
[GitHub] [hbase] virajjasani closed pull request #2715: HBASE-25332:fix One pontential NPE
virajjasani closed pull request #2715: URL: https://github.com/apache/hbase/pull/2715
[GitHub] [hbase] Apache-HBase commented on pull request #1941: HBASE-24157 Destination RSgroup aware export snapshot
Apache-HBase commented on pull request #1941: URL: https://github.com/apache/hbase/pull/1941#issuecomment-736572518 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 43s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 59s | master passed | | +1 :green_heart: | compile | 0m 32s | master passed | | +1 :green_heart: | shadedjars | 8m 39s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 32s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed | | +1 :green_heart: | javac | 0m 31s | the patch passed | | +1 :green_heart: | shadedjars | 8m 37s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 16m 8s | hbase-mapreduce in the patch failed. 
| | | | 47m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1941 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 897ce8eb1e00 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-mapreduce.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/testReport/ | | Max. process+thread count | 3324 (vs. ulimit of 3) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1941: HBASE-24157 Destination RSgroup aware export snapshot
Apache-HBase commented on pull request #1941: URL: https://github.com/apache/hbase/pull/1941#issuecomment-736568599 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 54s | master passed | | +1 :green_heart: | compile | 0m 29s | master passed | | +1 :green_heart: | shadedjars | 7m 28s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 35s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed | | +1 :green_heart: | javac | 0m 29s | the patch passed | | +1 :green_heart: | shadedjars | 7m 21s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 12m 46s | hbase-mapreduce in the patch failed. 
| | | | 41m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1941 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 27b1f8f1d735 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Default Java | AdoptOpenJDK-11.0.6+10 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-mapreduce.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/testReport/ | | Max. process+thread count | 2759 (vs. ulimit of 3) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1941: HBASE-24157 Destination RSgroup aware export snapshot
Apache-HBase commented on pull request #1941: URL: https://github.com/apache/hbase/pull/1941#issuecomment-736564893 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 51s | master passed | | +1 :green_heart: | checkstyle | 0m 22s | master passed | | +1 :green_heart: | spotbugs | 0m 47s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 26s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hbase-mapreduce: The patch generated 12 new + 14 unchanged - 0 fixed = 26 total (was 14) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 17s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 34m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1941 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 1cd3ab37e281 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-mapreduce.txt | | Max. process+thread count | 94 (vs. ulimit of 3) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-1941/3/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2716: HBASE-25336 Use Address instead of InetSocketAddress in RpcClient imp…
Apache-HBase commented on pull request #2716: URL: https://github.com/apache/hbase/pull/2716#issuecomment-736551384 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 4s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 32s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 39s | master passed | | +1 :green_heart: | compile | 2m 21s | master passed | | +1 :green_heart: | shadedjars | 6m 36s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 23s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 26s | the patch passed | | +1 :green_heart: | compile | 2m 21s | the patch passed | | +1 :green_heart: | javac | 2m 21s | the patch passed | | +1 :green_heart: | shadedjars | 6m 42s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 28s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 331m 1s | root in the patch failed. 
| | | | 365m 22s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2716 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b5217e6034ce 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f71eb27be1 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/testReport/ | | Max. process+thread count | 6221 (vs. ulimit of 3) | | modules | C: hbase-client . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2728: HBASE-25334 TestRSGroupsFallback.testFallback is flaky
Apache-HBase commented on pull request #2728: URL: https://github.com/apache/hbase/pull/2728#issuecomment-736538009 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 53s | master passed | | +1 :green_heart: | compile | 0m 55s | master passed | | +1 :green_heart: | shadedjars | 6m 30s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 35s | the patch passed | | +1 :green_heart: | compile | 0m 56s | the patch passed | | +1 :green_heart: | javac | 0m 56s | the patch passed | | +1 :green_heart: | shadedjars | 6m 30s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 35s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 158m 14s | hbase-server in the patch passed. 
| | | | 184m 35s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2728 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a247883c01b3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Default Java | AdoptOpenJDK-1.8.0_232-b09 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/testReport/ | | Max. process+thread count | 3652 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2728: HBASE-25334 TestRSGroupsFallback.testFallback is flaky
Apache-HBase commented on pull request #2728: URL: https://github.com/apache/hbase/pull/2728#issuecomment-736534241 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 27s | master passed | | +1 :green_heart: | compile | 1m 9s | master passed | | +1 :green_heart: | shadedjars | 6m 51s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 10s | the patch passed | | +1 :green_heart: | compile | 1m 8s | the patch passed | | +1 :green_heart: | javac | 1m 8s | the patch passed | | +1 :green_heart: | shadedjars | 7m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 148m 10s | hbase-server in the patch passed. 
| | | | 177m 14s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2728 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 817e25fa2876 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Default Java | AdoptOpenJDK-11.0.6+10 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/testReport/ | | Max. process+thread count | 4317 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-25341) Fix ErrorProne error which causes nightly to fail
[ https://issues.apache.org/jira/browse/HBASE-25341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241497#comment-17241497 ] Hudson commented on HBASE-25341: Results for branch master [build #145 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Fix ErrorProne error which causes nightly to fail > - > > Key: HBASE-25341 > URL: https://issues.apache.org/jira/browse/HBASE-25341 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.8.1:testCompile > (default-testCompile) on project hbase-server: Compilation failure > [ERROR] > /home/jenkins/jenkins-home/workspace/HBase_HBase_Nightly_master/component/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java:[7536,38] > error: [ArrayToString] Calling toString on an array does not provide useful > information > [ERROR] (see https://errorprone.info/bugpattern/ArrayToString) > [ERROR] Did you mean 'fail("the qualifier " + Arrays.toString(q1) + " > should be " + v1 + " or " + v2 + ", but " + q1Value);'? > [ERROR] -> [Help 1] > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
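The {noformat} block above contains the whole bug: the test concatenated a `byte[]` (`q1`) into a message string, which invokes the array's inherited `Object.toString()` and prints a class descriptor plus identity hash rather than the contents. A minimal, self-contained illustration of the pattern ErrorProne's ArrayToString check flags, and of the `Arrays.toString` fix its "Did you mean" hint suggests (the variable name `q1` is taken from the error message; the surrounding TestHRegion code is not reproduced):

```java
import java.util.Arrays;

public class ArrayToStringDemo {
    public static void main(String[] args) {
        byte[] q1 = {1, 2, 3};

        // Bug: concatenation calls q1.toString(), which for a byte[] yields
        // something like "[B@1b6d3586" -- the type descriptor and an identity
        // hash, not the bytes. This is what ErrorProne rejects at compile time.
        String broken = "the qualifier " + q1;

        // Fix: Arrays.toString renders the elements.
        String fixed = "the qualifier " + Arrays.toString(q1);

        System.out.println(broken); // e.g. "the qualifier [B@..."
        System.out.println(fixed);  // "the qualifier [1, 2, 3]"
    }
}
```

Because ErrorProne treats this pattern as a compilation failure (not a warning), a single occurrence is enough to break the nightly build, which is why the fix went into both master and branch-2.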
[jira] [Commented] (HBASE-25127) Enhance PerformanceEvaluation to profile meta replica performance.
[ https://issues.apache.org/jira/browse/HBASE-25127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241499#comment-17241499 ] Hudson commented on HBASE-25127: Results for branch master [build #145 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Enhance PerformanceEvaluation to profile meta replica performance. > -- > > Key: HBASE-25127 > URL: https://issues.apache.org/jira/browse/HBASE-25127 > Project: HBase > Issue Type: Sub-task >Reporter: Huaxiang Sun >Assignee: Clara Xiong >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > Attachments: Screen Shot 2020-11-13 at 5.30.11 PM.png > >
[jira] [Commented] (HBASE-25339) Method parameter and member variable are duplicated in checkSplittable() of SplitTableRegionProcedure
[ https://issues.apache.org/jira/browse/HBASE-25339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241498#comment-17241498 ] Hudson commented on HBASE-25339: Results for branch master [build #145 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/145/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Method parameter and member variable are duplicated in checkSplittable() of > SplitTableRegionProcedure > - > > Key: HBASE-25339 > URL: https://issues.apache.org/jira/browse/HBASE-25339 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Zhuoyue Huang >Assignee: Zhuoyue Huang >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > We input a 'this.bestSplitRow' as 'splitRow' of checkSplittable() > {code:java} > private void checkSplittable(final MasterProcedureEnv env, > final RegionInfo regionToSplit, final byte[] splitRow) > {code} > But this private method could used 'bestSplitRow' directly
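A stripped-down sketch of the cleanup HBASE-25339 describes (the class and method bodies below are hypothetical simplifications, not the real SplitTableRegionProcedure): when a private method is only ever invoked with a field of the same object as its argument, it can read the field directly and the duplicated parameter can be dropped without changing behavior.

```java
// Hypothetical sketch: 'bestSplitRow' stands in for the member variable the
// issue mentions; the real method also takes MasterProcedureEnv/RegionInfo.
public class SplitProcedureSketch {
    private byte[] bestSplitRow;

    // Before: the sole caller passes this.bestSplitRow back in as 'splitRow'.
    private boolean checkSplittableBefore(byte[] splitRow) {
        return splitRow != null && splitRow.length > 0;
    }

    // After: the method uses the member variable directly; no parameter.
    private boolean checkSplittable() {
        return bestSplitRow != null && bestSplitRow.length > 0;
    }

    // Demonstrates that both forms agree for any row.
    public boolean run(byte[] row) {
        this.bestSplitRow = row;
        return checkSplittableBefore(this.bestSplitRow) == checkSplittable();
    }

    public static void main(String[] args) {
        System.out.println(new SplitProcedureSketch().run(new byte[]{1, 2}));
    }
}
```

The refactor is purely mechanical, which matches the issue's Minor priority: it removes a redundant name from the method signature so readers do not wonder whether the parameter can ever differ from the field.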
[GitHub] [hbase] Apache-HBase commented on pull request #2699: HBASE-25287 Forgetting to unbuffer streams results in many CLOSE_WAIT…
Apache-HBase commented on pull request #2699: URL: https://github.com/apache/hbase/pull/2699#issuecomment-736529359 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | master passed | | +1 :green_heart: | checkstyle | 1m 12s | master passed | | +1 :green_heart: | spotbugs | 2m 8s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | -0 :warning: | checkstyle | 1m 10s | hbase-server: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 53s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. 
| | | | 42m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2699 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux c909548ecd0c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2699/5/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] virajjasani commented on a change in pull request #2706: [HBASE-25246] Backup/Restore hbase cell tags.
virajjasani commented on a change in pull request #2706: URL: https://github.com/apache/hbase/pull/2706#discussion_r533343566 ## File path: hbase-client/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java ## @@ -479,4 +487,40 @@ public void testRegionLockInfo() { + "\"sharedLockCount\":0" + "}]", lockJson); } + + /** + * Test @{@link ProtobufUtil#toCell(Cell)} and + * @{@link ProtobufUtil#toCell(ExtendedCellBuilder, CellProtos.Cell)} conversion Review comment: nit: extra `@`? ## File path: hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java ## @@ -801,4 +816,149 @@ public boolean isWALVisited() { return isVisited; } } + + /** + * Add cell tags to delete mutations, run export and import tool and + * verify that tags are present in import table also. + * @throws Throwable throws Throwable. + */ + @Test + public void testTagsAddition() throws Throwable { +final TableName exportTable = TableName.valueOf(name.getMethodName()); +TableDescriptor desc = TableDescriptorBuilder + .newBuilder(exportTable) + .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(FAMILYA) +.setMaxVersions(5) +.setKeepDeletedCells(KeepDeletedCells.TRUE) +.build()) + .setCoprocessor(MetadataController.class.getName()) + .build(); +UTIL.getAdmin().createTable(desc); + +Table exportT = UTIL.getConnection().getTable(exportTable); + +//Add first version of QUAL +Put p = new Put(ROW1); +p.addColumn(FAMILYA, QUAL, now, QUAL); +exportT.put(p); + +//Add Delete family marker +Delete d = new Delete(ROW1, now+3); +// Add test attribute to delete mutation. +d.setAttribute(TEST_ATTR, Bytes.toBytes(TEST_TAG)); +exportT.delete(d); + +// Run export too with KeyValueCodecWithTags as Codec. This will ensure that export tool +// will use KeyValueCodecWithTags. +String[] args = new String[] { + "-D" + ExportUtils.RAW_SCAN + "=true", + // This will make sure that codec will encode and decode tags in rpc call. 
+ "-Dhbase.client.rpc.codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags", + exportTable.getNameAsString(), + FQ_OUTPUT_DIR, + "1000", // max number of key versions per key to export +}; +assertTrue(runExport(args)); + +// Create an import table with MetadataController. +final TableName importTable = TableName.valueOf("importWithTestTagsAddition"); +TableDescriptor importTableDesc = TableDescriptorBuilder + .newBuilder(importTable) + .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(FAMILYA) +.setMaxVersions(5) +.setKeepDeletedCells(KeepDeletedCells.TRUE) +.build()) + .setCoprocessor(MetadataController.class.getName()) + .build(); +UTIL.getAdmin().createTable(importTableDesc); + +// Run import tool. +args = new String[] { + // This will make sure that codec will encode and decode tags in rpc call. + "-Dhbase.client.rpc.codec=org.apache.hadoop.hbase.codec.KeyValueCodecWithTags", + importTable.getNameAsString(), + FQ_OUTPUT_DIR +}; +assertTrue(runImport(args)); +// Make sure that tags exists in both exported and imported table. +assertTagExists(exportTable); +assertTagExists(importTable); + } + + private void assertTagExists(TableName table) throws IOException { +List values = new ArrayList<>(); +for (HRegion region : UTIL.getHBaseCluster().getRegions(table)) { + values.clear(); Review comment: nit: redundant ## File path: hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java ## @@ -801,4 +816,149 @@ public boolean isWALVisited() { return isVisited; } } + + /** + * Add cell tags to delete mutations, run export and import tool and + * verify that tags are present in import table also. + * @throws Throwable throws Throwable. 
+ */ + @Test + public void testTagsAddition() throws Throwable { +final TableName exportTable = TableName.valueOf(name.getMethodName()); +TableDescriptor desc = TableDescriptorBuilder + .newBuilder(exportTable) + .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(FAMILYA) +.setMaxVersions(5) +.setKeepDeletedCells(KeepDeletedCells.TRUE) +.build()) + .setCoprocessor(MetadataController.class.getName()) + .build(); +UTIL.getAdmin().createTable(desc); + +Table exportT = UTIL.getConnection().getTable(exportTable); + +//Add first version of QUAL +Put p = new Put(ROW1); +p.addColumn(FAMILYA, QUAL, now, QUAL); +exportT.put(p); + +//Add Delete family marker +Delete d = new Delete(ROW1, now+3); +// Add test attribute to delete mutation. +d.setAttribute(TEST_ATTR, Bytes.toBytes(TEST_TAG)); +exportT.delete(d); + +
[GitHub] [hbase] Apache-HBase commented on pull request #2716: HBASE-25336 Use Address instead of InetSocketAddress in RpcClient imp…
Apache-HBase commented on pull request #2716: URL: https://github.com/apache/hbase/pull/2716#issuecomment-736466282 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 13s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 45s | master passed | | +1 :green_heart: | compile | 3m 5s | master passed | | +1 :green_heart: | shadedjars | 7m 28s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 3m 28s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | compile | 3m 6s | the patch passed | | +1 :green_heart: | javac | 3m 6s | the patch passed | | +1 :green_heart: | shadedjars | 7m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 3m 27s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 206m 28s | root in the patch failed. 
| | | | 247m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2716 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 415f4428194d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / f71eb27be1 | | Default Java | AdoptOpenJDK-11.0.6+10 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/testReport/ | | Max. process+thread count | 2881 (vs. ulimit of 3) | | modules | C: hbase-client . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2716/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2728: HBASE-25334 TestRSGroupsFallback.testFallback is flaky
Apache-HBase commented on pull request #2728: URL: https://github.com/apache/hbase/pull/2728#issuecomment-736400912 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 46s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 25s | master passed | | +1 :green_heart: | checkstyle | 1m 23s | master passed | | +1 :green_heart: | spotbugs | 2m 48s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 36s | the patch passed | | +1 :green_heart: | checkstyle | 1m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 22m 43s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 16s | The patch does not generate ASF License warnings. | | | | 50m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2728 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 10fa8c8d5249 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 86bb037eb0 | | Max. process+thread count | 95 (vs. 
ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2728/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HBASE-25346) hbase2.x the performance is lower than hbase 1.x ?
[ https://issues.apache.org/jira/browse/HBASE-25346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nilonealex updated HBASE-25346: --- Attachment: hbase-pe-performace-test.log > hbase2.x the performance is lower than hbase 1.x ? > --- > > Key: HBASE-25346 > URL: https://issues.apache.org/jira/browse/HBASE-25346 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.2 >Reporter: nilonealex >Priority: Critical > Attachments: hbase-pe-performace-test.log, hbase-site.xml > > > Recently we found that the newly built production hbase cluster is running a > bit slow , the hadoop version is Hbase2.0.2 ( HDP3.1.1) and it has 100 > nodes.Then we begin to do load & query performance verification between > Hbase2.0.2 ( HDP3.1.1) & Hbase1.2.0 ( CDH5.13.3 ) test environment (4nodes), > found that : put data based on hbase2.0 is much slower than hbase1.x (the > former is almost half of the latter), I use BufferedMutator and > BufferedMutatorParams term for batch put to improve efficiency. More > confusing is the performance of the production environment is worse than my > test environment > Some of the codes are as follows: > --- > {color:#4C9AFF}List mutator = new ArrayList<>(); > BufferedMutator table = null; > BufferedMutatorParams params = new > BufferedMutatorParams(TableName.valueOf(fileHbRule.getHbaseTableName())); > params.writeBufferSize(fileHbRule.getFlushBuffer().intValue()*1024*1024); > table = connection.getBufferedMutator(params); > > mutator.add(p); > if(totalCnts % 5000 == 0 ) { > table.mutate(mutator); > mutator.clear(); > }{color} > --- > The file to put is a text format file: 2 million rows comma-separated text > file, each row records 110 columns, total size is about 1G. In addition to > the main parameter configuration such as heap memory, I kept the default > parameter values for most of the hbase services. > The load program is designed for single thread. 
> The following is the progress information : > --- Hbase1.2.0 ( CDH5.13.3 ) > > 2020-12-01 16:48:18 inserted: 10 > 2020-12-01 16:48:36 inserted: 20 > 2020-12-01 16:48:52 inserted: 30 > 2020-12-01 16:49:08 inserted: 40 > 2020-12-01 16:49:23 inserted: 50 > 2020-12-01 16:49:39 inserted: 60 > 2020-12-01 16:49:56 inserted: 70 > 2020-12-01 16:50:12 inserted: 80 > 2020-12-01 16:50:29 inserted: 90 > 2020-12-01 16:50:45 inserted: 100 > 2020-12-01 16:51:01 inserted: 110 > 2020-12-01 16:51:17 inserted: 120 > 2020-12-01 16:51:34 inserted: 130 > 2020-12-01 16:51:49 inserted: 140 > 2020-12-01 16:52:05 inserted: 150 > 2020-12-01 16:52:21 inserted: 160 > 2020-12-01 16:52:40 inserted: 170 > 2020-12-01 16:52:57 inserted: 180 > 2020-12-01 16:53:19 inserted: 190 > 2020-12-01 16:53:42 inserted: 200 > 2020-12-01 16:53:48 inserted: 200 > imp finished ok! > --job finished-- > ---Hbase.2.0.2 ( > HDP3.1.1)- > 2020-12-01 17:25:24 inserted: 10 > 2020-12-01 17:26:03 inserted: 20 > 2020-12-01 17:26:39 inserted: 30 > 2020-12-01 17:27:13 inserted: 40 > 2020-12-01 17:27:47 inserted: 50 > 2020-12-01 17:28:23 inserted: 60 > 2020-12-01 17:29:03 inserted: 70 > 2020-12-01 17:29:40 inserted: 80 > 2020-12-01 17:30:15 inserted: 90 > 2020-12-01 17:30:51 inserted: 100 > 2020-12-01 17:31:27 inserted: 110 > 2020-12-01 17:32:03 inserted: 120 > 2020-12-01 17:32:39 inserted: 130 > 2020-12-01 17:33:14 inserted: 140 > 2020-12-01 17:33:50 inserted: 150 > 2020-12-01 17:34:25 inserted: 160 > 2020-12-01 17:35:01 inserted: 170 > 2020-12-01 17:35:38 inserted: 180 > 2020-12-01 17:36:14 inserted: 190 > 2020-12-01 17:36:51 inserted: 200 > 2020-12-01 17:36:55 inserted: 200 > imp finished ok! > --job finished-- > returnCode=0 > In addition, we also did some benchmark tests on the production cluster.The > delay is seem to be a bit high. The detailed report is in the attachment. > Are there any key points that I have not done configuration? or,, this > version has performance defects ? 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25346) hbase2.x the performance is lower than hbase 1.x ?
[ https://issues.apache.org/jira/browse/HBASE-25346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241422#comment-17241422 ]

ramkrishna.s.vasudevan commented on HBASE-25346:
------------------------------------------------
The WAL sits on HDFS, and that is the same in both clusters?

> hbase2.x the performance is lower than hbase 1.x ?
> --------------------------------------------------
[jira] [Updated] (HBASE-25346) hbase2.x the performance is lower than hbase 1.x ?
[ https://issues.apache.org/jira/browse/HBASE-25346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nilonealex updated HBASE-25346:
-------------------------------
    Description: (updated; the full text is as quoted in the first message above)

-- This message was sent by Atlassian Jira (v8.3.4#803005)