[GitHub] [hbase] binlijin commented on a change in pull request #1110: HBASE-23761: The new cache entry can overflow the maxSize in CachedEn…
binlijin commented on a change in pull request #1110: HBASE-23761: The new cache entry can overflow the maxSize in CachedEn…
URL: https://github.com/apache/hbase/pull/1110#discussion_r374547943

 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruCachedBlockQueue.java

 ## @@ -65,14 +65,15 @@ public LruCachedBlockQueue(long maxSize, long blockSize) {
    * @param cb block to try to add to the queue
    */
   public void add(LruCachedBlock cb) {
-    if(heapSize < maxSize) {
+    long cbSize = cb.heapSize();
+    if(heapSize + cbSize < maxSize) {

 Review comment:
   format code

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
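For readers skimming the digest, here is a minimal standalone sketch of the pattern the diff above applies: check the projected size (current heap size plus the candidate's heapSize()) rather than the current size alone, so a single large entry cannot push the queue past maxSize. This is only an illustration with a simplified Block type, not the actual LruCachedBlockQueue class.

```java
// Minimal sketch of the bounded-heap-size queue pattern; not the real HBase class.
import java.util.PriorityQueue;
import java.util.Queue;

public class BoundedHeapSizeQueue {
  private final Queue<Block> queue = new PriorityQueue<>();
  private final long maxSize; // upper bound on the total heap size held by the queue
  private long heapSize;      // running total of the heap size of queued blocks

  public BoundedHeapSizeQueue(long maxSize) {
    this.maxSize = maxSize;
  }

  /** Add the block only if the queue stays within maxSize afterwards. */
  public void add(Block b) {
    long projected = heapSize + b.heapSize();
    // Checking heapSize + b.heapSize() instead of heapSize alone is what keeps
    // one large entry from overflowing maxSize -- the point of the patch above.
    if (projected < maxSize) {
      queue.add(b);
      heapSize = projected;
    }
    // The real class additionally compares the candidate against the current
    // head and may evict it; that branch is omitted from this sketch.
  }

  public long heapSize() {
    return heapSize;
  }

  /** Simplified stand-in for LruCachedBlock. */
  public static class Block implements Comparable<Block> {
    private final long size;
    Block(long size) { this.size = size; }
    long heapSize() { return size; }
    @Override public int compareTo(Block other) { return Long.compare(size, other.size); }
  }
}
```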
[GitHub] [hbase] maoling commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation
maoling commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation
URL: https://github.com/apache/hbase/pull/1121#discussion_r375085410

 ## File path: src/main/asciidoc/_chapters/datamodel.adoc

 ## @@ -471,6 +471,26 @@
 Caution: the version timestamp is used internally by HBase for things like time-to-live calculations.
 It's usually best to avoid setting this timestamp yourself.
 Prefer using a separate timestamp attribute of the row, or have the timestamp as a part of the row key, or both.
+= Cell Version Example
+
+The following Put uses a method getCellBuilder() to get a CellBuilder instance
+that already has relevant Type and Row set.
+
+[source,java]
+
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+
+Put put = new Put(Bytes.toBytes(row));
+put.add(put.getCellBuilder().setQualifier(ATTR)
+  .setFamily(CF)
+  .setValue(Bytes.toBytes(data))
+  .build());

 Review comment:
   @saintstack Thanks for your review.
   - Yes, `put.addColumn()` can have the same effect.
   - This design was discussed in this [email thread](https://lists.apache.org/thread.html/d05bfaa0134502a47f6e1aca56cb0b096d4dd32ddefbbdf28db4952a@%3Cdev.hbase.apache.org%3E), which had a use case provided by Sean Busbey. AFAIU, it wants to simplify the `put.add(cell)` API, because sometimes when users use this cell API:
   ```
   CellBuilder cb = CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY);
   cb.setRow(Bytes.toBytes("row3"));
   cb.setFamily(Bytes.toBytes("cf"));
   cb.setQualifier("qualifier1".getBytes());
   cb.setValue(Bytes.toBytes("mjj2"));
   cb.setType(Type.Put);
   Cell cell = cb.build();

   Put p = new Put(Bytes.toBytes("row3"));
   p.add(cell);
   ```
   `cb.setType(Type.Put)` is a little redundant, and `getCellBuilder()` can help users reuse the Row, and even the Family and Qualifier they set last time, to keep the code short and clean.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
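For context, here is a hedged, self-contained sketch of the two released write paths the reply compares: the short `addColumn()` form and the verbose `CellBuilderFactory` form. It is written against the hbase-client 2.x API as I understand it; the proposed `getCellBuilder()` helper itself is deliberately not shown because the PR is still a work in progress.

```java
// Sketch of the existing, released APIs for the same write; getCellBuilder()
// from the WIP PR above is intentionally omitted.
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellBuilder;
import org.apache.hadoop.hbase.CellBuilderFactory;
import org.apache.hadoop.hbase.CellBuilderType;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExamples {
  // Shortest form today: addColumn() fills in the row and cell type for you.
  static Put viaAddColumn() {
    Put p = new Put(Bytes.toBytes("row3"));
    p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qualifier1"), Bytes.toBytes("mjj2"));
    return p;
  }

  // Verbose form quoted in the review: the caller repeats the row and sets the type.
  static Put viaCellBuilder() throws Exception {
    CellBuilder cb = CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY);
    cb.setRow(Bytes.toBytes("row3"))
      .setFamily(Bytes.toBytes("cf"))
      .setQualifier(Bytes.toBytes("qualifier1"))
      .setValue(Bytes.toBytes("mjj2"))
      .setType(Cell.Type.Put);   // the redundant step getCellBuilder() aims to remove
    Cell cell = cb.build();
    Put p = new Put(Bytes.toBytes("row3"));
    p.add(cell);
    return p;
  }
}
```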
[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase
[ https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030389#comment-17030389 ] Hudson commented on HBASE-22514: Results for branch HBASE-22514 [build #263 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/263/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/263//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/263//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/263//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Move rsgroup feature into core of HBase > --- > > Key: HBASE-22514 > URL: https://issues.apache.org/jira/browse/HBASE-22514 > Project: HBase > Issue Type: Umbrella > Components: Admin, Client, rsgroup >Reporter: Yechao Chen >Assignee: Duo Zhang >Priority: Major > Attachments: HBASE-22514.master.001.patch, > image-2019-05-31-18-25-38-217.png > > > The class RSGroupAdminClient is not public > we need to use java api RSGroupAdminClient to manager RSG > so RSGroupAdminClient should be public > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack commented on issue #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog…
saintstack commented on issue #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog… URL: https://github.com/apache/hbase/pull/1123#issuecomment-582251199 Merged to branch-2 and master manually. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] saintstack closed pull request #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog…
saintstack closed pull request #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog… URL: https://github.com/apache/hbase/pull/1123 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23779) Up the default fork count to make builds complete faster; make count relative to CPU count
[ https://issues.apache.org/jira/browse/HBASE-23779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030355#comment-17030355 ] Michael Stack commented on HBASE-23779: --- Tests passed on branch-2 so pushed it (Thanks for review [~vjasani]). Started https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2451/ Lets see if any difference. If all good, will push on Master and other branches too. Its a start. > Up the default fork count to make builds complete faster; make count relative > to CPU count > -- > > Key: HBASE-23779 > URL: https://issues.apache.org/jira/browse/HBASE-23779 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > Tests take a long time. Our fork count running all tests are conservative -- > 1 (small) for first part and 5 for second part (medium and large). Rather > than hardcoding we should set the fork count to be relative to machine size. > Suggestion here is 0.75C where C is CPU count. This ups the CPU use on my box. > Looking up at jenkins, it seems like the boxes are 24 cores... at least going > by my random survey. The load reported on a few seems low though this not > representative (looking at machine/uptime). > More parallelism willl probably mean more test failure. Let me take a look > see. -- This message was sent by Atlassian Jira (v8.3.4#803005)
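For context on the "0.75C" suggestion: Maven Surefire's forkCount parameter accepts values ending in "C", which are multiplied by the number of available CPU cores at build time. The snippet below is illustrative only; the actual HBase pom wires its fork counts through its own properties and profiles, which may differ.

```xml
<!-- Illustrative maven-surefire-plugin configuration, not the HBase pom itself. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- "0.75C" means 0.75 x the number of CPU cores on the build machine,
         so the fork count scales with the box instead of being hardcoded. -->
    <forkCount>0.75C</forkCount>
  </configuration>
</plugin>
```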
[GitHub] [hbase] saintstack commented on issue #1108: HBASE-23779 Up the default fork count; make count relative to CPU count
saintstack commented on issue #1108: HBASE-23779 Up the default fork count; make count relative to CPU count URL: https://github.com/apache/hbase/pull/1108#issuecomment-582238760 Argh... forgot to add 'Signed-off-by: Viraj Jasani '. Pushed on branch-2 for now. Will see if it helps. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-23779) Up the default fork count to make builds complete faster; make count relative to CPU count
[ https://issues.apache.org/jira/browse/HBASE-23779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-23779: -- Fix Version/s: 2.3.0 3.0.0 > Up the default fork count to make builds complete faster; make count relative > to CPU count > -- > > Key: HBASE-23779 > URL: https://issues.apache.org/jira/browse/HBASE-23779 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > Tests take a long time. Our fork count running all tests are conservative -- > 1 (small) for first part and 5 for second part (medium and large). Rather > than hardcoding we should set the fork count to be relative to machine size. > Suggestion here is 0.75C where C is CPU count. This ups the CPU use on my box. > Looking up at jenkins, it seems like the boxes are 24 cores... at least going > by my random survey. The load reported on a few seems low though this not > representative (looking at machine/uptime). > More parallelism willl probably mean more test failure. Let me take a look > see. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack merged pull request #1108: HBASE-23779 Up the default fork count; make count relative to CPU count
saintstack merged pull request #1108: HBASE-23779 Up the default fork count; make count relative to CPU count URL: https://github.com/apache/hbase/pull/1108 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23783) Address tests writing and reading SSL/Security files in a common location.
[ https://issues.apache.org/jira/browse/HBASE-23783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030344#comment-17030344 ] Hudson commented on HBASE-23783: Results for branch branch-2 [build #2450 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2450/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2450//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2450//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2450//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Address tests writing and reading SSL/Security files in a common location. > -- > > Key: HBASE-23783 > URL: https://issues.apache.org/jira/browse/HBASE-23783 > Project: HBase > Issue Type: Test >Reporter: Mark Robert Miller >Assignee: Mark Robert Miller >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > > This is causing me issues with parallel test runs because multiple tests can > write and read the same files in the test-classes directory. Some tests write > files in test-classes instead of their test data directory so that they can > put the files on the classpath. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23779) Up the default fork count to make builds complete faster; make count relative to CPU count
[ https://issues.apache.org/jira/browse/HBASE-23779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030336#comment-17030336 ] Mark Robert Miller commented on HBASE-23779: bq. More parallelism willl probably mean more test failure. Let me take a look see. I'm going to convince you this is a good thing! But maybe not on main branches until it's a bit smooth. > Up the default fork count to make builds complete faster; make count relative > to CPU count > -- > > Key: HBASE-23779 > URL: https://issues.apache.org/jira/browse/HBASE-23779 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Michael Stack >Priority: Major > > Tests take a long time. Our fork count running all tests are conservative -- > 1 (small) for first part and 5 for second part (medium and large). Rather > than hardcoding we should set the fork count to be relative to machine size. > Suggestion here is 0.75C where C is CPU count. This ups the CPU use on my box. > Looking up at jenkins, it seems like the boxes are 24 cores... at least going > by my random survey. The load reported on a few seems low though this not > representative (looking at machine/uptime). > More parallelism willl probably mean more test failure. Let me take a look > see. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23783) Address tests writing and reading SSL/Security files in a common location.
[ https://issues.apache.org/jira/browse/HBASE-23783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030335#comment-17030335 ] Mark Robert Miller commented on HBASE-23783: Thank you Mr Stack! I'll figure out all these checks some day - the whitespace slipped by me. With this committed, it will be easier for me to track down if there are any glaring remaining issues around HBASE-23779. > Address tests writing and reading SSL/Security files in a common location. > -- > > Key: HBASE-23783 > URL: https://issues.apache.org/jira/browse/HBASE-23783 > Project: HBase > Issue Type: Test >Reporter: Mark Robert Miller >Assignee: Mark Robert Miller >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > > This is causing me issues with parallel test runs because multiple tests can > write and read the same files in the test-classes directory. Some tests write > files in test-classes instead of their test data directory so that they can > put the files on the classpath. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on issue #1108: HBASE-23779 Up the default fork count; make count relative to CPU count
Apache-HBase commented on issue #1108: HBASE-23779 Up the default fork count; make count relative to CPU count URL: https://github.com/apache/hbase/pull/1108#issuecomment-582215036 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 28s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 7m 45s | branch-2 passed | | +1 :green_heart: | compile | 3m 59s | branch-2 passed | | +0 :ok: | refguide | 13m 26s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | shadedjars | 5m 10s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 3m 4s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 52s | the patch passed | | +1 :green_heart: | compile | 3m 25s | the patch passed | | +1 :green_heart: | javac | 3m 25s | the patch passed | | +1 :green_heart: | shellcheck | 0m 3s | There were no new shellcheck issues. | | +1 :green_heart: | whitespace | 0m 1s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +0 :ok: | refguide | 7m 20s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | shadedjars | 4m 55s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 17m 32s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 2m 53s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 149m 30s | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 8s | The patch does not generate ASF License warnings. | | | | 235m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1108/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1108 | | Optional Tests | dupname asflicense shellcheck shelldocs javac javadoc unit shadedjars hadoopcheck xml compile refguide | | uname | Linux 872cc3c8c0ad 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1108/out/precommit/personality/provided.sh | | git revision | branch-2 / e385fd97e0 | | Default Java | 1.8.0_181 | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1108/7/artifact/out/branch-site/book.html | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1108/7/artifact/out/patch-site/book.html | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1108/7/testReport/ | | Max. process+thread count | 7460 (vs. ulimit of 1) | | modules | C: . U: . 
| | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1108/7/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) shellcheck=0.7.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (HBASE-23800) Add documentation about the CECPs changes
Duo Zhang created HBASE-23800: - Summary: Add documentation about the CECPs changes Key: HBASE-23800 URL: https://issues.apache.org/jira/browse/HBASE-23800 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23798) Remove hbase-protocol module
Duo Zhang created HBASE-23798: - Summary: Remove hbase-protocol module Key: HBASE-23798 URL: https://issues.apache.org/jira/browse/HBASE-23798 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23799) Make our core coprocessors use shaded protobuf
Duo Zhang created HBASE-23799: - Summary: Make our core coprocessors use shaded protobuf Key: HBASE-23799 URL: https://issues.apache.org/jira/browse/HBASE-23799 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23797) Let CECPs also use our shaded protobuf
[ https://issues.apache.org/jira/browse/HBASE-23797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23797: -- Issue Type: Umbrella (was: Task) > Let CECPs also use our shaded protobuf > -- > > Key: HBASE-23797 > URL: https://issues.apache.org/jira/browse/HBASE-23797 > Project: HBase > Issue Type: Umbrella > Components: Coprocessors, Protobufs >Reporter: Duo Zhang >Priority: Blocker > Fix For: 3.0.0 > > > See this discussion thread: > https://lists.apache.org/thread.html/abd60a8985a4898bae03b2c3c51d43a6b83d67c00caff82ba9ab2712%40%3Cdev.hbase.apache.org%3E -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23797) Let CECPs also use our shaded protobuf
[ https://issues.apache.org/jira/browse/HBASE-23797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23797: -- Description: See this discussion thread: https://lists.apache.org/thread.html/abd60a8985a4898bae03b2c3c51d43a6b83d67c00caff82ba9ab2712%40%3Cdev.hbase.apache.org%3E > Let CECPs also use our shaded protobuf > -- > > Key: HBASE-23797 > URL: https://issues.apache.org/jira/browse/HBASE-23797 > Project: HBase > Issue Type: Task > Components: Coprocessors, Protobufs >Reporter: Duo Zhang >Priority: Blocker > Fix For: 3.0.0 > > > See this discussion thread: > https://lists.apache.org/thread.html/abd60a8985a4898bae03b2c3c51d43a6b83d67c00caff82ba9ab2712%40%3Cdev.hbase.apache.org%3E -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23797) Let CECPs also use our shaded protobuf
Duo Zhang created HBASE-23797: - Summary: Let CECPs also use our shaded protobuf Key: HBASE-23797 URL: https://issues.apache.org/jira/browse/HBASE-23797 Project: HBase Issue Type: Task Components: Coprocessors, Protobufs Reporter: Duo Zhang Fix For: 3.0.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
Apache9 commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
URL: https://github.com/apache/hbase/pull/1120#discussion_r375011027

 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSyncTimeRangeTracker.java

 ## @@ -34,7 +36,7 @@ public static final HBaseClassTestRule CLASS_RULE =
     HBaseClassTestRule.forClass(TestSyncTimeRangeTracker.class);

-  private static final int NUM_KEYS = 1000;
+  private static final int NUM_KEYS = 100;

 Review comment:
   So this is the actual fix right?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache9 commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
Apache9 commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
URL: https://github.com/apache/hbase/pull/1120#discussion_r375010910

 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSyncTimeRangeTracker.java

 ## @@ -84,23 +86,23 @@ public void run() {
     assertTrue(trr.getMin() == 0);
   }

-  static class RandomTestData {
-    private long[] keys = new long[NUM_KEYS];
-    private long min = Long.MAX_VALUE;
-    private long max = 0;
+  static class RandomTestData {
+    private final AtomicLongArray keys = new AtomicLongArray(NUM_KEYS);

 Review comment:
   But this array will only be accessed by one thread? Why do we need to use AtomicLongArray?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
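The reviewer's question comes down to the Java memory model: if an array is filled by a single worker thread and only read after that thread has been joined, a plain `long[]` is already safely published, because `Thread.join()` establishes a happens-before edge. The sketch below is a generic illustration of that point, not the HBase test class itself.

```java
// Generic illustration: a plain long[] written by one thread is visible to the
// joining thread without AtomicLongArray; atomics are only needed when several
// threads read/write the same slots concurrently.
import java.util.Random;

public class SafePublicationSketch {
  public static void main(String[] args) throws InterruptedException {
    final long[] keys = new long[1_000];            // plain array, no atomics

    Thread producer = new Thread(() -> {
      Random rand = new Random(42);
      for (int i = 0; i < keys.length; i++) {
        keys[i] = rand.nextLong() & Long.MAX_VALUE; // non-negative values
      }
    });

    producer.start();
    producer.join();   // happens-before: all writes above are visible from here on

    long min = Long.MAX_VALUE;
    long max = 0;
    for (long k : keys) {
      min = Math.min(min, k);
      max = Math.max(max, k);
    }
    System.out.println("min=" + min + " max=" + max);
  }
}
```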
[jira] [Resolved] (HBASE-23789) [Flakey Tests] ERROR [Time-limited test] balancer.HeterogeneousRegionCountCostFunction(199): cannot read rules file located at ' /tmp/hbase-balancer.rules '
[ https://issues.apache.org/jira/browse/HBASE-23789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-23789. --- Fix Version/s: 2.3.0 3.0.0 Assignee: Michael Stack Resolution: Fixed Pushed fix on branch-2 and master. Lets see how it does. > [Flakey Tests] ERROR [Time-limited test] > balancer.HeterogeneousRegionCountCostFunction(199): cannot read rules file > located at ' /tmp/hbase-balancer.rules ' > > > Key: HBASE-23789 > URL: https://issues.apache.org/jira/browse/HBASE-23789 > Project: HBase > Issue Type: Bug > Components: flakies >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > We can't find the balancer rules we just read in the > HeterogeneousRegionCountCostFunction test in high load conditions > {code} > 2020-02-03 20:51:00,774 ERROR [Time-limited test] > balancer.HeterogeneousRegionCountCostFunction(199): cannot read rules file > located at ' /tmp/hbase-balancer.rules ':File /tmp/hbase-balancer.rules does > not exist > 2020-02-03 20:51:00,774 WARN [Time-limited test] > balancer.HeterogeneousRegionCountCostFunction(155): cannot load rules file, > keeping latest rules file which has 1 rules > {code} > Test then goes on to fail with: > {code} > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testOneGroup > Time elapsed: 15.223 s <<< FAILURE! > junit.framework.AssertionFailedError: Host rs0 should be below 0.0% >at > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testWithCluster(TestStochasticLoadBalancerHeterogeneousCost.java:209) >at > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testHeterogeneousWithCluster(TestStochasticLoadBalancerHeterogeneousCost.java:160) >at > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testOneGroup(TestStochasticLoadBalancerHeterogeneousCost.java:102) > {code} > Instead, have tests write rules to local test dir. -- This message was sent by Atlassian Jira (v8.3.4#803005)
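The direction named at the end of the description ("have tests write rules to local test dir") could look roughly like the sketch below. This is a hedged illustration, not the committed patch; it assumes the test utility's `getDataTestDir(String)` method, which hands back a per-test, UUID-suffixed directory under the build's test-data area instead of a shared /tmp path.

```java
// Hedged sketch: write the balancer rules file under the per-test data dir
// rather than /tmp, so parallel or rerun tests cannot clobber each other.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseCommonTestingUtility;

public class BalancerRulesFileSketch {
  public static java.nio.file.Path writeRules(HBaseCommonTestingUtility htu, String rules)
      throws Exception {
    // e.g. <test-data-dir>/hbase-balancer.rules instead of /tmp/hbase-balancer.rules
    Path rulesPath = htu.getDataTestDir("hbase-balancer.rules");
    java.nio.file.Path local = Paths.get(rulesPath.toUri().getPath());
    Files.createDirectories(local.getParent());
    Files.write(local, rules.getBytes(StandardCharsets.UTF_8));
    return local;
  }
}
```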
[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client
[ https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030251#comment-17030251 ] Hudson commented on HBASE-18095: Results for branch HBASE-18095/client-locate-meta-no-zookeeper [build #62 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/62/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/62//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/62//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/62//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} --Failed when running client tests on top of Hadoop 2. [see log for details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/62//artifact/output-integration/hadoop-2.log]. (note that this means we didn't run on Hadoop 3) > Provide an option for clients to find the server hosting META that does not > involve the ZooKeeper client > > > Key: HBASE-18095 > URL: https://issues.apache.org/jira/browse/HBASE-18095 > Project: HBase > Issue Type: New Feature > Components: Client >Reporter: Andrew Kyle Purtell >Assignee: Bharath Vissapragada >Priority: Major > Fix For: 3.0.0, 2.3.0, 1.6.0 > > Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch > > > Clients are required to connect to ZooKeeper to find the location of the > regionserver hosting the meta table region. Site configuration provides the > client a list of ZK quorum peers and the client uses an embedded ZK client to > query meta location. Timeouts and retry behavior of this embedded ZK client > are managed orthogonally to HBase layer settings and in some cases the ZK > cannot manage what in theory the HBase client can, i.e. fail fast upon outage > or network partition. > We should consider new configuration settings that provide a list of > well-known master and backup master locations, and with this information the > client can contact any of the master processes directly. Any master in either > active or passive state will track meta location and respond to requests for > it with its cached last known location. If this location is stale, the client > can ask again with a flag set that requests the master refresh its location > cache and return the up-to-date location. Every client interaction with the > cluster thus uses only HBase RPC as transport, with appropriate settings > applied to the connection. The configuration toggle that enables this > alternative meta location lookup should be false by default. > This removes the requirement that HBase clients embed the ZK client and > contact the ZK service directly at the beginning of the connection lifecycle. > This has several benefits. 
ZK service need not be exposed to clients, and > their potential abuse, yet no benefit ZK provides the HBase server cluster is > compromised. Normalizing HBase client and ZK client timeout settings and > retry behavior - in some cases, impossible, i.e. for fail-fast - is no longer > necessary. > And, from [~ghelmling]: There is an additional complication here for > token-based authentication. When a delegation token is used for SASL > authentication, the client uses the cluster ID obtained from Zookeeper to > select the token identifier to use. So there would also need to be some > Zookeeper-less, unauthenticated way to obtain the cluster ID as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (HBASE-23304) Implement RPCs needed for master based registry
[ https://issues.apache.org/jira/browse/HBASE-23304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reopened HBASE-23304: -- > Implement RPCs needed for master based registry > --- > > Key: HBASE-23304 > URL: https://issues.apache.org/jira/browse/HBASE-23304 > Project: HBase > Issue Type: Sub-task > Components: master >Affects Versions: 3.0.0 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > We need to implement RPCs on masters needed by client to fetch information > like clusterID, active master server name, meta locations etc. These RPCs are > used by clients during connection init. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22470) Corrupt Surefire test reports
[ https://issues.apache.org/jira/browse/HBASE-22470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030244#comment-17030244 ] Nick Dimiduk commented on HBASE-22470: -- Interesting. these "failed-to-read" errors apparently don't fail the build. https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2449/ > Corrupt Surefire test reports > - > > Key: HBASE-22470 > URL: https://issues.apache.org/jira/browse/HBASE-22470 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.2.0, 2.1.5 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Attachments: > TEST-org.apache.hadoop.hbase.replication.TestMasterReplication.xml, > TEST-org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS.xml > > > Jenkins is not able to read surefire test reports occasionally because the > generated XML file is corrupted. In this case Jenkins shows the following > error message: > TEST-org.apache.hadoop.hbase.replication.TestMasterReplication.xml.[failed-to-read] > https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2.1/1176/testReport/junit/TEST-org.apache.hadoop.hbase.replication.TestMasterReplication/xml/_failed_to_read_/ > {noformat} > Failed to read test report file > /home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2.1/output-jdk8-hadoop3/archiver/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestMasterReplication.xml > org.dom4j.DocumentException: Error on line 86 of document : XML document > structures must start and end within the same entity. Nested exception: XML > document structures must start and end within the same entity.{noformat} > The specific XML file is not complete, however, the output file for the test > contains stdout and stderr output. > {noformat} > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="95.334"/> > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="26.5"/> > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="27.244"/> > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="46.921"/> > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="43.147"/> > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="11.119"/> > classname="org.apache.hadoop.hbase.replication.TestMasterReplication" > time="44.022"> > type="java.lang.AssertionError">java.lang.AssertionError: Waited too much > time for bulkloaded data replication. Current count=200, expected count=600 > at > org.apache.hadoop.hbase.replication.TestMasterReplication.wait(TestMasterReplication.java:641) > at > org.apache.hadoop.hbase.replication.TestMasterReplication.loadAndValidateHFileReplication(TestMasterReplication.java:631) > at > org.apache.hadoop.hbase.replication.TestMasterReplication.testHFileMultiSlaveReplication(TestMasterReplication.java:371) > >
[GitHub] [hbase] ndimiduk merged pull request #1119: Revert "HBASE-23304: RPCs needed for client meta information lookup (…
ndimiduk merged pull request #1119: Revert "HBASE-23304: RPCs needed for client meta information lookup (… URL: https://github.com/apache/hbase/pull/1119 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache-HBase commented on issue #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog…
Apache-HBase commented on issue #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog… URL: https://github.com/apache/hbase/pull/1123#issuecomment-582182509 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 11s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 51s | branch-2 passed | | +1 :green_heart: | compile | 1m 20s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 42s | branch-2 passed | | +1 :green_heart: | shadedjars | 4m 45s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 56s | branch-2 passed | | +0 :ok: | spotbugs | 3m 45s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 32s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 24s | the patch passed | | +1 :green_heart: | compile | 1m 21s | the patch passed | | +1 :green_heart: | javac | 1m 21s | the patch passed | | -1 :x: | checkstyle | 1m 16s | hbase-server: The patch generated 2 new + 217 unchanged - 0 fixed = 219 total (was 217) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 4m 40s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 17m 12s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 1m 4s | the patch passed | | +1 :green_heart: | findbugs | 5m 21s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 3m 8s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 155m 27s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | The patch does not generate ASF License warnings. | | | | 224m 35s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1123/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1123 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 64bac877ac2d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1123/out/precommit/personality/provided.sh | | git revision | branch-2 / 36824bb504 | | Default Java | 1.8.0_181 | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1123/1/artifact/out/diff-checkstyle-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1123/1/testReport/ | | Max. process+thread count | 4805 (vs. ulimit of 1) | | modules | C: hbase-common hbase-server U: . 
| | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1123/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] ndimiduk commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
ndimiduk commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122#issuecomment-582182662 > it has to be pushed to the branches to work for precommit, IIRC. > e.g. https://github.com/apache/hbase/blob/branch-1/dev-support/Jenkinsfile_GitHub#L163 Nightly and precommit disagree :( https://github.com/apache/hbase/blob/branch-1/dev-support/Jenkinsfile#L42 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23792: - Fix Version/s: 2.2.4 > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.4 > > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23792: - Fix Version/s: 2.1.9 > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.9, 2.2.4 > > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk resolved HBASE-23792. -- Resolution: Fixed Backported to all the 2.x branches. > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.9, 2.2.4 > > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23792: - Fix Version/s: 2.3.0 > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reassigned HBASE-23792: Assignee: Nick Dimiduk > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk merged pull request #1125: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
ndimiduk merged pull request #1125: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1125 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] ndimiduk commented on issue #1125: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
ndimiduk commented on issue #1125: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1125#issuecomment-582177206 Backport of 6ba1df3b3932ce5825cb43511a7483e64f9471f9 / #1124 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] ndimiduk opened a new pull request #1125: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
ndimiduk opened a new pull request #1125: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1125 1. Survive flakey rerunning by converting the static BeforeClass stuff into instance-level Before. 2. Break the test method into two, one for running over each of the snapshot manifest versions. Signed-off-by: stack This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23792: - Fix Version/s: 3.0.0 > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > Fix For: 3.0.0 > > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk merged pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
ndimiduk merged pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1124 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] ndimiduk commented on a change in pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
ndimiduk commented on a change in pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
URL: https://github.com/apache/hbase/pull/1124#discussion_r374993219

 ## File path: hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java

 ## @@ -67,29 +64,29 @@ public static void setUpBaseConf(Configuration conf) {
     conf.set(HConstants.HBASE_DIR, testDir.toString());
   }

-  @BeforeClass
-  public static void setUpBeforeClass() throws Exception {
+  @Before
+  public void setUpBefore() throws Exception {
     // Make sure testDir is on LocalFileSystem
-    testDir = TEST_UTIL.getDataTestDir().makeQualified(URI.create("file:///"), new Path("/"));
-    fs = testDir.getFileSystem(TEST_UTIL.getConfiguration());
+    testDir = testUtil.getDataTestDir().makeQualified(URI.create("file:///"), new Path("/"));

 Review comment:
   `testUtil.getDataTestDir()` will generate a path with a random UUID, so by getting a fresh instance of `testUtil`, I get an isolated path.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
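The isolation pattern described in that comment, distilled into a self-contained JUnit 4 sketch: a fresh testing utility per test instance yields a fresh UUID-suffixed data dir, and qualifying it against the local filesystem avoids the "Wrong FS ... expected: hdfs://" failure from the JIRA. Class and field names below are illustrative (assuming HBaseCommonTestingUtility), not the actual TestExportSnapshotNoCluster.

```java
// Sketch of per-test data-dir isolation; not the real TestExportSnapshotNoCluster.
import java.net.URI;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseCommonTestingUtility;
import org.junit.Before;
import org.junit.Test;

public class IsolatedTestDirExample {
  // Instance field, not static: JUnit 4 creates a new test-class instance per
  // test method, so each method gets its own UUID-suffixed data directory.
  private final HBaseCommonTestingUtility testUtil = new HBaseCommonTestingUtility();

  private Path testDir;
  private FileSystem fs;

  @Before
  public void setUpBefore() throws Exception {
    // Pin the test dir to the local filesystem so later code never resolves it
    // against a (mini-)HDFS and fails with "Wrong FS".
    testDir = testUtil.getDataTestDir().makeQualified(URI.create("file:///"), new Path("/"));
    fs = testDir.getFileSystem(testUtil.getConfiguration());
  }

  @Test
  public void eachTestSeesItsOwnDirectory() throws Exception {
    fs.mkdirs(testDir);
    // ... exercise the code under test against testDir ...
  }
}
```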
[jira] [Commented] (HBASE-23786) [Flakey Test] TestMasterNotCarryTable.testMasterMemStoreLAB
[ https://issues.apache.org/jira/browse/HBASE-23786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030179#comment-17030179 ] Hudson commented on HBASE-23786: Results for branch branch-2 [build #2449 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Flakey Test] TestMasterNotCarryTable.testMasterMemStoreLAB > > > Key: HBASE-23786 > URL: https://issues.apache.org/jira/browse/HBASE-23786 > Project: HBase > Issue Type: Bug > Components: flakies >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: > 0001-HBASE-23786-Flakey-Test-TestMasterNotCarryTable.test.patch > > > Interesting one. Fails only if Master gets chance to become active -- which > doesn't happen when all is easy-going. If struggling under load, it can > become active and then test asserting NO ChunkCreator instance in Master > fails because we want ChunkCreator now since ProcedureRegionStore was added: > i.e. "// always initialize the MemStoreLAB as we use a region to store > procedure now." > Here is error I've seen > {code} > [ERROR] Failures: > [ERROR] TestMasterNotCarryTable.testMasterMemStoreLAB:94 expected null, > but was: > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23782) We still reference the hard coded meta descriptor in some places when listing table descriptors
[ https://issues.apache.org/jira/browse/HBASE-23782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030180#comment-17030180 ] Hudson commented on HBASE-23782: Results for branch branch-2 [build #2449 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2449//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > We still reference the hard coded meta descriptor in some places when listing > table descriptors > --- > > Key: HBASE-23782 > URL: https://issues.apache.org/jira/browse/HBASE-23782 > Project: HBase > Issue Type: Bug > Components: meta >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0, 2.3.0 > > Attachments: HBASE-23782-addendum.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack commented on issue #1108: HBASE-23779 Up the default fork count; make count relative to CPU count
saintstack commented on issue #1108: HBASE-23779 Up the default fork count; make count relative to CPU count URL: https://github.com/apache/hbase/pull/1108#issuecomment-582153609 Rerunning build after HBASE-23783 landed; should address the TestInfoServerACL failure above. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23783) Address tests writing and reading SSL/Security files in a common location.
[ https://issues.apache.org/jira/browse/HBASE-23783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030175#comment-17030175 ] Michael Stack commented on HBASE-23783: --- This issue should fix the blocker on HBASE-23779. Retrying. > Address tests writing and reading SSL/Security files in a common location. > -- > > Key: HBASE-23783 > URL: https://issues.apache.org/jira/browse/HBASE-23783 > Project: HBase > Issue Type: Test >Reporter: Mark Robert Miller >Assignee: Mark Robert Miller >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > > This is causing me issues with parallel test runs because multiple tests can > write and read the same files in the test-classes directory. Some tests write > files in test-classes instead of their test data directory so that they can > put the files on the classpath. -- This message was sent by Atlassian Jira (v8.3.4#803005)
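To make the direction of the fix concrete, a minimal sketch follows (not the actual patch): generate security files under the per-test data directory and hand consumers an absolute path, instead of writing into the shared target/test-classes so they resolve from the classpath. The utility class is the one already used by HBase tests; the config key shown is illustrative only.

```java
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseCommonTestingUtility;

public class SslFilesPerTestDirSketch {
  public static void main(String[] args) throws Exception {
    HBaseCommonTestingUtility util = new HBaseCommonTestingUtility();
    Configuration conf = util.getConfiguration();

    // A directory unique to this test run, rather than the module-wide
    // target/test-classes directory that every test in the module shares.
    Path sslDir = util.getDataTestDir("ssl-keystores");
    File keyStore = new File(sslDir.toString(), "keystore.jks");
    // ... generate the keystore into keyStore here ...

    // Consumers get an absolute path instead of a classpath resource name.
    conf.set("ssl.server.keystore.location", keyStore.getAbsolutePath()); // illustrative key
  }
}
```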
[jira] [Resolved] (HBASE-23783) Address tests writing and reading SSL/Security files in a common location.
[ https://issues.apache.org/jira/browse/HBASE-23783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-23783. --- Fix Version/s: 2.3.0 3.0.0 Hadoop Flags: Reviewed Assignee: Mark Robert Miller Resolution: Fixed Merged to master and cherry-picked to branch-2. Thanks for nice patch [~markrmiller]. I added you as a contributor sir. > Address tests writing and reading SSL/Security files in a common location. > -- > > Key: HBASE-23783 > URL: https://issues.apache.org/jira/browse/HBASE-23783 > Project: HBase > Issue Type: Test >Reporter: Mark Robert Miller >Assignee: Mark Robert Miller >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > > This is causing me issues with parallel test runs because multiple tests can > write and read the same files in the test-classes directory. Some tests write > files in test-classes instead of their test data directory so that they can > put the files on the classpath. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack merged pull request #1116: HBASE-23783: Address tests writing and reading SSL/Security files in …
saintstack merged pull request #1116: HBASE-23783: Address tests writing and reading SSL/Security files in … URL: https://github.com/apache/hbase/pull/1116 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache-HBase commented on issue #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
Apache-HBase commented on issue #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1124#issuecomment-582142668 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 6m 24s | master passed | | +1 :green_heart: | compile | 0m 32s | master passed | | +1 :green_heart: | checkstyle | 0m 22s | master passed | | +1 :green_heart: | shadedjars | 5m 26s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | master passed | | +0 :ok: | spotbugs | 0m 57s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 55s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 37s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed | | +1 :green_heart: | javac | 0m 31s | the patch passed | | +1 :green_heart: | checkstyle | 0m 19s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 5m 16s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 18m 14s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | | +1 :green_heart: | findbugs | 1m 9s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 21m 59s | hbase-mapreduce in the patch passed. | | +1 :green_heart: | asflicense | 0m 20s | The patch does not generate ASF License warnings. | | | | 76m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1124/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1124 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux eca441a96517 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1124/out/precommit/personality/provided.sh | | git revision | master / 1cacf27d5c | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1124/1/testReport/ | | Max. process+thread count | 5286 (vs. ulimit of 1) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1124/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] saintstack commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation
saintstack commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation URL: https://github.com/apache/hbase/pull/1121#discussion_r374946559 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMutationGetCellBuilder.java ## @@ -0,0 +1,110 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; + +import static org.junit.Assert.assertTrue; + +import java.util.Arrays; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellBuilder; +import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.HBaseClassTestRule; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.ClassRule; +import org.junit.Rule; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.junit.rules.TestName; + +@Category({MediumTests.class, ClientTests.class}) +public class TestMutationGetCellBuilder { + + @ClassRule + public static final HBaseClassTestRule CLASS_RULE = + HBaseClassTestRule.forClass(TestMutationGetCellBuilder.class); + + private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + + @Rule + public TestName name = new TestName(); + + @BeforeClass + public static void setUpBeforeClass() throws Exception { +TEST_UTIL.startMiniCluster(); + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { +TEST_UTIL.shutdownMiniCluster(); + } + + @Test + public void testMutationGetCellBuilder() throws Exception { +final TableName tableName = TableName.valueOf(name.getMethodName()); +final byte[] rowKey = Bytes.toBytes("12345678"); +final byte[] uselessRowKey = Bytes.toBytes("123"); +final byte[] family = Bytes.toBytes("cf"); +final byte[] qualifier = Bytes.toBytes("foo"); +final long now = System.currentTimeMillis(); +try (Table table = TEST_UTIL.createTable(tableName, family)) { + TEST_UTIL.waitTableAvailable(tableName.getName(), 5000); + // put one row + Put put = new Put(rowKey); + CellBuilder cellBuilder = put.getCellBuilder().setQualifier(qualifier) + .setFamily(family).setValue(Bytes.toBytes("bar")).setTimestamp(now); + //setRow is useless + cellBuilder.setRow(uselessRowKey); Review comment: Should it throw an exception? Maybe it can't? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] saintstack commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation
saintstack commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation URL: https://github.com/apache/hbase/pull/1121#discussion_r374947794 ## File path: src/main/asciidoc/_chapters/datamodel.adoc ## @@ -471,6 +471,26 @@ Caution: the version timestamp is used internally by HBase for things like time- It's usually best to avoid setting this timestamp yourself. Prefer using a separate timestamp attribute of the row, or have the timestamp as a part of the row key, or both. += Cell Version Example + +The following Put uses a method getCellBuilder() to get a CellBuilder instance +that already has relevant Type and Row set. + +[source,java] + + +public static final byte[] CF = "cf".getBytes(); +public static final byte[] ATTR = "attr".getBytes(); +... + +Put put = new Put(Bytes.toBytes(row)); +put.add(put.getCellBuilder().setQualifier(ATTR) + .setFamily(CF) + .setValue(Bytes.toBytes(data)) + .build()); Review comment: Could write this as put.addColumn()? It'd be easier? What you see advantage of this being able to do it by Cell? Will it confuse the user? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
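For reference, the addColumn() form the reviewer points to looks like the sketch below; the row key and value are placeholders, not values from the patch.

```java
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class AddColumnSketch {
  public static void main(String[] args) {
    byte[] cf = Bytes.toBytes("cf");
    byte[] attr = Bytes.toBytes("attr");

    // Same effect as building a Cell and calling put.add(cell), in a single call.
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(cf, attr, Bytes.toBytes("some data"));
  }
}
```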
[GitHub] [hbase] saintstack commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation
saintstack commented on a change in pull request #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation URL: https://github.com/apache/hbase/pull/1121#discussion_r374943510 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java ## @@ -784,6 +787,104 @@ Mutation add(Cell cell) throws IOException { return this; } + /** + * get a CellBuilder instance that already has relevant Type and Row set. + * @param cellBuilderType e.g CellBuilderType.SHALLOW_COPY + * @return CellBuilder which already has relevant Type and Row set. + */ + public abstract CellBuilder getCellBuilder(CellBuilderType cellBuilderType); + + /** + * get a CellBuilder instance that already has relevant Type and Row set. + * the default CellBuilderType is CellBuilderType.SHALLOW_COPY + * @return CellBuilder which already has relevant Type and Row set. + */ + public CellBuilder getCellBuilder() { +return getCellBuilder(CellBuilderType.SHALLOW_COPY); + } + + /** + * get a CellBuilder instance that already has relevant Type and Row set. + * @param cellBuilderType e.g CellBuilderType.SHALLOW_COPY + * @param cellType e.g Cell.Type.Put + * @return CellBuilder which already has relevant Type and Row set. + */ + protected CellBuilder getCellBuilder(CellBuilderType cellBuilderType, Cell.Type cellType) { Review comment: So you shutdown setting row, type, etc., with the below? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services

[GitHub] [hbase] saintstack commented on a change in pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
saintstack commented on a change in pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1124#discussion_r374926733 ## File path: hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java ## @@ -67,29 +64,29 @@ public static void setUpBaseConf(Configuration conf) { conf.set(HConstants.HBASE_DIR, testDir.toString()); } - @BeforeClass - public static void setUpBeforeClass() throws Exception { + @Before + public void setUpBefore() throws Exception { // Make sure testDir is on LocalFileSystem -testDir = TEST_UTIL.getDataTestDir().makeQualified(URI.create("file:///"), new Path("/")); -fs = testDir.getFileSystem(TEST_UTIL.getConfiguration()); +testDir = testUtil.getDataTestDir().makeQualified(URI.create("file:///"), new Path("/")); Review comment: Isolate more by getting method name into path? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
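A sketch of the "method name into path" suggestion, assuming the same utility and fields as the diff above plus JUnit 4's TestName rule (class and field names here are illustrative):

```java
import java.net.URI;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseCommonTestingUtility;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;

public class PerMethodTestDirSketch {
  private final HBaseCommonTestingUtility testUtil = new HBaseCommonTestingUtility();
  private Path testDir;
  private FileSystem fs;

  @Rule
  public TestName name = new TestName();

  @Before
  public void setUpBefore() throws Exception {
    // Folding the method name into the data dir keeps each test method (and
    // each surefire rerun) in its own local-filesystem working directory.
    testDir = testUtil.getDataTestDir(name.getMethodName())
        .makeQualified(URI.create("file:///"), new Path("/"));
    fs = testDir.getFileSystem(testUtil.getConfiguration());
  }
}
```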
[jira] [Commented] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030131#comment-17030131 ] Nick Dimiduk commented on HBASE-23792: -- [~liuml07], [~AK2019] you folks mind taking a look at https://github.com/apache/hbase/pull/1124 ? > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030128#comment-17030128 ] Nick Dimiduk commented on HBASE-23792: -- I see that [~liuml07] spent some time tracking a similar issue in the same test on HBASE-22607. Linking. > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk opened a new pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
ndimiduk opened a new pull request #1124: HBASE-23792 [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState URL: https://github.com/apache/hbase/pull/1124 1. Survive flakey rerunning by converting the static BeforeClass stuff into instance-level Before. 2. Break the test method into two, one for running over each of the snapshot manifest versions. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030118#comment-17030118 ] Nick Dimiduk commented on HBASE-23792: -- I can't repro this locally or find a source for the filesystem implementation getting flipped to a distributed fs. However, the only place in Hadoop code where I see this "Wrong FS" message thrown as an {{IllegalArgumentException}} is in {{FileSystem#checkPath}}. Looking closer at the xml report, I see that the test failed once with the above. Surefire tried to re-run it, but it failed the rerun with {noformat} java.io.IOException: Target file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1/tableWithRefsV1 is a directory at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:105) at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) {noformat} which implies to me that when surefire reruns a test method, it does not run the BeforeClass business. I also notice that the test method runs the same code twice, but both times it's using {{createSnapshotV2}}... I think one of the invocations is supposed to be calling {{createSnapshotV1}}. So. # Survive flakey rerunning by converting the static {{BeforeClass}} stuff into instance-level {{Before}}. # Break the test method into two, one for running over each of the snapshot manifest versions. > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
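A sketch of what point 2 could look like in the test; the helper names come from the comment above, but their exact shapes are assumptions rather than the code merged in PR #1124.

```java
// One @Test per snapshot manifest version, so V1 is actually exercised
// instead of V2 being run twice. createSnapshotV1()/createSnapshotV2() and
// the shared runExportFileSystemState(...) body are assumed shapes only.
@Test
public void testSnapshotV1WithRefsExportFileSystemState() throws Exception {
  runExportFileSystemState(createSnapshotV1());
}

@Test
public void testSnapshotV2WithRefsExportFileSystemState() throws Exception {
  runExportFileSystemState(createSnapshotV2());
}
```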
[jira] [Updated] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23792: - Attachment: TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > Attachments: > TEST-org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.xml > > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23792) [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
[ https://issues.apache.org/jira/browse/HBASE-23792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23792: - Summary: [Flakey Test] TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState (was: TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState) > [Flakey Test] > TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState > --- > > Key: HBASE-23792 > URL: https://issues.apache.org/jira/browse/HBASE-23792 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Major > > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails with > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, > expected: hdfs://localhost:44609 > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23789) [Flakey Tests] ERROR [Time-limited test] balancer.HeterogeneousRegionCountCostFunction(199): cannot read rules file located at ' /tmp/hbase-balancer.rules '
[ https://issues.apache.org/jira/browse/HBASE-23789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030110#comment-17030110 ] Michael Stack commented on HBASE-23789: --- Redid the persistence of the balancer plan so it used the test data dir rather than tmp to avoid clashes in test runs. > [Flakey Tests] ERROR [Time-limited test] > balancer.HeterogeneousRegionCountCostFunction(199): cannot read rules file > located at ' /tmp/hbase-balancer.rules ' > > > Key: HBASE-23789 > URL: https://issues.apache.org/jira/browse/HBASE-23789 > Project: HBase > Issue Type: Bug > Components: flakies >Reporter: Michael Stack >Priority: Major > > We can't find the balancer rules we just read in the > HeterogeneousRegionCountCostFunction test in high load conditions > {code} > 2020-02-03 20:51:00,774 ERROR [Time-limited test] > balancer.HeterogeneousRegionCountCostFunction(199): cannot read rules file > located at ' /tmp/hbase-balancer.rules ':File /tmp/hbase-balancer.rules does > not exist > 2020-02-03 20:51:00,774 WARN [Time-limited test] > balancer.HeterogeneousRegionCountCostFunction(155): cannot load rules file, > keeping latest rules file which has 1 rules > {code} > Test then goes on to fail with: > {code} > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testOneGroup > Time elapsed: 15.223 s <<< FAILURE! > junit.framework.AssertionFailedError: Host rs0 should be below 0.0% >at > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testWithCluster(TestStochasticLoadBalancerHeterogeneousCost.java:209) >at > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testHeterogeneousWithCluster(TestStochasticLoadBalancerHeterogeneousCost.java:160) >at > org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerHeterogeneousCost.testOneGroup(TestStochasticLoadBalancerHeterogeneousCost.java:102) > {code} > Instead, have tests write rules to local test dir. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack opened a new pull request #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog…
saintstack opened a new pull request #1123: HBASE-23789 [Flakey Tests] ERROR [Time-limited test] balancer.Heterog… URL: https://github.com/apache/hbase/pull/1123 …eneousRegionCountCostFunction(199): cannot read rules file located at ' /tmp/hbase-balancer.rules ' Had to redo storage for these few tests so they used the test data dirs rather than /tmp. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
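Sketched direction of that change (the merged code is in PR #1123; the rule contents and the config key below are assumptions based on the cost function's name, not quotes from the patch):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseCommonTestingUtility;

public class BalancerRulesInTestDirSketch {
  public static void main(String[] args) throws Exception {
    HBaseCommonTestingUtility util = new HBaseCommonTestingUtility();
    Configuration conf = util.getConfiguration();

    // The rules file lives under the per-test data dir, not the shared /tmp,
    // so concurrent test runs on the same host cannot clobber each other.
    Path rulesFile = new Path(util.getDataTestDir("balancer"), "hbase-balancer.rules");
    FileSystem fs = rulesFile.getFileSystem(conf);
    try (FSDataOutputStream out = fs.create(rulesFile)) {
      out.writeBytes("rs[0-9] 200\n"); // example rule shape only
    }

    // Illustrative key; check HeterogeneousRegionCountCostFunction for the real one.
    conf.set("hbase.master.balancer.heterogeneousRegionCountRulesFile", rulesFile.toString());
  }
}
```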
[GitHub] [hbase] busbey commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
busbey commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122#issuecomment-582046363 e.g. https://github.com/apache/hbase/blob/branch-1/dev-support/Jenkinsfile_GitHub#L163 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] busbey commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
busbey commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122#issuecomment-582046222 it has to be pushed to the branches to work for precommit, IIRC. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23794) Consider setting -XX:MaxDirectMemorySize in the root Maven pom.xml file.
[ https://issues.apache.org/jira/browse/HBASE-23794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030018#comment-17030018 ] Mark Robert Miller commented on HBASE-23794: I'm still working out what a good suggestion for a value might be. Very few of the tests need even more than 1g of the 2g of heap given, so I'm looking into some numbers between the two things. Largely, it is just nice to be explicit so that all devs and CI envs get the same value. Older hotspot might default to lower explicit values depending on arch/client/server, more recent hotspot defaults to Xmx, hotspot can change again, other JVM's could do whatever. So a lot of the improvement I imagine here is just consistency of the build and knowing the value has been set high enough for the tests. I've run into fails due to this while playing around with giving tests less resources - so I'd like to set it high enough to avoid any fails, but also remove this confusion around messing with Xmx and running into off heap allocation failures and that type of thing. > Consider setting -XX:MaxDirectMemorySize in the root Maven pom.xml file. > > > Key: HBASE-23794 > URL: https://issues.apache.org/jira/browse/HBASE-23794 > Project: HBase > Issue Type: Test >Reporter: Mark Robert Miller >Priority: Minor > > -XX:MaxDirectMemorySize is an artificial governor on how much off heap memory > can be allocated. > It would be nice to specify explicitly because: > # The default can vary by platform / jvm impl - some devs may see random > fails > # It's just a limiter, it won't pre allocate or anything > # A test env should normally ensure a healthy limit as would be done in > production -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23793) Increase maven heap allocation to 4G in Yetus personality
[ https://issues.apache.org/jira/browse/HBASE-23793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23793: - Summary: Increase maven heap allocation to 4G in Yetus personality (was: Apache Jenkins fails to aggregate test results to to OOM) > Increase maven heap allocation to 4G in Yetus personality > - > > Key: HBASE-23793 > URL: https://issues.apache.org/jira/browse/HBASE-23793 > Project: HBase > Issue Type: Test > Components: build, test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0 > > > I saw this over on > [https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console]. > Looks like we need to bump the memory allocation for maven. I wonder if this > is the underlying cause of HBASE-22470. > > {noformat} > 6:38:47 > > 16:38:47 > > 16:38:47Finished build. > 16:38:47 > > 16:38:47 > > 16:38:47 > 16:38:47 > Post stage > [Pipeline] stash > 16:38:48 Warning: overwriting stash 'hadoop2-result' > 16:38:48 Stashed 1 file(s) > [Pipeline] junit > 16:38:48 Recording test results > 16:38:54 Remote call on H2 failed > Error when executing always post condition: > java.io.IOException: Remote call on H2 failed > at hudson.remoting.Channel.call(Channel.java:963) > at hudson.FilePath.act(FilePath.java:1072) > at hudson.FilePath.act(FilePath.java:1061) > at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) > at > hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) > at > hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) > at > org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.OutOfMemoryError: Java heap space > at > com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) > at > com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) > at > com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) > at > com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) > at > 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) > at > com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) > at org.dom4j.io.SAXReader.read(SAXReader.java:465) > at org.dom4j.io.SAXReader.read(SAXReader.java:343) > at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) > at hudson.tasks.junit.TestResult.parse(TestResult.java:348) > at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) > at hudson.tasks.junit.TestResult.parse(TestResult.java:206) > at hudson.tasks.junit.TestResult.parse(TestResult.java:178) > at hudson.tasks.junit.TestResult.
[jira] [Resolved] (HBASE-23793) Apache Jenkins fails to aggregate test results due to OOM
[ https://issues.apache.org/jira/browse/HBASE-23793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk resolved HBASE-23793. -- Resolution: Fixed > Apache Jenkins fails to aggregate test results to to OOM > > > Key: HBASE-23793 > URL: https://issues.apache.org/jira/browse/HBASE-23793 > Project: HBase > Issue Type: Test > Components: build, test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0 > > > I saw this over on > [https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console]. > Looks like we need to bump the memory allocation for maven. I wonder if this > is the underlying cause of HBASE-22470. > > {noformat} > 6:38:47 > > 16:38:47 > > 16:38:47Finished build. > 16:38:47 > > 16:38:47 > > 16:38:47 > 16:38:47 > Post stage > [Pipeline] stash > 16:38:48 Warning: overwriting stash 'hadoop2-result' > 16:38:48 Stashed 1 file(s) > [Pipeline] junit > 16:38:48 Recording test results > 16:38:54 Remote call on H2 failed > Error when executing always post condition: > java.io.IOException: Remote call on H2 failed > at hudson.remoting.Channel.call(Channel.java:963) > at hudson.FilePath.act(FilePath.java:1072) > at hudson.FilePath.act(FilePath.java:1061) > at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) > at > hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) > at > hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) > at > org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.OutOfMemoryError: Java heap space > at > com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) > at > com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) > at > com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) > at > com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) > at > com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) > at > 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) > at org.dom4j.io.SAXReader.read(SAXReader.java:465) > at org.dom4j.io.SAXReader.read(SAXReader.java:343) > at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) > at hudson.tasks.junit.TestResult.parse(TestResult.java:348) > at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) > at hudson.tasks.junit.TestResult.parse(TestResult.java:206) > at hudson.tasks.junit.TestResult.parse(TestResult.java:178) > at hudson.tasks.junit.TestResult.(TestResult.java:143) > at > hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146)
[jira] [Created] (HBASE-23796) Consider using 127.0.0.1 instead of localhost and binding to 127.0.0.1 as well.
Mark Robert Miller created HBASE-23796: -- Summary: Consider using 127.0.0.1 instead of localhost and binding to 127.0.0.1 as well. Key: HBASE-23796 URL: https://issues.apache.org/jira/browse/HBASE-23796 Project: HBase Issue Type: Test Reporter: Mark Robert Miller This is perhaps controversial, but there are a variety of problems with counting on DNS hostname resolution, especially for localhost. # It can often be slow, slow under concurrency, or slow under specific conditions. # It can often not work at all - when on a VPN, with weird DNS hijacking hi-jinks, when you have a real hostname for your machines, a custom /etc/hosts file, or when the OS runs its own local/funny DNS services. # This makes coming to HBase for new devs a hit-or-miss experience, and if you miss, dealing with and diagnosing the issues is a large endeavor and not straightforward or transparent. # 99% of the difference doesn't matter in most cases - except that 127.0.0.1 works and is fast pretty much universally. -- This message was sent by Atlassian Jira (v8.3.4#803005)
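A self-contained illustration of the difference being described (plain JDK, nothing HBase-specific): binding to the loopback literal skips the resolver entirely, while "localhost" goes through whatever name resolution the host is configured with.

```java
import java.net.InetAddress;
import java.net.ServerSocket;

public class LoopbackBindExample {
  public static void main(String[] args) throws Exception {
    // A literal IPv4 address is parsed directly; no DNS or /etc/hosts lookup.
    InetAddress loopback = InetAddress.getByName("127.0.0.1");
    try (ServerSocket server = new ServerSocket(0, 50, loopback)) {
      System.out.println("bound to " + server.getLocalSocketAddress());
    }
    // By contrast, resolving the name can be slow or surprising on VPNs,
    // custom /etc/hosts setups, or hosts running their own DNS services.
    System.out.println("localhost resolves to " + InetAddress.getByName("localhost"));
  }
}
```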
[jira] [Commented] (HBASE-23794) Consider setting -XX:MaxDirectMemorySize in the root Maven pom.xml file.
[ https://issues.apache.org/jira/browse/HBASE-23794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030004#comment-17030004 ] Nick Dimiduk commented on HBASE-23794: -- What's your suggested value here for the rank-and-file tests [~markrmiller]? You see where this might be added to our base pom in the {{hbase-surefire.argLine}} property? It's quite possible we have {{LargeTest}} classes that attempt to verify our off-heap data pathways. You might have a look into how those test classes manage themselves before tweaking anything in this area. > Consider setting -XX:MaxDirectMemorySize in the root Maven pom.xml file. > > > Key: HBASE-23794 > URL: https://issues.apache.org/jira/browse/HBASE-23794 > Project: HBase > Issue Type: Test >Reporter: Mark Robert Miller >Priority: Minor > > -XX:MaxDirectMemorySize is an artificial governor on how much off heap memory > can be allocated. > It would be nice to specify explicitly because: > # The default can vary by platform / jvm impl - some devs may see random > fails > # It's just a limiter, it won't pre allocate or anything > # A test env should normally ensure a healthy limit as would be done in > production -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23793) Apache Jenkins fails to aggregate test results due to OOM
[ https://issues.apache.org/jira/browse/HBASE-23793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-23793: - Fix Version/s: 3.0.0 > Apache Jenkins fails to aggregate test results to to OOM > > > Key: HBASE-23793 > URL: https://issues.apache.org/jira/browse/HBASE-23793 > Project: HBase > Issue Type: Test > Components: build, test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0 > > > I saw this over on > [https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console]. > Looks like we need to bump the memory allocation for maven. I wonder if this > is the underlying cause of HBASE-22470. > > {noformat} > 6:38:47 > > 16:38:47 > > 16:38:47Finished build. > 16:38:47 > > 16:38:47 > > 16:38:47 > 16:38:47 > Post stage > [Pipeline] stash > 16:38:48 Warning: overwriting stash 'hadoop2-result' > 16:38:48 Stashed 1 file(s) > [Pipeline] junit > 16:38:48 Recording test results > 16:38:54 Remote call on H2 failed > Error when executing always post condition: > java.io.IOException: Remote call on H2 failed > at hudson.remoting.Channel.call(Channel.java:963) > at hudson.FilePath.act(FilePath.java:1072) > at hudson.FilePath.act(FilePath.java:1061) > at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) > at > hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) > at > hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) > at > org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.OutOfMemoryError: Java heap space > at > com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) > at > com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) > at > com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) > at > com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) > at > com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) > at > 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) > at org.dom4j.io.SAXReader.read(SAXReader.java:465) > at org.dom4j.io.SAXReader.read(SAXReader.java:343) > at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) > at hudson.tasks.junit.TestResult.parse(TestResult.java:348) > at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) > at hudson.tasks.junit.TestResult.parse(TestResult.java:206) > at hudson.tasks.junit.TestResult.parse(TestResult.java:178) > at hudson.tasks.junit.TestResult.(TestResult.java:143) > at > hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146
[jira] [Created] (HBASE-23795) Enable all tests to be run in parallel on reused JVMs.
Mark Robert Miller created HBASE-23795: -- Summary: Enable all tests to be run in parallel on reused JVMs. Key: HBASE-23795 URL: https://issues.apache.org/jira/browse/HBASE-23795 Project: HBase Issue Type: Wish Reporter: Mark Robert Miller I'd like to be able to run HBase tests in under 30-40 minutes on good parallel hardware. It will require some small changes / fixes for that wish to come true. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk merged pull request #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
ndimiduk merged pull request #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache-HBase commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
Apache-HBase commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122#issuecomment-582016241 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 40s | Maven dependency ordering for branch | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | shellcheck | 0m 3s | There were no new shellcheck issues. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | ||| _ Other Tests _ | | +0 :ok: | asflicense | 0m 0s | ASF License check generated no output? | | | | 2m 23s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1122/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1122 | | Optional Tests | dupname asflicense shellcheck shelldocs | | uname | Linux 96ef265eff4e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1122/out/precommit/personality/provided.sh | | git revision | master / bb14bdad62 | | Max. process+thread count | 52 (vs. ulimit of 1) | | modules | C: U: | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1122/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) shellcheck=0.7.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] ndimiduk commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
ndimiduk commented on issue #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122#issuecomment-582016423 > Try it. Push on branch-2 too as I'm watching it. Just reading through the `Jenkinsfile`s on `master`, `branch-2`, and `branch-1` -- seems they all pull the personality file from `master`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] ndimiduk opened a new pull request #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality
ndimiduk opened a new pull request #1122: HBASE-23793 Increase maven heap allocation to 4G in Yetus personality URL: https://github.com/apache/hbase/pull/1122 I saw this over on https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console. Looks like we need to bump the memory allocation for maven. I wonder if this is the underlying cause of HBASE-22470. ``` 6:38:47 16:38:47 16:38:47Finished build. 16:38:47 16:38:47 16:38:47 16:38:47 Post stage [Pipeline] stash 16:38:48 Warning: overwriting stash 'hadoop2-result' 16:38:48 Stashed 1 file(s) [Pipeline] junit 16:38:48 Recording test results 16:38:54 Remote call on H2 failed Error when executing always post condition: java.io.IOException: Remote call on H2 failed at hudson.remoting.Channel.call(Channel.java:963) at hudson.FilePath.act(FilePath.java:1072) at hudson.FilePath.act(FilePath.java:1061) at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.OutOfMemoryError: Java heap space at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) at org.dom4j.io.SAXReader.read(SAXReader.java:465) at org.dom4j.io.SAXReader.read(SAXReader.java:343) at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) at hudson.tasks.junit.TestResult.parse(TestResult.java:348) at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) at hudson.tasks.junit.TestResult.parse(TestResult.java:206) at hudson.tasks.junit.TestResult.parse(TestResult.java:178) at 
hudson.tasks.junit.TestResult.(TestResult.java:143) at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146) at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118) at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052) at hudson.remoting.UserRequest.perform(UserRequest.java:212) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) ... 4 more [Pipeline] } [Pipeline] // withEnv [Pipeline] } [Pipeline] // node [Pipeline] } [P
[jira] [Assigned] (HBASE-23793) Apache Jenkins fails to aggregate test results due to OOM
[ https://issues.apache.org/jira/browse/HBASE-23793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reassigned HBASE-23793: Assignee: Nick Dimiduk > Apache Jenkins fails to aggregate test results to to OOM > > > Key: HBASE-23793 > URL: https://issues.apache.org/jira/browse/HBASE-23793 > Project: HBase > Issue Type: Test > Components: build, test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > > I saw this over on > [https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console]. > Looks like we need to bump the memory allocation for maven. I wonder if this > is the underlying cause of HBASE-22470. > > {noformat} > 6:38:47 > > 16:38:47 > > 16:38:47Finished build. > 16:38:47 > > 16:38:47 > > 16:38:47 > 16:38:47 > Post stage > [Pipeline] stash > 16:38:48 Warning: overwriting stash 'hadoop2-result' > 16:38:48 Stashed 1 file(s) > [Pipeline] junit > 16:38:48 Recording test results > 16:38:54 Remote call on H2 failed > Error when executing always post condition: > java.io.IOException: Remote call on H2 failed > at hudson.remoting.Channel.call(Channel.java:963) > at hudson.FilePath.act(FilePath.java:1072) > at hudson.FilePath.act(FilePath.java:1061) > at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) > at > hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) > at > hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) > at > org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.OutOfMemoryError: Java heap space > at > com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) > at > com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) > at > com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) > at > com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) > at > com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) > at > 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) > at org.dom4j.io.SAXReader.read(SAXReader.java:465) > at org.dom4j.io.SAXReader.read(SAXReader.java:343) > at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) > at hudson.tasks.junit.TestResult.parse(TestResult.java:348) > at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) > at hudson.tasks.junit.TestResult.parse(TestResult.java:206) > at hudson.tasks.junit.TestResult.parse(TestResult.java:178) > at hudson.tasks.junit.TestResult.(TestResult.java:143) > at > hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146) > at > hudson
[jira] [Work started] (HBASE-23793) Apache Jenkins fails to aggregate test results due to OOM
[ https://issues.apache.org/jira/browse/HBASE-23793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-23793 started by Nick Dimiduk. > Apache Jenkins fails to aggregate test results to to OOM > > > Key: HBASE-23793 > URL: https://issues.apache.org/jira/browse/HBASE-23793 > Project: HBase > Issue Type: Test > Components: build, test >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > > I saw this over on > [https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console]. > Looks like we need to bump the memory allocation for maven. I wonder if this > is the underlying cause of HBASE-22470. > > {noformat} > 6:38:47 > > 16:38:47 > > 16:38:47Finished build. > 16:38:47 > > 16:38:47 > > 16:38:47 > 16:38:47 > Post stage > [Pipeline] stash > 16:38:48 Warning: overwriting stash 'hadoop2-result' > 16:38:48 Stashed 1 file(s) > [Pipeline] junit > 16:38:48 Recording test results > 16:38:54 Remote call on H2 failed > Error when executing always post condition: > java.io.IOException: Remote call on H2 failed > at hudson.remoting.Channel.call(Channel.java:963) > at hudson.FilePath.act(FilePath.java:1072) > at hudson.FilePath.act(FilePath.java:1061) > at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) > at > hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) > at > hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) > at > hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) > at > org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.OutOfMemoryError: Java heap space > at > com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) > at > com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) > at > com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) > at > com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) > at > com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) > at > com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) > at > com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) > at > 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) > at org.dom4j.io.SAXReader.read(SAXReader.java:465) > at org.dom4j.io.SAXReader.read(SAXReader.java:343) > at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) > at hudson.tasks.junit.TestResult.parse(TestResult.java:348) > at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) > at hudson.tasks.junit.TestResult.parse(TestResult.java:206) > at hudson.tasks.junit.TestResult.parse(TestResult.java:178) > at hudson.tasks.junit.TestResult.(TestResult.java:143) > at > hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146) > at > hudson.tasks.junit.
[jira] [Created] (HBASE-23794) Consider setting -XX:MaxDirectMemorySize in the root Maven pom.xml file.
Mark Robert Miller created HBASE-23794: -- Summary: Consider setting -XX:MaxDirectMemorySize in the root Maven pom.xml file. Key: HBASE-23794 URL: https://issues.apache.org/jira/browse/HBASE-23794 Project: HBase Issue Type: Test Reporter: Mark Robert Miller -XX:MaxDirectMemorySize is an artificial governor on how much off-heap memory can be allocated. It would be nice to specify it explicitly because: # The default can vary by platform / JVM impl - some devs may see random fails # It's just a limiter, it won't pre-allocate anything # A test env should normally ensure a healthy limit, as would be done in production -- This message was sent by Atlassian Jira (v8.3.4#803005)
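For readers who have not used the flag: a minimal, self-contained sketch (not from the ticket) showing that -XX:MaxDirectMemorySize only caps off-heap allocation rather than reserving anything up front. The class name and sizes below are made up for illustration; run it with, say, -XX:MaxDirectMemorySize=16m and the OutOfMemoryError fires once the cap is crossed, while without the flag the default limit depends on the platform and JVM.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: run with e.g. -XX:MaxDirectMemorySize=16m
public class DirectMemoryLimitDemo {
  public static void main(String[] args) {
    List<ByteBuffer> buffers = new ArrayList<>();
    try {
      while (true) {
        // Each call reserves 1 MB of off-heap (direct) memory.
        buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
      }
    } catch (OutOfMemoryError e) {
      // Nothing was pre-allocated; the limit only bites once allocations exceed it.
      System.out.println("Direct memory cap hit after " + buffers.size() + " MB: " + e);
    }
  }
}
```

If this were wired into the build, it would presumably land in the surefire/failsafe argLine of the root pom alongside the existing heap settings, but the ticket leaves the exact placement open.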
[jira] [Created] (HBASE-23793) Apache Jenkins fails to aggregate test results due to OOM
Nick Dimiduk created HBASE-23793: Summary: Apache Jenkins fails to aggregate test results to to OOM Key: HBASE-23793 URL: https://issues.apache.org/jira/browse/HBASE-23793 Project: HBase Issue Type: Test Components: build, test Affects Versions: 2.3.0 Reporter: Nick Dimiduk I saw this over on [https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/2447/console]. Looks like we need to bump the memory allocation for maven. I wonder if this is the underlying cause of HBASE-22470. {noformat} 6:38:47 16:38:47 16:38:47Finished build. 16:38:47 16:38:47 16:38:47 16:38:47 Post stage [Pipeline] stash 16:38:48 Warning: overwriting stash 'hadoop2-result' 16:38:48 Stashed 1 file(s) [Pipeline] junit 16:38:48 Recording test results 16:38:54 Remote call on H2 failed Error when executing always post condition: java.io.IOException: Remote call on H2 failed at hudson.remoting.Channel.call(Channel.java:963) at hudson.FilePath.act(FilePath.java:1072) at hudson.FilePath.act(FilePath.java:1061) at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114) at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:137) at hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:167) at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:52) at hudson.tasks.junit.pipeline.JUnitResultsStepExecution.run(JUnitResultsStepExecution.java:25) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.OutOfMemoryError: Java heap space at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208) at com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1515) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3014) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) at org.dom4j.io.SAXReader.read(SAXReader.java:465) at org.dom4j.io.SAXReader.read(SAXReader.java:343) at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:178) at hudson.tasks.junit.TestResult.parse(TestResult.java:348) at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:281) 
at hudson.tasks.junit.TestResult.parse(TestResult.java:206) at hudson.tasks.junit.TestResult.parse(TestResult.java:178) at hudson.tasks.junit.TestResult.(TestResult.java:143) at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:146) at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:118) at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052) at hudson.remoting.UserRequest.perform(UserRequest.java:212) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.In
[jira] [Created] (HBASE-23792) TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState
Nick Dimiduk created HBASE-23792: Summary: TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState Key: HBASE-23792 URL: https://issues.apache.org/jira/browse/HBASE-23792 Project: HBase Issue Type: Test Components: test Affects Versions: 2.3.0 Reporter: Nick Dimiduk {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} fails with {noformat} java.lang.IllegalArgumentException: Wrong FS: file:/home/jenkins/jenkins-slave/workspace/HBase_Nightly_branch-2@2/component/hbase-mapreduce/target/test-data/878a5107-35a3-90ea-50ef-d2a3c32a50dc/.hbase-snapshot/tableWithRefsV1, expected: hdfs://localhost:44609 at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110) at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90) {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
saintstack commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate… URL: https://github.com/apache/hbase/pull/1120#discussion_r374785836 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSyncTimeRangeTracker.java ## @@ -84,23 +86,23 @@ public void run() { assertTrue(trr.getMin() == 0); } - static class RandomTestData { -private long[] keys = new long[NUM_KEYS]; -private long min = Long.MAX_VALUE; -private long max = 0; + static class RandomTestData { +private final AtomicLongArray keys = new AtomicLongArray(NUM_KEYS); Review comment: This is the 'fix'? Using atomic array. Sounds good. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] saintstack commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
saintstack commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate… URL: https://github.com/apache/hbase/pull/1120#discussion_r374785224 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSyncTimeRangeTracker.java ## @@ -84,23 +86,23 @@ public void run() { assertTrue(trr.getMin() == 0); } - static class RandomTestData { -private long[] keys = new long[NUM_KEYS]; -private long min = Long.MAX_VALUE; -private long max = 0; + static class RandomTestData { +private final AtomicLongArray keys = new AtomicLongArray(NUM_KEYS); +private long min = Long.MAX_VALUE; // effectively final +private long max = 0; // effectively final public RandomTestData() { if (ThreadLocalRandom.current().nextInt(NUM_OF_THREADS) % 2 == 0) { for (int i = 0; i < NUM_KEYS; i++) { - keys[i] = i + ThreadLocalRandom.current().nextLong(NUM_OF_THREADS); - if (keys[i] < min) min = keys[i]; - if (keys[i] > max) max = keys[i]; + keys.set(i, i + ThreadLocalRandom.current().nextLong(NUM_OF_THREADS)); + if (keys.get(i) < min) min = keys.get(i); Review comment: See your checkstyle. Needs parens around the 'min = keys[i];' This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] saintstack commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
saintstack commented on a change in pull request #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate… URL: https://github.com/apache/hbase/pull/1120#discussion_r374784868 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSyncTimeRangeTracker.java ## @@ -84,23 +86,23 @@ public void run() { assertTrue(trr.getMin() == 0); } - static class RandomTestData { -private long[] keys = new long[NUM_KEYS]; -private long min = Long.MAX_VALUE; -private long max = 0; + static class RandomTestData { +private final AtomicLongArray keys = new AtomicLongArray(NUM_KEYS); +private long min = Long.MAX_VALUE; // effectively final +private long max = 0; // effectively final Review comment: Don't want to make them final? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
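For context on the pattern being discussed in the review comments above, here is a stripped-down sketch (not the actual patch) of test data backed by a final AtomicLongArray, with min and max written only in the constructor so they are effectively final before any worker thread reads them. Names, sizes, and the key distribution are invented for the example.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch of the reviewed idea; NUM_KEYS and the random bound are placeholders.
class RandomTestDataSketch {
  private static final int NUM_KEYS = 1024;

  private final AtomicLongArray keys = new AtomicLongArray(NUM_KEYS);
  private long min = Long.MAX_VALUE; // written only in the constructor
  private long max = 0;              // written only in the constructor

  RandomTestDataSketch() {
    for (int i = 0; i < NUM_KEYS; i++) {
      keys.set(i, i + ThreadLocalRandom.current().nextLong(100));
      if (keys.get(i) < min) {
        min = keys.get(i); // braces on single-statement ifs keep checkstyle quiet
      }
      if (keys.get(i) > max) {
        max = keys.get(i);
      }
    }
  }

  long key(int i) { return keys.get(i); }
  long getMin() { return min; }
  long getMax() { return max; }
}
```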
[GitHub] [hbase] saintstack commented on issue #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate…
saintstack commented on issue #1120: HBASE-23787: TestSyncTimeRangeTracker fails quite easily and allocate… URL: https://github.com/apache/hbase/pull/1120#issuecomment-581997764 Your branch up-to-date Mr. Miller? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-22834) Remove deprecated methods from HBaseTestingUtility
[ https://issues.apache.org/jira/browse/HBASE-22834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029965#comment-17029965 ] Nick Dimiduk commented on HBASE-22834: -- This goes back to the discussion that I think [~vjasani] started, re: making some of these testing interfaces explicitly consumable by downstreamers. What do you think, [~vjasani], are you up to starting designs on an {{hbase-test-support}} module that can be made part of our public interface? > Remove deprecated methods from HBaseTestingUtility > -- > > Key: HBASE-22834 > URL: https://issues.apache.org/jira/browse/HBASE-22834 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > > {{HBaseTestingUtility}} has some deprecated methods, which should be removed > for 3.0.0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on issue #951: HBASE-23578 [UI] Master UI shows long stack traces when table is broken
Apache-HBase commented on issue #951: HBASE-23578 [UI] Master UI shows long stack traces when table is broken URL: https://github.com/apache/hbase/pull/951#issuecomment-581988000 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 39s | master passed | | +1 :green_heart: | javadoc | 0m 39s | master passed | | -0 :warning: | patch | 6m 32s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 59s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 151m 20s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | The patch does not generate ASF License warnings. | | | | 165m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-951/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/951 | | Optional Tests | dupname asflicense javac javadoc unit | | uname | Linux 7390e8a52c38 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-951/out/precommit/personality/provided.sh | | git revision | master / bb14bdad62 | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-951/3/testReport/ | | Max. process+thread count | 5123 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-951/3/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase-operator-tools] wchevreuil opened a new pull request #53: HBASE-23791 [operator tools] Remove reference to I.A. Private interfa…
wchevreuil opened a new pull request #53: HBASE-23791 [operator tools] Remove reference to I.A. Private interfa… URL: https://github.com/apache/hbase-operator-tools/pull/53 …ce MetaTableAccessor This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (HBASE-23791) [operator tools] Remove reference to I.A. Private interface MetaTableAccessor
Wellington Chevreuil created HBASE-23791: Summary: [operator tools] Remove reference to I.A. Private interface MetaTableAccessor Key: HBASE-23791 URL: https://issues.apache.org/jira/browse/HBASE-23791 Project: HBase Issue Type: Improvement Reporter: Wellington Chevreuil Assignee: Wellington Chevreuil While trying to use the new _extraRegionsInMeta_ command added by HBASE-23371, [~daisuke.kobayashi] noticed it was not working properly on some deployments that did not include another patch merged in HBASE-22758, which changed the *MetaTableAccessor* interface: {noformat} $ hbase hbck -j hbase-operator-tools-1.0.0.1.0.0.0-11/hbase-hbck2/hbase-hbck2.jar extraRegionsInMeta -f default:cluster_test Regions that had no dir on the FileSystem and got removed from Meta: 0 ERROR: There were following errors on at least one table thread: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.MetaTableAccessor.deleteRegionInfos(Lorg/apache/hadoop/hbase/client/Connection;Ljava/util/List;) {noformat} Since *MetaTableAccessor* is IA Private, and HBCK2 is aimed to evolve independently of the hbase project, ideally we should not rely on IA Private interfaces. There's already an existing *HBCKMetaTableAccessor* in hbck2, with some of the original *MetaTableAccessor* methods used by hbck2 re-implemented. This PR removes all dependencies on *MetaTableAccessor* currently existing in hbck2, re-implementing some of the required methods in *HBCKMetaTableAccessor*. Thanks for finding and reporting it, [~daisuke.kobayashi]! -- This message was sent by Atlassian Jira (v8.3.4#803005)
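As a rough illustration of the direction described above (this is not the actual HBCKMetaTableAccessor code, and the helper name is invented), a meta-row delete can be expressed purely in terms of the IA.Public client API, which is the point of dropping the IA.Private MetaTableAccessor dependency:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Table;

// Hypothetical sketch: only IA.Public client classes are used, so the tool does not
// break when IA.Private internals change between HBase releases.
final class MetaCleanupSketch {
  private MetaCleanupSketch() {}

  static void deleteRegions(Connection conn, List<RegionInfo> regions) throws IOException {
    List<Delete> deletes = new ArrayList<>(regions.size());
    for (RegionInfo region : regions) {
      // A region's row key in hbase:meta is its region name.
      deletes.add(new Delete(region.getRegionName()));
    }
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      meta.delete(deletes);
    }
  }
}
```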
[jira] [Commented] (HBASE-23780) Edit of test classifications
[ https://issues.apache.org/jira/browse/HBASE-23780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029892#comment-17029892 ] Hudson commented on HBASE-23780: Results for branch master [build #1619 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1619/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Edit of test classifications > > > Key: HBASE-23780 > URL: https://issues.apache.org/jira/browse/HBASE-23780 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > Our test classifications have drifted. You can see for yourself running each > of the small/medium and large test suites. See test complete times. See how > even some large tests should be small and vice versa. > The more small tests we can run inside the single JVM, the faster we'll get > through the build. Tests that are Medium start their own JVM for each test. > Tests that are Medium but only last a second or two are expensive and should > be aggregated with other single, short tests to amortize the JVM startup. > Anyways, let me edit the test categories and try and clean them up some. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23786) [Flakey Test] TestMasterNotCarryTable.testMasterMemStoreLAB
[ https://issues.apache.org/jira/browse/HBASE-23786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029894#comment-17029894 ] Hudson commented on HBASE-23786: Results for branch master [build #1619 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1619/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [Flakey Test] TestMasterNotCarryTable.testMasterMemStoreLAB > > > Key: HBASE-23786 > URL: https://issues.apache.org/jira/browse/HBASE-23786 > Project: HBase > Issue Type: Bug > Components: flakies >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: > 0001-HBASE-23786-Flakey-Test-TestMasterNotCarryTable.test.patch > > > Interesting one. Fails only if Master gets chance to become active -- which > doesn't happen when all is easy-going. If struggling under load, it can > become active and then test asserting NO ChunkCreator instance in Master > fails because we want ChunkCreator now since ProcedureRegionStore was added: > i.e. "// always initialize the MemStoreLAB as we use a region to store > procedure now." > Here is error I've seen > {code} > [ERROR] Failures: > [ERROR] TestMasterNotCarryTable.testMasterMemStoreLAB:94 expected null, > but was: > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23772) Remove deprecated getTimeStampOfLastShippedOp from MetricsSource
[ https://issues.apache.org/jira/browse/HBASE-23772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029893#comment-17029893 ] Hudson commented on HBASE-23772: Results for branch master [build #1619 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1619/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove deprecated getTimeStampOfLastShippedOp from MetricsSource > > > Key: HBASE-23772 > URL: https://issues.apache.org/jira/browse/HBASE-23772 > Project: HBase > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Fix For: 3.0.0 > > > {{MetricsSource}} defines the deprecated method > {{getTimeStampOfLastShippedOp}}, which should be removed for 3.0.0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23782) We still reference the hard coded meta descriptor in some places when listing table descriptors
[ https://issues.apache.org/jira/browse/HBASE-23782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029895#comment-17029895 ] Hudson commented on HBASE-23782: Results for branch master [build #1619 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1619/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1619//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > We still reference the hard coded meta descriptor in some places when listing > table descriptors > --- > > Key: HBASE-23782 > URL: https://issues.apache.org/jira/browse/HBASE-23782 > Project: HBase > Issue Type: Bug > Components: meta >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0, 2.3.0 > > Attachments: HBASE-23782-addendum.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-20623) Introduce the helper method "getCellBuilder()" to Mutation
[ https://issues.apache.org/jira/browse/HBASE-20623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029858#comment-17029858 ] HBase QA commented on HBASE-20623: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 33s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 29s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 4s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 51s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 4m 11s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 20m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 19s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 5s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 29s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 21m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}237m 7s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}360m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1121/1/artif
[GitHub] [hbase] Apache-HBase commented on issue #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation
Apache-HBase commented on issue #1121: HBASE-20623: [WIP]Introduce the helper method "getCellBuilder()" to Mutation URL: https://github.com/apache/hbase/pull/1121#issuecomment-581926649 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 13s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 37s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 55s | master passed | | +1 :green_heart: | compile | 3m 25s | master passed | | +1 :green_heart: | checkstyle | 2m 33s | master passed | | +0 :ok: | refguide | 5m 29s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | shadedjars | 5m 4s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 3m 51s | master passed | | +0 :ok: | spotbugs | 4m 11s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 20m 51s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 34s | the patch passed | | +1 :green_heart: | compile | 3m 21s | the patch passed | | +1 :green_heart: | javac | 3m 21s | the patch passed | | +1 :green_heart: | checkstyle | 2m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +0 :ok: | refguide | 5m 19s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | shadedjars | 5m 5s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 17m 29s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 3m 50s | the patch passed | | +1 :green_heart: | findbugs | 21m 11s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 237m 7s | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 48s | The patch does not generate ASF License warnings. 
| | | | 360m 3s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1121/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1121 | | JIRA Issue | HBASE-20623 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide | | uname | Linux 75a924866bd5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1121/out/precommit/personality/provided.sh | | git revision | master / bb14bdad62 | | Default Java | 1.8.0_181 | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1121/1/artifact/out/branch-site/book.html | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1121/1/artifact/out/patch-site/book.html | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1121/1/testReport/ | | Max. process+thread count | 5381 (vs. ulimit of 1) | | modules | C: hbase-client hbase-server . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1121/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-23790) Bump netty version to 4.1.45.Final in hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Penzes updated HBASE-23790: - Affects Version/s: (was: thirdparty-3.0.0) hbase-thirdparty-3.2.0 > Bump netty version to 4.1.45.Final in hbase-thirdparty > -- > > Key: HBASE-23790 > URL: https://issues.apache.org/jira/browse/HBASE-23790 > Project: HBase > Issue Type: Improvement > Components: hbase-thirdparty >Affects Versions: hbase-thirdparty-3.2.0 >Reporter: Tamas Penzes >Assignee: Tamas Penzes >Priority: Major > > We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23790) Bump netty version to 4.1.45.Final in hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Penzes updated HBASE-23790: - Affects Version/s: thirdparty-3.0.0 > Bump netty version to 4.1.45.Final in hbase-thirdparty > -- > > Key: HBASE-23790 > URL: https://issues.apache.org/jira/browse/HBASE-23790 > Project: HBase > Issue Type: Improvement > Components: hbase-thirdparty >Affects Versions: thirdparty-3.0.0 >Reporter: Tamas Penzes >Assignee: Tamas Penzes >Priority: Major > > We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase-thirdparty] tamaashu opened a new pull request #12: HBASE-23790: Bump netty version to 4.1.45.Final in hbase-thirdparty
tamaashu opened a new pull request #12: HBASE-23790: Bump netty version to 4.1.45.Final in hbase-thirdparty URL: https://github.com/apache/hbase-thirdparty/pull/12 Updated netty version to 4.1.45.Final. Collected dependency version numbers into the main pom.xml. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-23790) Bump netty version to 4.1.45.Final in hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Penzes updated HBASE-23790: - Summary: Bump netty version to 4.1.45.Final in hbase-thirdparty (was: Bump netty version to 4.1.45.Final) > Bump netty version to 4.1.45.Final in hbase-thirdparty > -- > > Key: HBASE-23790 > URL: https://issues.apache.org/jira/browse/HBASE-23790 > Project: HBase > Issue Type: Improvement > Components: hbase-thirdparty >Reporter: Tamas Penzes >Assignee: Tamas Penzes >Priority: Major > > We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23790) Bump netty version to 4.1.45.Final
Tamas Penzes created HBASE-23790: Summary: Bump netty version to 4.1.45.Final Key: HBASE-23790 URL: https://issues.apache.org/jira/browse/HBASE-23790 Project: HBase Issue Type: Improvement Components: hbase-thirdparty Reporter: Tamas Penzes We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-23790) Bump netty version to 4.1.45.Final
[ https://issues.apache.org/jira/browse/HBASE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Penzes reassigned HBASE-23790: Assignee: Tamas Penzes > Bump netty version to 4.1.45.Final > -- > > Key: HBASE-23790 > URL: https://issues.apache.org/jira/browse/HBASE-23790 > Project: HBase > Issue Type: Improvement > Components: hbase-thirdparty >Reporter: Tamas Penzes >Assignee: Tamas Penzes >Priority: Major > > We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HBASE-23790) Bump netty version to 4.1.45.Final
[ https://issues.apache.org/jira/browse/HBASE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-23790 started by Tamas Penzes. > Bump netty version to 4.1.45.Final > -- > > Key: HBASE-23790 > URL: https://issues.apache.org/jira/browse/HBASE-23790 > Project: HBase > Issue Type: Improvement > Components: hbase-thirdparty >Reporter: Tamas Penzes >Assignee: Tamas Penzes >Priority: Major > > We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] YamasakiSS commented on a change in pull request #951: HBASE-23578 [UI] Master UI shows long stack traces when table is broken
YamasakiSS commented on a change in pull request #951: HBASE-23578 [UI] Master UI shows long stack traces when table is broken URL: https://github.com/apache/hbase/pull/951#discussion_r374669235 ## File path: hbase-server/src/main/resources/hbase-webapps/master/table.jsp ## @@ -834,12 +844,19 @@ if (withReplica) { <% } -} catch(Exception ex) { +} catch(Exception ex) { %> + Unknown Issue with Regions + + Show StackTrace + + + + Close StackTrace + + <% for(StackTraceElement element : ex.getStackTrace()) { -%><%= StringEscapeUtils.escapeHtml4(element.toString()) %><% +%><%= StringEscapeUtils.escapeHtml4(element.toString() + "\n") %><% } -} finally { Review comment: Thank you for comment. I rebased HBASE-23578 with master. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23783) Address tests writing and reading SSL/Security files in a common location.
[ https://issues.apache.org/jira/browse/HBASE-23783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029652#comment-17029652 ] HBase QA commented on HBASE-23783: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 6s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 2s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 4m 12s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 20m 58s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 29s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 2s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 21m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}226m 50s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 2m 7s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}339m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1116/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1116 | | JIRA Issue | HBASE-23783 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findb
[GitHub] [hbase] Apache-HBase commented on issue #1116: HBASE-23783: Address tests writing and reading SSL/Security files in …
Apache-HBase commented on issue #1116: HBASE-23783: Address tests writing and reading SSL/Security files in … URL: https://github.com/apache/hbase/pull/1116#issuecomment-581803117 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 42s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 49s | master passed | | +1 :green_heart: | compile | 3m 17s | master passed | | +1 :green_heart: | checkstyle | 2m 27s | master passed | | +1 :green_heart: | shadedjars | 5m 6s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 4m 2s | master passed | | +0 :ok: | spotbugs | 4m 12s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 20m 58s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 31s | the patch passed | | +1 :green_heart: | compile | 3m 20s | the patch passed | | +1 :green_heart: | javac | 3m 20s | the patch passed | | +1 :green_heart: | checkstyle | 2m 29s | root: The patch generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedjars | 5m 2s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 17m 18s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 3m 59s | the patch passed | | +1 :green_heart: | findbugs | 21m 35s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 226m 50s | root in the patch passed. | | +1 :green_heart: | asflicense | 2m 7s | The patch does not generate ASF License warnings. | | | | 339m 1s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1116/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1116 | | JIRA Issue | HBASE-23783 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile xml | | uname | Linux 8eca1a626def 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1116/out/precommit/personality/provided.sh | | git revision | master / bb14bdad62 | | Default Java | 1.8.0_181 | | whitespace | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1116/3/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1116/3/testReport/ | | Max. process+thread count | 5245 (vs. ulimit of 1) | | modules | C: hbase-common hbase-http hbase-server . U: . 
| | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1116/3/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services