[GitHub] [hbase] Apache9 commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
Apache9 commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339287158
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
 ##
 @@ -1113,10 +1135,21 @@ public OptionalLong getLogFileSizeIfBeingWritten(Path 
path) {
* passed in WALKey walKey parameter. Be warned that the 
WriteEntry is not
* immediately available on return from this method. It WILL be available 
subsequent to a sync of
* this append; otherwise, you will just have to wait on the WriteEntry to 
get filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param inMemstore Always true except for case where we are writing a 
region event marker, for
+   *  example, a compaction completion record into the WAL; in this 
case the entry is just
+   *  so we can finish an unfinished compaction -- it is not an edit 
for memstore.
+   * @param closeRegion Whether this is a region close marker, i.e., the last 
wal edit for this
+   *  region on this region server. The WAL implementation should 
remove all the related
+   *  stuff, for example, the sequence id accounting.
 
 Review comment:
   The inMemstore flag just tells us it is a marker, but does not tell us which 
marker it is...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
Apache9 commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339287086
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
 ##
 @@ -97,44 +97,59 @@
   void close() throws IOException;
 
   /**
-   * Append a set of edits to the WAL. The WAL is not flushed/sync'd after 
this transaction
-   * completes BUT on return this edit must have its region edit/sequence id 
assigned
-   * else it messes up our unification of mvcc and sequenceid.  On return 
key will
-   * have the region edit/sequence id filled in.
+   * Append a set of data edits to the WAL. 'Data' here means that the content 
in the edits will
+   * also be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
* @param info the regioninfo associated with append
* @param key Modified by this call; we add to it this edits region 
edit/sequence id.
* @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
-   * sequence id that is after all currently appended edits.
-   * @param inMemstore Always true except for case where we are writing a 
compaction completion
-   * record into the WAL; in this case the entry is just so we can finish an 
unfinished compaction
-   * -- it is not an edit for memstore.
+   *  sequence id that is after all currently appended edits.
* @return Returns a 'transaction id' and key will have the 
region edit/sequence id
-   * in it.
+   * in it.
+   * @see #appendMarker(RegionInfo, WALKeyImpl, WALEdit, boolean)
*/
-  long append(RegionInfo info, WALKeyImpl key, WALEdit edits, boolean 
inMemstore) throws IOException;
+  long appendData(RegionInfo info, WALKeyImpl key, WALEdit edits) throws 
IOException;
+
+  /**
+   * Append a marker edit to the WAL. A marker could be a FlushDescriptor, a 
compaction marker, or a region event marker. The difference here is that a 
marker will not be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param closeRegion Whether this is a region close marker, i.e., the last 
wal edit for this
 
 Review comment:
   Adding more work to the disruptor thread is a no-no; it is on the write 
critical path, and all edits are processed by this thread. And since this is an 
IA.Private interface, let's use follow-on issues to polish it? It can be 
changed even in a patch release.
   
   What I can imagine is that we still keep the closeRegion flag in 
FSWALEntry, but remove it from the method parameters, and in appendMarker we test 
whether it is a close marker. But it is still a bit annoying that we 
serialize the protobuf message just before calling the method, and right in the 
method we deserialize it again, which really makes me unhappy...
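The alternative described above (keep the flag on the internal entry but derive it inside appendMarker instead of passing it in) could be sketched roughly as follows. All types and names here (WalEdit, FsWalEntry, the METAFAMILY marker strings) are simplified stand-ins for illustration, not the real HBase classes:

```java
import java.util.List;

// Sketch: derive the close-region flag inside appendMarker rather than
// passing it as a method parameter. Stand-in types only.
public class AppendMarkerSketch {

  // Stand-in for WALEdit: just a list of family:qualifier strings.
  record WalEdit(List<String> cells) {
    // Stand-in for a WALEdit.isMetaEdit-style check for a close marker.
    boolean isRegionCloseMarker() {
      return !cells.isEmpty()
          && cells.stream().allMatch("METAFAMILY:HBASE::CLOSE"::equals);
    }
  }

  // Stand-in for FSWALEntry, which keeps the derived closeRegion flag.
  record FsWalEntry(WalEdit edit, boolean closeRegion) {}

  // The flag is computed here once, so callers no longer pass a boolean.
  static FsWalEntry appendMarker(WalEdit edit) {
    return new FsWalEntry(edit, edit.isRegionCloseMarker());
  }

  public static void main(String[] args) {
    FsWalEntry close = appendMarker(new WalEdit(List.of("METAFAMILY:HBASE::CLOSE")));
    FsWalEntry flush = appendMarker(new WalEdit(List.of("METAFAMILY:HBASE::FLUSH")));
    System.out.println(close.closeRegion()); // true
    System.out.println(flush.closeRegion()); // false
  }
}
```

This keeps the public WAL interface at two methods while the entry still carries the flag internally.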


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23220) Release 1.5.1

2019-10-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960265#comment-16960265
 ] 

Sean Busbey commented on HBASE-23220:
-

Tracked those down to HBASE-23185 and reverted it. Both tests now pass locally. 
Started some flaky test runs to confirm.

> Release 1.5.1
> -
>
> Key: HBASE-23220
> URL: https://issues.apache.org/jira/browse/HBASE-23220
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.5.1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.5.1
>
>
> let's roll 1.5.1 to get HBASE-23174 out on 1.5.z.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-10-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960263#comment-16960263
 ] 

Sean Busbey commented on HBASE-23185:
-

I reverted this from branch-1 because it's causing TestMasterNoCluster and 
TestCatalogJanitor to fail there.

> High cpu usage because getTable()#put() gets config value every time
> 
>
> Key: HBASE-23185
> URL: https://issues.apache.org/jira/browse/HBASE-23185
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0, 1.4.10, 1.2.12, 1.3.5
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Major
>  Labels: performance
> Fix For: 1.6.0
>
> Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
> 2019-10-18 13-03-24.png
>
>
> When we analyzed the performance of our hbase application with many puts, we 
> found that Configuration methods use many CPU resources:
> !Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!
> As you can see, getTable().put() is calling Configuration methods, which cause 
> regex matching or synchronization via Hashtable.
> This should not happen in 0.99.2 because 
> https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
>  However, it has resurfaced due to bugs or leakage introduced by the many code 
> evolutions between 0.9x and 1.x.
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
>  ** finishSetup is called on every new HTable(), e.g. every con.getTable()
>  ** So getInt is called every time, and it does regex matching
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
>  ** BufferedMutatorImpl is created every first put for HTable e.g. 
> con.getTable().put()
>  ** Create ConnectionConf every time in BufferedMutatorImpl constructor
>  ** ConnectionConf gets config value in the constructor
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
>  ** AsyncProcess is created in BufferedMutatorImpl constructor, so new 
> AsyncProcess is created by con.getTable().put()
>  ** AsyncProcess parses many configuration values
> So, con.getTable().put() is a CPU-heavy operation because of configuration 
> value lookups.
>  
> With an in-house patch for this issue, we observed about a 10% improvement in 
> max throughput (i.e. CPU usage) on the client side:
> !Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!
>  
> branch-2 seems not affected because the client implementation has changed 
> dramatically.
>   
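The fix pattern the report implies (parse configuration once at construction rather than on every getTable()/put()) can be illustrated with a minimal sketch. The map-backed "config", the key name, and the parses counter are hypothetical stand-ins for HBase's Configuration regex/Hashtable cost, not the actual client code:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: cache a parsed config value at construction instead of
// re-reading it on every operation.
public class ConfCacheSketch {
  static final AtomicInteger parses = new AtomicInteger();

  // Models the per-lookup cost (regex, synchronized Hashtable) of
  // Configuration.getInt in the report above.
  static int expensiveGetInt(Map<String, String> conf, String key, int dflt) {
    parses.incrementAndGet();
    String v = conf.get(key);
    return v == null ? dflt : Integer.parseInt(v);
  }

  static final class Writer {
    final int bufferSize;

    Writer(Map<String, String> conf) {
      // Read and parse exactly once, at construction time.
      this.bufferSize = expensiveGetInt(conf, "client.write.buffer", 2097152);
    }

    int put() {
      return bufferSize; // uses the cached value; no re-parse per put
    }
  }

  public static void main(String[] args) {
    Writer w = new Writer(Map.of("client.write.buffer", "1024"));
    for (int i = 0; i < 1000; i++) {
      w.put();
    }
    System.out.println(parses.get()); // 1: parsed once, not once per put
  }
}
```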



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23220) Release 1.5.1

2019-10-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960258#comment-16960258
 ] 

Sean Busbey commented on HBASE-23220:
-

current flaky test report looks unacceptable:

https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1/564/artifact/dashboard.html

In particular {{TestMasterNoCluster}} and {{TestCatalogJanitor}} are failing all 
the time.

> Release 1.5.1
> -
>
> Key: HBASE-23220
> URL: https://issues.apache.org/jira/browse/HBASE-23220
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.5.1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.5.1
>
>
> let's roll 1.5.1 to get HBASE-23174 out on 1.5.z.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HBASE-23220) Release 1.5.1

2019-10-25 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23220 started by Sean Busbey.
---
> Release 1.5.1
> -
>
> Key: HBASE-23220
> URL: https://issues.apache.org/jira/browse/HBASE-23220
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.5.1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.5.1
>
>
> let's roll 1.5.1 to get HBASE-23174 out on 1.5.z.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23220) Release 1.5.1

2019-10-25 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960257#comment-16960257
 ] 

Sean Busbey commented on HBASE-23220:
-

moved things out to 1.5.2 in jira.

> Release 1.5.1
> -
>
> Key: HBASE-23220
> URL: https://issues.apache.org/jira/browse/HBASE-23220
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.5.1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.5.1
>
>
> let's roll 1.5.1 to get HBASE-23174 out on 1.5.z.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23220) Release 1.5.1

2019-10-25 Thread Sean Busbey (Jira)
Sean Busbey created HBASE-23220:
---

 Summary: Release 1.5.1
 Key: HBASE-23220
 URL: https://issues.apache.org/jira/browse/HBASE-23220
 Project: HBase
  Issue Type: Task
  Components: community
Affects Versions: 1.5.1
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 1.5.1


let's roll 1.5.1 to get HBASE-23174 out on 1.5.z.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339284660
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
 ##
 @@ -97,44 +97,59 @@
   void close() throws IOException;
 
   /**
-   * Append a set of edits to the WAL. The WAL is not flushed/sync'd after 
this transaction
-   * completes BUT on return this edit must have its region edit/sequence id 
assigned
-   * else it messes up our unification of mvcc and sequenceid.  On return 
key will
-   * have the region edit/sequence id filled in.
+   * Append a set of data edits to the WAL. 'Data' here means that the content 
in the edits will
+   * also be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
* @param info the regioninfo associated with append
* @param key Modified by this call; we add to it this edits region 
edit/sequence id.
* @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
-   * sequence id that is after all currently appended edits.
-   * @param inMemstore Always true except for case where we are writing a 
compaction completion
-   * record into the WAL; in this case the entry is just so we can finish an 
unfinished compaction
-   * -- it is not an edit for memstore.
+   *  sequence id that is after all currently appended edits.
* @return Returns a 'transaction id' and key will have the 
region edit/sequence id
-   * in it.
+   * in it.
+   * @see #appendMarker(RegionInfo, WALKeyImpl, WALEdit, boolean)
*/
-  long append(RegionInfo info, WALKeyImpl key, WALEdit edits, boolean 
inMemstore) throws IOException;
+  long appendData(RegionInfo info, WALKeyImpl key, WALEdit edits) throws 
IOException;
+
+  /**
+   * Append a marker edit to the WAL. A marker could be a FlushDescriptor, a 
compaction marker, or a region event marker. The difference here is that a 
marker will not be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param closeRegion Whether this is a region close marker, i.e., the last 
wal edit for this
 
 Review comment:
   If we did the above, would we still need the appendData/appendMarker distinction?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339284636
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
 ##
 @@ -97,44 +97,59 @@
   void close() throws IOException;
 
   /**
-   * Append a set of edits to the WAL. The WAL is not flushed/sync'd after 
this transaction
-   * completes BUT on return this edit must have its region edit/sequence id 
assigned
-   * else it messes up our unification of mvcc and sequenceid.  On return 
key will
-   * have the region edit/sequence id filled in.
+   * Append a set of data edits to the WAL. 'Data' here means that the content 
in the edits will
+   * also be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
* @param info the regioninfo associated with append
* @param key Modified by this call; we add to it this edits region 
edit/sequence id.
* @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
-   * sequence id that is after all currently appended edits.
-   * @param inMemstore Always true except for case where we are writing a 
compaction completion
-   * record into the WAL; in this case the entry is just so we can finish an 
unfinished compaction
-   * -- it is not an edit for memstore.
+   *  sequence id that is after all currently appended edits.
* @return Returns a 'transaction id' and key will have the 
region edit/sequence id
-   * in it.
+   * in it.
+   * @see #appendMarker(RegionInfo, WALKeyImpl, WALEdit, boolean)
*/
-  long append(RegionInfo info, WALKeyImpl key, WALEdit edits, boolean 
inMemstore) throws IOException;
+  long appendData(RegionInfo info, WALKeyImpl key, WALEdit edits) throws 
IOException;
+
+  /**
+   * Append a marker edit to the WAL. A marker could be a FlushDescriptor, a 
compaction marker, or a region event marker. The difference here is that a 
marker will not be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param closeRegion Whether this is a region close marker, i.e., the last 
wal edit for this
 
 Review comment:
   I messed w/ the patch. I see how appendData and appendMarker don't take us 
far enough down... down to AbstractFSWAL#appendEntry, where we could ask whether 
it is a close marker. But looking at the patch, what would be wrong w/ doing 
something like this?
   
   diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
   index 2eb7c7436a..bc31204500 100644
   --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
   +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
   @@ -1001,7 +1001,7 @@ public abstract class AbstractFSWAL implements WAL {
          doAppend(writer, entry);
          assert highestUnsyncedTxid < entry.getTxid();
          highestUnsyncedTxid = entry.getTxid();
   -      if (entry.isCloseRegion()) {
   +      if (!entry.isInMemStore() && entry.isCloseMarker()) {
            // let's clean all the records of this region
            sequenceIdAccounting.onRegionClose(encodedRegionName);
          } else {
   
   ... where entry.isCloseMarker would do something like WALEdit.isMetaEdit ...
   
     public boolean isMetaEdit() {
       for (Cell cell : cells) {
         if (!isMetaEditFamily(cell)) {
           return false;
         }
       }
       return true;
     }
   
   ... only instead we'd look for METAFAMILY:HBASE::CLOSE, a new define added 
on WALEdit?
   
   Thanks for taking a look.
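As a hedged sketch, the cell-inspection check proposed in this comment could look like the following. Cell here is a simplified stand-in for org.apache.hadoop.hbase.Cell, and the HBASE::CLOSE qualifier is the hypothetical new define discussed above, not an existing HBase constant:

```java
import java.util.List;

// Sketch: a close marker is an edit whose cells are all METAFAMILY cells
// carrying a (hypothetical) HBASE::CLOSE qualifier. Mirrors the shape of
// WALEdit.isMetaEdit(), which requires every cell to match.
public class CloseMarkerSketch {
  static final String METAFAMILY = "METAFAMILY";
  static final String CLOSE_QUALIFIER = "HBASE::CLOSE"; // hypothetical define

  // Simplified stand-in for a WAL cell.
  record Cell(String family, String qualifier) {}

  static boolean isCloseMarker(List<Cell> cells) {
    if (cells.isEmpty()) {
      return false; // an empty edit is not a close marker
    }
    for (Cell cell : cells) {
      if (!METAFAMILY.equals(cell.family())
          || !CLOSE_QUALIFIER.equals(cell.qualifier())) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isCloseMarker(List.of(new Cell(METAFAMILY, CLOSE_QUALIFIER)))); // true
    System.out.println(isCloseMarker(List.of(new Cell("cf", "q"))));                   // false
  }
}
```

Since markers are rare, this per-cell scan runs off the hot write path, matching the reviewers' cost argument.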


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-22991) Release 1.4.11

2019-10-25 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-22991.
-
Resolution: Fixed

website changes were merged, the updated website was published, and the release 
email is now up:

https://lists.apache.org/thread.html/9e5b43ad54ceaa21ca68d3f00e1e092e083ab2105cf84876e408d7b3@%3Cuser.hbase.apache.org%3E

> Release 1.4.11
> --
>
> Key: HBASE-22991
> URL: https://issues.apache.org/jira/browse/HBASE-22991
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11
>
> Attachments: Flaky_20Test_20Report.zip
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] busbey closed pull request #756: HBASE-22991 add HBase 1.4.11 to the downloads page.

2019-10-25 Thread GitBox
busbey closed pull request #756: HBASE-22991 add HBase 1.4.11 to the downloads 
page.
URL: https://github.com/apache/hbase/pull/756
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339203834
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
 ##
 @@ -97,44 +97,59 @@
   void close() throws IOException;
 
   /**
-   * Append a set of edits to the WAL. The WAL is not flushed/sync'd after 
this transaction
-   * completes BUT on return this edit must have its region edit/sequence id 
assigned
-   * else it messes up our unification of mvcc and sequenceid.  On return 
key will
-   * have the region edit/sequence id filled in.
+   * Append a set of data edits to the WAL. 'Data' here means that the content 
in the edits will
+   * also be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
* @param info the regioninfo associated with append
* @param key Modified by this call; we add to it this edits region 
edit/sequence id.
* @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
-   * sequence id that is after all currently appended edits.
-   * @param inMemstore Always true except for case where we are writing a 
compaction completion
-   * record into the WAL; in this case the entry is just so we can finish an 
unfinished compaction
-   * -- it is not an edit for memstore.
+   *  sequence id that is after all currently appended edits.
* @return Returns a 'transaction id' and key will have the 
region edit/sequence id
-   * in it.
+   * in it.
+   * @see #appendMarker(RegionInfo, WALKeyImpl, WALEdit, boolean)
*/
-  long append(RegionInfo info, WALKeyImpl key, WALEdit edits, boolean 
inMemstore) throws IOException;
+  long appendData(RegionInfo info, WALKeyImpl key, WALEdit edits) throws 
IOException;
+
+  /**
+   * Append a marker edit to the WAL. A marker could be a FlushDescriptor, a 
compaction marker, or a region event marker. The difference here is that a 
marker will not be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param closeRegion Whether this is a region close marker, i.e., the last 
wal edit for this
 
 Review comment:
   Markers are rare, so this method is rarely run. Checking the type of the 
WALEdit shouldn't be that bad?
   
   It just seems odd to pass in an extra parameter specifying the Edit type 
when we already have a mechanism for flagging special edits.
   
   The overarching concern is that this stuff is complicated, as you know, and it 
would be cool if we could avoid adding more combination types.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23219) Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)

2019-10-25 Thread Thiruvel Thirumoolan (Jira)
Thiruvel Thirumoolan created HBASE-23219:


 Summary: Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)
 Key: HBASE-23219
 URL: https://issues.apache.org/jira/browse/HBASE-23219
 Project: HBase
  Issue Type: Task
  Components: test
Affects Versions: 1.3.6
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
 Fix For: 1.4.12, 1.3.7


Since we are using zkless in our production setup, we would like to enable 
these tests back in apache on branch-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23219) Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)

2019-10-25 Thread Thiruvel Thirumoolan (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-23219:
-
Status: Patch Available  (was: Open)

> Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)
> 
>
> Key: HBASE-23219
> URL: https://issues.apache.org/jira/browse/HBASE-23219
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Trivial
> Fix For: 1.4.12, 1.3.7
>
> Attachments: HBASE-23219.branch-1.001.patch
>
>
> Since we are using zkless in our production setup, we would like to enable 
> these tests back in apache on branch-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23219) Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)

2019-10-25 Thread Thiruvel Thirumoolan (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-23219:
-
Fix Version/s: 1.5.1
   1.6.0

> Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)
> 
>
> Key: HBASE-23219
> URL: https://issues.apache.org/jira/browse/HBASE-23219
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Trivial
> Fix For: 1.6.0, 1.4.12, 1.3.7, 1.5.1
>
> Attachments: HBASE-23219.branch-1.001.patch
>
>
> Since we are using zkless in our production setup, we would like to enable 
> these tests back in apache on branch-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23219) Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)

2019-10-25 Thread Thiruvel Thirumoolan (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-23219:
-
Attachment: HBASE-23219.branch-1.001.patch

> Re-enable ZKLess tests for branch-1 (Revert HBASE-14622)
> 
>
> Key: HBASE-23219
> URL: https://issues.apache.org/jira/browse/HBASE-23219
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Trivial
> Fix For: 1.4.12, 1.3.7
>
> Attachments: HBASE-23219.branch-1.001.patch
>
>
> Since we are using zkless in our production setup, we would like to enable 
> these tests back in apache on branch-1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22991) Release 1.4.11

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960054#comment-16960054
 ] 

Hudson commented on HBASE-22991:


Results for branch branch-1.4
[build #1067 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1067/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1067//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1067//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1067//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Release 1.4.11
> --
>
> Key: HBASE-22991
> URL: https://issues.apache.org/jira/browse/HBASE-22991
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11
>
> Attachments: Flaky_20Test_20Report.zip
>
>






[GitHub] [hbase] jojochuang commented on issue #746: HBASE-23195 FSDataInputStreamWrapper unbuffer can NOT invoke the clas…

2019-10-25 Thread GitBox
jojochuang commented on issue #746: HBASE-23195 FSDataInputStreamWrapper 
unbuffer can NOT invoke the clas…
URL: https://github.com/apache/hbase/pull/746#issuecomment-546504552
 
 
   LGTM +1
   @Apache9 what do you say


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-25 Thread Ankit Singhal (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960008#comment-16960008
 ] 

Ankit Singhal commented on HBASE-23175:
---

Yeah, HBASE-23194 removes all the deprecated methods from the 
InterfaceAudience.Public class. Don't we wait 1-2 major releases before 
removing methods from Public classes?

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: security, spark
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. SPARK-26432 switches Spark to the new API, but that change is 
> planned for Spark 3.0, so we need this fix in HBase until it is released 
> and we upgrade to it.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}





[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339203834
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
 ##
 @@ -97,44 +97,59 @@
   void close() throws IOException;
 
   /**
-   * Append a set of edits to the WAL. The WAL is not flushed/sync'd after 
this transaction
-   * completes BUT on return this edit must have its region edit/sequence id 
assigned
-   * else it messes up our unification of mvcc and sequenceid.  On return 
key will
-   * have the region edit/sequence id filled in.
+   * Append a set of data edits to the WAL. 'Data' here means that the content 
in the edits will
+   * also be added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
* @param info the regioninfo associated with append
* @param key Modified by this call; we add to it this edits region 
edit/sequence id.
* @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
-   * sequence id that is after all currently appended edits.
-   * @param inMemstore Always true except for case where we are writing a 
compaction completion
-   * record into the WAL; in this case the entry is just so we can finish an 
unfinished compaction
-   * -- it is not an edit for memstore.
+   *  sequence id that is after all currently appended edits.
* @return Returns a 'transaction id' and key will have the 
region edit/sequence id
-   * in it.
+   * in it.
+   * @see #appendMarker(RegionInfo, WALKeyImpl, WALEdit, boolean)
*/
-  long append(RegionInfo info, WALKeyImpl key, WALEdit edits, boolean 
inMemstore) throws IOException;
+  long appendData(RegionInfo info, WALKeyImpl key, WALEdit edits) throws 
IOException;
+
+  /**
+   * Append a marker edit to the WAL. A marker could be a FlushDescriptor, a 
compaction marker, or
+   * region event marker. The difference here is that, a marker will not be 
added to memstore.
+   * 
+   * The WAL is not flushed/sync'd after this transaction completes BUT on 
return this edit must
+   * have its region edit/sequence id assigned else it messes up our 
unification of mvcc and
+   * sequenceid. On return key will have the region edit/sequence 
id filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param closeRegion Whether this is a region close marker, i.e, the last 
wal edit for this
 
 Review comment:
   Markers are rare. This method is rarely run. Checking for the type of the 
WALEdit shouldn't be that bad?
   
   It just seems odd to pass in an extra parameter specifying the Edit type 
when we have a mechanism for flagging special edits.
   
   Overarching concern is that this stuff is complicated as you know and would 
be cool if we could avoid adding more combination types.
   
   Not going to block the patch... just for consideration.
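   The design being weighed here, two public entry points (appendData and 
appendMarker) funnelling into one internal append that carries the flags, can 
be illustrated with a minimal stand-alone sketch. All names below are 
illustrative stand-ins, not HBase's actual types:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the WAL API split under discussion: two public entry points
// funnel into one internal append carrying the inMemstore/closeRegion flags.
// Names and string payloads are illustrative only, not HBase's real API.
class WalAppendSketch {
    static final List<String> log = new ArrayList<>();

    // Data edit: goes to the WAL and will also be added to the memstore.
    static long appendData(String region, String edit) {
        return append(region, edit, true, false);
    }

    // Marker edit (flush/compaction/region event): WAL-only, never memstore.
    static long appendMarker(String region, String edit, boolean closeRegion) {
        return append(region, edit, false, closeRegion);
    }

    private static long append(String region, String edit, boolean inMemstore,
            boolean closeRegion) {
        log.add(region + ":" + edit + (inMemstore ? " [data]" : " [marker]")
                + (closeRegion ? " [close]" : ""));
        return log.size(); // stand-in for the assigned sequence id
    }
}
```

The callers never touch the flag combinations directly, which is one way to 
avoid the "more combination types" concern raised above.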




[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339194328
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
 ##
 @@ -1113,10 +1135,21 @@ public OptionalLong getLogFileSizeIfBeingWritten(Path 
path) {
* passed in WALKey walKey parameter. Be warned that the 
WriteEntry is not
* immediately available on return from this method. It WILL be available 
subsequent to a sync of
* this append; otherwise, you will just have to wait on the WriteEntry to 
get filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param inMemstore Always true except for case where we are writing a 
region event marker, for
+   *  example, a compaction completion record into the WAL; in this 
case the entry is just
+   *  so we can finish an unfinished compaction -- it is not an edit 
for memstore.
+   * @param closeRegion Whether this is a region close marker, i.e, the last 
wal edit for this
+   *  region on this region server. The WAL implementation should 
remove all the related
+   *  stuff, for example, the sequence id accounting.
 
 Review comment:
   Pushing back some. Doesn't the inMemstore flag do this? It is false when we 
are writing markers? Markers are rare, so checking the WALEdit for whether it 
is a marker shouldn't slow us down.
   
   The IOE is a problem though.
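   For background on inferring the marker kind from the edit itself rather 
than from an extra parameter: HBase tags marker edits with a reserved metadata 
family, so a WALEdit can in principle identify itself. A toy version of that 
kind of check (the sentinel strings are simplified placeholders, not HBase's 
real constants):

```java
// Toy check: decide "is this a marker?" from the edit's family instead of an
// extra method parameter. The sentinel family mimics HBase's reserved
// metadata-family convention; the qualifier name is hypothetical.
class MarkerCheckSketch {
    static final String META_FAMILY = "METAFAMILY";

    static boolean isMarker(String family) {
        return META_FAMILY.equals(family);
    }

    // A region-close marker could likewise be recognized by its qualifier,
    // avoiding a closeRegion flag on every append call.
    static boolean isCloseMarker(String family, String qualifier) {
        return isMarker(family) && "REGION_CLOSE".equals(qualifier);
    }
}
```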




[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339194328
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
 ##
 @@ -1113,10 +1135,21 @@ public OptionalLong getLogFileSizeIfBeingWritten(Path 
path) {
* passed in WALKey walKey parameter. Be warned that the 
WriteEntry is not
* immediately available on return from this method. It WILL be available 
subsequent to a sync of
* this append; otherwise, you will just have to wait on the WriteEntry to 
get filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param inMemstore Always true except for case where we are writing a 
region event marker, for
+   *  example, a compaction completion record into the WAL; in this 
case the entry is just
+   *  so we can finish an unfinished compaction -- it is not an edit 
for memstore.
+   * @param closeRegion Whether this is a region close marker, i.e, the last 
wal edit for this
+   *  region on this region server. The WAL implementation should 
remove all the related
+   *  stuff, for example, the sequence id accounting.
 
 Review comment:
   Doesn't inMemstore flag do this? It is false when we are writing markers?




[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-25 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r339200463
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -183,105 +270,166 @@ protected boolean performCompaction(FileDetails fd, 
InternalScanner scanner, Cel
 boolean hasMore;
 Path path = MobUtils.getMobFamilyPath(conf, store.getTableName(), 
store.getColumnFamilyName());
 byte[] fileName = null;
-StoreFileWriter mobFileWriter = null, delFileWriter = null;
-long mobCells = 0, deleteMarkersCount = 0;
+StoreFileWriter mobFileWriter = null;
+long mobCells = 0;
 long cellsCountCompactedToMob = 0, cellsCountCompactedFromMob = 0;
 long cellsSizeCompactedToMob = 0, cellsSizeCompactedFromMob = 0;
 boolean finished = false;
+
 ScannerContext scannerContext =
 ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();
 throughputController.start(compactionName);
-KeyValueScanner kvs = (scanner instanceof KeyValueScanner)? 
(KeyValueScanner)scanner : null;
-long shippedCallSizeLimit = (long) numofFilesToCompact * 
this.store.getColumnFamilyDescriptor().getBlocksize();
+KeyValueScanner kvs = (scanner instanceof KeyValueScanner) ? 
(KeyValueScanner) scanner : null;
+long shippedCallSizeLimit =
+(long) numofFilesToCompact * 
this.store.getColumnFamilyDescriptor().getBlocksize();
+
+MobCell mobCell = null;
 try {
   try {
 // If the mob file writer could not be created, directly write the 
cell to the store file.
 mobFileWriter = mobStore.createWriterInTmp(new Date(fd.latestPutTs), 
fd.maxKeyCount,
   compactionCompression, store.getRegionInfo().getStartKey(), true);
 fileName = Bytes.toBytes(mobFileWriter.getPath().getName());
   } catch (IOException e) {
-LOG.warn("Failed to create mob writer, "
-   + "we will continue the compaction by writing MOB cells 
directly in store files", e);
+// Bailing out
+LOG.error("Failed to create mob writer, ", e);
+throw e;
   }
-  if (major) {
-try {
-  delFileWriter = mobStore.createDelFileWriterInTmp(new 
Date(fd.latestPutTs),
-fd.maxKeyCount, compactionCompression, 
store.getRegionInfo().getStartKey());
-} catch (IOException e) {
-  LOG.warn(
-"Failed to create del writer, "
-+ "we will continue the compaction by writing delete markers 
directly in store files",
-e);
-}
+  if (compactMOBs) {
+// Add the only reference we get for compact MOB case
+// because new store file will have only one MOB reference
+// in this case - of newly compacted MOB file
+mobRefSet.get().add(mobFileWriter.getPath().getName());
   }
   do {
 hasMore = scanner.next(cells, scannerContext);
 if (LOG.isDebugEnabled()) {
   now = EnvironmentEdgeManager.currentTime();
 }
 for (Cell c : cells) {
-  if (major && CellUtil.isDelete(c)) {
-if (MobUtils.isMobReferenceCell(c) || delFileWriter == null) {
-  // Directly write it to a store file
-  writer.append(c);
+
+  if (compactMOBs) {
+if (MobUtils.isMobReferenceCell(c)) {
+  String fName = MobUtils.getMobFileName(c);
+  Path pp = new Path(new Path(fs.getUri()), new Path(path, fName));
+
+  // Added to support migration
+  try {
+mobCell = mobStore.resolve(c, true, false);
+  } catch (FileNotFoundException fnfe) {
+if (discardMobMiss) {
+  LOG.error("Missing MOB cell: file=" + pp + " not found");
 
 Review comment:
   Fixed.




[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-25 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r339199054
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -183,105 +270,166 @@ protected boolean performCompaction(FileDetails fd, 
InternalScanner scanner, Cel
 boolean hasMore;
 Path path = MobUtils.getMobFamilyPath(conf, store.getTableName(), 
store.getColumnFamilyName());
 byte[] fileName = null;
-StoreFileWriter mobFileWriter = null, delFileWriter = null;
-long mobCells = 0, deleteMarkersCount = 0;
+StoreFileWriter mobFileWriter = null;
+long mobCells = 0;
 long cellsCountCompactedToMob = 0, cellsCountCompactedFromMob = 0;
 long cellsSizeCompactedToMob = 0, cellsSizeCompactedFromMob = 0;
 boolean finished = false;
+
 ScannerContext scannerContext =
 ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();
 throughputController.start(compactionName);
-KeyValueScanner kvs = (scanner instanceof KeyValueScanner)? 
(KeyValueScanner)scanner : null;
-long shippedCallSizeLimit = (long) numofFilesToCompact * 
this.store.getColumnFamilyDescriptor().getBlocksize();
+KeyValueScanner kvs = (scanner instanceof KeyValueScanner) ? 
(KeyValueScanner) scanner : null;
+long shippedCallSizeLimit =
+(long) numofFilesToCompact * 
this.store.getColumnFamilyDescriptor().getBlocksize();
+
+MobCell mobCell = null;
 try {
   try {
 // If the mob file writer could not be created, directly write the 
cell to the store file.
 mobFileWriter = mobStore.createWriterInTmp(new Date(fd.latestPutTs), 
fd.maxKeyCount,
   compactionCompression, store.getRegionInfo().getStartKey(), true);
 fileName = Bytes.toBytes(mobFileWriter.getPath().getName());
   } catch (IOException e) {
-LOG.warn("Failed to create mob writer, "
-   + "we will continue the compaction by writing MOB cells 
directly in store files", e);
+// Bailing out
+LOG.error("Failed to create mob writer, ", e);
+throw e;
   }
-  if (major) {
-try {
-  delFileWriter = mobStore.createDelFileWriterInTmp(new 
Date(fd.latestPutTs),
-fd.maxKeyCount, compactionCompression, 
store.getRegionInfo().getStartKey());
-} catch (IOException e) {
-  LOG.warn(
-"Failed to create del writer, "
-+ "we will continue the compaction by writing delete markers 
directly in store files",
-e);
-}
+  if (compactMOBs) {
+// Add the only reference we get for compact MOB case
+// because new store file will have only one MOB reference
+// in this case - of newly compacted MOB file
+mobRefSet.get().add(mobFileWriter.getPath().getName());
   }
   do {
 hasMore = scanner.next(cells, scannerContext);
 if (LOG.isDebugEnabled()) {
   now = EnvironmentEdgeManager.currentTime();
 }
 for (Cell c : cells) {
-  if (major && CellUtil.isDelete(c)) {
-if (MobUtils.isMobReferenceCell(c) || delFileWriter == null) {
-  // Directly write it to a store file
-  writer.append(c);
+
+  if (compactMOBs) {
+if (MobUtils.isMobReferenceCell(c)) {
+  String fName = MobUtils.getMobFileName(c);
+  Path pp = new Path(new Path(fs.getUri()), new Path(path, fName));
+
+  // Added to support migration
+  try {
+mobCell = mobStore.resolve(c, true, false);
+  } catch (FileNotFoundException fnfe) {
+if (discardMobMiss) {
+  LOG.error("Missing MOB cell: file=" + pp + " not found");
+  continue;
+} else {
+  throw fnfe;
+}
+  }
+
+  if (discardMobMiss && mobCell.getCell().getValueLength() == 0) {
 
 Review comment:
   Yes, a 0-length MOB cell reference will go to a normal store. This is not an 
FNFE case; the file is present but the MOB cell has 0 length. The most we could 
do is emit a warning message.
   
   If someone decides to set the MOB threshold to 0, everything above 0 will go 
to MOB but a 0-length value will go to the regular store file. It's fine with me.
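   The routing rule described in the comment above (values strictly larger 
than the MOB threshold go to a MOB file; everything else, including 0-length 
values, stays in the regular store file) can be sketched as a one-liner. This 
models the described behaviour, not HBase's actual code:

```java
// Toy model of MOB routing as described above: only values strictly larger
// than the threshold go to a MOB file, so with a threshold of 0 a
// zero-length value still lands in the regular store file.
class MobRoutingSketch {
    static String route(int valueLength, int mobThreshold) {
        return valueLength > mobThreshold ? "mob" : "store";
    }
}
```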




[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339194328
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
 ##
 @@ -1113,10 +1135,21 @@ public OptionalLong getLogFileSizeIfBeingWritten(Path 
path) {
* passed in WALKey walKey parameter. Be warned that the 
WriteEntry is not
* immediately available on return from this method. It WILL be available 
subsequent to a sync of
* this append; otherwise, you will just have to wait on the WriteEntry to 
get filled in.
+   * @param info the regioninfo associated with append
+   * @param key Modified by this call; we add to it this edits region 
edit/sequence id.
+   * @param edits Edits to append. MAY CONTAIN NO EDITS for case where we want 
to get an edit
+   *  sequence id that is after all currently appended edits.
+   * @param inMemstore Always true except for case where we are writing a 
region event marker, for
+   *  example, a compaction completion record into the WAL; in this 
case the entry is just
+   *  so we can finish an unfinished compaction -- it is not an edit 
for memstore.
+   * @param closeRegion Whether this is a region close marker, i.e, the last 
wal edit for this
+   *  region on this region server. The WAL implementation should 
remove all the related
+   *  stuff, for example, the sequence id accounting.
 
 Review comment:
   Ok. This is good justification. Perhaps if you do a new patch, add this 
justification as comment.




[GitHub] [hbase] saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
saintstack commented on a change in pull request #753: HBASE-23181 Blocked WAL 
archive: "LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#discussion_r339193734
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ImmutableByteArray.java
 ##
 @@ -51,4 +51,8 @@ public static ImmutableByteArray wrap(byte[] b) {
   public String toStringUtf8() {
 return Bytes.toString(b);
   }
+
+  public String toStringBinary() {
+return Bytes.toStringBinary(b);
+  }
 
 Review comment:
   When would we ever want a String that had binary non-printables in it? What 
good is a toString that just does default Object#toString?
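   For context, the point of Bytes.toStringBinary over toStringUtf8 is that it 
renders non-printable bytes as escaped hex instead of raw characters, so row 
keys are always safe to log. A minimal stand-alone equivalent, in the spirit 
of (not identical to) HBase's implementation:

```java
// Minimal sketch of a toStringBinary-style renderer: printable ASCII is kept
// as-is, everything else (and backslash itself) becomes \xNN. Not HBase's
// exact escaping rules.
class ToStringBinarySketch {
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte value : b) {
            int ch = value & 0xff;
            if (ch >= 0x20 && ch < 0x7f && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append(String.format("\\x%02X", ch));
            }
        }
        return sb.toString();
    }
}
```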




[GitHub] [hbase] karthikhw commented on issue #741: HBASE-23199 Error populating Table-Attribute fields

2019-10-25 Thread GitBox
karthikhw commented on issue #741: HBASE-23199 Error populating Table-Attribute 
fields
URL: https://github.com/apache/hbase/pull/741#issuecomment-54646
 
 
   Thank you very much @guangxuCheng for your review.




[jira] [Commented] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-25 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959934#comment-16959934
 ] 

Josh Elser commented on HBASE-23175:


Actually, [~an...@apache.org], looks like HBASE-23194 landed since QA ran which 
invalidates this patch.

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: security, spark
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. SPARK-26432 switches Spark to the new API, but that change is 
> planned for Spark 3.0, so we need this fix in HBase until it is released 
> and we upgrade to it.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}





[jira] [Commented] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-25 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959921#comment-16959921
 ] 

Josh Elser commented on HBASE-23175:


Fixed the checkstyle locally (added the deprecated javadoc annotation to the 
existing comment) and will push. Thanks Ankit.

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: security, spark
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. SPARK-26432 switches Spark to the new API, but that change is 
> planned for Spark 3.0, so we need this fix in HBase until it is released 
> and we upgrade to it.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}
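One way to restore a removed entry point like TokenUtil.obtainToken(conf) without duplicating logic is a compatibility shim: re-introduce the old configuration-based method as a thin, deprecated delegate to the newer connection-based API. A minimal sketch of that pattern, using stand-in types rather than HBase's real classes:

```java
// Compatibility-shim sketch: the old Conf-based entry point is restored as a
// thin, deprecated delegate to the new Connection-based API.
// Conf, Connection, and Token are illustrative stand-ins, not HBase classes.
public class TokenShimDemo {
  static final class Conf {}

  static final class Connection {
    final Conf conf;
    Connection(Conf conf) { this.conf = conf; }
  }

  static final class Token {
    final String service;
    Token(String service) { this.service = service; }
  }

  // New-style API: the caller supplies the connection.
  static Token obtainToken(Connection conn) {
    return new Token("hbase");
  }

  // Restored old-style API: builds the connection itself and delegates, so
  // callers compiled against the old signature (like Spark < 3.0) keep working.
  @Deprecated
  static Token obtainToken(Conf conf) {
    return obtainToken(new Connection(conf));
  }

  public static void main(String[] args) {
    System.out.println(obtainToken(new Conf()).service); // prints "hbase"
  }
}
```

The shim keeps one code path: the deprecated overload never re-implements the token logic, it only adapts arguments.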



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
Apache-HBase commented on issue #757: HBASE-23196 The IndexChunkPool’s 
percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#issuecomment-546426169
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 24s |  master passed  |
   | :green_heart: |  compile  |   0m 55s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 20s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m  2s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   4m 58s |  the patch passed  |
   | :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 19s |  hbase-server: The patch 
generated 0 new + 66 unchanged - 2 fixed = 66 total (was 68)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 41s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 282m 49s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 341m 18s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.client.TestFromClientSide |
   |   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/757 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 4d3d04ce7fb5 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-757/out/precommit/personality/provided.sh
 |
   | git revision | master / d7b90b3199 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/2/testReport/
 |
   | Max. process+thread count | 4798 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #760: HBASE-23218 Increase the flexibility of replication walentry filter interface

2019-10-25 Thread GitBox
Apache-HBase commented on issue #760: HBASE-23218 Increase the flexibility of 
replication walentry filter interface
URL: https://github.com/apache/hbase/pull/760#issuecomment-546380132
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 23s |  master passed  |
   | :green_heart: |  compile  |   0m 57s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 21s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 10s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   4m 54s |  the patch passed  |
   | :green_heart: |  compile  |   0m 54s |  the patch passed  |
   | :green_heart: |  javac  |   0m 54s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   1m 20s |  hbase-server: The patch 
generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m  6s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 159m  4s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 216m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-760/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/760 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 7ab6a7cd7c0a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-760/out/precommit/personality/provided.sh
 |
   | git revision | master / d7b90b3199 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-760/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-760/1/testReport/
 |
   | Max. process+thread count | 4611 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-760/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #759: HBASE-23217 Set version as 2.2.3-SNAPSHOT in branch-2.2

2019-10-25 Thread GitBox
Apache-HBase commented on issue #759: HBASE-23217 Set version as 2.2.3-SNAPSHOT 
in branch-2.2
URL: https://github.com/apache/hbase/pull/759#issuecomment-546375692
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-2.2 Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 26s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   6m 34s |  branch-2.2 passed  |
   | :green_heart: |  compile  |   3m 26s |  branch-2.2 passed  |
   | :green_heart: |  checkstyle  |   3m  1s |  branch-2.2 passed  |
   | :green_heart: |  shadedjars  |   4m 59s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |  14m  7s |  branch-2.2 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m  0s |  branch-2.2 passed  |
   | :blue_heart: |  mvndep  |   6m 23s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   5m 52s |  the patch passed  |
   | :green_heart: |  compile  |   3m 21s |  the patch passed  |
   | :green_heart: |  javac  |   3m 21s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 46s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  xml  |   0m 52s |  The patch has no ill-formed XML file.  
|
   | :green_heart: |  shadedjars  |   4m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m 43s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |  12m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 189m 24s |  root in the patch failed.  |
   | :green_heart: |  asflicense  |  17m 10s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 304m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.snapshot.TestExportSnapshotNoCluster |
   |   | hadoop.hbase.snapshot.TestSecureExportSnapshot |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-759/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/759 |
   | Optional Tests | dupname asflicense javac javadoc unit shadedjars 
hadoopcheck xml compile checkstyle |
   | uname | Linux e163e7e57816 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-759/out/precommit/personality/provided.sh
 |
   | git revision | branch-2.2 / eab65b6bed |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-759/1/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-759/1/testReport/
 |
   | Max. process+thread count | 4533 (vs. ulimit of 1) |
   | modules | C: hbase-checkstyle hbase-annotations hbase-build-configuration 
hbase-protocol-shaded hbase-common hbase-metrics-api hbase-hadoop-compat 
hbase-metrics hbase-hadoop2-compat hbase-protocol hbase-client hbase-zookeeper 
hbase-replication hbase-resource-bundle hbase-http hbase-procedure hbase-server 
hbase-mapreduce hbase-testing-util hbase-thrift hbase-rsgroup hbase-shell 
hbase-endpoint hbase-it hbase-rest hbase-examples hbase-shaded 
hbase-shaded/hbase-shaded-client hbase-shaded/hbase-shaded-client-byo-hadoop 
hbase-shaded/hbase-shaded-mapreduce hbase-external-blockcache hbase-hbtop 
hbase-assembly hbase-shaded/hbase-shaded-testing-util 
hbase-shaded/hbase-shaded-testing-util-tester 
hbase-shaded/hbase-shaded-check-invariants 
hbase-shaded/hbase-shaded-with-hadoop-check-invariants hbase-archetypes 
hbase-archetypes/hbase-client-project 
hbase-archetypes/hbase-shaded-client-project 
hbase-archetypes/hbase-archetype-builder . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-759/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Commented] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959744#comment-16959744
 ] 

Hudson commented on HBASE-23194:


Results for branch master
[build #1516 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1516/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0
>
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[jira] [Commented] (HBASE-23203) NPE in RSGroup info

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959743#comment-16959743
 ] 

Hudson commented on HBASE-23203:


Results for branch master
[build #1516 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1516/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> NPE in RSGroup info
> ---
>
> Key: HBASE-23203
> URL: https://issues.apache.org/jira/browse/HBASE-23203
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
>
> Rsgroup.jsp calls *Admin#listTableDescriptors((Pattern)null, true)* with a 
> null Pattern, but the implementation *RawAsyncHBaseAdmin#listTableDescriptors* 
> does not allow null because of a Precondition check. Also, the suggested 
> alternative listTables(boolean) has already been removed/deprecated.
>  
> {code:java}
> HTTP ERROR 500
> Problem accessing /rsgroup.jsp. Reason:    
> Server Error
> Caused by:java.lang.NullPointerException: pattern is null. If you don't 
> specify a pattern, use listTables(boolean) instead
>  at 
> org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897)
>  at 
> org.apache.hadoop.hbase.client.RawAsyncHBaseAdmin.listTableDescriptors(RawAsyncHBaseAdmin.java:495)
>  at 
> org.apache.hadoop.hbase.client.AdminOverAsyncAdmin.listTableDescriptors(AdminOverAsyncAdmin.java:137)
>  at 
> org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:390)
> {code}
>  
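The mismatch above (the JSP treats a null Pattern as "match everything" while the callee rejects null outright) can be reduced to a self-contained toy; the method names below are stand-ins, not HBase's API:

```java
import java.util.regex.Pattern;

// Minimal illustration of the precondition clash behind the HTTP 500:
// a strict callee that rejects null versus a lenient wrapper that treats
// null as match-all. Names are stand-ins, not HBase's API.
public class NullPatternDemo {
  // Strict callee, in the spirit of RawAsyncHBaseAdmin#listTableDescriptors:
  // a null pattern is a programming error.
  static boolean matchesStrict(Pattern p, String name) {
    if (p == null) {
      throw new NullPointerException("pattern is null; use the no-pattern overload instead");
    }
    return p.matcher(name).matches();
  }

  // Lenient wrapper, in the spirit of what rsgroup.jsp effectively expects:
  // null means "list everything".
  static boolean matchesLenient(Pattern p, String name) {
    return p == null || p.matcher(name).matches();
  }

  public static void main(String[] args) {
    System.out.println(matchesLenient(null, "mytable")); // true
    try {
      matchesStrict(null, "mytable");
    } catch (NullPointerException npe) {
      System.out.println("NPE, as in the stack trace above");
    }
  }
}
```

The fix is to agree on one contract at the call boundary: either the caller never passes null, or the callee documents and accepts null as match-all.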





[jira] [Commented] (HBASE-23012) Release 1.3.6

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959742#comment-16959742
 ] 

Hudson commented on HBASE-23012:


Results for branch master
[build #1516 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1516/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1516//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Release 1.3.6
> -
>
> Key: HBASE-23012
> URL: https://issues.apache.org/jira/browse/HBASE-23012
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.6
>
>






[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959654#comment-16959654
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #159 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/159/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/159//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/159//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/159//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/159//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public. We need to use the 
> RSGroupAdminClient Java API to manage region server groups, so 
> RSGroupAdminClient should be public.
>  





[jira] [Commented] (HBASE-23149) hbase shouldPerformMajorCompaction logic is not correct

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959650#comment-16959650
 ] 

Hudson commented on HBASE-23149:


Results for branch branch-1
[build #1117 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1117/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1117//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1117//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1117//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> hbase shouldPerformMajorCompaction logic is not correct
> ---
>
> Key: HBASE-23149
> URL: https://issues.apache.org/jira/browse/HBASE-23149
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.4.9
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
> Fix For: 1.4.12, 1.3.7, 1.5.1
>
>
> From the regionserver log below we can see that the major compaction is 
> reported as skipped, yet the region is still compacted. Reading the code 
> shows the logic is not correct; the problem line is marked below.
>  
> {code:java}
> public boolean shouldPerformMajorCompaction(final Collection<StoreFile> filesToCompact)
>     throws IOException {
>   if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
>     if (filesToCompact.size() == 1) {
>       if (sf.isMajorCompaction() && (cfTTL == Long.MAX_VALUE || oldest < cfTTL)) {
>         float blockLocalityIndex = sf.getHDFSBlockDistribution().getBlockLocalityIndex(
>             RSRpcServices.getHostname(comConf.conf, false));
>         if (blockLocalityIndex < comConf.getMinLocalityToForceCompact()) {
>           result = true;
>         } else {
>           LOG.debug("Skipping major compaction of " + regionInfo + " because one (major)"
>               + " compacted file only, oldestTime " + oldest + "ms is < TTL=" + cfTTL
>               + " and blockLocalityIndex is " + blockLocalityIndex + " (min "
>               + comConf.getMinLocalityToForceCompact() + ")");
>         }
>       } else if (cfTTL != HConstants.FOREVER && oldest > cfTTL) {
>         LOG.debug("Major compaction triggered on store " + regionInfo + ", because keyvalues"
>             + " outdated; time since last major compaction " + (now - lowTimestamp) + "ms");
>         result = true;
>       }
>     } else {
>       LOG.debug("Major compaction triggered on store " + regionInfo + "; time since last"
>           + " major compaction " + (now - lowTimestamp) + "ms");
>     }
>     result = true;  // here it is not right, it should be moved into the branches above
>   }
>   return result;
> }
> {code}
>  
> 2019-09-27 09:09:35,960 DEBUG [st129,16020,1568236573216_ChoreService_1] 
> compactions.RatioBasedCompactionPolicy: Skipping major compaction of 
> 100E_POINT_point_2ddata_z3_geom_GpsTime_v6,\x17,1568215725799.413a563092544e8df480fd601b2de71b.
>  because one (major) compacted file only, oldestTime 3758085589ms is < 
> TTL=9223372036854775807 and blockLocalityIndex is 1.0 (min 0.0)
>  2019-09-27 09:09:35,961 DEBUG [st129,16020,1568236573216_ChoreService_1] 
> compactions.SortedCompactionPolicy: Selecting compaction from 1 store files, 
> 0 compacting, 1 eligible, 100 blocking
>  2019-09-27 09:09:35,961 DEBUG [st129,16020,1568236573216_ChoreService_1] 
> regionserver.HStore: 413a563092544e8df480fd601b2de71b - d: Initiating major 
> compaction (all files)
>  2019-09-27 09:09:35,961 DEBUG [st129,16020,1568236573216_ChoreService_1] 
> regionserver.CompactSplitThread: Large Compaction requested: 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext@4b5582f1;
>  Because: CompactionChecker requests major compaction; use default priority; 
> compaction_queue=(1:0), split_queue=0, merge_queue=0
>  2019-09-27 09:09:35,961 INFO 
> [regionserver/st129/10.3.72.129:16020-longCompactions-1568236575579] 
> regionserver.HRegion: Starting compaction on d in region 
> 100E_POINT_point_2ddata_z3_geom_GpsTime_v6,\x17,1568215725799.413a563092544e8df480fd601b2de71b.
>  2019-09-27 09:09:35,961 INFO 
> [regionserver/st129/10.3.72.129:16020-longCompactions-1568236575579] 
> regionserver.HStore: Starting compaction of 1 file(s) in d of 
> 100E_POINT_point_2ddata_z3_geom_GpsTime_v6,\x17,1568215725799.413a563092544e8df480fd601b2de71b.
>  into 
> tmpdir=hdfs://st129:8020/hbase/data/default/100E_POINT_point_2ddata_z3_geom_GpsTime_v6/413a563092544e8df480fd601b2de71b/.tmp,
>  totalSize=5.1 G
>  2019-09-27 09:09:35,961 DEBUG 
> 
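The misplaced `result = true` described above can be reduced to a self-contained toy that contrasts the buggy and fixed control flow (the booleans stand in for the age and skip conditions, not HBase's real predicates):

```java
// Toy reduction of the misplaced 'result = true' in
// shouldPerformMajorCompaction: in the buggy shape the flag is set whenever
// the age check passes, even after the "skipping" branch was taken.
public class CompactionDecisionDemo {

  // Buggy shape: 'result = true' sits after the if/else, inside the age check,
  // so it overrides the skip decision logged just above it.
  static boolean buggy(boolean oldEnough, boolean shouldSkip) {
    boolean result = false;
    if (oldEnough) {
      if (shouldSkip) {
        // "Skipping major compaction ..." would be logged here
      } else {
        // "Major compaction triggered ..." would be logged here
      }
      result = true; // misplaced: runs on the skip path too
    }
    return result;
  }

  // Fixed shape: the flag is only set on the non-skip path.
  static boolean fixed(boolean oldEnough, boolean shouldSkip) {
    boolean result = false;
    if (oldEnough && !shouldSkip) {
      result = true; // "Major compaction triggered ..."
    }
    return result;
  }

  public static void main(String[] args) {
    // Matches the log excerpt: "Skipping major compaction" is printed, yet the
    // buggy version still decides to compact.
    System.out.println(buggy(true, true)); // true
    System.out.println(fixed(true, true)); // false
  }
}
```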

[GitHub] [hbase] chenxu14 commented on issue #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
chenxu14 commented on issue #757: HBASE-23196 The IndexChunkPool’s percentage 
is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#issuecomment-546305368
 
 
   Let me fix TestCompactingToCellFlatMapMemStore; the other failed UTs seem 
unrelated.




[jira] [Updated] (HBASE-23218) Increase the flexibility of replication walentry filter interface

2019-10-25 Thread zhang peng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhang peng updated HBASE-23218:
---
Description: 
The current WALEntrySinkFilter interface can only filter on *table* and 
*writetime*, which is not flexible enough. A filter often needs access to all 
the attributes of an entry for a comprehensive judgment. Therefore, I have 
submitted a PR.

 

pr : [https://github.com/apache/hbase/pull/760]

  was:The current WALEntrySinkFilter interface can only filter *table* 
and *writetime*, which is not flexible enough. It often needs to get all the 
attributes of entry for comprehensive judgment and filtering. Therefore, I 
submit a pr.


> Increase the flexibility of replication walentry filter interface
> -
>
> Key: HBASE-23218
> URL: https://issues.apache.org/jira/browse/HBASE-23218
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.2.0
>Reporter: zhang peng
>Priority: Major
>
> The current WALEntrySinkFilter interface can only filter on *table* and 
> *writetime*, which is not flexible enough. A filter often needs access to 
> all the attributes of an entry for a comprehensive judgment. Therefore, I 
> have submitted a PR.
>  
> pr : [https://github.com/apache/hbase/pull/760]





[GitHub] [hbase] OpenOpened opened a new pull request #760: Increase the flexibility of replication walentry filter interface

2019-10-25 Thread GitBox
OpenOpened opened a new pull request #760: Increase the flexibility of 
replication walentry filter interface
URL: https://github.com/apache/hbase/pull/760
 
 
   jira : https://jira.apache.org/jira/browse/HBASE-23218
   
   The current WALEntrySinkFilter interface can only filter on **table** 
and **writetime**, which is not flexible enough. A filter often needs access to 
all the attributes of an entry for a comprehensive judgment.
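The flexibility being requested can be sketched as exposing the whole entry to a predicate instead of only two fields. `Entry` below is a stand-in record, not HBase's WAL.Entry:

```java
import java.util.function.Predicate;

// Sketch of the flexibility asked for above: instead of a filter that only
// sees table name and write time, let the filter see the full entry.
// Entry is an illustrative stand-in, not HBase's WAL.Entry.
public class EntryFilterDemo {
  static final class Entry {
    final String table;
    final long writeTime;
    final int attributeCount; // stand-in for "all the attributes of entry"
    Entry(String table, long writeTime, int attributeCount) {
      this.table = table;
      this.writeTime = writeTime;
      this.attributeCount = attributeCount;
    }
  }

  // Old style: only table + write time are visible to the filter.
  static boolean filterByTableAndTime(Entry e, String table, long minTime) {
    return e.table.equals(table) && e.writeTime >= minTime;
  }

  // Proposed style: the filter is an arbitrary predicate over the full entry,
  // so callers can combine any attributes in one comprehensive judgment.
  static boolean filter(Entry e, Predicate<Entry> p) {
    return p.test(e);
  }

  public static void main(String[] args) {
    Entry e = new Entry("t1", 100L, 3);
    System.out.println(filterByTableAndTime(e, "t1", 50L));   // true
    System.out.println(filter(e, x -> x.attributeCount > 5)); // false
  }
}
```

A predicate-shaped interface also composes: `p1.and(p2)` expresses "table matches AND attribute present" without changing the sink's plumbing.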




[GitHub] [hbase] chenxu14 commented on a change in pull request #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
chenxu14 commented on a change in pull request #757: HBASE-23196 The 
IndexChunkPool’s percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#discussion_r338992643
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -1611,9 +1611,13 @@ protected void initializeMemStoreChunkCreator() {
   float initialCountPercentage = 
conf.getFloat(MemStoreLAB.CHUNK_POOL_INITIALSIZE_KEY,
   MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT);
   int chunkSize = conf.getInt(MemStoreLAB.CHUNK_SIZE_KEY, 
MemStoreLAB.CHUNK_SIZE_DEFAULT);
+  float indexChunkPercentage = 
conf.getFloat(MemStoreLAB.INDEX_CHUNK_PERCENTAGE_KEY,
+  MemStoreLAB.INDEX_CHUNK_PERCENTAGE_DEFAULT);
 
 Review comment:
   OK, I have documented this on MemStoreLAB#INDEX_CHUNK_PERCENTAGE_KEY
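Lifting a hard-coded split like the 0.1 index-chunk share into configuration can be sketched as follows. The key string, the clamping behavior, and the simple map-backed config are illustrative assumptions, not HBase's actual constants or Configuration class:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of making a hard-coded pool percentage configurable, in the spirit
// of the change above. The key name and clamping are assumptions.
public class ChunkSplitDemo {
  static final String INDEX_CHUNK_PERCENTAGE_KEY =
      "hbase.memstore.index.chunk.percentage"; // assumed key name
  static final float INDEX_CHUNK_PERCENTAGE_DEFAULT = 0.1f;

  // Reads the percentage from a simple key/value config, clamping to [0, 1]
  // so a bad value cannot starve either the index or the data pool.
  static float indexChunkPercentage(Map<String, String> conf) {
    String v = conf.get(INDEX_CHUNK_PERCENTAGE_KEY);
    float pct = (v == null) ? INDEX_CHUNK_PERCENTAGE_DEFAULT : Float.parseFloat(v);
    return Math.max(0f, Math.min(1f, pct));
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    System.out.println(indexChunkPercentage(conf)); // 0.1 (default)
    conf.put(INDEX_CHUNK_PERCENTAGE_KEY, "0.25");
    System.out.println(indexChunkPercentage(conf)); // 0.25
  }
}
```

Documenting valid ranges on the key constant, as the review asked, is what makes the clamping behavior discoverable to operators.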




[jira] [Created] (HBASE-23218) Increase the flexibility of replication walentry filter interface

2019-10-25 Thread zhang peng (Jira)
zhang peng created HBASE-23218:
--

 Summary: Increase the flexibility of replication walentry filter 
interface
 Key: HBASE-23218
 URL: https://issues.apache.org/jira/browse/HBASE-23218
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 2.2.0
Reporter: zhang peng


The current WALEntrySinkFilter interface can only filter on *table* and 
*writetime*, which is not flexible enough. One often needs access to all the 
attributes of an entry for comprehensive judgment and filtering. Therefore, I 
have submitted a PR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] chenxu14 commented on a change in pull request #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
chenxu14 commented on a change in pull request #757: HBASE-23196 The 
IndexChunkPool’s percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#discussion_r338978498
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ChunkCreator.java
 ##
 @@ -79,35 +81,33 @@
   static ChunkCreator instance;
   @VisibleForTesting
   static boolean chunkPoolDisabled = false;
-  private MemStoreChunkPool dataChunksPool;
-  private int chunkSize;
-  private MemStoreChunkPool indexChunksPool;
+  private Optional<MemStoreChunkPool> indexChunksPool;
+  private Optional<MemStoreChunkPool> dataChunksPool;
+  private final int chunkSize;
+  private final int indexChunkSize;
 
   @VisibleForTesting
   ChunkCreator(int chunkSize, boolean offheap, long globalMemStoreSize, float 
poolSizePercentage,
-   float initialCountPercentage, HeapMemoryManager 
heapMemoryManager,
-   float indexChunkSizePercentage) {
+  float initialCountPercentage, float indexChunkPercentage, int 
indexChunkSize,
+  HeapMemoryManager heapMemoryManager) {
 this.offheap = offheap;
-this.chunkSize = chunkSize; // in case pools are not allocated
-initializePools(chunkSize, globalMemStoreSize, poolSizePercentage, 
indexChunkSizePercentage,
-initialCountPercentage, heapMemoryManager);
+this.chunkSize = chunkSize;
+this.indexChunkSize = indexChunkSize;
+initializePools(globalMemStoreSize, poolSizePercentage, 
indexChunkPercentage,
+initialCountPercentage, heapMemoryManager);
   }
 
   @VisibleForTesting
-  private void initializePools(int chunkSize, long globalMemStoreSize,
-   float poolSizePercentage, float 
indexChunkSizePercentage,
-   float initialCountPercentage,
-   HeapMemoryManager heapMemoryManager) {
+  private void initializePools(long globalMemStoreSize, float 
poolSizePercentage,
+  float indexChunkPercentage, float initialCountPercentage,
+  HeapMemoryManager heapMemoryManager) {
 this.dataChunksPool = initializePool("data", globalMemStoreSize,
-(1 - indexChunkSizePercentage) * poolSizePercentage,
-initialCountPercentage, chunkSize, heapMemoryManager);
+(1 - indexChunkPercentage) * poolSizePercentage,
+initialCountPercentage, chunkSize, heapMemoryManager);
 // The index chunks pool is needed only when the index type is CCM.
 
 Review comment:
   > If MSLAB is enabled and if we use CompactingMemstore I think for now the 
code always creates CCM type
   
   There seems to be no way to enable ARRAY_MAP when CMS is enabled; the code is not ideal here.
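The capacity split performed in initializePools above can be sketched as follows; names mirror the diff, but this is an illustrative model, not the actual ChunkCreator code:

```java
// Sketch of the capacity split in initializePools: the configured pool
// fraction (poolSizePercentage) is divided between the data pool and the
// index pool according to indexChunkPercentage.
public class PoolSplit {
    static long dataPoolLimit(long globalMemStoreSize, float poolSizePercentage,
                              float indexChunkPercentage) {
        // Data pool gets the remainder after the index fraction is carved out.
        return (long) (globalMemStoreSize * (1 - indexChunkPercentage) * poolSizePercentage);
    }

    static long indexPoolLimit(long globalMemStoreSize, float poolSizePercentage,
                               float indexChunkPercentage) {
        return (long) (globalMemStoreSize * indexChunkPercentage * poolSizePercentage);
    }

    public static void main(String[] args) {
        long global = 1L << 30; // 1 GiB global memstore size, illustrative
        // HBASE-23196 makes the previously hard-coded 0.1 index fraction
        // configurable via MemStoreLAB.INDEX_CHUNK_PERCENTAGE_KEY.
        System.out.println(dataPoolLimit(global, 0.5f, 0.25f));
        System.out.println(indexPoolLimit(global, 0.5f, 0.25f));
    }
}
```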




[GitHub] [hbase] Apache-HBase commented on issue #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
Apache-HBase commented on issue #757: HBASE-23196 The IndexChunkPool’s 
percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#issuecomment-546289736
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 38s |  master passed  |
   | :green_heart: |  compile  |   0m 54s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 19s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 43s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 11s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   4m 57s |  the patch passed  |
   | :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 18s |  hbase-server: The patch 
generated 0 new + 64 unchanged - 2 fixed = 64 total (was 66)  |
   | :broken_heart: |  whitespace  |   0m  0s |  The patch has 1 line(s) that 
end in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 44s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 281m 39s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 339m 37s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.regionserver.TestCompactingToCellFlatMapMemStore |
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/757 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux df27552a7a66 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-757/out/precommit/personality/provided.sh
 |
   | git revision | master / 8f92a14cd1 |
   | Default Java | 1.8.0_181 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/1/testReport/
 |
   | Max. process+thread count | 4807 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-757/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #758: HBASE-23216 Add 2.2.2 to download page

2019-10-25 Thread GitBox
Apache-HBase commented on issue #758: HBASE-23216 Add 2.2.2 to download page
URL: https://github.com/apache/hbase/pull/758#issuecomment-546276406
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m  2s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 51s |  master passed  |
   | :green_heart: |  mvnsite  |  20m 31s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 33s |  the patch passed  |
   | :green_heart: |  mvnsite  |  20m 14s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  
|
   ||| _ Other Tests _ |
   | :green_heart: |  asflicense  |   0m 18s |  The patch does not generate ASF 
License warnings.  |
   |  |   |  55m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-758/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/758 |
   | Optional Tests | dupname asflicense mvnsite xml |
   | uname | Linux 256f4bca29cf 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-758/out/precommit/personality/provided.sh
 |
   | git revision | master / d7b90b3199 |
   | Max. process+thread count | 86 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-758/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache9 opened a new pull request #759: HBASE-23217 Set version as 2.2.3-SNAPSHOT in branch-2.2

2019-10-25 Thread GitBox
Apache9 opened a new pull request #759: HBASE-23217 Set version as 
2.2.3-SNAPSHOT in branch-2.2
URL: https://github.com/apache/hbase/pull/759
 
 
   




[jira] [Commented] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-25 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959574#comment-16959574
 ] 

Peter Somogyi commented on HBASE-23194:
---

Thanks [~vjasani]!

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0
>
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[jira] [Updated] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-25 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-23194:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Merged #737 to master.

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0
>
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[jira] [Updated] (HBASE-23217) Set version as 2.2.3-SNAPSHOT in branch-2.2

2019-10-25 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23217:
--
Fix Version/s: 2.2.3

> Set version as 2.2.3-SNAPSHOT in branch-2.2
> ---
>
> Key: HBASE-23217
> URL: https://issues.apache.org/jira/browse/HBASE-23217
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 2.2.3
>
>






[jira] [Commented] (HBASE-23055) Alter hbase:meta

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959564#comment-16959564
 ] 

Hudson commented on HBASE-23055:


Results for branch HBASE-23055
[build #25 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/25/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/25//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/25//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/25//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Alter hbase:meta
> 
>
> Key: HBASE-23055
> URL: https://issues.apache.org/jira/browse/HBASE-23055
> Project: HBase
>  Issue Type: Task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0
>
>
> hbase:meta is currently hardcoded. Its schema cannot be changed.
> This issue is about allowing edits to the hbase:meta schema. It will allow us 
> to set encodings such as block-with-indexes, which will help quell CPU usage 
> on the host carrying hbase:meta. A dynamic hbase:meta is the first step on the 
> road to being able to split meta.





[jira] [Updated] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-25 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-23194:
-
Release Note: 
Removed methods from TokenUtil:

1. public static CompletableFuture<Token<AuthenticationTokenIdentifier>> obtainToken(AsyncConnection conn)
2. public static Token<AuthenticationTokenIdentifier> obtainToken(Connection conn)
3. public static AuthenticationProtos.Token toToken(Token<AuthenticationTokenIdentifier> token)
4. public static Token<AuthenticationTokenIdentifier> obtainToken(Connection conn, User user)
5. public static Token<AuthenticationTokenIdentifier> toToken(AuthenticationProtos.Token proto)
6. private static Text getClusterId(Token<AuthenticationTokenIdentifier> token)

Deprecated:
1. public static void obtainAndCacheToken(Connection conn, User user)

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0
>
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[GitHub] [hbase] Apache-HBase commented on issue #755: HBASE-23210 Backport HBASE-15519 (Add per-user metrics) to branch-1

2019-10-25 Thread GitBox
Apache-HBase commented on issue #755: HBASE-23210 Backport HBASE-15519 (Add 
per-user metrics) to branch-1
URL: https://github.com/apache/hbase/pull/755#issuecomment-546256129
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   2m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 
new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | :blue_heart: |  mvndep  |   1m 23s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   7m 57s |  branch-1 passed  |
   | :green_heart: |  compile  |   1m 32s |  branch-1 passed with JDK 
v1.8.0_232  |
   | :green_heart: |  compile  |   1m 39s |  branch-1 passed with JDK 
v1.7.0_242  |
   | :green_heart: |  checkstyle  |   2m 25s |  branch-1 passed  |
   | :green_heart: |  shadedjars  |   4m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 15s |  branch-1 passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javadoc  |   1m 31s |  branch-1 passed with JDK 
v1.7.0_242  |
   | :blue_heart: |  spotbugs  |   3m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 42s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   2m 34s |  the patch passed  |
   | :green_heart: |  compile  |   1m 31s |  the patch passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javac  |   1m 31s |  the patch passed  |
   | :green_heart: |  compile  |   1m 42s |  the patch passed with JDK 
v1.7.0_242  |
   | :green_heart: |  javac  |   1m 42s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   2m  7s |  hbase-server: The patch 
generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   3m 49s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |   6m 20s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | :green_heart: |  javadoc  |   1m  9s |  the patch passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javadoc  |   1m 30s |  the patch passed with JDK 
v1.7.0_242  |
   | :green_heart: |  findbugs  |   5m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   0m 31s |  hbase-hadoop-compat in the patch 
passed.  |
   | :green_heart: |  unit  |   0m 41s |  hbase-hadoop2-compat in the patch 
passed.  |
   | :green_heart: |  unit  | 144m 16s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 57s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 207m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-755/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/755 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 103ee95c33b3 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-755/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 41f6713 |
   | Default Java | 1.7.0_242 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_232 
/usr/lib/jvm/zulu-7-amd64:1.7.0_242 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-755/3/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-755/3/testReport/
 |
   | Max. process+thread count | 4510 (vs. ulimit of 1) |
   | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-755/3/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-25 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959544#comment-16959544
 ] 

Peter Somogyi commented on HBASE-23194:
---

[~vjasani], can you write a release note here listing the removed and 
deprecated methods?

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0
>
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[GitHub] [hbase] petersomogyi merged pull request #737: HBASE-23194 : Remove unused methods from TokenUtil

2019-10-25 Thread GitBox
petersomogyi merged pull request #737: HBASE-23194 : Remove unused methods from 
TokenUtil
URL: https://github.com/apache/hbase/pull/737
 
 
   




[GitHub] [hbase] Apache9 opened a new pull request #758: HBASE-23216 Add 2.2.2 to download page

2019-10-25 Thread GitBox
Apache9 opened a new pull request #758: HBASE-23216 Add 2.2.2 to download page
URL: https://github.com/apache/hbase/pull/758
 
 
   




[GitHub] [hbase] Apache-HBase commented on issue #746: HBASE-23195 FSDataInputStreamWrapper unbuffer can NOT invoke the clas…

2019-10-25 Thread GitBox
Apache-HBase commented on issue #746: HBASE-23195 FSDataInputStreamWrapper 
unbuffer can NOT invoke the clas…
URL: https://github.com/apache/hbase/pull/746#issuecomment-546252489
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 41s |  master passed  |
   | :green_heart: |  compile  |   0m 58s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 19s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 46s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m  7s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 10s |  the patch passed  |
   | :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 16s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 51s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m 42s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 170m 45s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 33s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 231m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-746/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/746 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 1225152996cb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-746/out/precommit/personality/provided.sh
 |
   | git revision | master / 8f92a14cd1 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-746/6/testReport/
 |
   | Max. process+thread count | 4427 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-746/6/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Updated] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-10-25 Thread Anoop Sam John (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-22460:
---
Release Note: 
Leaked store files cannot be removed even after they are invalidated via 
compaction. A reasonable mitigation for a reader reference leak would be a fast 
reopen of the region on the same server.

Configs:

1. hbase.master.regions.recovery.check.interval :

Regions Recovery Chore interval in milliseconds. This chore keeps running at 
this interval to find all regions with configurable max store file ref count 
and reopens them. Defaults to 20 mins

2. hbase.regions.recovery.store.file.ref.count :

This config represents Store files Ref Count threshold value considered for 
reopening regions. Any region with store files ref count > this value would be 
eligible for reopening by master. Default value -1 indicates this feature is 
turned off. Only positive integer value should be provided to enable this 
feature. 
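
The two properties described above would be set in hbase-site.xml; a sketch, with an illustrative threshold value:

```xml
<!-- Illustrative hbase-site.xml fragment for the configs described above. -->
<property>
  <name>hbase.master.regions.recovery.check.interval</name>
  <!-- Chore interval in milliseconds; 20 minutes, the stated default. -->
  <value>1200000</value>
</property>
<property>
  <name>hbase.regions.recovery.store.file.ref.count</name>
  <!-- Reopen any region whose store file ref count exceeds this value;
       -1 (the default) disables the feature. 256 is illustrative only. -->
  <value>256</value>
</property>
```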

  was:
Leaked store files cannot be removed even after they are invalidated via 
compaction. A reasonable mitigation for a reader reference leak would be a fast 
reopen of the region on the same server.

Configs:

1. hbase.master.regions.recovery.check.interval :

Regions Recovery Chore interval in milliseconds. This chore keeps running at 
this interval to find all regions with configurable max store file ref count 
and reopens them.

2. hbase.regions.recovery.store.file.ref.count :

This config represents Store files Ref Count threshold value considered for 
reopening regions. Any region with store files ref count > this value would be 
eligible for reopening by master. Default value -1 indicates this feature is 
turned off. Only positive integer value should be provided to enable this 
feature. 


> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 





[jira] [Updated] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-10-25 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22460:
-
Release Note: 
Leaked store files cannot be removed even after they are invalidated via 
compaction. A reasonable mitigation for a reader reference leak would be a fast 
reopen of the region on the same server.

Configs:

1. hbase.master.regions.recovery.check.interval :

Regions Recovery Chore interval in milliseconds. This chore keeps running at 
this interval to find all regions with configurable max store file ref count 
and reopens them.

2. hbase.regions.recovery.store.file.ref.count :

This config represents Store files Ref Count threshold value considered for 
reopening regions. Any region with store files ref count > this value would be 
eligible for reopening by master. Default value -1 indicates this feature is 
turned off. Only positive integer value should be provided to enable this 
feature. 
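
A minimal hbase-site.xml fragment wiring up the two properties above; the interval and threshold values here are illustrative only:

```xml
<!-- Check every 20 minutes; reopen any region whose store file
     ref count exceeds 256. -->
<property>
  <name>hbase.master.regions.recovery.check.interval</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.regions.recovery.store.file.ref.count</name>
  <value>256</value>
</property>
```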

  was:
Leaked store files can not be removed even after it is invalidated via 
compaction. A reasonable mitigation for a reader reference leak would be a fast 
reopen of the region on the same server.

Configs:

1. hbase.master.regions.recovery.check.interval :

Regions Recovery Chore interval in milliseconds. This chore keeps running at 
this interval to find all regions with configurable max store file ref count 
and reopens them.

2. hbase.regions.recovery.store.file.ref.count :

This config represents Store files Ref Count threshold value considered for 
reopening regions. Any region with store files ref count > this value would be 
eligible for reopening by master. Default value -1 indicates this feature is 
turned off. Only positive integer value should be provided to enable this 
feature.


  Resolution: Fixed
  Status: Resolved  (was: Patch Available)



[GitHub] [hbase] ramkrish86 commented on a change in pull request #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
ramkrish86 commented on a change in pull request #757: HBASE-23196 The 
IndexChunkPool’s percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#discussion_r338918229
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ChunkCreator.java
 ##
 @@ -79,35 +81,33 @@
   static ChunkCreator instance;
   @VisibleForTesting
   static boolean chunkPoolDisabled = false;
-  private MemStoreChunkPool dataChunksPool;
-  private int chunkSize;
-  private MemStoreChunkPool indexChunksPool;
+  private Optional<MemStoreChunkPool> indexChunksPool;
+  private Optional<MemStoreChunkPool> dataChunksPool;
+  private final int chunkSize;
+  private final int indexChunkSize;
 
   @VisibleForTesting
   ChunkCreator(int chunkSize, boolean offheap, long globalMemStoreSize, float 
poolSizePercentage,
-   float initialCountPercentage, HeapMemoryManager 
heapMemoryManager,
-   float indexChunkSizePercentage) {
+  float initialCountPercentage, float indexChunkPercentage, int 
indexChunkSize,
+  HeapMemoryManager heapMemoryManager) {
 this.offheap = offheap;
-this.chunkSize = chunkSize; // in case pools are not allocated
-initializePools(chunkSize, globalMemStoreSize, poolSizePercentage, 
indexChunkSizePercentage,
-initialCountPercentage, heapMemoryManager);
+this.chunkSize = chunkSize;
+this.indexChunkSize = indexChunkSize;
+initializePools(globalMemStoreSize, poolSizePercentage, 
indexChunkPercentage,
+initialCountPercentage, heapMemoryManager);
   }
 
   @VisibleForTesting
-  private void initializePools(int chunkSize, long globalMemStoreSize,
-   float poolSizePercentage, float 
indexChunkSizePercentage,
-   float initialCountPercentage,
-   HeapMemoryManager heapMemoryManager) {
+  private void initializePools(long globalMemStoreSize, float 
poolSizePercentage,
+  float indexChunkPercentage, float initialCountPercentage,
+  HeapMemoryManager heapMemoryManager) {
 this.dataChunksPool = initializePool("data", globalMemStoreSize,
-(1 - indexChunkSizePercentage) * poolSizePercentage,
-initialCountPercentage, chunkSize, heapMemoryManager);
+(1 - indexChunkPercentage) * poolSizePercentage,
+initialCountPercentage, chunkSize, heapMemoryManager);
 // The index chunks pool is needed only when the index type is CCM.
 
 Review comment:
   If MSLAB is enabled and we use CompactingMemstore, I think for now the 
code always creates the CCM type. Do we have a way to enable ARRAY_TYPE when 
CMS is enabled? For the default memstore, yes - we should make it 0 for the 
index chunk pool (if it is still getting initialized).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ramkrish86 commented on a change in pull request #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
ramkrish86 commented on a change in pull request #757: HBASE-23196 The 
IndexChunkPool’s percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#discussion_r338917839
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ChunkCreator.java
 ##
 @@ -193,34 +197,19 @@ Chunk getChunk(CompactingMemStore.IndexType 
chunkIndexType, ChunkType chunkType)
* @param size the size of the chunk to be allocated, in bytes
*/
   Chunk getChunk(CompactingMemStore.IndexType chunkIndexType, int size) {
-Chunk chunk = null;
-MemStoreChunkPool pool = null;
-
-// if the size is suitable for one of the pools
-if (dataChunksPool != null && size == dataChunksPool.getChunkSize()) {
-  pool = dataChunksPool;
-} else if (indexChunksPool != null && size == 
indexChunksPool.getChunkSize()) {
-  pool = indexChunksPool;
-}
-
-// if we have a pool
-if (pool != null) {
-  //  the pool creates the chunk internally. The chunk#init() call happens 
here
-  chunk = pool.getChunk();
-  // the pool has run out of maxCount
-  if (chunk == null) {
-if (LOG.isTraceEnabled()) {
-  LOG.trace("The chunk pool is full. Reached maxCount= " + 
pool.getMaxCount()
-  + ". Creating chunk onheap.");
+Optional<MemStoreChunkPool> pool = 
+size == this.indexChunkSize ? indexChunksPool : dataChunksPool;
+Chunk chunk = pool.map(MemStoreChunkPool::getChunk).orElseGet(
+  new Supplier<Chunk>() {
 
 Review comment:
   Good. 
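
The Optional-based selection in the diff above can be illustrated with a standalone sketch. The `Pool` and `Chunk` types here are simplified stand-ins for `MemStoreChunkPool` and `Chunk`, assumed only for illustration:

```java
import java.util.Optional;

public class PoolSelection {
  // Simplified stand-in for HBase's Chunk: size plus a pooled flag.
  record Chunk(int size, boolean pooled) {}

  // Simplified stand-in for MemStoreChunkPool.
  static class Pool {
    final int chunkSize;
    Pool(int chunkSize) { this.chunkSize = chunkSize; }
    Chunk getChunk() { return new Chunk(chunkSize, true); }
  }

  static final Optional<Pool> INDEX_POOL = Optional.of(new Pool(4096));
  static final Optional<Pool> DATA_POOL = Optional.of(new Pool(2 * 1024 * 1024));

  // map() yields the pooled chunk when a pool exists (and its getChunk()
  // returned non-null); orElseGet() allocates outside the pool otherwise —
  // the same shape as the refactored getChunk() in the diff.
  static Chunk getChunk(Optional<Pool> pool, int size) {
    return pool.map(Pool::getChunk).orElseGet(() -> new Chunk(size, false));
  }

  static Chunk getChunk(int size) {
    Optional<Pool> pool = size == 4096 ? INDEX_POOL : DATA_POOL;
    return getChunk(pool, size);
  }

  public static void main(String[] args) {
    System.out.println(getChunk(4096).pooled()); // true
    System.out.println(getChunk(Optional.empty(), 64).pooled()); // false
  }
}
```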


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ramkrish86 commented on a change in pull request #757: HBASE-23196 The IndexChunkPool’s percentage is hard code to 0.1

2019-10-25 Thread GitBox
ramkrish86 commented on a change in pull request #757: HBASE-23196 The 
IndexChunkPool’s percentage is hard code to 0.1
URL: https://github.com/apache/hbase/pull/757#discussion_r338917668
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -1611,9 +1611,13 @@ protected void initializeMemStoreChunkCreator() {
   float initialCountPercentage = 
conf.getFloat(MemStoreLAB.CHUNK_POOL_INITIALSIZE_KEY,
   MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT);
   int chunkSize = conf.getInt(MemStoreLAB.CHUNK_SIZE_KEY, 
MemStoreLAB.CHUNK_SIZE_DEFAULT);
+  float indexChunkPercentage = 
conf.getFloat(MemStoreLAB.INDEX_CHUNK_PERCENTAGE_KEY,
+  MemStoreLAB.INDEX_CHUNK_PERCENTAGE_DEFAULT);
 
 Review comment:
   So now if ARRAY_MAP is used, the user has to make the INDEX_CHUNK+PERCENTAGE 
as 0 instead of using the defualt 0.1? May be we should document this then.
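
The split implied by the `(1 - indexChunkPercentage) * poolSizePercentage` expression in the diff can be shown with plain arithmetic (illustrative code only, not HBase's actual classes):

```java
public class PoolSplit {
  // Bytes the data chunk pool gets out of the global memstore size.
  static long dataPoolBytes(long globalMemStoreSize, float poolSizePercentage,
      float indexChunkPercentage) {
    return (long) (globalMemStoreSize
        * (1 - indexChunkPercentage) * poolSizePercentage);
  }

  // Bytes the index chunk pool gets out of the global memstore size.
  static long indexPoolBytes(long globalMemStoreSize, float poolSizePercentage,
      float indexChunkPercentage) {
    return (long) (globalMemStoreSize
        * indexChunkPercentage * poolSizePercentage);
  }

  public static void main(String[] args) {
    long global = 1024L * 1024 * 1024; // 1 GiB global memstore
    // With the default indexChunkPercentage of 0.1, the index pool
    // gets 10% of the pool budget and the data pool gets 90%.
    System.out.println(dataPoolBytes(global, 1.0f, 0.1f));
    // Setting indexChunkPercentage to 0, as suggested for ARRAY_MAP,
    // gives the index pool nothing.
    System.out.println(indexPoolBytes(global, 1.0f, 0.0f)); // 0
  }
}
```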


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #753: HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush…

2019-10-25 Thread GitBox
Apache-HBase commented on issue #753: HBASE-23181 Blocked WAL archive: 
"LogRoller: Failed to schedule flush…
URL: https://github.com/apache/hbase/pull/753#issuecomment-546221515
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 56s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 24 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   1m 22s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   6m 56s |  master passed  |
   | :green_heart: |  compile  |   2m 10s |  master passed  |
   | :green_heart: |  checkstyle  |   2m 23s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 16s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 45s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   6m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   5m 40s |  the patch passed  |
   | :green_heart: |  compile  |   1m 49s |  the patch passed  |
   | :green_heart: |  javac  |   1m 49s |  the patch passed  |
   | :green_heart: |  checkstyle  |   0m 28s |  The patch passed checkstyle in 
hbase-common  |
   | :green_heart: |  checkstyle  |   1m 34s |  hbase-server: The patch 
generated 0 new + 372 unchanged - 18 fixed = 372 total (was 390)  |
   | :green_heart: |  checkstyle  |   0m 20s |  The patch passed checkstyle in 
hbase-mapreduce  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m  5s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m 58s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   1m 13s |  the patch passed  |
   | :green_heart: |  findbugs  |   6m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   3m  1s |  hbase-common in the patch passed.  |
   | :broken_heart: |  unit  | 243m 18s |  hbase-server in the patch failed.  |
   | :green_heart: |  unit  |  23m  0s |  hbase-mapreduce in the patch passed.  
|
   | :green_heart: |  asflicense  |   1m 20s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 347m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-753/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/753 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux d28011c92316 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-753/out/precommit/personality/provided.sh
 |
   | git revision | master / 8f92a14cd1 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-753/3/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-753/3/testReport/
 |
   | Max. process+thread count | 5135 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server hbase-mapreduce U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-753/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services