[jira] [Updated] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2017-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17338:
---
Attachment: HBASE-17338_V5.patch

Addressing review comments and fixing test case failures

> Treat Cell data size under global memstore heap size only when that Cell can 
> not be copied to MSLAB
> ---
>
> Key: HBASE-17338
> URL: https://issues.apache.org/jira/browse/HBASE-17338
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17338.patch, HBASE-17338_V2.patch, 
> HBASE-17338_V2.patch, HBASE-17338_V4.patch, HBASE-17338_V5.patch
>
>
> We only track data size and heap overhead globally.  An off-heap memstore 
> works with an off-heap backed MSLAB pool.  But a cell, when added to the 
> memstore, is not always copied to the MSLAB: Append/Increment ops doing an 
> upsert don't use MSLAB, and depending on the cell size we sometimes avoid the 
> MSLAB copy.  Today we track such cells' data size under the global memstore 
> data size, which indicates the off-heap size in the case of an off-heap 
> memstore.  For the global flush checks (against the lower/upper watermark 
> levels), we compare this size against the max off-heap memstore size.  We do 
> check heap overhead against the global heap memstore size (defaults to 40% of 
> -Xmx), but for such cells the data size should also be accounted under the 
> heap overhead.
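A minimal sketch of the proposed accounting, with hypothetical names (illustrative only, not the attached patch, which works against the real global memstore counters):
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a cell that is NOT copied into the (off-heap) MSLAB keeps
// its data on heap, so its data size belongs under the global heap size rather
// than under the off-heap data size.
class GlobalMemStoreAccounting {
  final AtomicLong offHeapDataSize = new AtomicLong(); // checked against off-heap max
  final AtomicLong heapSize = new AtomicLong();        // checked against ~40% of -Xmx

  void onCellAdded(long cellDataSize, long cellHeapOverhead, boolean copiedToMSLAB) {
    heapSize.addAndGet(cellHeapOverhead);        // per-cell overhead is always on heap
    if (copiedToMSLAB) {
      offHeapDataSize.addAndGet(cellDataSize);   // data lives in the off-heap chunk
    } else {
      heapSize.addAndGet(cellDataSize);          // upsert / oversized cell: data stays on heap
    }
  }
}
{code}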



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17688) MultiRowRangeFilter not working correctly if given same start and stop RowKey

2017-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885240#comment-15885240
 ] 

Anoop Sam John commented on HBASE-17688:


The same should happen if any Filter is present on the Scan (even a 
MultiRowRangeFilter with the same start/stop keys).

> MultiRowRangeFilter not working correctly if given same start and stop RowKey
> -
>
> Key: HBASE-17688
> URL: https://issues.apache.org/jira/browse/HBASE-17688
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24, 1.1.8
>Reporter: Ravi Ahuj
>Assignee: Jingcheng Du
>Priority: Minor
> Attachments: HBASE-17688.master.patch
>
>
>   
>   
> try (final Connection conn = ConnectionFactory.createConnection(conf);
>      final Table scanTable = conn.getTable(table)) {
>   ArrayList<MultiRowRangeFilter.RowRange> rowRangesList = new ArrayList<>();
>
>   String startRowkey = "abc";
>   String stopRowkey = "abc";
>   rowRangesList.add(new MultiRowRangeFilter.RowRange(startRowkey, true, stopRowkey, true));
>
>   Scan scan = new Scan();
>   scan.setFilter(new MultiRowRangeFilter(rowRangesList));
>
>   ResultScanner scanner = scanTable.getScanner(scan);
>   for (Result result : scanner) {
>     String rowkey = new String(result.getRow());
>     System.out.println(rowkey);
>   }
> }
>   
> Using the HBase Java API, we want to do multiple scans of the table using 
> MultiRowRangeFilter.
> When we give multiple ranges of startRowKey and stopRowKey, it does not work 
> properly when the startRowKey and stopRowKey are the same.
> Ideally, it should return only the one row with that row key, but instead it 
> returns all rows starting from that row key in the table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2017-02-26 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885239#comment-15885239
 ] 

Vladimir Rodionov commented on HBASE-14123:
---

{quote}
That is right
{quote}

With close to 300 review board comments over the past 5 months, with numerous 
small and large refactorings of the code due to these "requests", and with 
multiple changes to the command line tool format, the feature has not become 
more mature and robust, but less functional. As a result of these code reviews 
we have stripped out some functions, such as automatic Master/RS 
configuration, a rich client API, extended security, and a robust execution 
pipeline, due to moving everything to the client side. Last summer we prepared 
a very good user guide, which included a configuration section:
https://issues.apache.org/jira/secure/attachment/12829269/Backup-and-Restore-Apache_19Sep2016.pdf

After numerous rounds of reviews it is now completely outdated and obsolete. 
What other upcoming 2.0 feature has had such a good release doc long before 
the release? 

We can't keep the doc side up to date with a constant flow of code changes. 

I am wondering how many more review rounds will be required to get this feature 
finally committed. Can we set a limit on the maximum number of review rounds? 
Can we get a final list of comments/requests? Those are pretty simple 
questions/requests from our side of the fence.

Otherwise this is becoming more and more like a never-ending story.





> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: HBASE-7912
>
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v20.txt, 14123-master.v21.txt, 
> 14123-master.v24.txt, 14123-master.v25.txt, 14123-master.v27.txt, 
> 14123-master.v28.txt, 14123-master.v29.full.txt, 14123-master.v2.txt, 
> 14123-master.v30.txt, 14123-master.v31.txt, 14123-master.v32.txt, 
> 14123-master.v33.txt, 14123-master.v34.txt, 14123-master.v35.txt, 
> 14123-master.v36.txt, 14123-master.v37.txt, 14123-master.v38.txt, 
> 14123.master.v39.patch, 14123-master.v3.txt, 14123.master.v40.patch, 
> 14123.master.v41.patch, 14123.master.v42.patch, 14123.master.v44.patch, 
> 14123.master.v45.patch, 14123.master.v46.patch, 14123.master.v48.patch, 
> 14123.master.v49.patch, 14123.master.v50.patch, 14123.master.v51.patch, 
> 14123.master.v52.patch, 14123.master.v54.patch, 14123.master.v56.patch, 
> 14123.master.v57.patch, 14123.master.v58.patch, 14123-master.v5.txt, 
> 14123-master.v6.txt, 14123-master.v7.txt, 14123-master.v8.txt, 
> 14123-master.v9.txt, 14123-v14.txt, Backup-restoreinHBase2.0 (1).pdf, 
> Backup-restoreinHBase2.0.pdf, HBASE-14123-for-7912-v1.patch, 
> HBASE-14123-for-7912-v6.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, 
> HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, 
> HBASE-14123-v16.patch, HBASE-14123-v1.patch, HBASE-14123-v2.patch, 
> HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, 
> HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17688) MultiRowRangeFilter not working correctly if given same start and stop RowKey

2017-02-26 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885237#comment-15885237
 ] 

Jingcheng Du commented on HBASE-17688:
--

bq. If we make a Scan request (with no filters) with the same start and end 
keys, what will be the behavior?
If the start and stop keys are equal and {{includeStopRow}} is true, it is a 
get.
If {{includeStopRow}} is false, nothing returns.
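A minimal sketch of that behavior with the 2.0 client {{Scan}} API, where {{withStartRow}}/{{withStopRow}} take an inclusive flag (assuming {{org.apache.hadoop.hbase.util.Bytes}} is imported):
{code:java}
// Equal start/stop keys with the stop row included behave like a Get.
Scan asGet = new Scan()
    .withStartRow(Bytes.toBytes("abc"), true)   // inclusive start
    .withStopRow(Bytes.toBytes("abc"), true);   // includeStopRow == true -> single row

// Same keys but includeStopRow == false: the range is empty, nothing returns.
Scan empty = new Scan()
    .withStartRow(Bytes.toBytes("abc"), true)
    .withStopRow(Bytes.toBytes("abc"), false);
{code}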

> MultiRowRangeFilter not working correctly if given same start and stop RowKey
> -
>
> Key: HBASE-17688
> URL: https://issues.apache.org/jira/browse/HBASE-17688
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24, 1.1.8
>Reporter: Ravi Ahuj
>Assignee: Jingcheng Du
>Priority: Minor
> Attachments: HBASE-17688.master.patch
>
>
>   
>   
> try (final Connection conn = ConnectionFactory.createConnection(conf);
>      final Table scanTable = conn.getTable(table)) {
>   ArrayList<MultiRowRangeFilter.RowRange> rowRangesList = new ArrayList<>();
>
>   String startRowkey = "abc";
>   String stopRowkey = "abc";
>   rowRangesList.add(new MultiRowRangeFilter.RowRange(startRowkey, true, stopRowkey, true));
>
>   Scan scan = new Scan();
>   scan.setFilter(new MultiRowRangeFilter(rowRangesList));
>
>   ResultScanner scanner = scanTable.getScanner(scan);
>   for (Result result : scanner) {
>     String rowkey = new String(result.getRow());
>     System.out.println(rowkey);
>   }
> }
>   
> Using the HBase Java API, we want to do multiple scans of the table using 
> MultiRowRangeFilter.
> When we give multiple ranges of startRowKey and stopRowKey, it does not work 
> properly when the startRowKey and stopRowKey are the same.
> Ideally, it should return only the one row with that row key, but instead it 
> returns all rows starting from that row key in the table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885224#comment-15885224
 ] 

stack commented on HBASE-17662:
---

+1 (after addressing [~anoop.hbase]'s comment).

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch, HBASE-17662-V06.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because single-threaded operation is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore relies on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates come in as part of replay from 
> WAL.
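A rough sketch of such a guard (illustrative only; the {{inWalReplay}} name matches the flag discussed in the patch, the surrounding logic is hypothetical and simplified):
{code:java}
// Inside CompactingMemStore (sketch): a plain volatile boolean is enough
// because replay is single threaded; it is flipped around the replay phase.
private volatile boolean inWalReplay = false;

void startReplayingFromWAL() { inWalReplay = true; }
void stopReplayingFromWAL()  { inWalReplay = false; }

private boolean shouldFlushInMemory() {
  if (inWalReplay) {
    return false; // updateLock is not held during replay, so skip the in-memory flush
  }
  return keySize() > inmemoryFlushSize; // normal path (simplified)
}
{code}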



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error

2017-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885204#comment-15885204
 ] 

ramkrishna.s.vasudevan commented on HBASE-17495:


I think we need to check this before making CompactingMemStore the default in 
2.0.
I am trying to loop this test, but it passes every time for me. Need to dig 
further. 
[~tedyu]
Does the default-memstore counterpart of this test in TestHRegion also fail 
intermittently? 

> TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails 
> due to assertion error
> 
>
> Key: HBASE-17495
> URL: https://issues.apache.org/jira/browse/HBASE-17495
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 17495-testHRegionWithInMemoryFlush-output-2.0123, 
> testHRegionWithInMemoryFlush-flush-output.0123, 
> TestHRegionWithInMemoryFlush-out.0222.tar.gz, 
> testHRegionWithInMemoryFlush-output.0119
>
>
> Looping through the test (based on commit 
> 76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure:
> {code}
> testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush)
>   Time elapsed: 0.53 sec  <<< FAILURE!
> java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> 
> but was:<92>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> {code}
> See test output for details.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17701) Add HadoopAuthFilterInitializer to use hadoop-auth AuthenticationFilter for hbase web ui

2017-02-26 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HBASE-17701:
---
Attachment: HBASE-17701.v1.patch

Attaching a simple patch.

> Add HadoopAuthFilterInitializer to use hadoop-auth AuthenticationFilter for 
> hbase web ui
> 
>
> Key: HBASE-17701
> URL: https://issues.apache.org/jira/browse/HBASE-17701
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 1.2.4
>Reporter: Pan Yuxuan
> Attachments: HBASE-17701.v1.patch
>
>
> The HBase web UI is not secured by default; there is only a 
> StaticUserWebFilter for a fake user.
> Hadoop already has AuthenticationFilter for web authentication based on 
> tokens or Kerberos. So I think HBase can reuse the hadoop-auth 
> AuthenticationFilter by adding a HadoopAuthFilterInitializer.
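A hypothetical sketch of what such an initializer might look like (this is not the attached patch; it assumes Hadoop's FilterInitializer/FilterContainer API, and Hadoop's own AuthenticationFilterInitializer does essentially the same thing):
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.FilterContainer;
import org.apache.hadoop.http.FilterInitializer;
import org.apache.hadoop.security.authentication.server.AuthenticationFilter;

public class HadoopAuthFilterInitializer extends FilterInitializer {
  private static final String PREFIX = "hadoop.http.authentication.";

  @Override
  public void initFilter(FilterContainer container, Configuration conf) {
    Map<String, String> params = new HashMap<>();
    // Forward hadoop.http.authentication.* (type, kerberos.principal,
    // kerberos.keytab, signature secret, ...) with the prefix stripped,
    // which is the form AuthenticationFilter expects.
    for (Map.Entry<String, String> entry : conf) {
      String name = entry.getKey();
      if (name.startsWith(PREFIX)) {
        params.put(name.substring(PREFIX.length()), conf.get(name));
      }
    }
    container.addFilter("authentication", AuthenticationFilter.class.getName(), params);
  }
}
{code}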



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17701) Add HadoopAuthFilterInitializer to use hadoop-auth AuthenticationFilter for hbase web ui

2017-02-26 Thread Pan Yuxuan (JIRA)
Pan Yuxuan created HBASE-17701:
--

 Summary: Add HadoopAuthFilterInitializer to use hadoop-auth 
AuthenticationFilter for hbase web ui
 Key: HBASE-17701
 URL: https://issues.apache.org/jira/browse/HBASE-17701
 Project: HBase
  Issue Type: Improvement
  Components: UI
Affects Versions: 1.2.4
Reporter: Pan Yuxuan


The HBase web UI is not secured by default; there is only a 
StaticUserWebFilter for a fake user.
Hadoop already has AuthenticationFilter for web authentication based on 
tokens or Kerberos. So I think HBase can reuse the hadoop-auth 
AuthenticationFilter by adding a HadoopAuthFilterInitializer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885177#comment-15885177
 ] 

Hadoop QA commented on HBASE-17689:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 9s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 46s {color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.thrift.TestThriftServerCmdLine |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854815/HBASE-17689.patch |
| JIRA Issue | HBASE-17689 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a9e906d68d1a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4d90425 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5848/artifact/patchprocess/patch-unit-hbase-thrift.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5848/artifact/patchprocess/patch-unit-hbase-thrift.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5848/testReport/ |
| modules | C: hbase-thrift U: hbase-thrift |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5848/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
>   

[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2017-02-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885176#comment-15885176
 ] 

stack commented on HBASE-14123:
---

bq. You probably were not following me. We have a separate JIRA for 
documentation, which is in progress, where we will have a section on 
configuration and a complete user guide.

That's right. I am not following you. I asked for links or a release note 
because I'm trying to test, and I have to dig around to figure out why it's 
not working with only cryptic output to go on; and even now, it is up to me to 
go figure out where the info is.

You seem to have a disregard for the user/operator's poor experience. If there 
is no effort to redress this, I'm -1.

Please help smooth this review process. Please ensure the user/operator is 
clear on where they stand.

I reviewed your summary of what will be included in 2.0 but got no response; 
are you going to put up a new version?

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: HBASE-7912
>
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v20.txt, 14123-master.v21.txt, 
> 14123-master.v24.txt, 14123-master.v25.txt, 14123-master.v27.txt, 
> 14123-master.v28.txt, 14123-master.v29.full.txt, 14123-master.v2.txt, 
> 14123-master.v30.txt, 14123-master.v31.txt, 14123-master.v32.txt, 
> 14123-master.v33.txt, 14123-master.v34.txt, 14123-master.v35.txt, 
> 14123-master.v36.txt, 14123-master.v37.txt, 14123-master.v38.txt, 
> 14123.master.v39.patch, 14123-master.v3.txt, 14123.master.v40.patch, 
> 14123.master.v41.patch, 14123.master.v42.patch, 14123.master.v44.patch, 
> 14123.master.v45.patch, 14123.master.v46.patch, 14123.master.v48.patch, 
> 14123.master.v49.patch, 14123.master.v50.patch, 14123.master.v51.patch, 
> 14123.master.v52.patch, 14123.master.v54.patch, 14123.master.v56.patch, 
> 14123.master.v57.patch, 14123.master.v58.patch, 14123-master.v5.txt, 
> 14123-master.v6.txt, 14123-master.v7.txt, 14123-master.v8.txt, 
> 14123-master.v9.txt, 14123-v14.txt, Backup-restoreinHBase2.0 (1).pdf, 
> Backup-restoreinHBase2.0.pdf, HBASE-14123-for-7912-v1.patch, 
> HBASE-14123-for-7912-v6.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, 
> HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, 
> HBASE-14123-v16.patch, HBASE-14123-v1.patch, HBASE-14123-v2.patch, 
> HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, 
> HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17690) Clean up MOB code

2017-02-26 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885174#comment-15885174
 ] 

Jingcheng Du commented on HBASE-17690:
--

bq. Is the above just an estimate (not all store files have the same size)?
You are right, Ted. It is just an estimate; the same logic is used in 
Compactor#performCompaction.

> Clean up MOB code
> -
>
> Key: HBASE-17690
> URL: https://issues.apache.org/jira/browse/HBASE-17690
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17690.patch
>
>
> Clean up the code in MOB.
> # Fix incorrect descriptions in comments.
> # Fix warnings and remove redundant imports in the code.
> # Remove references to deprecated code.
> # Add a throughput controller for DefaultMobStoreFlusher and 
> DefaultMobStoreCompactor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885166#comment-15885166
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-17623 at 2/27/17 5:38 AM:
-

bq. no compaction
You mean there is no compaction happening at all now? Only flushes, is it? 
You could also try with compactions: for example, use the PerformanceEvaluation 
tool with 50/100 threads and the default config; a single node is also fine. I 
think the above report is good. I will check the patch once too. Just to know 
the impact when there is more write load.
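For reference, the kind of run meant above might look like this (assuming the standard PerformanceEvaluation entry point):
{code}
# Write-heavy load, 50 client threads, no MapReduce, default configs:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomWrite 50
{code}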


was (Author: ram_krish):
bq. no compaction
You mean there is no compaction happening at all now? Only flushes, is it? 
You could also try with compactions: for example, use the PerformanceEvaluation 
tool with 50 threads and the default config; a single node is also fine. I 
think the above report is good. I will check the patch once too.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # onDiskBlockBytesWithHeader should maintain a byte array that can be 
> reused when building the hfile.
> # onDiskBlockBytesWithHeader is copied to a new byte array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
>     this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, userDataStream,
>         baosInMemory.getBuffer(), blockType);
>     blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
>     onDiskBlockBytesWithHeader =
>         dataBlockEncodingCtx.compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
>     onDiskBlockBytesWithHeader =
>         defaultBlockEncodingCtx.compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the block.
>   int numBytes = (int) ChecksumUtil.numBytes(onDiskBlockBytesWithHeader.length,
>       fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is unfilled-out.
>   putHeader(onDiskBlockBytesWithHeader, 0,
>       onDiskBlockBytesWithHeader.length + numBytes,
>       uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- IFF different from
>   // the onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
>     putHeader(uncompressedBlockBytesWithHeader, 0,
>         onDiskBlockBytesWithHeader.length + numBytes,
>         uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
>     onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>       onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>       onDiskChecksum, 0, fileContext.getChecksumType(), fileContext.getBytesPerChecksum());
> }{code}
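The reuse idea in isolation, as a minimal sketch with illustrative names (not the patch itself): instead of {{baosInMemory.toByteArray()}} allocating a fresh array for every block, the writer keeps one growable buffer and overwrites it per block.
{code:java}
// Sketch: a per-writer buffer that is grown on demand and reused across blocks,
// so finishBlock() stops allocating a new byte[] for every block it builds.
class ReusableByteArray {
  private byte[] buf = new byte[0];
  private int len;

  /** Returns a backing array of at least {@code required} bytes, reusing it when possible. */
  byte[] reset(int required) {
    if (buf.length < required) {
      buf = new byte[required]; // grow only when the block is bigger than before
    }
    len = required;
    return buf;
  }

  int length() { return len; }
}
{code}
A copy into a fresh array is then only needed on the cache-on-write path, which matches item 2 above.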



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17690) Clean up MOB code

2017-02-26 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885168#comment-15885168
 ] 

Jingcheng Du commented on HBASE-17690:
--

Thanks [~yuzhih...@gmail.com] for the comments!
RB is currently down; I will upload the patch there as soon as it comes back. 
Thanks!

> Clean up MOB code
> -
>
> Key: HBASE-17690
> URL: https://issues.apache.org/jira/browse/HBASE-17690
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17690.patch
>
>
> Clean up the code in MOB.
> # Fix incorrect descriptions in comments.
> # Fix warnings and remove redundant imports in the code.
> # Remove references to deprecated code.
> # Add a throughput controller for DefaultMobStoreFlusher and 
> DefaultMobStoreCompactor.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885166#comment-15885166
 ] 

ramkrishna.s.vasudevan commented on HBASE-17623:


bq. no compaction
You mean there is no compaction happening at all now? Only flushes, is it? 
You could also try with compactions: for example, use the PerformanceEvaluation 
tool with 50 threads and the default config; a single node is also fine. I 
think the above report is good. I will check the patch once too.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # onDiskBlockBytesWithHeader should maintain a byte array that can be 
> reused when building the hfile.
> # onDiskBlockBytesWithHeader is copied to a new byte array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
>     this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, userDataStream,
>         baosInMemory.getBuffer(), blockType);
>     blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
>     onDiskBlockBytesWithHeader =
>         dataBlockEncodingCtx.compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
>     onDiskBlockBytesWithHeader =
>         defaultBlockEncodingCtx.compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the block.
>   int numBytes = (int) ChecksumUtil.numBytes(onDiskBlockBytesWithHeader.length,
>       fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is unfilled-out.
>   putHeader(onDiskBlockBytesWithHeader, 0,
>       onDiskBlockBytesWithHeader.length + numBytes,
>       uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- IFF different from
>   // the onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
>     putHeader(uncompressedBlockBytesWithHeader, 0,
>         onDiskBlockBytesWithHeader.length + numBytes,
>         uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
>     onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>       onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>       onDiskChecksum, 0, fileContext.getChecksumType(), fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17428) Expand on shell commands for detailed insight

2017-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885150#comment-15885150
 ] 

Hadoop QA commented on HBASE-17428:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
0s {color} | {color:green} HBASE-16961 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} HBASE-16961 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 14m 
13s {color} | {color:green} HBASE-16961 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} HBASE-16961 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s 
{color} | {color:red} hbase-protocol-shaded in HBASE-16961 has 24 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} HBASE-16961 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 14m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 14s 
{color} | {color:red} The patch causes 306 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 24s 
{color} | {color:red} The patch causes 306 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 28s 
{color} | {color:red} The patch causes 306 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 33s 
{color} | {color:red} The patch causes 306 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 38s 
{color} | {color:red} The patch causes 306 errors with Hadoop v2.5.2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 39s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HBASE-15302) Reenable the other tests disabled by HBASE-14678

2017-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885148#comment-15885148
 ] 

Hadoop QA commented on HBASE-15302:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 0s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 13s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 9s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | hadoop.hbase.regionserver.TestEncryptionKeyRotation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:66fbe99 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834142/HBASE-15302-branch-1.3-append-v1.patch
 |
| JIRA Issue | HBASE-15302 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux def8b2fa8314 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-17534) SecureBulkLoadClient squashes DoNotRetryIOExceptions from the server

2017-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885146#comment-15885146
 ] 

Hadoop QA commented on HBASE-17534:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
51s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 45s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 83m 44s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
33s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 126m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f |
| JIRA 

[jira] [Updated] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-26 Thread Yechao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-17689:

Attachment: HBASE-17689.patch

Updated the patch: removed the wildcard import and regenerated the diff 
against master.

> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
> URL: https://issues.apache.org/jira/browse/HBASE-17689
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Yechao Chen
>  Labels: thrift2
> Attachments: HBASE-17689.patch
>
>
> hbase thrift2 should support existsAll(List<Get> gets) throws IOException;
> hbase.thrift adds a method to service THBaseService like this:
> list<bool> existsAll(
>   1: required binary table,
>   2: required list<TGet> tgets
> ) throws (1:TIOError io)
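A rough sketch of the handler side (hypothetical, not the attached patch; it assumes the usual thrift2 {{ThriftUtilities}} converters and the handler's {{getTable}}/{{closeTable}}/{{getTIOError}} helpers, plus {{java.util}}, {{java.nio.ByteBuffer}}, and {{org.apache.hadoop.hbase.client.Table}} imports):
{code:java}
public List<Boolean> existsAll(ByteBuffer table, List<TGet> gets) throws TIOError, TException {
  Table htable = getTable(table);
  try {
    // Table#existsAll returns boolean[] in the same order as the gets.
    boolean[] exists = htable.existsAll(ThriftUtilities.getsFromThrift(gets));
    List<Boolean> result = new ArrayList<>(exists.length);
    for (boolean b : exists) {
      result.add(b);
    }
    return result;
  } catch (IOException e) {
    throw getTIOError(e);
  } finally {
    closeTable(htable);
  }
}
{code}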



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll

2017-02-26 Thread Yechao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-17689:

Attachment: (was: HBASE-17689.patch)

> hbase thrift2 THBaseservice support table.existsAll
> ---
>
> Key: HBASE-17689
> URL: https://issues.apache.org/jira/browse/HBASE-17689
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Yechao Chen
>  Labels: thrift2
>
> hbase thrift2 should support existsAll(List<Get> gets) throws IOException;
> hbase.thrift adds a method to service THBaseService like this:
> list<bool> existsAll(
>   1: required binary table,
>   2: required list<TGet> tgets
> ) throws (1:TIOError io)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17688) MultiRowRangeFilter not working correctly if given same start and stop RowKey

2017-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885125#comment-15885125
 ] 

Anoop Sam John commented on HBASE-17688:


If we make a Scan request (with no filters) with the same start and end keys, 
what will be the behavior?

> MultiRowRangeFilter not working correctly if given same start and stop RowKey
> -
>
> Key: HBASE-17688
> URL: https://issues.apache.org/jira/browse/HBASE-17688
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24, 1.1.8
>Reporter: Ravi Ahuj
>Assignee: Jingcheng Du
>Priority: Minor
> Attachments: HBASE-17688.master.patch
>
>
>   
>   
> try (final Connection conn = ConnectionFactory.createConnection(conf);
>      final Table scanTable = conn.getTable(table)) {
>   ArrayList<MultiRowRangeFilter.RowRange> rowRangesList = new ArrayList<>();
>
>   String startRowkey = "abc";
>   String stopRowkey = "abc";
>   rowRangesList.add(new MultiRowRangeFilter.RowRange(startRowkey, true, stopRowkey, true));
>
>   Scan scan = new Scan();
>   scan.setFilter(new MultiRowRangeFilter(rowRangesList));
>
>   ResultScanner scanner = scanTable.getScanner(scan);
>   for (Result result : scanner) {
>     String rowkey = new String(result.getRow());
>     System.out.println(rowkey);
>   }
> }
>   
> Using the HBase Java API, we want to do multiple scans of the table using 
> MultiRowRangeFilter.
> When we give multiple ranges of startRowKey and stopRowKey, it does not work 
> properly when the startRowKey and stopRowKey are the same.
> Ideally, it should return only the one row with that row key, but instead it 
> returns all rows starting from that row key in the table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885118#comment-15885118
 ] 

Anoop Sam John commented on HBASE-17662:


Sorry for the delay.
{code}
-  public static final long DEEP_OVERHEAD = AbstractMemStore.DEEP_OVERHEAD
-      + 6 * ClassSize.REFERENCE      // Store, RegionServicesForStores, CompactionPipeline,
-                                     // MemStoreCompactor, inMemoryFlushInProgress, allowCompaction
-      + Bytes.SIZEOF_LONG            // inmemoryFlushSize
+
+  public static final long DEEP_OVERHEAD = ClassSize.align(AbstractMemStore.DEEP_OVERHEAD
+      + 4 * ClassSize.REFERENCE      // Store, RegionServicesForStores, CompactionPipeline,
+                                     // MemStoreCompactor
+      + Bytes.SIZEOF_LONG            // inmemoryFlushSize
+      + 2 * Bytes.SIZEOF_BOOLEAN     // compositeSnapshot and inWalReplay
       + 2 * ClassSize.ATOMIC_BOOLEAN // inMemoryFlushInProgress and allowCompaction
-      + CompactionPipeline.DEEP_OVERHEAD + MemStoreCompactor.DEEP_OVERHEAD;
+      + CompactionPipeline.DEEP_OVERHEAD + MemStoreCompactor.DEEP_OVERHEAD);
{code}
Why is there a change from 6 REFERENCEs to 4? inMemoryFlushInProgress and 
allowCompaction are AtomicBoolean fields, which means they carry reference 
overhead too. So 6 is the correct value. Do we need any change in tests after 
changing it back to 6 (the existing code)? Please check and correct.
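In other words, a sketch of the corrected constant (keeping the new boolean fields from the patch, with the reference count restored to 6):
{code:java}
public static final long DEEP_OVERHEAD = ClassSize.align(AbstractMemStore.DEEP_OVERHEAD
    + 6 * ClassSize.REFERENCE      // Store, RegionServicesForStores, CompactionPipeline,
                                   // MemStoreCompactor, inMemoryFlushInProgress, allowCompaction
    + Bytes.SIZEOF_LONG            // inmemoryFlushSize
    + 2 * Bytes.SIZEOF_BOOLEAN     // compositeSnapshot and inWalReplay
    + 2 * ClassSize.ATOMIC_BOOLEAN // deep size of the two AtomicBoolean instances
    + CompactionPipeline.DEEP_OVERHEAD + MemStoreCompactor.DEEP_OVERHEAD);
{code}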


> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch, HBASE-17662-V06.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because single-threaded operation is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore relies on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates come in as part of replay from 
> WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17699) Fix TestLockProcedure

2017-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15885097#comment-15885097
 ] 

Hudson commented on HBASE-17699:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2578 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2578/])
HBASE-17699 Fix TestLockProcedure. (appy: rev 
4d90425031df35a9d0efe860020190239c587cd3)
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/AbstractProcedureScheduler.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/SimpleProcedureScheduler.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureScheduler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java


> Fix TestLockProcedure
> -
>
> Key: HBASE-17699
> URL: https://issues.apache.org/jira/browse/HBASE-17699
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Appy
>Assignee: Appy
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17699.master.001.patch, 
> HBASE-17699.master.002.patch
>
>
> TestLockProcedure is failing consistently after HBASE-17605. It's interesting 
> that HadoopQA didn't report any test failures on that jira. Anyways, need to 
> fix the test now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (HBASE-17700) Release 1.2.5

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-17700 started by Sean Busbey.
---
> Release 1.2.5
> -
>
> Key: HBASE-17700
> URL: https://issues.apache.org/jira/browse/HBASE-17700
> Project: HBase
>  Issue Type: Task
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.5
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17700) Release 1.2.5

2017-02-26 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-17700:
---

 Summary: Release 1.2.5
 Key: HBASE-17700
 URL: https://issues.apache.org/jira/browse/HBASE-17700
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 1.2.5






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13603) Write test asserting desired priority of RS->Master RPCs

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13603:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Write test asserting desired priority of RS->Master RPCs
> 
>
> Key: HBASE-13603
> URL: https://issues.apache.org/jira/browse/HBASE-13603
> Project: HBase
>  Issue Type: Test
>  Components: IPC/RPC, test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.3.1, 1.1.7, 1.2.6
>
>
> From HBASE-13351:
> {quote}
> Any way we can write a FT test to assert that the RS->Master APIs are treated 
> with higher priority. I see your UT for asserting the annotation.
> {quote}
> Write a test that verifies expected RPCs are run on the correct pools in as 
> real an environment as possible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15580) Tag coprocessor limitedprivate scope to StoreFile.Reader

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15580:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Tag coprocessor limitedprivate scope to StoreFile.Reader
> 
>
> Key: HBASE-15580
> URL: https://issues.apache.org/jira/browse/HBASE-15580
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0, 1.0.4, 1.1.7, 1.2.6
>
> Attachments: HBASE-15580_branch-1.0.patch, HBASE-15580.patch
>
>
> For Phoenix local indexing we need a custom storefile reader 
> constructor (IndexHalfStoreFileReader) to distinguish it from other storefile 
> readers, so we want to mark the StoreFile.Reader scope as 
> InterfaceAudience.LimitedPrivate("Coprocessor").
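>
> A hedged sketch of the proposed tagging (surrounding class heavily simplified; not the actual source):
> {code:title=StoreFile.Reader scope (sketch)|borderStyle=solid}
> import org.apache.hadoop.hbase.classification.InterfaceAudience;
>
> public class StoreFile {
>   @InterfaceAudience.LimitedPrivate("Coprocessor")
>   public static class Reader {
>     // reader internals unchanged; only the audience annotation is widened so
>     // coprocessor code such as IndexHalfStoreFileReader may subclass it
>   }
> }
> {code}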



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14391) Empty regionserver WAL will never be deleted although the coresponding regionserver has been stale

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14391:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Empty regionserver WAL will never be deleted although the coresponding 
> regionserver has been stale
> --
>
> Key: HBASE-14391
> URL: https://issues.apache.org/jira/browse/HBASE-14391
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.2
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
> Fix For: 2.0.0, 1.3.1, 1.1.7, 1.2.6
>
> Attachments: HBASE-14391-master-v3.patch, 
> HBASE_14391_master_v4.patch, HBASE_14391_trunk_v1.patch, 
> HBASE_14391_trunk_v2.patch, WALs-leftover-dir.txt
>
>
> When I restarted the HBase cluster, in which there was little data, I found there 
> are two directories for one host with different timestamps, which indicates 
> that the old regionserver WAL directory is not deleted.
> FSHLog#989
> {code}
>  @Override
>   public void close() throws IOException {
> shutdown();
> final FileStatus[] files = getFiles();
> if (null != files && 0 != files.length) {
>   for (FileStatus file : files) {
> Path p = getWALArchivePath(this.fullPathArchiveDir, file.getPath());
> // Tell our listeners that a log is going to be archived.
> if (!this.listeners.isEmpty()) {
>   for (WALActionsListener i : this.listeners) {
> i.preLogArchive(file.getPath(), p);
>   }
> }
> if (!FSUtils.renameAndSetModifyTime(fs, file.getPath(), p)) {
>   throw new IOException("Unable to rename " + file.getPath() + " to " 
> + p);
> }
> // Tell our listeners that a log was archived.
> if (!this.listeners.isEmpty()) {
>   for (WALActionsListener i : this.listeners) {
> i.postLogArchive(file.getPath(), p);
>   }
> }
>   }
>   LOG.debug("Moved " + files.length + " WAL file(s) to " +
> FSUtils.getPath(this.fullPathArchiveDir));
> }
> LOG.info("Closed WAL: " + toString());
>   }
> {code}
> When a regionserver is stopped, the hlog will be archived, so wal/regionserver 
> is empty in HDFS.
> MasterFileSystem#252
> {code}
> if (curLogFiles == null || curLogFiles.length == 0) {
> // Empty log folder. No recovery needed
> continue;
>   }
> {code}
> The regionserver directory will not be split, which makes sense. But it will 
> also not be deleted.
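>
> A hedged sketch of the missing cleanup this report implies (hypothetical helper, not the committed fix):
> {code:title=EmptyWalDirCleanup.java (sketch)|borderStyle=solid}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> final class EmptyWalDirCleanup {
>   /** Deletes a stale regionserver WAL dir once no logs are left to split. */
>   static void deleteIfEmpty(FileSystem fs, Path rsWalDir) throws IOException {
>     FileStatus[] logs = fs.listStatus(rsWalDir);
>     if (logs == null || logs.length == 0) {
>       fs.delete(rsWalDir, false); // non-recursive: only removes an empty dir
>     }
>   }
> }
> {code}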



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14223:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.0.4, 1.3.1, 1.1.7, 1.2.6
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch, 
> hbase-14223_v3-master.patch
>
>
> When an RS opens meta, and later closes it, the WAL (FSHLog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise if RS aborts, WAL for 
> meta is not cleaned. It is also not split (which is correct) since master 
> determines that the RS no longer hosts meta at the time of RS abort. 
> From a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper 
> as os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285
> ...
> 2015-06-05 03:15:17,302 INFO  
> [RS_CLOSE_META-os-enis-dal-test-jun-4-5:16020-0] regionserver.HRegion: Closed 
> hbase:meta,,1.1588230740
> {code}
> In between, a WAL is created: 
> {code}
> 2015-06-05 03:15:11,707 INFO  
> [RS_OPEN_META-os-enis-dal-test-jun-4-5:16020-0-MetaLogRoller] wal.FSHLog: 
> Rolled WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
>  with entries=385, filesize=196.88 KB; new WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> When CM killed the region server later, the master did not see these WAL files: 
> {code}
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:46,075 
> INFO  [MASTER_SERVER_OPERATIONS-os-enis-dal-test-jun-4-3:16000-0] 
> master.SplitLogManager: started splitting 2 logs in 
> [hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting]
>  for [os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285]
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:47,300 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
>  to 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/oldWALs/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:50,497 
> INFO  [main-EventThread] 

[jira] [Updated] (HBASE-17534) SecureBulkLoadClient squashes DoNotRetryIOExceptions from the server

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17534:

Fix Version/s: (was: 1.2.5)
   1.2.6

> SecureBulkLoadClient squashes DoNotRetryIOExceptions from the server
> 
>
> Key: HBASE-17534
> URL: https://issues.apache.org/jira/browse/HBASE-17534
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 1.4.0, 1.3.1, 1.1.10, 1.2.6
>
> Attachments: HBASE-17534.001.branch-1.patch, 
> HBASE-17534.002.branch-1.patch, HBASE-17534.003.branch-1.patch
>
>
> While writing some tests against 1.x, I noticed that what should have been a 
> DoNotRetryIOException sent to the client from a RegionServer was getting 
> retried until it reached the hbase client retries limit.
> Upon inspection, I found that the SecureBulkLoadClient was wrapping all 
> Exceptions from the RPC as an IOException. I believe this is creating a case 
> where the RPC system doesn't notice that there's a DNRIOException wrapped 
> beneath it, thinking it's a transient error.
> This results in clients having to wait for the retry limit to be reached 
> before they get acknowledgement that something failed.
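>
> A hedged sketch of the propagation behavior argued for above (hypothetical helper, not the committed patch): pass IOExceptions through unchanged so a DoNotRetryIOException keeps its type and short-circuits the retry loop.
> {code:title=Exception propagation (sketch)|borderStyle=solid}
> import java.io.IOException;
>
> final class RpcErrorPropagation {
>   static IOException toClientException(Throwable t) {
>     if (t instanceof IOException) {
>       // Do not wrap: wrapping hides subtypes such as DoNotRetryIOException
>       // and makes the client retry an unretriable failure.
>       return (IOException) t;
>     }
>     return new IOException(t);
>   }
> }
> {code}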



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16030) All Regions are flushed at about same time when MEMSTORE_PERIODIC_FLUSH is on, causing flush spike

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16030:

Fix Version/s: (was: 1.2.5)
   1.2.6

> All Regions are flushed at about same time when MEMSTORE_PERIODIC_FLUSH is 
> on, causing flush spike
> --
>
> Key: HBASE-16030
> URL: https://issues.apache.org/jira/browse/HBASE-16030
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.1
>Reporter: Tianying Chang
>Assignee: Tianying Chang
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.6
>
> Attachments: hbase-16030.patch, hbase-16030-v2.patch, 
> hbase-16030-v3.patch, Screen Shot 2016-06-15 at 11.35.42 PM.png, Screen Shot 
> 2016-06-15 at 11.52.38 PM.png
>
>
> In our production cluster, we observed that memstore flushes spike every hour 
> for all regions/RS. (We use the default memstore periodic flush time of 1 
> hour.) 
> This will happen when two conditions are met: 
> 1. the memstore does not have enough data to be flushed before the 1 hour limit 
> is reached;
> 2. all regions are opened around the same time (e.g. all RS are started at 
> the same time when starting a cluster). 
> With the above two conditions, all the regions will be flushed around the same 
> time, at startTime+1hour-delay, again and again.
> We added a flush jitter time to randomize the flush time of each region, 
> so that they don't all get flushed at around the same time. We had this feature 
> running in our 94.7 and 94.26 clusters. Recently, we upgraded to 1.2 and found 
> the issue is still there in 1.2. So we are porting this into the 1.2 branch. 
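>
> A minimal sketch of the jitter idea (class and constant names hypothetical, not the attached patch):
> {code:title=FlushJitter.java (sketch)|borderStyle=solid}
> import java.util.concurrent.ThreadLocalRandom;
>
> final class FlushJitter {
>   static final long FLUSH_INTERVAL_MS = 3_600_000L; // default periodic flush: 1 hour
>
>   /** One random offset per region, chosen when the region opens. */
>   static long newRegionJitter(long maxJitterMs) {
>     return ThreadLocalRandom.current().nextLong(maxJitterMs);
>   }
>
>   /** Regions opened at the same time now reach their deadline at different times. */
>   static boolean shouldFlush(long lastFlushMs, long nowMs, long regionJitterMs) {
>     return nowMs - lastFlushMs > FLUSH_INTERVAL_MS + regionJitterMs;
>   }
> }
> {code}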



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14610) IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14610:

Fix Version/s: (was: 1.2.5)
   1.2.6

> IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client
> --
>
> Key: HBASE-14610
> URL: https://issues.apache.org/jira/browse/HBASE-14610
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.0.4, 1.3.1, 1.1.7, 1.2.6
>
> Attachments: output
>
>
> HBASE-14535 introduces an IT to simulate a running cluster with RPC servers 
> and RPC clients doing requests against the servers. 
> It passes with the sync client, but fails with async client. Probably we need 
> to take a look. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15691:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Stephen Yuan Jiang
> Fix For: 1.4.0, 1.3.1, 1.2.6
>
> Attachments: HBASE-15691-branch-1.patch, HBASE-15691.v2-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires nontrivial 
> fixups, so I will attach a backport of the change from trunk to 
> current branch-1 here. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15302) Reenable the other tests disabled by HBASE-14678

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15302:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Reenable the other tests disabled by HBASE-14678
> 
>
> Key: HBASE-15302
> URL: https://issues.apache.org/jira/browse/HBASE-15302
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.6
>
> Attachments: HBASE-15302-branch-1.3-append-v1.patch, 
> HBASE-15302-branch-1.3-append-v1.patch, HBASE-15302-branch-1-append-v1.patch, 
> HBASE-15302-branch-1-v1.patch, HBASE-15302-v1.txt, HBASE-15302-v1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17229) Backport of purge ThreadLocals

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17229:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Backport of purge ThreadLocals
> --
>
> Key: HBASE-17229
> URL: https://issues.apache.org/jira/browse/HBASE-17229
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Critical
> Fix For: 1.3.1, 1.2.6
>
>
> Backport HBASE-17072 and HBASE-16146. The former needs to be backported to 
> 1.3 ([~mantonov]) and 1.2 ([~busbey]). The latter is already in 1.3.  Needs 
> to be backported to 1.2.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17513) Thrift Server 1 uses different QOP settings than RPC and Thrift Server 2 and can easily be misconfigured so there is no encryption when the operator expects it.

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17513:

Fix Version/s: (was: 1.2.5)
   1.2.6

> Thrift Server 1 uses different QOP settings than RPC and Thrift Server 2 and 
> can easily be misconfigured so there is no encryption when the operator 
> expects it.
> 
>
> Key: HBASE-17513
> URL: https://issues.apache.org/jira/browse/HBASE-17513
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, security, Thrift, Usability
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6
>
>
> As of HBASE-14400 the setting {{hbase.thrift.security.qop}} was unified to 
> behave the same as the general HBase RPC protection. However, this only 
> happened for the Thrift2 server. The Thrift server found in the thrift 
> package (aka Thrift Server 1) still hard codes the old configs of 'auth', 
> 'auth-int', and 'auth-conf'.
> Additionally, these Quality of Protection (qop) settings are used only by the 
> SASL transport. If a user configures the HBase Thrift Server to make use of 
> the HTTP transport (to enable doAs proxying, e.g. for Hue), then a QOP setting 
> of 'privacy' or 'auth-conf' won't get them encryption as expected.
> We should
> 1) update {{hbase-thrift/src/main/.../thrift/ThriftServerRunner}} to rely on 
> {{SaslUtil}} to use the same 'authentication', 'integrity', 'privacy' configs 
> in a backward compatible way
> 2) also have ThriftServerRunner warn when both {{hbase.thrift.security.qop}} 
> and {{hbase.regionserver.thrift.http}} are set, since the latter will cause 
> the former to be ignored. (users should be directed to 
> {{hbase.thrift.ssl.enabled}} and related configs to ensure their transport is 
> encrypted when using the HTTP transport.)
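>
> For illustration, a hedged sketch of item 2) above (not the committed patch); the configuration keys are the ones named in this description:
> {code:title=QOP sanity check (sketch)|borderStyle=solid}
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.hadoop.conf.Configuration;
>
> final class QopSanityCheck {
>   private static final Log LOG = LogFactory.getLog(QopSanityCheck.class);
>
>   static void warnIfQopIgnored(Configuration conf) {
>     String qop = conf.get("hbase.thrift.security.qop");
>     boolean http = conf.getBoolean("hbase.regionserver.thrift.http", false);
>     if (qop != null && http) {
>       LOG.warn("hbase.thrift.security.qop is ignored when the HTTP transport"
>           + " is enabled; use hbase.thrift.ssl.enabled to encrypt traffic");
>     }
>   }
> }
> {code}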



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13587) TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent is flakey

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13587:

Fix Version/s: (was: 1.2.5)
   1.2.6

> TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent is flakey
> ---
>
> Key: HBASE-13587
> URL: https://issues.apache.org/jira/browse/HBASE-13587
> Project: HBase
>  Issue Type: Test
>Reporter: Nick Dimiduk
> Fix For: 2.0.0, 1.3.1, 1.1.7, 1.2.6
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 428, 431, 432, 433.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15983) Replication improperly discards data from end-of-wal in some cases.

2017-02-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15983:

Fix Version/s: (was: 1.1.10)
   (was: 1.2.5)
   (was: 0.98.23)
   (was: 1.3.1)
   (was: 1.4.0)

> Replication improperly discards data from end-of-wal in some cases.
> ---
>
> Key: HBASE-15983
> URL: https://issues.apache.org/jira/browse/HBASE-15983
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.0, 1.0.0, 1.1.0, 1.2.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
>
> In some particular deployments, the Replication code believes it has
> reached EOF for a WAL prior to successfully parsing all bytes known to
> exist in a cleanly closed file.
> The underlying issue is that several different underlying problems with a WAL 
> reader are all treated as end-of-file by the code in ReplicationSource that 
> decides if a given WAL is completed or needs to be retried.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread CHIA-PING TSAI (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884998#comment-15884998
 ] 

CHIA-PING TSAI commented on HBASE-17623:


The experiment environment is shown below.
- G1
- 1TB data
- no split
- no compaction
- v2 patch

||statistics||before||after||after(cache-on-write)||
|elapsed(s)|8880|7145|8341|
|young GC count|2292|1394|2291|
|young total GC time(s)|183|245|372|
|old GC count|574|222|416|
|old total GC time(s)|744|270|642|
|total pause time(s)|399|279|408|

[~anoop.hbase] [~ram_krish] Would you please take a look at v2 patch? Thanks.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}
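>
> A hedged sketch of the reuse pattern proposed above (names hypothetical): keep one grow-only scratch array per writer, and copy out only when the bytes must outlive the writer, i.e. for cache-on-write.
> {code:title=Grow-only buffer (sketch)|borderStyle=solid}
> import java.util.Arrays;
>
> final class ReusableBlockBuffer {
>   private byte[] buf = new byte[0];
>   private int len;
>
>   /** Reuses the backing array across blocks; reallocates only on growth. */
>   byte[] ensureCapacity(int size) {
>     if (buf.length < size) {
>       buf = new byte[size];
>     }
>     len = size;
>     return buf;
>   }
>
>   /** Copies only when caching, so the per-block allocation disappears. */
>   byte[] copyForCache() {
>     return Arrays.copyOf(buf, len);
>   }
> }
> {code}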



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17699) Fix TestLockProcedure

2017-02-26 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17699:
-
Fix Version/s: 2.0.0

> Fix TestLockProcedure
> -
>
> Key: HBASE-17699
> URL: https://issues.apache.org/jira/browse/HBASE-17699
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Appy
>Assignee: Appy
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17699.master.001.patch, 
> HBASE-17699.master.002.patch
>
>
> TestLockProcedure is failing consistently after HBASE-17605. It's interesting 
> that HadoopQA didn't report any test failures on that jira. Anyways, need to 
> fix the test now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17699) Fix TestLockProcedure

2017-02-26 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17699:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix TestLockProcedure
> -
>
> Key: HBASE-17699
> URL: https://issues.apache.org/jira/browse/HBASE-17699
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Appy
>Assignee: Appy
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17699.master.001.patch, 
> HBASE-17699.master.002.patch
>
>
> TestLockProcedure is failing consistently after HBASE-17605. It's interesting 
> that HadoopQA didn't report any test failures on that jira. Anyways, need to 
> fix the test now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884893#comment-15884893
 ] 

Edward Bortnikov commented on HBASE-17662:
--

Folks, 

Apologies for pushing again, but please help us light a fire under this 
Jira and the others remaining in this project ... This one is the last open 
bug that prevents us from making BASIC compaction the default. Seems like 
this is a small patch; can we commit it? 

Thanks. 

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch, HBASE-17662-V06.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single threaded action is assumed. However, the thread-safeness of 
> the in-memory flush of CompactingMemStore is based on taking the region's 
> updateLock. 
> The in-memory flush can be skipped in the replay time (anyway everything is 
> flushed to disk just after the replay). Therefore it is acceptable to just 
> skip the in-memory flush action while the updates come as part of replay from 
> WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17674) Major compaction may be cancelled in CompactionChecker

2017-02-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17674:
---
   Resolution: Fixed
 Assignee: Guangxu Cheng
 Hadoop Flags: Reviewed
Fix Version/s: 2.0
   1.4.0
   Status: Resolved  (was: Patch Available)

> Major compaction may be cancelled in CompactionChecker
> --
>
> Key: HBASE-17674
> URL: https://issues.apache.org/jira/browse/HBASE-17674
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 1.4.0, 2.0
>
> Attachments: HBASE-17674-master-v1.patch
>
>
> CompactionChecker will periodically check whether a region should perform a 
> major compaction.
> If a region should perform a major compaction, a request is submitted. But 
> before the request is submitted, the variable forceMajor is not set to true 
> by calling triggerMajorCompaction.
> When filtering storefiles, a large storefile may cause the request to be 
> canceled or downgraded to a minor compaction.
> {code:title=HRegionServer.java|borderStyle=solid}
> @Override
> protected void chore() {
>   for (Region r : this.instance.onlineRegions.values()) {
> if (r == null)
>   continue;
> for (Store s : r.getStores()) {
>   try {
> long multiplier = s.getCompactionCheckMultiplier();
> assert multiplier > 0;
> if (iteration % multiplier != 0) continue;
> if (s.needsCompaction()) {
>   // Queue a compaction. Will recognize if major is needed.
>   this.instance.compactSplitThread.requestSystemCompaction(r, s, 
> getName()
>   + " requests compaction");
> } else if (s.isMajorCompaction()) {  
>   if (majorCompactPriority == DEFAULT_PRIORITY
>   || majorCompactPriority > 
> ((HRegion)r).getCompactPriority()) {
> this.instance.compactSplitThread.requestCompaction(r, s, 
> getName()
> + " requests major compaction; use default priority", 
> null);
>   } else {
> this.instance.compactSplitThread.requestCompaction(r, s, 
> getName()
> + " requests major compaction; use configured priority",
>   this.majorCompactPriority, null, null);
>   }
> }
>   } catch (IOException e) {
> LOG.warn("Failed major compaction check on " + r, e);
>   }
> }
>   }
>   iteration = (iteration == Long.MAX_VALUE) ? 0 : (iteration + 1);
> }
> {code}
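>
> A hedged sketch of the fix this report suggests (types simplified, not the committed patch): mark the store before submitting the request, so storefile filtering cannot cancel or downgrade it.
> {code:title=Force-major (sketch)|borderStyle=solid}
> import java.io.IOException;
>
> interface StoreLike {
>   boolean isMajorCompaction() throws IOException;
>   void triggerMajorCompaction(); // sets forceMajor = true
> }
>
> final class CompactionCheckerSketch {
>   static void maybeRequestMajor(StoreLike s, Runnable submitRequest)
>       throws IOException {
>     if (s.isMajorCompaction()) {
>       s.triggerMajorCompaction(); // without this, the large-file filter may
>                                   // cancel the request or turn it minor
>       submitRequest.run();
>     }
>   }
> }
> {code}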



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884823#comment-15884823
 ] 

Hadoop QA commented on HBASE-17623:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 38s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 25s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854773/HBASE-17623.v2.patch |
| JIRA Issue | HBASE-17623 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8d85083da528 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ce64e7e |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5844/testReport/ |
| modules | C: hbase-common hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5844/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Reuse the bytes array when building the hfile 

[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884797#comment-15884797
 ] 

Hadoop QA commented on HBASE-17662:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 16s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854769/HBASE-17662-V06.patch 
|
| JIRA Issue | HBASE-17662 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6d799a16c9e7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ce64e7e |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5843/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5843/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5843/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: 

[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread CHIA-PING TSAI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CHIA-PING TSAI updated HBASE-17623:
---
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread CHIA-PING TSAI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CHIA-PING TSAI updated HBASE-17623:
---
Description: 
There are three improvements.
# The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
reused when building the hfile.
# The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
need to cache the block.
# If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
never be created.

{code:title=HFileBlock.java|borderStyle=solid}
private void finishBlock() throws IOException {
  if (blockType == BlockType.DATA) {
this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
userDataStream,
baosInMemory.getBuffer(), blockType);
blockType = dataBlockEncodingCtx.getBlockType();
  }
  userDataStream.flush();
  // This does an array copy, so it is safe to cache this byte array when 
cache-on-write.
  // Header is still the empty, 'dummy' header that is yet to be filled out.
  uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
  prevOffset = prevOffsetByType[blockType.getId()];

  // We need to set state before we can package the block up for 
cache-on-write. In a way, the
  // block is ready, but not yet encoded or compressed.
  state = State.BLOCK_READY;
  if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
compressAndEncrypt(uncompressedBlockBytesWithHeader);
  } else {
onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
compressAndEncrypt(uncompressedBlockBytesWithHeader);
  }
  // Calculate how many bytes we need for checksum on the tail of the block.
  int numBytes = (int) ChecksumUtil.numBytes(
  onDiskBlockBytesWithHeader.length,
  fileContext.getBytesPerChecksum());

  // Put the header for the on disk bytes; header currently is unfilled-out
  putHeader(onDiskBlockBytesWithHeader, 0,
  onDiskBlockBytesWithHeader.length + numBytes,
  uncompressedBlockBytesWithHeader.length, 
onDiskBlockBytesWithHeader.length);
  // Set the header for the uncompressed bytes (for cache-on-write) -- IFF 
different from
  // onDiskBlockBytesWithHeader array.
  if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
putHeader(uncompressedBlockBytesWithHeader, 0,
  onDiskBlockBytesWithHeader.length + numBytes,
  uncompressedBlockBytesWithHeader.length, 
onDiskBlockBytesWithHeader.length);
  }
  if (onDiskChecksum.length != numBytes) {
onDiskChecksum = new byte[numBytes];
  }
  ChecksumUtil.generateChecksums(
  onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
  onDiskChecksum, 0, fileContext.getChecksumType(), 
fileContext.getBytesPerChecksum());
}{code}


  was:
There are two improvements.
# The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
maintain a bytes array which can be reused when building the hfile.
# The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied to 
a new bytes array only when we need to cache the block.

{code:title=HFileBlock.java|borderStyle=solid}
private void finishBlock() throws IOException {
  if (blockType == BlockType.DATA) {
this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
userDataStream,
baosInMemory.getBuffer(), blockType);
blockType = dataBlockEncodingCtx.getBlockType();
  }
  userDataStream.flush();
  // This does an array copy, so it is safe to cache this byte array when 
cache-on-write.
  // Header is still the empty, 'dummy' header that is yet to be filled out.
  uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
  prevOffset = prevOffsetByType[blockType.getId()];

  // We need to set state before we can package the block up for 
cache-on-write. In a way, the
  // block is ready, but not yet encoded or compressed.
  state = State.BLOCK_READY;
  if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
compressAndEncrypt(uncompressedBlockBytesWithHeader);
  } else {
onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
compressAndEncrypt(uncompressedBlockBytesWithHeader);
  }
  // Calculate how many bytes we need for checksum on the tail of the block.
  int numBytes = (int) ChecksumUtil.numBytes(
  onDiskBlockBytesWithHeader.length,
  fileContext.getBytesPerChecksum());

  // Put the header for the on disk bytes; header currently is unfilled-out
  putHeader(onDiskBlockBytesWithHeader, 0,
  onDiskBlockBytesWithHeader.length + numBytes,
  

[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread CHIA-PING TSAI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CHIA-PING TSAI updated HBASE-17623:
---
Attachment: HBASE-17623.v2.patch

v2 removes the uncompressedBlockBytesWithHeader member. If no block needs to be 
cached, the uncompressedBlockBytesWithHeader will never be created.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-26 Thread CHIA-PING TSAI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CHIA-PING TSAI updated HBASE-17623:
---
Status: Open  (was: Patch Available)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-02-26 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884755#comment-15884755
 ] 

Anastasia Braginsky commented on HBASE-17343:
-

[~anoop.hbase], [~stack], [~ram_krish],

Thank you for your answers. We are running the performance tests and will 
provide the results this week. Meanwhile, the results for the 100G data size 
look good.
The (single line change) patch is attached, so you can also run the performance 
tests yourself if you like.
Anyway, please note that we are talking about changing the default to BASIC, so 
key duplicates are irrelevant here.
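
For anyone who wants to try this before the default changes, here is a minimal 
sketch of enabling BASIC in-memory compaction cluster-wide via hbase-site.xml. 
The property name below is assumed from the 2.0 codebase; please verify it 
against your build:
{code:title=hbase-site.xml (sketch)|borderStyle=solid}
<!-- Assumed key for the compacting memstore type; values: NONE | BASIC | EAGER.
     This is the default that the attached (single line change) patch flips to BASIC. -->
<property>
  <name>hbase.hregion.compacting.memstore.type</name>
  <value>BASIC</value>
</property>
{code}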

Thanks,
Anastasia


> Make Compacting Memstore default in 2.0 with BASIC as the default type
> --
>
> Key: HBASE-17343
> URL: https://issues.apache.org/jira/browse/HBASE-17343
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17343-V01.patch
>
>
> FYI [~anastas], [~eshcar] and [~ebortnik].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-02-26 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17343:

Attachment: HBASE-17343-V01.patch

> Make Compacting Memstore default in 2.0 with BASIC as the default type
> --
>
> Key: HBASE-17343
> URL: https://issues.apache.org/jira/browse/HBASE-17343
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17343-V01.patch
>
>
> FYI [~anastas], [~eshcar] and [~ebortnik].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17662:

Attachment: HBASE-17662-V06.patch

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch, HBASE-17662-V06.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single-threaded action is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore depends on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates come in as part of replay from 
> WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884751#comment-15884751
 ] 

Anastasia Braginsky commented on HBASE-17662:
-

Just added a rebased patch.

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch, HBASE-17662-V06.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single-threaded action is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore depends on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates come in as part of replay from 
> WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17584) Expose ScanMetrics with ResultScanner rather than Scan

2017-02-26 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884746#comment-15884746
 ] 

Duo Zhang commented on HBASE-17584:
---

Adding methods to an interface will cause compilation errors for implementing 
classes, which requires users to modify their code when upgrading from a 
previous version. Is that acceptable for a minor version upgrade? If we allow 
this type of compatibility break (it is really a pain that we can not even add 
methods...), at least we need to update the 'HBase version number and 
compatibility' section of our hbase book.
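
To make the concern concrete, a minimal, self-contained sketch (hypothetical 
interface and names, not the actual client API): adding an abstract method to a 
published interface breaks every user class that implements it at compile time, 
while a Java 8 default method keeps old implementations compiling.
{code}
// Hypothetical published interface that users may have implemented themselves.
interface MyScanner {
  String next();

  // Adding this as an abstract method would break UserScanner below:
  //   Object getScanMetrics();
  // A default method avoids the source incompatibility (Java 8+):
  default Object getScanMetrics() {
    return null; // no metrics by default
  }
}

// Written against the old interface; still compiles thanks to the default method.
class UserScanner implements MyScanner {
  @Override
  public String next() {
    return "row";
  }
}
{code}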

Thanks.

> Expose ScanMetrics with ResultScanner rather than Scan
> --
>
> Key: HBASE-17584
> URL: https://issues.apache.org/jira/browse/HBASE-17584
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, mapreduce, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17584.patch
>
>
> I think this has been discussed many times... It is a bad practice to 
> directly modify the Scan object passed in when calling getScanner. The reason 
> we can not use a copy is that we need the Scan object to expose scan 
> metrics. So we need to find another way to expose the metrics.
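>
> For illustration, a sketch of what the alternative could look like for a user. 
> A getScanMetrics accessor on ResultScanner is an assumption here, not a 
> confirmed API:
> {code}
> // Assuming 'table' (Table) and 'scan' (Scan) are already set up:
> try (ResultScanner scanner = table.getScanner(new Scan(scan))) { // copy; don't mutate caller's Scan
>   for (Result result : scanner) {
>     // process result ...
>   }
>   ScanMetrics metrics = scanner.getScanMetrics(); // hypothetical accessor proposed here
> }
> {code}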



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17654) RSGroup code refactoring

2017-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884725#comment-15884725
 ] 

Hudson commented on HBASE-17654:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2574 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2574/])
HBASE-17654 RSGroup refactoring. (appy: rev 
ce64e7eb6e9a6c3954b8ab6b5441ca6e4f952b26)
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManager.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminClient.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsOfflineMode.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/VerifyingRSGroupAdminClient.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupableBalancer.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdmin.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/rsgroup/IntegrationTestRSGroup.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/master/balancer/TestRSGroupBasedLoadBalancer.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
* (delete) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupSerDe.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
* (add) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupProtobufUtil.java


> RSGroup code refactoring
> 
>
> Key: HBASE-17654
> URL: https://issues.apache.org/jira/browse/HBASE-17654
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-17654.master.001.patch, 
> HBASE-17654.master.002.patch, HBASE-17654.master.003.patch, 
> HBASE-17654.master.004.patch, HBASE-17654.master.005.patch, 
> HBASE-17654.master.006.patch
>
>
> - Making rsGroupInfoManager non-static in RSGroupAdminEndpoint
> - Encapsulate RSGroupAdminService into an internal class in 
> RSGroupAdminEndpoint (no need of inheritance).
> - Make RSGroupAdminEndpoint extend BaseMasterObserver, getting rid of unwanted 
> empty implementations.
> - Change two internal classes in RSGroupAdminServer to non-static, so the outer 
> class's variables can be shared (see the sketch after this list).
> - Rename RSGroupSerDe to RSGroupProtobufUtil ('ProtobufUtil' is what we use in 
> other places). Moved 2 functions to RSGroupInfoManagerImpl because they are only 
> used there.
> - Javadoc comments
> - Improving variable names
> - Maybe other misc refactoring
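>
> A minimal sketch of the non-static point above (hypothetical names, not the 
> actual RSGroupAdminServer classes):
> {code}
> class Outer {
>   private int shared = 0;
>
>   // A static nested class cannot touch 'shared' without an explicit Outer reference.
>   static class StaticHelper { }
>
>   // A non-static inner class can read and write the outer instance's fields directly.
>   class Helper {
>     void bump() { shared++; }
>   }
> }
> {code}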



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-26 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884714#comment-15884714
 ] 

Anastasia Braginsky commented on HBASE-17662:
-

Hey everyone! Here are my answers:

[~stack]:

bq. Will the thread that sets the state be same as the one reading it?

Yes, the thread that sets the 'inWalReplay' flag is the replay thread, which is 
the same thread that performs the adds and thus the same thread that is going 
to try to flush in memory and then read the 'inWalReplay' flag when the 
memstore size grows above the threshold.

bq. Is this what single-threaded presumption around wal replay means? 

No, the single-threaded presumption is that there are no two (or more) replays 
simultaneously on the same store.

bq. If single-threaded why are there concerns around in-memory flush? It only 
works if update lock taken? 

Because the in-memory flush is done on a separate thread (T), which is dispatched 
once the memstore size grows above the threshold. This thread T takes the 
update lock in order to protect moving the active segment to the pipeline and 
creating the new active segment. There should be no concurrent adds to the 
memstore at the same time. Thread T assumes the adds take the update lock 
in shared mode, and thus when T takes it in exclusive mode there are no 
concurrent updates. However, this assumption doesn't hold in the replay case, 
where the memstore is updated without taking this lock. Hence T takes the lock 
successfully and moves the active segment out from under ongoing concurrent 
updates. This is the concern.

bq. (Flag can't be volatile; that'd be too expensive if we have to check it on 
each update to memstore).

The flag is not volatile, and I changed it to be a simple boolean rather than an 
AtomicBoolean. The flag is checked only when the memstore size grows above the 
threshold, so on the common code path it will only be checked once in a 
while.

[~anoop.hbase] and [~ram_krish], thanks for your comments. As I said above, 
I've changed the 'inWalReplay' flag to be a simple boolean, and it is checked 
only inside the 'if' condition that checks the size.
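
For clarity, a minimal sketch of the pattern described above (field and method 
names are simplifications, not the actual CompactingMemStore code):
{code}
/** Simplified sketch; not the actual HBase class. */
class ReplayAwareMemStore {
  // Plain boolean: the single replay thread writes it, and that same thread
  // later reads it on the add path, so no volatile/AtomicBoolean is needed.
  private boolean inWalReplay = false;
  private long activeSize = 0;
  private final long inMemoryFlushThreshold;

  ReplayAwareMemStore(long threshold) {
    this.inMemoryFlushThreshold = threshold;
  }

  void startReplayingFromWAL() { inWalReplay = true; }
  void stopReplayingFromWAL()  { inWalReplay = false; }

  void add(long cellSize) {
    activeSize += cellSize;
    // The flag is read only after the cheap size check passes, i.e. rarely
    // on the common code path.
    if (activeSize > inMemoryFlushThreshold) {
      if (inWalReplay) {
        return; // skip in-memory flush; all data is flushed to disk right after replay
      }
      dispatchInMemoryFlush();
    }
  }

  private void dispatchInMemoryFlush() {
    // In the real code this moves the active segment to the compaction pipeline
    // on a separate thread (T), relying on the region's updateLock.
  }
}
{code}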

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, 
> HBASE-17662-V04.patch, HBASE-17662-V05.patch
>
>
> When replaying the edits from WAL, the region's updateLock is not taken, 
> because a single-threaded action is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore depends on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates come in as part of replay from 
> WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error

2017-02-26 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884615#comment-15884615
 ] 

Anastasia Braginsky commented on HBASE-17495:
-

[~ram_krish], it would be great if you can investigate this, because we can not 
reproduce the problem and thus can not debug it. Thanks!
We can resume the investigation if you give us some more concrete hints.

> TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails 
> due to assertion error
> 
>
> Key: HBASE-17495
> URL: https://issues.apache.org/jira/browse/HBASE-17495
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 17495-testHRegionWithInMemoryFlush-output-2.0123, 
> testHRegionWithInMemoryFlush-flush-output.0123, 
> TestHRegionWithInMemoryFlush-out.0222.tar.gz, 
> testHRegionWithInMemoryFlush-output.0119
>
>
> Looping through the test (based on commit 
> 76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure:
> {code}
> testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush)  Time elapsed: 0.53 sec  <<< FAILURE!
> java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> but was:<92>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> See test output for details.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)