[jira] [Commented] (HBASE-11009) We sync every hbase:meta table write twice

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972333#comment-13972333
 ] 

Hadoop QA commented on HBASE-11009:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640586/11009v2.txt
  against trunk revision .
  ATTACHMENT ID: 12640586

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.master.TestMasterNoCluster.testNotPullingDeadRegionServerFromZK(TestMasterNoCluster.java:298)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9313//console

This message is automatically generated.

 We sync every hbase:meta table write twice
 --

 Key: HBASE-11009
 URL: https://issues.apache.org/jira/browse/HBASE-11009
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11009.txt, 11009v2.txt


 Found by @nkeywal and [~devaraj] and noted on the tail of HBASE-10156.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11009) We sync every hbase:meta table write twice

2014-04-17 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972378#comment-13972378
 ] 

Nicolas Liochon commented on HBASE-11009:
-

+1

 We sync every hbase:meta table write twice
 --

 Key: HBASE-11009
 URL: https://issues.apache.org/jira/browse/HBASE-11009
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11009.txt, 11009v2.txt


 Found by @nkeywal and [~devaraj] and noted on the tail of HBASE-10156.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10156) FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)

2014-04-17 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972377#comment-13972377
 ] 

Nicolas Liochon commented on HBASE-10156:
-

bq. What you seeing? Slow down?
Not yet. I'm tracking the latency peaks on the write path.

bq. If a sync thread came back between your write and just before you go to 
call sync
There is still a part that I don't understand here:

Suppose that we have 4 writes by 4 different clients w1, w2, w3, w4. The 
scenario would be:
w1 gets into the WAL; not synced
w2 gets into the WAL; not synced
w2 finishes: for this it needs to sync, so w1 & w2 are now synced. Client 2 is done
w3 gets into the WAL; not synced
w1 wants to finish: it calls sync, and this syncs w3 as well

It's this last point I'm not sure about: when we sync for w1, do we return 
immediately, or do we also sync w3 because it made it into the WAL in the 
meantime?








 FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)
 ---

 Key: HBASE-10156
 URL: https://issues.apache.org/jira/browse/HBASE-10156
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 10156.txt, 10156v10.txt, 10156v11.txt, 10156v12.txt, 
 10156v12.txt, 10156v13.txt, 10156v16.txt, 10156v17.txt, 10156v18.txt, 
 10156v19.txt, 10156v2.txt, 10156v20.txt, 10156v20.txt, 10156v21.txt, 
 10156v21.txt, 10156v21.txt, 10156v3.txt, 10156v4.txt, 10156v5.txt, 
 10156v6.txt, 10156v7.txt, 10156v9.txt, Disrupting.java


 HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972795#comment-13972795
 ] 

Hudson commented on HBASE-11010:


SUCCESS: Integrated in HBase-0.94-security #470 (See 
[https://builds.apache.org/job/HBase-0.94-security/470/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588128)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.
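The cost of the double-loop above can be illustrated with a minimal stand-in for the HBase client (Table and Put here are simplified counters, not the real API): calling table.put inside the column loop issues rows × cols RPCs, while moving it out of the inner loop issues one RPC per row.

```java
// Sketch only: FakeTable stands in for an HTable where every put() is one RPC.
import java.util.ArrayList;
import java.util.List;

public class PutRpcSketch {
  static class FakeTable {
    int rpcs = 0;
    void put(List<String> put) { rpcs++; }  // each put() = one round trip
  }

  // The pattern from the test: table.put() inside the column loop,
  // re-sending the growing Put once per column.
  static int buggy(int rows, int cols) {
    FakeTable table = new FakeTable();
    for (int i = 0; i < rows; ++i) {
      List<String> put = new ArrayList<>();
      for (int j = 0; j < cols; ++j) {
        put.add("col" + j);
        table.put(put);
      }
    }
    return table.rpcs;
  }

  // The fix: build the whole Put first, then send it once per row.
  static int fixed(int rows, int cols) {
    FakeTable table = new FakeTable();
    for (int i = 0; i < rows; ++i) {
      List<String> put = new ArrayList<>();
      for (int j = 0; j < cols; ++j) {
        put.add("col" + j);
      }
      table.put(put);
    }
    return table.rpcs;
  }
}
```

With 10 rows and 20 columns the buggy shape issues 200 "RPCs" against 10 for the fixed one, which matches the direction of the 247s-to-169s runtime improvement reported above.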



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972799#comment-13972799
 ] 

Hudson commented on HBASE-11010:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #73 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/73/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588128)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972807#comment-13972807
 ] 

Hudson commented on HBASE-11010:


FAILURE: Integrated in HBase-0.94 #1350 (See 
[https://builds.apache.org/job/HBase-0.94/1350/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588128)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush

2014-04-17 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972817#comment-13972817
 ] 

Liu Shaohui commented on HBASE-6466:


[~zjushch]
Very curious about this patch. 

In the current codebase, the HLog write path is many small writes and syncs, 
while memstore flushes are batch writes. So I think the throughput of a 
single flush thread is larger than the HLog write throughput. That means one 
flush thread is enough for most cases.

What's the scenario for multi-thread memstore flush? High-throughput writes 
without WAL?



 Enable multi-thread for memstore flush
 --

 Key: HBASE-6466
 URL: https://issues.apache.org/jira/browse/HBASE-6466
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.95.2
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.95.0

 Attachments: 6466-v6.patch, 6466-v7.patch, HBASE-6466-v4.patch, 
 HBASE-6466-v4.patch, HBASE-6466-v5.patch, HBASE-6466.patch, 
 HBASE-6466v2.patch, HBASE-6466v3.1.patch, HBASE-6466v3.patch


 If the KV is large or the HLog is closed under high-pressure putting, we found 
 the memstore is often above the high water mark and blocks the putting.
 So should we enable multi-thread Memstore Flush?
 Some performance test data for reference:
 1. test environment: 
 random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
 regionservers, 300 ipc handlers per regionserver; 5 clients, 50 thread 
 handlers per client for writing
 2. test results:
 one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per 
 regionserver; many aboveGlobalMemstoreLimit blocks appear
 two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per 
 regionserver
 200 thread handlers per client & two cacheFlush handlers: tps 16.1k/s per 
 regionserver, flush 18.6MB/s per regionserver



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972849#comment-13972849
 ] 

Hudson commented on HBASE-11010:


SUCCESS: Integrated in HBase-0.94-JDK7 #117 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/117/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588128)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972856#comment-13972856
 ] 

Hudson commented on HBASE-11010:


SUCCESS: Integrated in HBase-TRUNK #5092 (See 
[https://builds.apache.org/job/HBase-TRUNK/5092/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588129)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10156) FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)

2014-04-17 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972870#comment-13972870
 ] 

Himanshu Vashishtha commented on HBASE-10156:
-

When you invoke a sync, you get a new entry in the RingBuffer, i.e., you create 
a new SyncFuture object with a higher sequence number. So, in the above case, 
when w1 calls sync, it would also sync the w3 entry.
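That semantics can be modelled in a few lines (the names below are illustrative, not the actual FSHLog/SyncFuture fields): each append gets a sequence number, and a sync is satisfied only once everything up to the highest appended sequence at the time of the sync call has been flushed, so w1's sync necessarily covers w3.

```java
// Toy model of WAL sequence-number syncing; a sketch, not the real FSHLog code.
import java.util.concurrent.atomic.AtomicLong;

public class WalSyncModel {
  private final AtomicLong highestAppended = new AtomicLong();
  private final AtomicLong highestSynced = new AtomicLong();

  // append() returns the write's sequence number (its "txid").
  long append() { return highestAppended.incrementAndGet(); }

  // sync() targets the current highest appended sequence, regardless of
  // which writer asked for it, so later appends get dragged along.
  long sync() {
    long target = highestAppended.get();
    highestSynced.set(target);  // pretend the HDFS hflush happened
    return target;
  }

  boolean isSynced(long txid) { return highestSynced.get() >= txid; }
}
```

Walking the scenario: w1 and w2 append; w2's sync also covers w1; then w3 appends, and w1's sync covers w3. The optimization discussed further down the thread would amount to an early return in sync() when isSynced(txid) already holds for the caller's own txid.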

 FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)
 ---

 Key: HBASE-10156
 URL: https://issues.apache.org/jira/browse/HBASE-10156
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 10156.txt, 10156v10.txt, 10156v11.txt, 10156v12.txt, 
 10156v12.txt, 10156v13.txt, 10156v16.txt, 10156v17.txt, 10156v18.txt, 
 10156v19.txt, 10156v2.txt, 10156v20.txt, 10156v20.txt, 10156v21.txt, 
 10156v21.txt, 10156v21.txt, 10156v3.txt, 10156v4.txt, 10156v5.txt, 
 10156v6.txt, 10156v7.txt, 10156v9.txt, Disrupting.java


 HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10278) Provide better write predictability

2014-04-17 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972910#comment-13972910
 ] 

Jonathan Hsieh commented on HBASE-10278:


I'm going to pickup work on this issue.

 Provide better write predictability
 ---

 Key: HBASE-10278
 URL: https://issues.apache.org/jira/browse/HBASE-10278
 Project: HBase
  Issue Type: New Feature
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Attachments: 10278-trunk-v2.1.patch, 10278-trunk-v2.1.patch, 
 10278-wip-1.1.patch, Multiwaldesigndoc.pdf, SwitchWriterFlow.pptx


 Currently, HBase has one WAL per region server. 
 Whenever there is any latency in the write pipeline (due to whatever reasons 
 such as n/w blip, a node in the pipeline having a bad disk, etc), the overall 
 write latency suffers. 
 Jonathan Hsieh and I analyzed various approaches to tackle this issue. We 
 also looked at HBASE-5699, which talks about adding concurrent multi WALs. 
 Along with performance numbers, we also focussed on design simplicity, 
 minimum impact on MTTR & Replication, and compatibility with 0.96 and 0.98. 
 Considering all these parameters, we propose a new HLog implementation with 
 WAL Switching functionality.
 Please find attached the design doc for the same. It introduces the WAL 
 Switching feature, and experiments/results of a prototype implementation, 
 showing the benefits of this feature.
 The second goal of this work is to serve as a building block for concurrent 
 multiple WALs feature.
 Please review the doc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972913#comment-13972913
 ] 

Hudson commented on HBASE-11010:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #266 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/266/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588131)
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10278) Provide better write predictability

2014-04-17 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972922#comment-13972922
 ] 

Jonathan Hsieh commented on HBASE-10278:


[~saint@gmail.com] I believe it is not related.



 Provide better write predictability
 ---

 Key: HBASE-10278
 URL: https://issues.apache.org/jira/browse/HBASE-10278
 Project: HBase
  Issue Type: New Feature
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Attachments: 10278-trunk-v2.1.patch, 10278-trunk-v2.1.patch, 
 10278-wip-1.1.patch, Multiwaldesigndoc.pdf, SwitchWriterFlow.pptx


 Currently, HBase has one WAL per region server. 
 Whenever there is any latency in the write pipeline (due to whatever reasons 
 such as n/w blip, a node in the pipeline having a bad disk, etc), the overall 
 write latency suffers. 
 Jonathan Hsieh and I analyzed various approaches to tackle this issue. We 
 also looked at HBASE-5699, which talks about adding concurrent multi WALs. 
 Along with performance numbers, we also focussed on design simplicity, 
 minimum impact on MTTR & Replication, and compatibility with 0.96 and 0.98. 
 Considering all these parameters, we propose a new HLog implementation with 
 WAL Switching functionality.
 Please find attached the design doc for the same. It introduces the WAL 
 Switching feature, and experiments/results of a prototype implementation, 
 showing the benefits of this feature.
 The second goal of this work is to serve as a building block for concurrent 
 multiple WALs feature.
 Please review the doc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-17 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13914057#comment-13914057
 ] 

rajeshbabu edited comment on HBASE-10576 at 4/17/14 1:36 PM:
-

[~jamestaylor]
Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions. No region-wise coprocessor hooks are 
needed for this.

It is a wrapper over a normal load balancer such as StochasticLoadBalancer or 
any other, and the delegate is configurable (the configuration key is 
hbase.index.balancer.delegator.class).

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may need to populate the user table region locations from the master to the 
balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation:
{code}
IndexLoadBalancer#removeTablesFromColocation().
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set the parent table attribute on the index table descriptor to 
repopulate the tables to colocate on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}
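The delegation idea can be sketched with plain maps (the real IndexLoadBalancer signatures in the patch differ; this only shows the wrapper principle): an index region's placement is forced to wherever the co-located user region already lives, and everything else falls through to whatever the delegate balancer chose.

```java
// Simplified sketch of a co-locating balancer wrapper; not the actual patch API.
import java.util.HashMap;
import java.util.Map;

public class ColocatingBalancerSketch {
  // index table -> user table it must be co-located with
  private final Map<String, String> tableToParent = new HashMap<>();
  // "table:splitKey" -> server currently hosting that region
  private final Map<String, String> regionLocations = new HashMap<>();

  void addTablesToColocate(String userTable, String indexTable) {
    tableToParent.put(indexTable, userTable);
  }

  void recordLocation(String table, String splitKey, String server) {
    regionLocations.put(table + ":" + splitKey, server);
  }

  // Index-table regions follow the user region with the same split key;
  // everything else keeps the delegate balancer's choice.
  String assign(String table, String splitKey, String delegateChoice) {
    String parent = tableToParent.get(table);
    if (parent != null) {
      String colocated = regionLocations.get(parent + ":" + splitKey);
      if (colocated != null) return colocated;
    }
    return delegateChoice;
  }
}
```

In the real patch the equivalent state is maintained via the addTablesToColocate / populateRegionLocations calls shown above, driven from master coprocessor hooks.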



was (Author: rajesh23):
Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions.
It is a wrapper over a normal load balancer such as StochasticLoadBalancer or 
any other, and the delegate is configurable (the configuration key is 
hbase.index.balancer.delegator.class).

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may need to populate the user table region locations from the master to the 
balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation:
{code}
IndexLoadBalancer#removeTablesFromColocation().
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set the parent table attribute on the index table descriptor to 
repopulate the tables to colocate on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}


 Custom load balancer to co-locate the regions of two tables which are having 
 same split keys
 

 Key: HBASE-10576
 URL: https://issues.apache.org/jira/browse/HBASE-10576
 Project: HBase
  Issue Type: Sub-task
  Components: Balancer
Reporter: rajeshbabu
Assignee: rajeshbabu
 Attachments: HBASE-10536_v2.patch, HBASE-10576.patch


 To support local indexing both user table and index table should have same 
 split keys. This issue to provide custom balancer to colocate the regions of 
 two tables which are having same split keys. 
 This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-17 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13914057#comment-13914057
 ] 

rajeshbabu edited comment on HBASE-10576 at 4/17/14 1:36 PM:
-


Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions.

It is a wrapper over a normal load balancer such as StochasticLoadBalancer or 
any other, and the delegate is configurable (the configuration key is 
hbase.index.balancer.delegator.class).

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may need to populate the user table region locations from the master to the 
balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation:
{code}
IndexLoadBalancer#removeTablesFromColocation().
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set the parent table attribute on the index table descriptor to 
repopulate the tables to colocate on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}



was (Author: rajesh23):
[~jamestaylor]
Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions. No region-wise coprocessor hooks are 
needed for this.

It is a wrapper over a normal load balancer such as StochasticLoadBalancer or 
any other, and the delegate is configurable (the configuration key is 
hbase.index.balancer.delegator.class).

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may need to populate the user table region locations from the master to the 
balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation:
{code}
IndexLoadBalancer#removeTablesFromColocation().
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set the parent table attribute on the index table descriptor to 
repopulate the tables to colocate on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}


 Custom load balancer to co-locate the regions of two tables which are having 
 same split keys
 

 Key: HBASE-10576
 URL: https://issues.apache.org/jira/browse/HBASE-10576
 Project: HBase
  Issue Type: Sub-task
  Components: Balancer
Reporter: rajeshbabu
Assignee: rajeshbabu
 Attachments: HBASE-10536_v2.patch, HBASE-10576.patch


 To support local indexing both user table and index table should have same 
 split keys. This issue to provide custom balancer to colocate the regions of 
 two tables which are having same split keys. 
 This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972936#comment-13972936
 ] 

Hudson commented on HBASE-11010:


SUCCESS: Integrated in HBase-0.98 #282 (See 
[https://builds.apache.org/job/HBase-0.98/282/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588131)
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j),
         getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is put multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine changing this brings the runtime from 247s to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10278) Provide better write predictability

2014-04-17 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972937#comment-13972937
 ] 

Himanshu Vashishtha commented on HBASE-10278:
-

Jon, thanks for chiming in, but I am working on the core functionality here.
If you are interested in helping, I would appreciate it if you could pick up 
the related tasks (as mentioned in the design doc).

 Provide better write predictability
 ---

 Key: HBASE-10278
 URL: https://issues.apache.org/jira/browse/HBASE-10278
 Project: HBase
  Issue Type: New Feature
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Attachments: 10278-trunk-v2.1.patch, 10278-trunk-v2.1.patch, 
 10278-wip-1.1.patch, Multiwaldesigndoc.pdf, SwitchWriterFlow.pptx


 Currently, HBase has one WAL per region server. 
 Whenever there is any latency in the write pipeline (due to whatever reasons 
 such as n/w blip, a node in the pipeline having a bad disk, etc), the overall 
 write latency suffers. 
 Jonathan Hsieh and I analyzed various approaches to tackle this issue. We 
 also looked at HBASE-5699, which talks about adding concurrent multi WALs. 
 Along with performance numbers, we also focussed on design simplicity, 
 minimum impact on MTTR & Replication, and compatibility with 0.96 and 0.98. 
 Considering all these parameters, we propose a new HLog implementation with 
 WAL Switching functionality.
 Please find attached the design doc for the same. It introduces the WAL 
 Switching feature, and experiments/results of a prototype implementation, 
 showing the benefits of this feature.
 The second goal of this work is to serve as a building block for concurrent 
 multiple WALs feature.
 Please review the doc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10156) FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)

2014-04-17 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972940#comment-13972940
 ] 

Nicolas Liochon commented on HBASE-10156:
-

Thanks, [~himan...@cloudera.com]. I think we should optimize this so that we do 
not sync w3 if we only need w1. I will give it a try. Is that ok for you, 
[~saint@gmail.com]?

 FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)
 ---

 Key: HBASE-10156
 URL: https://issues.apache.org/jira/browse/HBASE-10156
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 10156.txt, 10156v10.txt, 10156v11.txt, 10156v12.txt, 
 10156v12.txt, 10156v13.txt, 10156v16.txt, 10156v17.txt, 10156v18.txt, 
 10156v19.txt, 10156v2.txt, 10156v20.txt, 10156v20.txt, 10156v21.txt, 
 10156v21.txt, 10156v21.txt, 10156v3.txt, 10156v4.txt, 10156v5.txt, 
 10156v6.txt, 10156v7.txt, 10156v9.txt, Disrupting.java


 HBASE-8755 slows our writes when there are only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11014) InputStream is not closed in two methods of JarFinder

2014-04-17 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11014:
--

 Summary: InputStream is not closed in two methods of JarFinder
 Key: HBASE-11014
 URL: https://issues.apache.org/jira/browse/HBASE-11014
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


JarFinder#jarDir() and JarFinder#zipDir() contain code like this:
{code}
 InputStream is = new FileInputStream(f);
 copyToZipStream(is, anEntry, zos);
{code}
The InputStream is not closed after the copy operation.
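A hedged sketch of the standard fix, using only the JDK (the helper name addFileToZip is a hypothetical stand-in for JarFinder's copy path, not the actual method): wrap the stream in try-with-resources so it is closed even if the copy throws.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.Writer;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class Main {
  // Hypothetical stand-in for JarFinder's copyToZipStream call site:
  // try-with-resources guarantees is.close() runs on every exit path.
  static void addFileToZip(File f, String entryName, ZipOutputStream zos)
      throws IOException {
    try (InputStream is = new FileInputStream(f)) {
      zos.putNextEntry(new ZipEntry(entryName));
      byte[] buf = new byte[4096];
      int n;
      while ((n = is.read(buf)) != -1) {
        zos.write(buf, 0, n);
      }
      zos.closeEntry();
    } // is.close() runs here automatically
  }

  public static void main(String[] args) throws IOException {
    File src = File.createTempFile("jarfinder", ".txt");
    src.deleteOnExit();
    try (Writer w = new FileWriter(src)) {
      w.write("hello");
    }
    File zip = File.createTempFile("jarfinder", ".zip");
    zip.deleteOnExit();
    try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zip))) {
      addFileToZip(src, "entry.txt", zos);
    }
    // Read the entry back to confirm the copy survived the close.
    try (ZipFile zf = new ZipFile(zip)) {
      byte[] data = zf.getInputStream(zf.getEntry("entry.txt")).readAllBytes();
      System.out.println(new String(data)); // prints "hello"
    }
  }
}
```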



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972960#comment-13972960
 ] 

Hudson commented on HBASE-11010:


FAILURE: Integrated in hbase-0.96-hadoop2 #267 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/267/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588130)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j), getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is submitted multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine, this change brings the runtime from 247s down to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10993:


Attachment: HBASE-10993-v0.patch

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user queries on the same machine, we want to 
 prioritize the user queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.
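The callQueue replacement described above can be modeled with a plain JDK priority queue (all names here are illustrative, not HBase's): each queued call carries the accumulated execution time ("vtime") of the scanner it belongs to, and the queue serves the smallest vtime first, so long-running scans gradually lose priority.

```java
import java.util.PriorityQueue;

public class Main {
  // Hypothetical model of a queued RPC call: name plus the vtime of the
  // scanner that issued it (0 for a brand-new request).
  static class Call {
    final String name;
    final long vtimeMs;
    Call(String name, long vtimeMs) { this.name = name; this.vtimeMs = vtimeMs; }
  }

  public static void main(String[] args) {
    // Order calls by ascending vtime: fresh user queries jump ahead of
    // scans that have already consumed a lot of execution time.
    PriorityQueue<Call> callQueue =
        new PriorityQueue<>((a, b) -> Long.compare(a.vtimeMs, b.vtimeMs));
    callQueue.add(new Call("mapreduce-scan", 120000)); // long-running scan
    callQueue.add(new Call("user-get", 0));            // fresh user request
    callQueue.add(new Call("user-scan", 3000));        // short-lived scan
    while (!callQueue.isEmpty()) {
      System.out.println(callQueue.poll().name);
    }
  }
}
```

Polling drains the queue as user-get, then user-scan, then mapreduce-scan: the map-reduce scan is served last despite being enqueued first.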



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10993:


Attachment: (was: HBASE-10993-v0.patch)

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user queries on the same machine, we want to 
 prioritize the user queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Yi Deng (JIRA)
Yi Deng created HBASE-11015:
---

 Summary: Some refactoring on MemStoreFlusher
 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
 Fix For: 0.89-fb


Use `ScheduledThreadPoolExecutor`
Change some logic
Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-11015:


  Labels: patch  (was: )
Release Note: 
Use `ScheduledThreadPoolExecutor`
Change some logic
Add `IHRegion` and `IHRegionServer` interfaces.
Add testcase
  Status: Patch Available  (was: Open)

This patch was supposed to fix some bugs introduced by another submission, which 
tried to add the ability to change the number of flushing threads online. 

Some refactoring is also included in this diff.

 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-11015:


Attachment: D1264374.diff.txt

 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972985#comment-13972985
 ] 

Hadoop QA commented on HBASE-11015:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640640/D1264374.diff.txt
  against trunk revision .
  ATTACHMENT ID: 12640640

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9315//console

This message is automatically generated.

 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-11015:


Status: Open  (was: Patch Available)

 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972986#comment-13972986
 ] 

Yi Deng commented on HBASE-11015:
-

I should not have clicked on Submit Patch when I only wanted to attach a file.

 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11010) TestChangingEncoding is unnecessarily slow

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972991#comment-13972991
 ] 

Hudson commented on HBASE-11010:


ABORTED: Integrated in hbase-0.96 #388 (See 
[https://builds.apache.org/job/hbase-0.96/388/])
HBASE-11010 TestChangingEncoding is unnecessarily slow. (larsh: rev 1588130)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java


 TestChangingEncoding is unnecessarily slow
 --

 Key: HBASE-11010
 URL: https://issues.apache.org/jira/browse/HBASE-11010
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3

 Attachments: 11010-0.94.txt, 11010-trunk.txt


 The test runs for over 10m on the Jenkins boxes.
 Writing the test data is done like this:
 {code}
 for (int i = 0; i < NUM_ROWS_PER_BATCH; ++i) {
   Put put = new Put(getRowKey(batchId, i));
   for (int j = 0; j < NUM_COLS_PER_ROW; ++j) {
     put.add(CF_BYTES, getQualifier(j), getValue(batchId, i, j));
     table.put(put);
   }
 }
 {code}
 There are two problems:
 # the same Put is submitted multiple times (once for each column added)
 # each Put issued this way causes its own RPC
 On my machine, this change brings the runtime from 247s down to 169s.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972998#comment-13972998
 ] 

Hadoop QA commented on HBASE-10993:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640635/HBASE-10993-v0.patch
  against trunk revision .
  ATTACHMENT ID: 12640635

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestQosFunction

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9314//console

This message is automatically generated.

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user queries on the same machine, we want to 
 prioritize the user queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11003) ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS

2014-04-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11003:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

 ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS
 

 Key: HBASE-11003
 URL: https://issues.apache.org/jira/browse/HBASE-11003
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.18
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.94.19

 Attachments: HBASE-11003-v0.patch


 The code in 0.94 uses the inputFs as the filesystem to write to the stagingDir, 
 while 0.96+ looks the filesystem up from the stagingDir path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10993:


Attachment: HBASE-10993-v0.patch

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user queries on the same machine, we want to 
 prioritize the user queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10993:


Attachment: (was: HBASE-10993-v0.patch)

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user queries on the same machine, we want to 
 prioritize the user queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-11015:


 Description: 
Use `ScheduledThreadPoolExecutor`
Change some logic
Add testcase


  was:
Use `ScheduledThreadPoolExecutor`
Change some logic
Add testcase

Release Note:   (was: Use `ScheduledThreadPoolExecutor`
Change some logic
Add `IHRegion` and `IHRegionServer` interfaces.
Add testcase)

 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11016) Remove Filter#filterRow(List)

2014-04-17 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11016:
--

 Summary: Remove Filter#filterRow(List)
 Key: HBASE-11016
 URL: https://issues.apache.org/jira/browse/HBASE-11016
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Priority: Minor
 Fix For: 0.99.0


As of 0.96, the filterRow(List) method is deprecated:
{code}
   * WARNING: please do not override this method.  Instead override {@link 
#filterRowCells(List)}.
   * This is for transition from 0.94 -> 0.96
   **/
  @Deprecated
  abstract public void filterRow(List<KeyValue> kvs) throws IOException;
{code}
This method should be removed from Filter classes for 1.0.
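For custom filters, the migration direction is to override the Cell-based replacement instead; a hedged sketch (method signatures per the 0.96+ Filter API as I understand it, so treat them as assumptions):
{code}
// 0.94-era override, deprecated in 0.96+:
// public void filterRow(List<KeyValue> kvs) throws IOException { ... }

// 0.96+ replacement: same row-level logic, operating on Cell instead of KeyValue.
@Override
public void filterRowCells(List<Cell> cells) throws IOException {
  // row-level filtering logic goes here
}
{code}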



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11005) Remove dead code in HalfStoreFileReader#getScanner#seekBefore()

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11005:
---

Fix Version/s: 0.99.0

 Remove dead code in HalfStoreFileReader#getScanner#seekBefore()
 ---

 Key: HBASE-11005
 URL: https://issues.apache.org/jira/browse/HBASE-11005
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Fix For: 0.99.0

 Attachments: HBASE-11005.patch


 Here is related code:
 {code}
   Cell fk = new KeyValue.KeyOnlyKeyValue(getFirstKey(), 0, 
 getFirstKey().length);
   // This will be null when the file is empty in which we can not
   // seekBefore to any key
   if (fk == null)
 return false;
 {code}
 fk wouldn't be null.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11005) Remove dead code in HalfStoreFileReader#getScanner#seekBefore()

2014-04-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973095#comment-13973095
 ] 

Ted Yu commented on HBASE-11005:


Test failure was not related to the patch.

 Remove dead code in HalfStoreFileReader#getScanner#seekBefore()
 ---

 Key: HBASE-11005
 URL: https://issues.apache.org/jira/browse/HBASE-11005
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11005.patch


 Here is related code:
 {code}
   Cell fk = new KeyValue.KeyOnlyKeyValue(getFirstKey(), 0, 
 getFirstKey().length);
   // This will be null when the file is empty in which we can not
   // seekBefore to any key
   if (fk == null)
 return false;
 {code}
 fk wouldn't be null.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11003) ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973103#comment-13973103
 ] 

Hudson commented on HBASE-11003:


FAILURE: Integrated in HBase-0.94-security #471 (See 
[https://builds.apache.org/job/HBase-0.94-security/471/])
HBASE-11003 ExportSnapshot is using the wrong fs when staging dir is not in 
fs.defaultFS (mbertozzi: rev 1588280)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS
 

 Key: HBASE-11003
 URL: https://issues.apache.org/jira/browse/HBASE-11003
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.18
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.94.19

 Attachments: HBASE-11003-v0.patch


 The code in 0.94 uses the inputFs as the filesystem to write to the stagingDir, 
 while 0.96+ looks the filesystem up from the stagingDir path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11017) TestHRegionBusyWait.testWritesWhileScanning fails frequently in 0.94

2014-04-17 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-11017:
-

 Summary: TestHRegionBusyWait.testWritesWhileScanning fails 
frequently in 0.94
 Key: HBASE-11017
 URL: https://issues.apache.org/jira/browse/HBASE-11017
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.94.19


Have seen a few of these:
{code}
Error Message

Failed clearing memory after 6 attempts on region: 
testWritesWhileScanning,,1397727647509.2c968a587c4cb7e84a52c7aa8d2afcac.

Stacktrace

org.apache.hadoop.hbase.DroppedSnapshotException: Failed clearing memory after 
6 attempts on region: 
testWritesWhileScanning,,1397727647509.2c968a587c4cb7e84a52c7aa8d2afcac.
at 
org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1087)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1024)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:989)
at 
org.apache.hadoop.hbase.regionserver.HRegion.closeHRegion(HRegion.java:4346)
at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileScanning(TestHRegion.java:3406)
{code}




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973146#comment-13973146
 ] 

Hadoop QA commented on HBASE-10993:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640646/HBASE-10993-v0.patch
  against trunk revision .
  ATTACHMENT ID: 12640646

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestAdmin

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9316//console

This message is automatically generated.

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user queries on the same machine, we want to 
 prioritize the user queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11004) Extend traces through FSHLog#sync

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973156#comment-13973156
 ] 

Nick Dimiduk commented on HBASE-11004:
--

FYI, [~nkeywal], [~eclark], [~iwasakims], [~fenghh].

 Extend traces through FSHLog#sync
 -

 Key: HBASE-11004
 URL: https://issues.apache.org/jira/browse/HBASE-11004
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11004.00.patch, spans.png, spans.txt


 Changes introduced in HBASE-8755 decouple wal append from wal sync. A gap was 
 left in the tracing of these requests. I believe this means our spans are 
 decoupled from the work happening over on HDFS-5274. This ticket is to close 
 the air-gap between threads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11004) Extend traces through FSHLog#sync

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973154#comment-13973154
 ] 

Nick Dimiduk edited comment on HBASE-11004 at 4/17/14 5:18 PM:
---

Here's an initial patch. It requires a couple tweaks I made to htrace, [pull 
request|https://github.com/cloudera/htrace/pull/27].

I've verified it works, doesn't leak spans, etc via the included changes to 
HLogPerfEval. For example this command

{noformat}
./bin/hbase org.apache.hadoop.hbase.regionserver.wal.HLogPerformanceEvaluation 
-Dhbase.trace.spanreceiver.classes=org.htrace.impl.LocalFileSpanReceiver 
-Dhbase.local-file-span-receiver.path=/tmp/spans.txt -threads 1 -iterations 100 
-syncInterval 10
{noformat}

Produces this highly illegible chart, also attached. More convincingly, a grep 
through spans.txt reports 211 total entries: 100 loop iterations, 100 
appends, 10 syncs, and 1 thread. Increasing to 2 threads produces 422 entries 
of similarly doubled proportions.






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11004) Extend traces through FSHLog#sync

2014-04-17 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11004:
-

Attachment: spans.png
spans.txt
HBASE-11004.00.patch

Here's an initial patch. It requires a couple tweaks I made to htrace, [pull 
request|https://github.com/cloudera/htrace/pull/27].

I've verified it works, doesn't leak spans, etc via the included changes to 
HLogPerfEval. For example this command

{noformat}
./bin/hbase org.apache.hadoop.hbase.regionserver.wal.HLogPerformanceEvaluation 
-Dhbase.trace.spanreceiver.classes=org.htrace.impl.LocalFileSpanReceiver 
-Dhbase.local-file-span-receiver.path=/tmp/spans.txt -threads 1 -iterations 100 -syncInterval 10
{noformat}

Produces this highly illegible chart, also attached. More convincing, a grep 
through the spans.txt reports 211 total entries: 100 loop iterations, 100 
appends, 10 syncs, and 1 thread. Increasing to 2 threads produces 422 entries 
of similarly doubled proportions.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11004) Extend traces through FSHLog#sync

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973159#comment-13973159
 ] 

Nick Dimiduk commented on HBASE-11004:
--

On rb: https://reviews.apache.org/r/20453/




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11003) ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973165#comment-13973165
 ] 

Hudson commented on HBASE-11003:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #74 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/74/])
HBASE-11003 ExportSnapshot is using the wrong fs when staging dir is not in 
fs.defaultFS (mbertozzi: rev 1588280)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS
 

 Key: HBASE-11003
 URL: https://issues.apache.org/jira/browse/HBASE-11003
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.18
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.94.19

 Attachments: HBASE-11003-v0.patch


 The code in 0.94 uses the inputFs to write to the stagingDir, while 0.96+
 looks it up from the stagingDir path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11003) ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973169#comment-13973169
 ] 

Hudson commented on HBASE-11003:


FAILURE: Integrated in HBase-0.94 #1351 (See 
[https://builds.apache.org/job/HBase-0.94/1351/])
HBASE-11003 ExportSnapshot is using the wrong fs when staging dir is not in 
fs.defaultFS (mbertozzi: rev 1588280)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11011) Avoid extra getFileStatus() calls on Region startup

2014-04-17 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973168#comment-13973168
 ] 

Jean-Daniel Cryans commented on HBASE-11011:


What should one do when seeing this message?

{code}
+  LOG.debug("compaction file missing: " + outputPath);
{code}

The fact that a file is missing seems pretty bad, yet it's at DEBUG.

 Avoid extra getFileStatus() calls on Region startup
 ---

 Key: HBASE-11011
 URL: https://issues.apache.org/jira/browse/HBASE-11011
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.2, 0.98.1, 1.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0, 0.98.2, 0.96.3

 Attachments: HBASE-11011-v0.patch


 On load we already have a StoreFileInfo, yet we re-create it from the path,
 which results in an extra fs.getFileStatus() call.
 In completeCompactionMarker() we do a fs.exists() and later a fs.getFileStatus()
 to create the StoreFileInfo; we can avoid the exists.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10934) Provide HBaseAdminInterface to abstract HBaseAdmin

2014-04-17 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973172#comment-13973172
 ] 

Enis Soztutar commented on HBASE-10934:
---

This patch looks good. There are a couple of changes that I want to make in the 
Admin interface before 1.0: 
 - Remove all methods that accept table name as byte[] and string. We should 
only have the TableName arguments. No need to bloat the interfaces. 
 - remove the methods that accept tableNameOrRegionName. This is unacceptable. 
We should have xxxRegion(), xxxTable() methods instead. 
 - rethink whether some of the methods should be exposed (like 
getMasterCoprocessors() above) 

However, I think we can commit this patch, and make the changes above in a 
follow up patch, or as a part of HBASE-10602. I'll commit this shortly. 


 Provide HBaseAdminInterface to abstract HBaseAdmin
 --

 Key: HBASE-10934
 URL: https://issues.apache.org/jira/browse/HBASE-10934
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Carter
Priority: Blocker
  Labels: patch
 Fix For: 0.99.0

 Attachments: HBASE_10934.patch, HBASE_10934_2.patch


 As HBaseAdmin is essentially the administrative API, it would seem to follow 
 Java best practices to provide an interface to access it instead of requiring 
 applications to use the raw object.
 I am proposing (and would be happy to develop):
  * A new interface, HBaseAdminInterface, that captures the signatures of the 
 API (HBaseAdmin will implement this interface)
  * A new method, HConnection.getHBaseAdmin(), that returns an instance of the 
 interface



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11012) InputStream is not closed in two methods of JarFinder

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973190#comment-13973190
 ] 

Nick Dimiduk commented on HBASE-11012:
--

copyToZipStream(InputStream, ZipEntry, ZipOutputStream) closes the InputStream 
for the caller. This patch should result in a double-close. Better to put the 
try/finally in copyToZipStream or return responsibility to the caller. Ugly as 
it is, the current implementation is probably not broken.
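The ownership split Nick suggests can be sketched in isolation: the copy helper leaves both streams open, and the caller closes its stream exactly once via try-with-resources. The helper and class names below are illustrative, not the JarFinder code itself.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Sketch of explicit stream ownership: the copy helper deliberately does NOT
// close either stream, so the caller can close its stream once, and only once.
public class CopyOwnershipDemo {
    // Copies bytes; closing is the caller's responsibility.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static String roundTrip(String s) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Caller owns the input stream: closed exactly once, even on exceptions.
        try (InputStream in = new ByteArrayInputStream(s.getBytes(StandardCharsets.UTF_8))) {
            copy(in, out);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello"));   // prints "hello"
    }
}
```

With this split there is no way for the stream to be closed both inside the helper and in the caller's finally block.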

 InputStream is not closed in two methods of JarFinder
 -

 Key: HBASE-11012
 URL: https://issues.apache.org/jira/browse/HBASE-11012
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Attachments: 11012-v1.txt


 JarFinder#jarDir() and JarFinder#zipDir() have such code:
 {code}
 99 InputStream is = new FileInputStream(f);
 100 copyToZipStream(is, anEntry, zos);
 {code}
 The InputStream is not closed after copy operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11003) ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973189#comment-13973189
 ] 

Hudson commented on HBASE-11003:


FAILURE: Integrated in HBase-0.94-JDK7 #118 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/118/])
HBASE-11003 ExportSnapshot is using the wrong fs when staging dir is not in 
fs.defaultFS (mbertozzi: rev 1588280)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11011) Avoid extra getFileStatus() calls on Region startup

2014-04-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973186#comment-13973186
 ] 

Matteo Bertozzi commented on HBASE-11011:
-

{quote}The fact that a file is missing seems pretty bad, yet it's at 
DEBUG.{quote}
In this case it's not bad; it's an expected situation, where the RS died before 
moving the compacted file.
Anyway, I think the whole for(compactionOutputs) loop can be removed,
since it tries to look up files in /table/region/family/ and if they are there 
they are already loaded for sure.
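The single-lookup pattern behind the patch can be sketched with java.nio.file as a stand-in for Hadoop's FileSystem: attempt the status call directly and treat not-found as an expected outcome, instead of probing with exists() first. The class and method names here are illustrative.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

// One metadata call with the not-found case handled via exception, instead of
// a separate exists() probe followed by a second status lookup.
public class SingleLookupDemo {
    public static long sizeOrMinusOne(Path p) {
        try {
            BasicFileAttributes attrs = Files.readAttributes(p, BasicFileAttributes.class);
            return attrs.size();           // one round-trip: status fetched directly
        } catch (NoSuchFileException e) {
            return -1L;                    // "missing" is an expected outcome, not an error
        } catch (IOException e) {
            throw new UncheckedIOException(e);  // genuine I/O failures still surface
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("store", ".tmp");
        System.out.println(sizeOrMinusOne(tmp) >= 0);                       // true
        System.out.println(sizeOrMinusOne(tmp.resolveSibling("absent")));   // -1
        Files.delete(tmp);
    }
}
```

On a distributed filesystem each call is a round-trip to the NameNode, which is why collapsing exists()+getFileStatus() into one lookup matters at region-open time.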




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11011) Avoid extra getFileStatus() calls on Region startup

2014-04-17 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973196#comment-13973196
 ] 

Jean-Daniel Cryans commented on HBASE-11011:


bq. In this case is not bad, is an expected situation, where the RS died 
before moving the compacted file.

This might be, but the log message doesn't convey any of that. Taken out of 
context (which is how most users read our logs), it just says a file is missing.

bq. since is trying to lookup files in /table/region/family/ and if they are 
there are already loaded for sure.

I'm trusting you on this one :)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10950) Add a configuration point for MaxVersion of Column Family

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973200#comment-13973200
 ] 

Nick Dimiduk commented on HBASE-10950:
--

This isn't quite right.

{noformat}
-  public static final int DEFAULT_VERSIONS = 1;
+  public static final int DEFAULT_VERSIONS = HBaseConfiguration.create().getInt(
+      "hbase.column.max.version", 1);
{noformat}

DEFAULT_VERSIONS shouldn't change. Instead, you should change the places it's 
referenced to retrieve the site-configured default value first, and then fall 
back to the column descriptor.

Also, creating a config inplace like this is goofy. It means people won't have 
a chance to add customizations to their conf object before it's parsed. Better 
to use the appropriate conf object for the context.
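The lookup order being suggested can be sketched with java.util.Properties standing in for HBase's Configuration; the class and method names are illustrative, only the property key comes from the patch.

```java
import java.util.Properties;

// Sketch: the compile-time default stays fixed, and call sites consult the
// site configuration first, falling back to the constant.
public class MaxVersionsDemo {
    public static final int DEFAULT_VERSIONS = 1;   // unchanged compile-time default

    // Resolve max versions at the point of use, not by baking config into the constant.
    public static int resolveMaxVersions(Properties siteConf) {
        String v = siteConf.getProperty("hbase.column.max.version");
        return (v != null) ? Integer.parseInt(v) : DEFAULT_VERSIONS;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(resolveMaxVersions(conf));      // falls back: 1
        conf.setProperty("hbase.column.max.version", "3");
        System.out.println(resolveMaxVersions(conf));      // site override: 3
    }
}
```

Because the configuration object is passed in rather than created in place, callers can still customize their conf before it is consulted.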

 Add  a configuration point for MaxVersion of Column Family
 --

 Key: HBASE-10950
 URL: https://issues.apache.org/jira/browse/HBASE-10950
 Project: HBase
  Issue Type: Improvement
  Components: Admin
Affects Versions: 0.98.0, 0.96.0
Reporter: Demai Ni
Assignee: Enoch Hsu
 Fix For: 0.99.0, 0.98.2, 0.96.3

 Attachments: HBASE_10950.patch, HBASE_10950_v2.patch


 Starting in 0.96.0, HColumnDescriptor.DEFAULT_VERSIONS changed to 1 from 3,
 so a column family will by default have 1 version of data. Currently a user
 can specify the maxVersion at table-creation time or alter the column family
 later. This feature will add a configuration point in hbase-site.xml so that
 an admin can set the default globally.
 A small discussion in
 [HBASE-10941|https://issues.apache.org/jira/browse/HBASE-10941] led to this
 jira.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10934) Provide HBaseAdminInterface to abstract HBaseAdmin

2014-04-17 Thread Carter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter updated HBASE-10934:
---

Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10934) Provide HBaseAdminInterface to abstract HBaseAdmin

2014-04-17 Thread Carter (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973204#comment-13973204
 ] 

Carter commented on HBASE-10934:


I can certainly remove the byte[] and string methods with this patch.  (I much 
prefer to add methods to an interface later than remove.)

Let me take one more pass before you submit.





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10934) Provide HBaseAdminInterface to abstract HBaseAdmin

2014-04-17 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973211#comment-13973211
 ] 

Enis Soztutar commented on HBASE-10934:
---

Ok, no probs. We can still make changes to the interface before a release comes 
out. Thanks Carter for picking this up. 




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11012) InputStream is not closed in two methods of JarFinder

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11012:
---

Attachment: (was: 11012-v1.txt)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11012) InputStream is not closed in two methods of JarFinder

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11012:
---

Attachment: 11012-v2.txt

How about patch v2 ?

Calls to is and zos may throw IOException.
Patch adds finally clauses.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10156) FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)

2014-04-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973214#comment-13973214
 ] 

stack commented on HBASE-10156:
---

[~liochon] You have a point.

We could do this:

@@ -1486,6 +1488,9 @@ class FSHLog implements HLog, Syncable {
   @Override
   // txid is unused.  txid is an implementation detail.  It should not leak outside of WAL.
   public void sync(long txid) throws IOException {
+    // If this edit has been sync'd already, we can just return.  This is dangerous.  Can only
+    // be for a single edit or for a sequence of edits written by this thread.
+    if (this.highestSyncedSequence.get() >= txid) return;
     publishSyncThenBlockOnCompletion();
   }

This is all before the ringbuffer.  It would be hard to do on other side of the 
ringbuffer unless we carried this seqid -- which would be different from the 
ringbuffers' current seqid -- over to the other side and then on the other side 
did something similar (would be a bit more involved on other side since context 
would be blown).

Good one.

(Would have to undo my 'deprecation' of the sync that takes a txid in a more 
complete patch).

 FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)
 ---

 Key: HBASE-10156
 URL: https://issues.apache.org/jira/browse/HBASE-10156
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 10156.txt, 10156v10.txt, 10156v11.txt, 10156v12.txt, 
 10156v12.txt, 10156v13.txt, 10156v16.txt, 10156v17.txt, 10156v18.txt, 
 10156v19.txt, 10156v2.txt, 10156v20.txt, 10156v20.txt, 10156v21.txt, 
 10156v21.txt, 10156v21.txt, 10156v3.txt, 10156v4.txt, 10156v5.txt, 
 10156v6.txt, 10156v7.txt, 10156v9.txt, Disrupting.java


 HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10156) FSHLog Refactor (WAS - Fix up the HBASE-8755 slowdown when low contention)

2014-04-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973215#comment-13973215
 ] 

stack commented on HBASE-10156:
---

On other side of the ring buffer would be difficult as things are now as we 
don't keep track of the seqids we've 'appended' to the WAL (this we could 
change and it is changed in Himanshu's patch that introduces switching WALs).

Thanks for taking a look here [~nkeywal]




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11009) We sync every hbase:meta table write twice

2014-04-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11009:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~nkeywal] for finding this and for the review.

 We sync every hbase:meta table write twice
 --

 Key: HBASE-11009
 URL: https://issues.apache.org/jira/browse/HBASE-11009
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11009.txt, 11009v2.txt


 Found by @nkeywal and [~devaraj] and noted on the tail of HBASE-10156.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11012) InputStream is not properly closed in two methods of JarFinder

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11012:
---

Summary: InputStream is not properly closed in two methods of JarFinder  
(was: InputStream is not closed in two methods of JarFinder)

 InputStream is not properly closed in two methods of JarFinder
 --

 Key: HBASE-11012
 URL: https://issues.apache.org/jira/browse/HBASE-11012
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Attachments: 11012-v2.txt


 JarFinder#jarDir() and JarFinder#zipDir() have such code:
 {code}
 99 InputStream is = new FileInputStream(f);
 100 copyToZipStream(is, anEntry, zos);
 {code}
 The InputStream is not closed after copy operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11012) InputStream is not properly closed in two methods of JarFinder

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973228#comment-13973228
 ] 

Nick Dimiduk commented on HBASE-11012:
--

Looks good to me. On commit, please add a javadoc to the method pointing out 
that it closes the provided InputStream -- help out everyone's future selves. 
Thanks.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11000) Add autoflush option to PerformanceEvaluation

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973236#comment-13973236
 ] 

Nick Dimiduk commented on HBASE-11000:
--

[~nkeywal] Did you mean to also commit to hbase-10070 or can this be resolved?

 Add autoflush option to PerformanceEvaluation
 -

 Key: HBASE-11000
 URL: https://issues.apache.org/jira/browse/HBASE-11000
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11000.v1.patch


 includes some very minor cleanup



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10984) Add description about setting up htrace-zipkin to documentation

2014-04-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10984:
--

Attachment: 10984v2.txt

What I committed: Masatake's doc, with the pointer to Maven Central suggested by 
Nick.

 Add description about setting up htrace-zipkin to documentation
 ---

 Key: HBASE-10984
 URL: https://issues.apache.org/jira/browse/HBASE-10984
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: 10984v2.txt, HBASE-10984-0.patch


 adding manual setup procedure of htrace-zipkin for tracing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10900) FULL table backup and restore

2014-04-17 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973245#comment-13973245
 ] 

Demai Ni commented on HBASE-10900:
--

[~mbertozzi], I posted my response to your comments on review board last 
Friday, but forgot to publish them. a rookie mistake. Sorry about that. Many 
thanks for your input. about your comments on zookeeper, I will need to do some 
more study to address it. I am aware of some efforts recently through several 
Jiras to move away from zookeeper. .. Demai 

 FULL table backup and restore
 -

 Key: HBASE-10900
 URL: https://issues.apache.org/jira/browse/HBASE-10900
 Project: HBase
  Issue Type: Task
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 1.0.0

 Attachments: HBASE-10900-fullbackup-trunk-v1.patch


 h2. Feature Description
 This is a subtask of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] to support FULL 
 backup/restore, and will complete the following function:
 {code:title=Backup Restore example|borderStyle=solid}
 /* backup from sourcecluster to targetcluster */
 /* if no table name is specified, all tables from the source cluster will be backed up */
 [sourcecluster]$ hbase backup create full hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 /* restore on targetcluster; this is a local restore */
 /* backup_1396650096738 - backup image name */
 /* t1_dn, etc. are the original table names. All tables will be restored if not specified */
 /* t1_dn_restore, etc. are the restored tables. If not specified, the original table names will be used */
 [targetcluster]$ hbase restore /userid/backupdir backup_1396650096738 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 /* restore from targetcluster back to the source cluster; this is a remote restore */
 [sourcecluster]$ hbase restore hdfs://hostname.targetcluster.org:9000/userid/backupdir backup_1396650096738 t1_dn,t2_dn,t3_dn t1_dn_restore,t2_dn_restore,t3_dn_restore
 {code}
 h2. Detailed layout and framework for the next jiras
 The patch is a wrapper around the existing snapshot and exportSnapshot, and will 
 serve as the base framework for the overall solution of 
 [HBase-7912|https://issues.apache.org/jira/browse/HBASE-7912] as described 
 below:
 * *bin/hbase*  : end-user command line interface to invoke 
 BackupClient and RestoreClient
 * *BackupClient.java*  : 'main' entry for backup operations. This patch only 
 supports 'full' backup. Future jiras will support:
 ** *create* incremental backup
 ** *cancel* an ongoing backup
 ** *delete* an existing backup image
 ** *describe* the detailed information of a backup image
 ** show *history* of all successful backups 
 ** show the *status* of the latest backup request
 ** *convert* incremental backup WAL files into HFiles.  either on-the-fly 
 during create or after create
 ** *merge* backup image
 ** *stop* backing up a table of an existing backup image
 ** *show* tables of a backup image 
 * *BackupCommands.java* : a place to keep all the command usages and options
 * *BackupManager.java*  : handles backup requests on the server side and creates 
 BACKUP zookeeper nodes to keep track of backups. The timestamps kept in zookeeper 
 will be used for future incremental backups (not included in this jira). 
 Creates BackupContext and DispatchRequest. 
 * *BackupHandler.java*  : in this patch, it is a wrapper of snapshot and 
 exportsnapshot. In future jiras, 
 ** *timestamps* info will be recorded in ZK
 ** carry on *incremental* backup.  
 ** update backup *progress*
 ** set flags of *status*
 ** build up the *backupManifest* file (in this jira only limited info for a 
 full backup; later on, timestamps and dependencies of multiple backup images are 
 also recorded here)
 ** clean up after *failed* backup 
 ** clean up after *cancelled* backup
 ** allow on-the-fly *convert* during incremental backup 
 * *BackupContext.java* : encapsulate backup information like backup ID, table 
 names, directory info, phase, TimeStamps of backup progress, size of data, 
 ancestor info, etc. 
 * *BackupCopier.java*  : the copying operation. Later on, to support 
 progress reporting and mapper estimation; extends DistCp for progress 
 updating to ZK during backup. 
 * *BackupException.java*: to handle exceptions from backup/restore
 * *BackupManifest.java* : encapsulate all the backup image information. The 
 manifest info will be bundled as manifest file together with data. So that 
 each backup image will contain all the info needed for restore. 
 * *BackupStatus.java*   : encapsulate backup status at table level during 
 backup progress
 * *BackupUtil.java* : utility methods during backup 

[jira] [Updated] (HBASE-10984) Add description about setting up htrace-zipkin to documentation

2014-04-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10984:
--

   Resolution: Fixed
Fix Version/s: 0.99.0
 Release Note: How to enable tracing into zipkin
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thank you for the doc [~iwasakims]

 Add description about setting up htrace-zipkin to documentation
 ---

 Key: HBASE-10984
 URL: https://issues.apache.org/jira/browse/HBASE-10984
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Fix For: 0.99.0

 Attachments: 10984v2.txt, HBASE-10984-0.patch


 Adds the manual setup procedure of htrace-zipkin for tracing.





[jira] [Commented] (HBASE-11004) Extend traces through FSHLog#sync

2014-04-17 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973273#comment-13973273
 ] 

Elliott Clark commented on HBASE-11004:
---

I continue to think that we shouldn't include htrace-zipkin as a dependency.  
Though as more people start using it I could be convinced that I'm wrong.

 Extend traces through FSHLog#sync
 -

 Key: HBASE-11004
 URL: https://issues.apache.org/jira/browse/HBASE-11004
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11004.00.patch, spans.png, spans.txt


 Changes introduced in HBASE-8755 decouple wal append from wal sync. A gap was 
 left in the tracing of these requests. I believe this means our spans are 
 decoupled from the work happening over on HDFS-5274. This ticket is to close 
 the air-gap between threads.





[jira] [Commented] (HBASE-11005) Remove dead code in HalfStoreFileReader#getScanner#seekBefore()

2014-04-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973299#comment-13973299
 ] 

Hudson commented on HBASE-11005:


SUCCESS: Integrated in HBase-TRUNK #5093 (See 
[https://builds.apache.org/job/HBase-TRUNK/5093/])
HBASE-11005 Remove dead code in HalfStoreFileReader#getScanner#seekBefore() 
(Gustavo Anatoly) (tedyu: rev 1588295)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java


 Remove dead code in HalfStoreFileReader#getScanner#seekBefore()
 ---

 Key: HBASE-11005
 URL: https://issues.apache.org/jira/browse/HBASE-11005
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Fix For: 0.99.0

 Attachments: HBASE-11005.patch


 Here is related code:
 {code}
   Cell fk = new KeyValue.KeyOnlyKeyValue(getFirstKey(), 0, 
 getFirstKey().length);
   // This will be null when the file is empty in which we can not
   // seekBefore to any key
   if (fk == null)
 return false;
 {code}
 fk wouldn't be null.
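 The reason fk can never be null can be shown in isolation (a self-contained sketch, not HBase code): in Java, `new` either yields a non-null reference or throws, so a null check immediately after a constructor call guards an unreachable branch.

```java
public class DeadCheckSketch {
    // `new` either returns a non-null reference or throws an exception;
    // it never evaluates to null, so `if (x == null)` right after a
    // constructor call is dead code, as in the quoted seekBefore() snippet.
    static Object construct() {
        return new Object();
    }
}
```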





[jira] [Commented] (HBASE-11004) Extend traces through FSHLog#sync

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973305#comment-13973305
 ] 

Nick Dimiduk commented on HBASE-11004:
--

Nope, you're right. That was unintentional on my part. I added it as a 
convenience for some local testing. Will remove in the next patch.

 Extend traces through FSHLog#sync
 -

 Key: HBASE-11004
 URL: https://issues.apache.org/jira/browse/HBASE-11004
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11004.00.patch, spans.png, spans.txt


 Changes introduced in HBASE-8755 decouple wal append from wal sync. A gap was 
 left in the tracing of these requests. I believe this means our spans are 
 decoupled from the work happening over on HDFS-5274. This ticket is to close 
 the air-gap between threads.





[jira] [Updated] (HBASE-11012) InputStream is not properly closed in two methods of JarFinder

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11012:
---

Attachment: 11012-v3.txt

Patch v3 moves the construction of InputStream inside copyToZipStream().

 InputStream is not properly closed in two methods of JarFinder
 --

 Key: HBASE-11012
 URL: https://issues.apache.org/jira/browse/HBASE-11012
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Attachments: 11012-v2.txt, 11012-v3.txt


 JarFinder#jarDir() and JarFinder#zipDir() have such code:
 {code}
 99 InputStream is = new FileInputStream(f);
 100 copyToZipStream(is, anEntry, zos);
 {code}
 The InputStream is not closed after copy operation.
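 One way to guarantee the close is try-with-resources (Java 7+). This is a generic sketch with hypothetical names, not the actual patch -- v3 instead moves the stream construction inside copyToZipStream():

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopySketch {
    // Copies the contents of f into out; try-with-resources closes the
    // FileInputStream even when the copy loop throws, fixing the leak
    // described in this issue.
    static long copyFile(File f, OutputStream out) throws IOException {
        long total = 0;
        try (InputStream is = new FileInputStream(f)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = is.read(buf)) != -1) {
                out.write(buf, 0, n);
                total += n;
            }
        } // is.close() has run here, on success or failure
        return total;
    }
}
```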





[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973321#comment-13973321
 ] 

stack commented on HBASE-10993:
---

Should getDeadline be in the PriorityFunction Interface?  It seems like an 
implementation detail of a particular PF implementation?

On this:

+  // Comparator used by the normal callQueue.

We are changing the 'normal', or default SimpleRpcScheduler to 'deprioritize 
long-running scanners', right?   It is on by default:

+if (conf.getBoolean(DEADLINE_CALL_QUEUE_CONF_KEY, true)) {

That is good.  What 'effect' should I see now that this is on?  Any?  (Since 
SCAN_VTIME_WEIGHT_CONF_KEY has a default of 1.0f?)

This has no doc:

+  public static final String SCAN_VTIME_WEIGHT_CONF_KEY = 
"ipc.server.scan.vtime.weight";

Somewhere we should have description of what vtime and 'weight' is about.

This class needs doc: FixedPriorityBlockingQueue.  Is it 'fixed' priority?  
Doesn't it change with how long the scan has been going on?

Yeah, some explanation here would help.. why we are sqrt'ing and rounding and 
multiplying weight ...

+  long vtime = rpcServices.getScannerVirtualTime(request.getScannerId());
+  return Math.round(Math.sqrt(vtime * scanVirtualTimeWeight));

Is the below a timestamp?

+  return scannerHolder.nextCallSeq;

Nice test in TestSimpleRpcScheduler

Patch looks great Matteo.  I did not review the queue implementation closely.  
The test for it seems fine.



 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user  
 requests, and the requests are executed in FIFO.
 When running map-reduce jobs and user-queries on the same machine, we want to 
 prioritize the user-queries.
 Without changing too much code, and without requiring hints from the user, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way we 
 can deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.
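 The deadline derived from a scanner's accumulated vtime, per the snippet quoted in the review comment above, can be sketched on its own (the 1.0f default weight is taken from the discussion):

```java
public class ScanDeadlineSketch {
    // Mirrors the patch's deadline calculation: the longer a scanner has
    // been running (vtime), the larger its deadline, so it sorts behind
    // fresh requests in the priority queue. The sqrt dampens the penalty
    // so long scans are delayed but never starved outright.
    static long deadline(long vtime, float scanVirtualTimeWeight) {
        return Math.round(Math.sqrt(vtime * scanVirtualTimeWeight));
    }
}
```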





[jira] [Commented] (HBASE-11012) InputStream is not properly closed in two methods of JarFinder

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973324#comment-13973324
 ] 

Hadoop QA commented on HBASE-11012:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640665/11012-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12640665

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9317//console

This message is automatically generated.

 InputStream is not properly closed in two methods of JarFinder
 --

 Key: HBASE-11012
 URL: https://issues.apache.org/jira/browse/HBASE-11012
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Attachments: 11012-v2.txt, 11012-v3.txt


 JarFinder#jarDir() and JarFinder#zipDir() have such code:
 {code}
 99 InputStream is = new FileInputStream(f);
 100 copyToZipStream(is, anEntry, zos);
 {code}
 The InputStream is not closed after copy operation.





[jira] [Updated] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.

2014-04-17 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-5697:
---

Attachment: HBASE-5697_v2.patch

 Audit HBase for usage of deprecated hadoop 0.20.x property names.
 -

 Key: HBASE-5697
 URL: https://issues.apache.org/jira/browse/HBASE-5697
 Project: HBase
  Issue Type: Task
Reporter: Jonathan Hsieh
Assignee: Srikanth Srungarapu
  Labels: noob
 Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, 
 deprecated_properties


 Many xml config properties in Hadoop have changed in 0.23.  We should audit 
 hbase to insulate it from hadoop property name changes.
 Here is a list of the hadoop property name changes:
 http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html





[jira] [Updated] (HBASE-10934) Provide HBaseAdminInterface to abstract HBaseAdmin

2014-04-17 Thread Carter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter updated HBASE-10934:
---

Attachment: HBASE_10934_3.patch

Removed duplicate string/byte[] - TableName helper methods to keep the 
interface tight.

Left tableOrRegionName methods in place until we can figure out what to do with 
them.

 Provide HBaseAdminInterface to abstract HBaseAdmin
 --

 Key: HBASE-10934
 URL: https://issues.apache.org/jira/browse/HBASE-10934
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Carter
Priority: Blocker
  Labels: patch
 Fix For: 0.99.0

 Attachments: HBASE_10934.patch, HBASE_10934_2.patch, 
 HBASE_10934_3.patch


 As HBaseAdmin is essentially the administrative API, it would seem to follow 
 Java best practices to provide an interface to access it instead of requiring 
 applications to use the raw object.
 I am proposing (and would be happy to develop):
  * A new interface, HBaseAdminInterface, that captures the signatures of the 
 API (HBaseAdmin will implement this interface)
  * A new method, HConnection.getHBaseAdmin(), that returns an instance of the 
 interface
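 The proposed shape can be sketched as follows (illustrative names only; the actual interface would mirror HBaseAdmin's signatures):

```java
// Illustrative sketch of the proposal: an interface capturing the admin
// API, the concrete class implementing it, and a connection-level
// accessor returning the interface rather than the implementation.
interface AdminInterfaceSketch {
    void createTable(String name);
    void disableTable(String name);
}

class AdminImplSketch implements AdminInterfaceSketch {
    public void createTable(String name) { /* ... */ }
    public void disableTable(String name) { /* ... */ }
}

class ConnectionSketch {
    // Mirrors the proposed HConnection.getHBaseAdmin(): callers program
    // against the interface, never the raw admin object.
    AdminInterfaceSketch getAdmin() {
        return new AdminImplSketch();
    }
}
```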





[jira] [Commented] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.

2014-04-17 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973338#comment-13973338
 ] 

Srikanth Srungarapu commented on HBASE-5697:


Sorry for overlooking it. I uploaded a new patch with no '.' between 
max.attempts. Also, I want to add that the relevant Hadoop 2.2 and Hadoop 2.3 
property changes are the same. And yeah, I grepped once again for all the ones on 
the page and made sure none went missing.

 Audit HBase for usage of deprecated hadoop 0.20.x property names.
 -

 Key: HBASE-5697
 URL: https://issues.apache.org/jira/browse/HBASE-5697
 Project: HBase
  Issue Type: Task
Reporter: Jonathan Hsieh
Assignee: Srikanth Srungarapu
  Labels: noob
 Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, 
 deprecated_properties


 Many xml config properties in Hadoop have changed in 0.23.  We should audit 
 hbase to insulate it from hadoop property name changes.
 Here is a list of the hadoop property name changes:
 http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html





[jira] [Commented] (HBASE-5697) Audit HBase for usage of deprecated hadoop 0.20.x property names.

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973334#comment-13973334
 ] 

Hadoop QA commented on HBASE-5697:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640689/HBASE-5697_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12640689

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 99 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9319//console

This message is automatically generated.

 Audit HBase for usage of deprecated hadoop 0.20.x property names.
 -

 Key: HBASE-5697
 URL: https://issues.apache.org/jira/browse/HBASE-5697
 Project: HBase
  Issue Type: Task
Reporter: Jonathan Hsieh
Assignee: Srikanth Srungarapu
  Labels: noob
 Attachments: HBASE-5697.patch, HBASE-5697_v2.patch, 
 deprecated_properties


 Many xml config properties in Hadoop have changed in 0.23.  We should audit 
 hbase to insulate it from hadoop property name changes.
 Here is a list of the hadoop property name changes:
 http://hadoop.apache.org/common/docs/r0.23.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html





[jira] [Updated] (HBASE-10934) Provide HBaseAdminInterface to abstract HBaseAdmin

2014-04-17 Thread Carter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter updated HBASE-10934:
---

Status: Patch Available  (was: Open)

 Provide HBaseAdminInterface to abstract HBaseAdmin
 --

 Key: HBASE-10934
 URL: https://issues.apache.org/jira/browse/HBASE-10934
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Carter
Priority: Blocker
  Labels: patch
 Fix For: 0.99.0

 Attachments: HBASE_10934.patch, HBASE_10934_2.patch, 
 HBASE_10934_3.patch


 As HBaseAdmin is essentially the administrative API, it would seem to follow 
 Java best practices to provide an interface to access it instead of requiring 
 applications to use the raw object.
 I am proposing (and would be happy to develop):
  * A new interface, HBaseAdminInterface, that captures the signatures of the 
 API (HBaseAdmin will implement this interface)
  * A new method, HConnection.getHBaseAdmin(), that returns an instance of the 
 interface





[jira] [Commented] (HBASE-11015) Some refactoring on MemStoreFlusher

2014-04-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973342#comment-13973342
 ] 

stack commented on HBASE-11015:
---

Thanks for the patch Yi Deng.

Why are you making the change?  Do you have numbers or a use case you are 
improving for, or is it just an 'itch'?

Rather than HRegionIf, maybe call it 'Region'?  (See where we have Store and 
HStore as the implementation).

No need to put HSTORE_BLOCKING_STORE_FILES_KEY in HConstants?  Put it into the 
class where it is used, MSFlusher? (HConstants is a bit of an anti-pattern -- good 
only for constants used in many packages.)


Does this need to be in HRegion as a public method:

+  public int maxStoreFilesCount() {

Would a utility method do here instead?  (Should it be called 
getMaxStoreFileCount?)  

Look at other Interfaces in HBase.  See how they do not have the public 
qualifiers.  I bring it up because a kind heart went through all of our 
Interfaces and removed the 'public' qualifiers intentionally (They are not 
needed on Interfaces).

On this:

HRegionServerIf

There is a RegionServerServices interface already?  You didn't want to use 
that?

How are these done currently?  The Master runs them IIRC?

+  /**
+   * Requests the region server to make a split on a specific region-store.
+   */
+  public boolean requestSplit(HRegionIf r);
+
+  /**
+   * Requests the region server to make a compaction on a specific 
region-store.
+   *
+   * @param r the region-store.
+   * @param why Why compaction requested -- used in debug messages
+   */
+  public void requestCompaction(HRegionIf r, String why);

I reviewed about half of the patch. 


 Some refactoring on MemStoreFlusher
 ---

 Key: HBASE-11015
 URL: https://issues.apache.org/jira/browse/HBASE-11015
 Project: HBase
  Issue Type: Bug
  Components: io
Reporter: Yi Deng
  Labels: patch
 Fix For: 0.89-fb

 Attachments: D1264374.diff.txt


 Use `ScheduledThreadPoolExecutor`
 Change some logic
 Add testcase





[jira] [Commented] (HBASE-10999) Cross-row Transaction : Implement Percolator Algorithm on HBase

2014-04-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973351#comment-13973351
 ] 

Vladimir Rodionov commented on HBASE-10999:
---

I might be wrong in my assumptions, but it seems that you are doing cross-region 
RPCs from inside coprocessors (RegionObservers?). If this is true, then how have 
you implemented deadlock prevention when all RPC threads on some RS can be 
blocked processing incoming and outgoing requests? This subject (cross-region 
RPCs from a RegionObserver) has been discussed several times in the past and is 
now considered an anti-pattern.  

 Cross-row Transaction : Implement Percolator Algorithm on HBase
 ---

 Key: HBASE-10999
 URL: https://issues.apache.org/jira/browse/HBASE-10999
 Project: HBase
  Issue Type: New Feature
  Components: Transactions/MVCC
Affects Versions: 0.99.0
Reporter: cuijianwei
Assignee: cuijianwei

 Cross-row transaction is a desired function for database. It is not easy to 
 keep ACID characteristics of cross-row transactions in distribute databases 
 such as HBase, because data of cross-transaction might locate in different 
 machines. In the paper http://research.google.com/pubs/pub36726.html, google 
 presents an algorithm(named percolator) to implement cross-row transactions 
 on BigTable. After analyzing the algorithm, we found percolator might also be 
 a choice to provide cross-row transaction on HBase. The reasons includes:
 1. Percolator could keep the ACID of cross-row transaction as described in 
 google's paper. Percolator depends on a Global Incremental Timestamp Service 
 to define the order of transactions, this is important to keep ACID of 
 transaction.
 2. Percolator algorithm could be totally implemented in client-side. This 
 means we do not need to change the logic of server side. Users could easily 
 include percolator in their client and adopt percolator APIs only when they 
 want cross-row transaction.
 3. Percolator is a general algorithm which could be implemented based on 
 databases providing single-row transaction. Therefore, it is feasible to 
 implement percolator on HBase.
 In last few months, we have implemented percolator on HBase, did correctness 
 validation, performance test and finally successfully applied this algorithm 
 in our production environment. Our works include:
 1. percolator algorithm implementation on HBase. The current implementations 
 includes:
 a). a Transaction module to provides put/delete/get/scan interfaces to do 
 cross-row/cross-table transaction.
 b). a Global Incremental Timestamp Server to provide globally 
 monotonically increasing timestamp for transaction.
 c). a LockCleaner module to resolve conflict when concurrent transactions 
 mutate the same column.
 d). an internal module to implement prewrite/commit/get/scan logic of 
 percolator.
Although percolator logic could be totally implemented in client-side, we 
 use coprocessor framework of HBase in our implementation. This is because 
 coprocessor could provide percolator-specific Rpc interfaces such as 
 prewrite/commit to reduce Rpc rounds and improve efficiency. Another reason 
 to use coprocessor is that we want to decouple percolator's code from HBase 
 so that users will get clean HBase code if they don't need cross-row 
 transactions. In future, we will also explore the concurrent running 
 characteristic of coprocessor to do cross-row mutations more efficiently.
 2. an AccountTransfer simulation program to validate the correctness of 
 implementation. This program will distribute initial values in different 
 tables, rows and columns in HBase. Each column represents an account. Then, 
 configured client threads will be concurrently started to read out a number 
 of account values from different tables and rows by percolator's get; after 
 this, clients will randomly transfer values among these accounts while 
 keeping the sum unchanged, which simulates concurrent cross-table/cross-row 
 transactions. To check the correctness of transactions, a checker thread will 
 periodically scan account values from all columns and make sure the current 
 total value is the same as the initial total value. We ran this validation 
 program while developing; it helped us correct implementation errors.
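 The invariant the checker enforces is simply conservation of the total (a hypothetical in-memory stand-in for the HBase tables, not the actual simulation code):

```java
import java.util.Arrays;

public class TransferInvariantSketch {
    // Moves `amount` from one account to another; the sum of all
    // balances must be unchanged afterwards, which is what the checker
    // thread in the AccountTransfer simulation verifies across
    // concurrent cross-table/cross-row transactions.
    static void transfer(long[] accounts, int from, int to, long amount) {
        accounts[from] -= amount;
        accounts[to] += amount;
    }

    static long total(long[] accounts) {
        return Arrays.stream(accounts).sum();
    }
}
```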
 3. performance evaluation under various test situations. We compared 
 percolator's APIs with HBase's with different data size and client thread 
 count for single-column transaction which represents the worst performance 
 case for percolator. The performance comparison results are as follows:
 a) For read, the performance of percolator is 90% of HBase;
 b) For write, the performance of percolator is 23%  of HBase.
 The drop derives from the 

[jira] [Commented] (HBASE-11012) InputStream is not properly closed in two methods of JarFinder

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973366#comment-13973366
 ] 

Nick Dimiduk commented on HBASE-11012:
--

+1 v3, now it's all in one place.

Probably the original author intended the InputStream abstraction so that any 
source could be provided, but this is only used in 2 places, so this is good.

 InputStream is not properly closed in two methods of JarFinder
 --

 Key: HBASE-11012
 URL: https://issues.apache.org/jira/browse/HBASE-11012
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Attachments: 11012-v2.txt, 11012-v3.txt


 JarFinder#jarDir() and JarFinder#zipDir() have such code:
 {code}
 99 InputStream is = new FileInputStream(f);
 100 copyToZipStream(is, anEntry, zos);
 {code}
 The InputStream is not closed after copy operation.





[jira] [Updated] (HBASE-10924) [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges

2014-04-17 Thread Aleksandr Shulman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Shulman updated HBASE-10924:
--

Attachment: HBASE-10924-0.94-v1.patch

Attaching v1 of the patch. For 94 only.

 [region_mover]: Adjust region_mover script to retry unloading a server a 
 configurable number of times in case of region splits/merges
 -

 Key: HBASE-10924
 URL: https://issues.apache.org/jira/browse/HBASE-10924
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.94.15
Reporter: Aleksandr Shulman
Assignee: Aleksandr Shulman
  Labels: region_mover, rolling_upgrade
 Fix For: 0.94.20

 Attachments: HBASE-10924-0.94-v1.patch


 Observed behavior:
 In about 5% of cases, my rolling upgrade tests fail because of stuck regions 
 during a region server unload. My theory is that this occurs when region 
 assignment information changes between the time the region list is generated, 
 and the time when the region is to be moved.
 An example of such a region information change is a split or merge.
 Example:
 Regionserver A has 100 regions (#0-#99). The balancer is turned off and the 
 regionmover script is called to unload this regionserver. The regionmover 
 script will generate the list of 100 regions to be moved and then proceed 
 down that list, moving the regions off in series. However, there is a region, 
 #84, that has split into two daughter regions while regions 0-83 were moved. 
 The script will be stuck trying to move #84, timeout, and then the failure 
 will bubble up (attempt 1 failed).
 Proposed solution:
 This specific failure mode should be caught and the region_mover script 
 should now attempt to move off all the regions. Now, it will have 16+1 (due 
 to split) regions to move. There is a good chance that it will be able to 
 move all 17 off without issues. However, should it encounter this same issue 
 (attempt 2 failed), it will retry again. This process will continue until the 
 maximum number of unload retry attempts has been reached.
 This is not foolproof, but let's say for the sake of argument that 5% of 
 unload attempts hit this issue, then with a retry count of 3, it will reduce 
 the unload failure probability from 0.05 to 0.000125 (0.05^3).
 Next steps:
 I am looking for feedback on this approach. If it seems like a sensible 
 approach, I will create a strawman patch and test it.
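 The probability claim above (assuming each unload attempt fails independently with probability p) is just p raised to the retry count:

```java
public class RetryOddsSketch {
    // Chance that every one of n independent unload attempts fails,
    // given a per-attempt failure probability p. With p = 0.05 and
    // n = 3 retries this matches the 0.000125 figure in the issue.
    static double allAttemptsFail(double p, int n) {
        return Math.pow(p, n);
    }
}
```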





[jira] [Created] (HBASE-11018) ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated

2014-04-17 Thread Jerry He (JIRA)
Jerry He created HBASE-11018:


 Summary: ZKUtil.getChildDataAndWatchForNewChildren() will not 
return null as indicated
 Key: HBASE-11018
 URL: https://issues.apache.org/jira/browse/HBASE-11018
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.1, 0.96.1
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor


While working on HBase acl, I found out that 
ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated.  
Here is the code:
{code}
  /**
   * Returns null if the specified node does not exist.  Otherwise returns a
   * list of children of the specified node.  If the node exists but it has no
   * children, an empty list will be returned.
   */
  public static List<NodeAndData> getChildDataAndWatchForNewChildren(
      ZooKeeperWatcher zkw, String baseNode) throws KeeperException {
    List<String> nodes =
      ZKUtil.listChildrenAndWatchForNewChildren(zkw, baseNode);
    List<NodeAndData> newNodes = new ArrayList<NodeAndData>();
    if (nodes != null) {
      for (String node : nodes) {
        String nodePath = ZKUtil.joinZNode(baseNode, node);
        byte[] data = ZKUtil.getDataAndWatch(zkw, nodePath);
        newNodes.add(new NodeAndData(nodePath, data));
      }
    }
    return newNodes;
  }
{code}
We return 'newNodes' which will never be null.

This is a deprecated method.  But it is still used in HBase code.
For example: org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.start()
{code}
  public void start() throws KeeperException {
    watcher.registerListener(this);
    if (ZKUtil.watchAndCheckExists(watcher, aclZNode)) {
      List<ZKUtil.NodeAndData> existing =
          ZKUtil.getChildDataAndWatchForNewChildren(watcher, aclZNode);
      if (existing != null) {
        refreshNodes(existing);
      }
    }
  }
{code}
We test for a 'null' return from the call, which is where the problem arises.
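A minimal sketch of one possible fix: return null when the base node does not 
exist, so the javadoc contract holds. The real method depends on ZooKeeper, 
so this uses plain strings as stand-ins; `childData` here is a hypothetical 
stand-in for the ZKUtil method, with the children listing passed in directly.

```java
import java.util.ArrayList;
import java.util.List;

public class NullContractSketch {

  /**
   * Returns null when the children listing is null (node absent), matching
   * the documented contract; otherwise returns a (possibly empty) list.
   */
  static List<String> childData(List<String> children) {
    if (children == null) {
      return null;          // node does not exist: honor the javadoc
    }
    List<String> newNodes = new ArrayList<String>();
    for (String child : children) {
      newNodes.add(child);  // real code would fetch data into NodeAndData
    }
    return newNodes;
  }

  public static void main(String[] args) {
    System.out.println(childData(null));                     // null
    System.out.println(childData(new ArrayList<String>()));  // []
  }
}
```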



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11009) We sync every hbase:meta table write twice

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973372#comment-13973372
 ] 

Nick Dimiduk commented on HBASE-11009:
--

One question: why not call sync instead of publishSyncThenBlockOnCompletion at 
this point? In fact, I made this very change in HBASE-11004.

 We sync every hbase:meta table write twice
 --

 Key: HBASE-11009
 URL: https://issues.apache.org/jira/browse/HBASE-11009
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11009.txt, 11009v2.txt


 Found by @nkeywal and [~devaraj] and noted on the tail of HBASE-10156.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11005) Remove dead code in HalfStoreFileReader#getScanner#seekBefore()

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11005:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Remove dead code in HalfStoreFileReader#getScanner#seekBefore()
 ---

 Key: HBASE-11005
 URL: https://issues.apache.org/jira/browse/HBASE-11005
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Fix For: 0.99.0

 Attachments: HBASE-11005.patch


 Here is related code:
 {code}
   Cell fk = new KeyValue.KeyOnlyKeyValue(getFirstKey(), 0, 
 getFirstKey().length);
   // This will be null when the file is empty in which we can not
   // seekBefore to any key
   if (fk == null)
 return false;
 {code}
 fk wouldn't be null.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11009) We sync every hbase:meta table write twice

2014-04-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973383#comment-13973383
 ] 

stack commented on HBASE-11009:
---

[~ndimiduk] That would be better.  Lets get that in on commit of HBASE-11004.  
Thanks for review.

 We sync every hbase:meta table write twice
 --

 Key: HBASE-11009
 URL: https://issues.apache.org/jira/browse/HBASE-11009
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11009.txt, 11009v2.txt


 Found by @nkeywal and [~devaraj] and noted on the tail of HBASE-10156.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11018) ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated

2014-04-17 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-11018:
-

Attachment: HBASE-11018-trunk.patch

 ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated
 -

 Key: HBASE-11018
 URL: https://issues.apache.org/jira/browse/HBASE-11018
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.1, 0.98.1
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Attachments: HBASE-11018-trunk.patch


 While working on HBase acl, I found out that 
 ZKUtil.getChildDataAndWatchForNewChildren() will not return null as 
 indicated.  Here is the code:
 {code}
  /**
   * Returns null if the specified node does not exist.  Otherwise returns a
   * list of children of the specified node.  If the node exists but it has no
   * children, an empty list will be returned.
   */
  public static List<NodeAndData> getChildDataAndWatchForNewChildren(
      ZooKeeperWatcher zkw, String baseNode) throws KeeperException {
    List<String> nodes =
      ZKUtil.listChildrenAndWatchForNewChildren(zkw, baseNode);
    List<NodeAndData> newNodes = new ArrayList<NodeAndData>();
    if (nodes != null) {
      for (String node : nodes) {
        String nodePath = ZKUtil.joinZNode(baseNode, node);
        byte[] data = ZKUtil.getDataAndWatch(zkw, nodePath);
        newNodes.add(new NodeAndData(nodePath, data));
      }
    }
    return newNodes;
  }
 {code}
 We return 'newNodes' which will never be null.
 This is a deprecated method.  But it is still used in HBase code.
 For example: 
 org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.start()
 {code}
  public void start() throws KeeperException {
    watcher.registerListener(this);
    if (ZKUtil.watchAndCheckExists(watcher, aclZNode)) {
      List<ZKUtil.NodeAndData> existing =
          ZKUtil.getChildDataAndWatchForNewChildren(watcher, aclZNode);
      if (existing != null) {
        refreshNodes(existing);
      }
    }
  }
 {code}
 We test for a 'null' return from the call, which is where the problem arises.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973382#comment-13973382
 ] 

Matteo Bertozzi commented on HBASE-10993:
-

{quote}That is good. What 'effect' should I see now this is on? Any? (Since 
SCAN_VTIME_WEIGHT_CONF_KEY has a default of 1.0f?){quote}
The weight is just a weight: the more scanner.next() calls you do, the more 
delayed you will be when there are requests with no delay. If you increase 
the weight, each single next() may delay you more.
The testRpcScheduler() test shows that the long scan will be executed after 
all the other requests.

{quote}
Yeah, some explanation here would help.. why we are sqrt'ing and rounding and 
multiplying weight ...
+ long vtime = rpcServices.getScannerVirtualTime(request.getScannerId());
+ return Math.round(Math.sqrt(vtime * scanVirtualTimeWeight));
{quote}
The sqrt gives you a nice curve that represents quite well what we want to 
do: after some time, start delaying, and keep increasing the delay the longer 
the scan has been running (but not by too much).
http://en.wikipedia.org/wiki/File:Square_root_0_25.svg
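The snippet quoted above can be tried in isolation. This is a self-contained 
sketch, not the actual RPC scheduler code: the class name is made up, and the 
function just mirrors the Math.round(Math.sqrt(...)) expression from the 
patch to show how slowly the delay grows with the default weight of 1.0f.

```java
public class ScanDeadline {

  /**
   * Deadline grows with the square root of the scanner's virtual time
   * (number of next() calls so far), scaled by the configured weight.
   */
  static long deadline(long vtime, float weight) {
    return Math.round(Math.sqrt(vtime * weight));
  }

  public static void main(String[] args) {
    // With the default weight of 1.0f the delay increases slowly:
    System.out.println(deadline(0, 1.0f));      // 0
    System.out.println(deadline(100, 1.0f));    // 10
    System.out.println(deadline(10000, 1.0f));  // 100
  }
}
```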

{quote}This class needs doc: FixedPriorityBlockingQueue Is it 'fixed' priority? 
Doesn't it change w/ how long scan has been going on?{quote}
I should use 'bounded' instead of 'fixed' since it refers to the number of 
elements. This is a generic priority queue that keeps the FIFO order if t

{quote}Is the below a timestamp? + return scannerHolder.nextCallSeq;{quote}
No, at the moment the vtime is the number of scanner.next() calls that you do.

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user-queries on the same machine, we want to 
 prioritize the user-queries.
 Without changing too much code, and without having the user give hints, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way 
 we can deprioritize long-running scans: the longer a scan request lives, the 
 less priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11018) ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated

2014-04-17 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-11018:
-

Fix Version/s: 0.99.0
   Status: Patch Available  (was: Open)

 ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated
 -

 Key: HBASE-11018
 URL: https://issues.apache.org/jira/browse/HBASE-11018
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.1, 0.96.1
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11018-trunk.patch


 While working on HBase acl, I found out that 
 ZKUtil.getChildDataAndWatchForNewChildren() will not return null as 
 indicated.  Here is the code:
 {code}
  /**
   * Returns null if the specified node does not exist.  Otherwise returns a
   * list of children of the specified node.  If the node exists but it has no
   * children, an empty list will be returned.
   */
  public static List<NodeAndData> getChildDataAndWatchForNewChildren(
      ZooKeeperWatcher zkw, String baseNode) throws KeeperException {
    List<String> nodes =
      ZKUtil.listChildrenAndWatchForNewChildren(zkw, baseNode);
    List<NodeAndData> newNodes = new ArrayList<NodeAndData>();
    if (nodes != null) {
      for (String node : nodes) {
        String nodePath = ZKUtil.joinZNode(baseNode, node);
        byte[] data = ZKUtil.getDataAndWatch(zkw, nodePath);
        newNodes.add(new NodeAndData(nodePath, data));
      }
    }
    return newNodes;
  }
 {code}
 We return 'newNodes' which will never be null.
 This is a deprecated method.  But it is still used in HBase code.
 For example: 
 org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.start()
 {code}
  public void start() throws KeeperException {
    watcher.registerListener(this);
    if (ZKUtil.watchAndCheckExists(watcher, aclZNode)) {
      List<ZKUtil.NodeAndData> existing =
          ZKUtil.getChildDataAndWatchForNewChildren(watcher, aclZNode);
      if (existing != null) {
        refreshNodes(existing);
      }
    }
  }
 {code}
 We test for a 'null' return from the call, which is where the problem arises.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-04-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973399#comment-13973399
 ] 

stack commented on HBASE-10993:
---

All of the above is good by me... shove it into the next patch as 
explanations. On Fixed vs Bounded, I thought the queue implementation was 
making use of your new deadline comparator -- I got that wrong -- so the name 
is good as is... It just needs doc saying it is 'generic'.



 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-10993-v0.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user-queries on the same machine, we want to 
 prioritize the user-queries.
 Without changing too much code, and without having the user give hints, we 
 can add a “vtime” field to the scanner to keep track of how long it has been 
 running, and we can replace the callQueue with a priorityQueue. In this way 
 we can deprioritize long-running scans: the longer a scan request lives, the 
 less priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11007) BLOCKCACHE in schema descriptor seems not aptly named

2014-04-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973402#comment-13973402
 ] 

Nick Dimiduk commented on HBASE-11007:
--

On 0.94, this test uses SchemaMetrics, which was removed as part of HBASE-6410.

 BLOCKCACHE in schema descriptor seems not aptly named
 -

 Key: HBASE-11007
 URL: https://issues.apache.org/jira/browse/HBASE-11007
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.18
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Minor

 Hi,
 It seems that setting the BLOCKCACHE key to false will disable data blocks 
 from being cached but will continue to cache bloom and index blocks. This 
 same property seems to be called cacheDataOnRead inside CacheConfig.
 Should this be called CACHE_DATA_ON_READ instead of BLOCKCACHE, similar to 
 the other CACHE_DATA_ON_WRITE/CACHE_INDEX_ON_WRITE? We got quite confused 
 and ended up adding our own property CACHE_DATA_ON_READ - we also added some 
 unit tests for the same.
 What do folks think about this?
 Thanks
 Varun



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11011) Avoid extra getFileStatus() calls on Region startup

2014-04-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11011:


Attachment: HBASE-11011-v1.patch

 Avoid extra getFileStatus() calls on Region startup
 ---

 Key: HBASE-11011
 URL: https://issues.apache.org/jira/browse/HBASE-11011
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.2, 0.98.1, 1.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0, 0.98.2, 0.96.3

 Attachments: HBASE-11011-v0.patch, HBASE-11011-v1.patch


 On load we already have a StoreFileInfo, yet we re-create it from the path; 
 this results in an extra fs.getFileStatus() call.
 In completeCompactionMarker() we do an fs.exists() and later an 
 fs.getFileStatus() to create the StoreFileInfo; we can avoid the exists() 
 call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10950) Add a configuration point for MaxVersion of Column Family

2014-04-17 Thread Enoch Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973410#comment-13973410
 ] 

Enoch Hsu commented on HBASE-10950:
---

Isn't that what .getInt() does? My understanding is that it attempts to 
retrieve the config value (which in this case is from hbase-site.xml) and, if 
there is none, falls back to 1, which was the original default value.

Or are you saying that places that call HColumnDescriptor.getMaxVersions() 
should instead first do some sort of call to a config object to retrieve 
hbase.column.max.version, and if that is null then call 
HColumnDescriptor.getMaxVersions()?

I am also not quite sure what you mean by people won't have a chance to add 
customizations to their conf object. They can just add it to hbase-site.xml 
and it will get parsed in, won't it? At least that is the behavior I observed 
when testing the hbase shell.
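The .getInt() fallback semantics under discussion can be illustrated 
standalone. This is only a sketch: it uses java.util.Properties as a 
stand-in for Hadoop's Configuration (which is not available here), and the 
class and method names are made up; only the property name and the default 
of 1 come from the discussion above.

```java
import java.util.Properties;

public class MaxVersionsDefault {

  /**
   * Stand-in for Configuration.getInt(key, default): parse the configured
   * value if present, otherwise fall back to the old default of 1.
   */
  static int getMaxVersions(Properties conf) {
    return Integer.parseInt(conf.getProperty("hbase.column.max.version", "1"));
  }

  public static void main(String[] args) {
    Properties empty = new Properties();
    System.out.println(getMaxVersions(empty));   // 1 (fallback default)

    Properties site = new Properties();
    site.setProperty("hbase.column.max.version", "3");
    System.out.println(getMaxVersions(site));    // 3 (configured value wins)
  }
}
```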


 Add  a configuration point for MaxVersion of Column Family
 --

 Key: HBASE-10950
 URL: https://issues.apache.org/jira/browse/HBASE-10950
 Project: HBase
  Issue Type: Improvement
  Components: Admin
Affects Versions: 0.98.0, 0.96.0
Reporter: Demai Ni
Assignee: Enoch Hsu
 Fix For: 0.99.0, 0.98.2, 0.96.3

 Attachments: HBASE_10950.patch, HBASE_10950_v2.patch


 Starting in 0.96.0, HColumnDescriptor.DEFAULT_VERSIONS changed from 3 to 1, 
 so a column family will by default keep 1 version of data. Currently a user 
 can specify the maxVersions at table-creation time, or alter the column 
 family later. This feature will add a configuration point in hbase-site.xml 
 so that an admin can set the default globally.
 A small discussion in 
 [HBASE-10941|https://issues.apache.org/jira/browse/HBASE-10941] led to this 
 jira.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11019) incCount() method should be properly stubbed in HConnectionTestingUtility#getMockedConnectionAndDecorate()

2014-04-17 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11019:
--

 Summary: incCount() method should be properly stubbed in 
HConnectionTestingUtility#getMockedConnectionAndDecorate()
 Key: HBASE-11019
 URL: https://issues.apache.org/jira/browse/HBASE-11019
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 11019-v1.txt

From 
https://builds.apache.org/job/PreCommit-HBASE-Build/9306//testReport/org.apache.hadoop.hbase.master/TestAssignmentManager/testClosingFailureDuringRecovery/
 :
{code}
org.mockito.exceptions.base.MockitoException: 
'incCount' is a *void method* and it *cannot* be stubbed with a *return value*!
Voids are usually stubbed with Throwables:
doThrow(exception).when(mock).someVoidMethod();
***
If you're unsure why you're getting above error read on.
Due to the nature of the syntax above problem might occur because:
1. The method you are trying to stub is *overloaded*. Make sure you are calling 
the right overloaded version.
2. Somewhere in your test you are stubbing *final methods*. Sorry, Mockito does 
not verify/stub final methods.
3. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub 
spies - 
   - with doReturn|Throw() family of methods. More in javadocs for 
Mockito.spy() method.

at 
org.apache.hadoop.hbase.client.HConnectionTestingUtility.getMockedConnectionAndDecorate(HConnectionTestingUtility.java:124)
at 
org.apache.hadoop.hbase.master.TestAssignmentManager.setUpMockedAssignmentManager(TestAssignmentManager.java:1141)
at 
org.apache.hadoop.hbase.master.TestAssignmentManager.testClosingFailureDuringRecovery(TestAssignmentManager.java:1027)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
{code}
incCount() should be properly stubbed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11017) TestHRegionBusyWait.testWritesWhileScanning fails frequently in 0.94

2014-04-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973417#comment-13973417
 ] 

Lars Hofhansl commented on HBASE-11017:
---

[~stack], do you think this is normal? I know we had to fix some other test 
after this change went in.

 TestHRegionBusyWait.testWritesWhileScanning fails frequently in 0.94
 

 Key: HBASE-11017
 URL: https://issues.apache.org/jira/browse/HBASE-11017
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.94.19


 Have seen a few of these:
 {code}
 Error Message
 Failed clearing memory after 6 attempts on region: 
 testWritesWhileScanning,,1397727647509.2c968a587c4cb7e84a52c7aa8d2afcac.
 Stacktrace
 org.apache.hadoop.hbase.DroppedSnapshotException: Failed clearing memory 
 after 6 attempts on region: 
 testWritesWhileScanning,,1397727647509.2c968a587c4cb7e84a52c7aa8d2afcac.
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1087)
   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1024)
   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:989)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.closeHRegion(HRegion.java:4346)
   at 
 org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileScanning(TestHRegion.java:3406)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11011) Avoid extra getFileStatus() calls on Region startup

2014-04-17 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973419#comment-13973419
 ] 

Jean-Daniel Cryans commented on HBASE-11011:


Does the changed code in completeCompactionMarker require a unit test? Or is 
there already one?

Also fix those lines:

{quote}
+// If we scan the directory and the file is not present, may means:
+// so, we can't do anything with the compaction output list since or is
+// already loaded on startup, because in the store folder, or it may be not
{quote}

 Avoid extra getFileStatus() calls on Region startup
 ---

 Key: HBASE-11011
 URL: https://issues.apache.org/jira/browse/HBASE-11011
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.2, 0.98.1, 1.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 1.0.0, 0.98.2, 0.96.3

 Attachments: HBASE-11011-v0.patch, HBASE-11011-v1.patch


 On load we already have a StoreFileInfo, yet we re-create it from the path; 
 this results in an extra fs.getFileStatus() call.
 In completeCompactionMarker() we do an fs.exists() and later an 
 fs.getFileStatus() to create the StoreFileInfo; we can avoid the exists() 
 call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11019) incCount() method should be properly stubbed in HConnectionTestingUtility#getMockedConnectionAndDecorate()

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11019:
---

Attachment: 11019-v1.txt

Patch stubs incCount and decCount.

Also removes unused variable userRegionLock from ConnectionManager

 incCount() method should be properly stubbed in 
 HConnectionTestingUtility#getMockedConnectionAndDecorate()
 --

 Key: HBASE-11019
 URL: https://issues.apache.org/jira/browse/HBASE-11019
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 11019-v1.txt


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/9306//testReport/org.apache.hadoop.hbase.master/TestAssignmentManager/testClosingFailureDuringRecovery/
  :
 {code}
 org.mockito.exceptions.base.MockitoException: 
 'incCount' is a *void method* and it *cannot* be stubbed with a *return 
 value*!
 Voids are usually stubbed with Throwables:
 doThrow(exception).when(mock).someVoidMethod();
 ***
 If you're unsure why you're getting above error read on.
 Due to the nature of the syntax above problem might occur because:
 1. The method you are trying to stub is *overloaded*. Make sure you are 
 calling the right overloaded version.
 2. Somewhere in your test you are stubbing *final methods*. Sorry, Mockito 
 does not verify/stub final methods.
 3. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub 
 spies - 
- with doReturn|Throw() family of methods. More in javadocs for 
 Mockito.spy() method.
   at 
 org.apache.hadoop.hbase.client.HConnectionTestingUtility.getMockedConnectionAndDecorate(HConnectionTestingUtility.java:124)
   at 
 org.apache.hadoop.hbase.master.TestAssignmentManager.setUpMockedAssignmentManager(TestAssignmentManager.java:1141)
   at 
 org.apache.hadoop.hbase.master.TestAssignmentManager.testClosingFailureDuringRecovery(TestAssignmentManager.java:1027)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 {code}
 incCount() should be properly stubbed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11019) incCount() method should be properly stubbed in HConnectionTestingUtility#getMockedConnectionAndDecorate()

2014-04-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11019:
---

Status: Patch Available  (was: Open)

 incCount() method should be properly stubbed in 
 HConnectionTestingUtility#getMockedConnectionAndDecorate()
 --

 Key: HBASE-11019
 URL: https://issues.apache.org/jira/browse/HBASE-11019
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 11019-v1.txt


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/9306//testReport/org.apache.hadoop.hbase.master/TestAssignmentManager/testClosingFailureDuringRecovery/
  :
 {code}
 org.mockito.exceptions.base.MockitoException: 
 'incCount' is a *void method* and it *cannot* be stubbed with a *return 
 value*!
 Voids are usually stubbed with Throwables:
 doThrow(exception).when(mock).someVoidMethod();
 ***
 If you're unsure why you're getting above error read on.
 Due to the nature of the syntax above problem might occur because:
 1. The method you are trying to stub is *overloaded*. Make sure you are 
 calling the right overloaded version.
 2. Somewhere in your test you are stubbing *final methods*. Sorry, Mockito 
 does not verify/stub final methods.
 3. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub 
 spies - 
- with doReturn|Throw() family of methods. More in javadocs for 
 Mockito.spy() method.
   at 
 org.apache.hadoop.hbase.client.HConnectionTestingUtility.getMockedConnectionAndDecorate(HConnectionTestingUtility.java:124)
   at 
 org.apache.hadoop.hbase.master.TestAssignmentManager.setUpMockedAssignmentManager(TestAssignmentManager.java:1141)
   at 
 org.apache.hadoop.hbase.master.TestAssignmentManager.testClosingFailureDuringRecovery(TestAssignmentManager.java:1027)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 {code}
 incCount() should be properly stubbed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11012) InputStream is not properly closed in two methods of JarFinder

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973437#comment-13973437
 ] 

Hadoop QA commented on HBASE-11012:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12640687/11012-v3.txt
  against trunk revision .
  ATTACHMENT ID: 12640687

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9318//console

This message is automatically generated.

 InputStream is not properly closed in two methods of JarFinder
 --

 Key: HBASE-11012
 URL: https://issues.apache.org/jira/browse/HBASE-11012
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Trivial
 Attachments: 11012-v2.txt, 11012-v3.txt


 JarFinder#jarDir() and JarFinder#zipDir() have such code:
 {code}
 InputStream is = new FileInputStream(f);
 copyToZipStream(is, anEntry, zos);
 {code}
 The InputStream is not closed after copy operation.
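 A self-contained sketch of the fix pattern: close the per-file InputStream 
 in a finally block even if the copy throws. This is not the actual JarFinder 
 code; the class and method names are made up, and in-memory streams replace 
 the real FileInputStream so the example runs standalone.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class CloseStreamSketch {

  /**
   * Copies 'data' into a single-entry zip, making sure the input stream is
   * closed even if the copy throws (the reported bug is a missing close()).
   */
  static byte[] zipBytes(String entryName, byte[] data) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ZipOutputStream zos = new ZipOutputStream(bos);
    InputStream is = new ByteArrayInputStream(data);
    try {
      zos.putNextEntry(new ZipEntry(entryName));
      byte[] buf = new byte[4096];
      int n;
      while ((n = is.read(buf)) != -1) {
        zos.write(buf, 0, n);
      }
      zos.closeEntry();
    } finally {
      is.close();  // the fix: always close the per-file InputStream
    }
    zos.close();
    return bos.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    System.out.println(zipBytes("f.txt", "hello".getBytes()).length > 0);
  }
}
```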



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10357) Failover RPC's for scans

2014-04-17 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10357:


Attachment: 10357-1.txt

Patch. Still testing it, but it could do with some early feedback.

 Failover RPC's for scans
 

 Key: HBASE-10357
 URL: https://issues.apache.org/jira/browse/HBASE-10357
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
 Fix For: 0.99.0

 Attachments: 10357-1.txt


 This is extension of HBASE-10355 to add failover support for scans. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

