[jira] [Updated] (HBASE-12259) Bring quorum based write ahead log into HBase

2014-10-15 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12259:

Issue Type: Improvement  (was: Bug)

 Bring quorum based write ahead log into HBase
 -

 Key: HBASE-12259
 URL: https://issues.apache.org/jira/browse/HBASE-12259
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Elliott Clark

 HydraBase ( 
 https://code.facebook.com/posts/32638043166/hydrabase-the-evolution-of-hbase-facebook/
  ), Facebook's implementation of HBase with Raft for consensus, will be going 
 open source shortly. We should pull in the parts of that fb-0.89-based 
 implementation and offer it as a feature in whatever major release is next 
 up. Right now the HydraBase code base isn't ready to be released into the 
 wild; it should be ready soon (for some definition of soon).
 Since HydraBase is based upon 0.89, most of the code is not directly 
 applicable, so lots of work will probably need to be done in a feature branch 
 before a merge vote.
 Is this something that's wanted?
 Is there any clean up that needs to be done before the log implementation 
 can be replaced like this?
 What's our story for upgrading to this? Are we OK with requiring downtime?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12259) Bring quorum based write ahead log into HBase

2014-10-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172097#comment-14172097
 ] 

Sean Busbey commented on HBASE-12259:
-

The refactoring work in HBASE-10378 should get finished up first. It 
obviates some other WAL clean up tickets and generally provides us with a 
cleaner separation of concerns than the current HLog.

There's a simplified roadmap around WAL improvements on that ticket and a patch 
from a bit ago. Both should get updated in the next day or so with a version 
that I think is ready as a first pass implementation. One of the follow-ons is 
getting the WAL related code all into its own module, which I think will help a 
lot in getting the recovery side of things better isolated.

 Bring quorum based write ahead log into HBase
 -

 Key: HBASE-12259
 URL: https://issues.apache.org/jira/browse/HBASE-12259
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Elliott Clark

 HydraBase ( 
 https://code.facebook.com/posts/32638043166/hydrabase-the-evolution-of-hbase-facebook/
  ), Facebook's implementation of HBase with Raft for consensus, will be going 
 open source shortly. We should pull in the parts of that fb-0.89-based 
 implementation and offer it as a feature in whatever major release is next 
 up. Right now the HydraBase code base isn't ready to be released into the 
 wild; it should be ready soon (for some definition of soon).
 Since HydraBase is based upon 0.89, most of the code is not directly 
 applicable, so lots of work will probably need to be done in a feature branch 
 before a merge vote.
 Is this something that's wanted?
 Is there any clean up that needs to be done before the log implementation 
 can be replaced like this?
 What's our story for upgrading to this? Are we OK with requiring downtime?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12238) A few ugly exceptions on startup

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172099#comment-14172099
 ] 

Hadoop QA commented on HBASE-12238:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674928/12238.txt
  against trunk revision .
  ATTACHMENT ID: 12674928

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.procedure.TestZKProcedure

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11350//console

This message is automatically generated.

 A few ugly exceptions on startup
 

 Key: HBASE-12238
 URL: https://issues.apache.org/jira/browse/HBASE-12238
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.1
Reporter: stack
Assignee: stack
Priority: Minor
 Fix For: 2.0.0, 0.99.2

 Attachments: 12238.txt


 Let me fix a few innocuous exceptions that show up on startup (seen while 
 testing 0.99.1), even when everything is regular; they will throw people off.
 Here is one:
 {code}
 2014-10-12 19:07:15,251 INFO  [c2020:16020.activeMasterManager] 
 zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at 
 address=c2021.halxg.cloudera.com,16020,1413165899611, 
 exception=org.apache.hadoop.hbase.NotServingRegionException: 
 org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
 not online on c2021.halxg.cloudera.com,16020,1413166029547
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2677)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:838)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1110)
 at 
 org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20158)
 at 

[jira] [Updated] (HBASE-12259) Bring quorum based write ahead log into HBase

2014-10-15 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12259:

Component/s: wal

 Bring quorum based write ahead log into HBase
 -

 Key: HBASE-12259
 URL: https://issues.apache.org/jira/browse/HBASE-12259
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 2.0.0
Reporter: Elliott Clark

 HydraBase ( 
 https://code.facebook.com/posts/32638043166/hydrabase-the-evolution-of-hbase-facebook/
  ), Facebook's implementation of HBase with Raft for consensus, will be going 
 open source shortly. We should pull in the parts of that fb-0.89-based 
 implementation and offer it as a feature in whatever major release is next 
 up. Right now the HydraBase code base isn't ready to be released into the 
 wild; it should be ready soon (for some definition of soon).
 Since HydraBase is based upon 0.89, most of the code is not directly 
 applicable, so lots of work will probably need to be done in a feature branch 
 before a merge vote.
 Is this something that's wanted?
 Is there any clean up that needs to be done before the log implementation 
 can be replaced like this?
 What's our story for upgrading to this? Are we OK with requiring downtime?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12233) Bring back root table

2014-10-15 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172123#comment-14172123
 ] 

Francis Liu commented on HBASE-12233:
-

Thanks for the feedback, guys. Just to reiterate, we do have a real need to 
scale, and we will improve/fix what's necessary, be it in hbase, hdfs, or infra, 
to meet that need. So far, based on our experimentation and experience from 
actual large cluster deployments, it is hbase that is preventing us from 
scaling to our expected needs, and splitting meta is the clear solution. 

{quote}
We could try this just with a 0.98 based feature branch and determine if it's 
worth pursuing further and elsewhere.
{quote}
[~apurtell] Thanks for volunteering. Is this a feature branch used to determine 
whether it is worthwhile to bring back root (and split meta) or a feature 
branch to determine whether it is feasible to backport it to 0.98? What would 
be the measure of success? 

IMHO a 0.98 feature branch should be enough as a proving ground for either 
scenario.




 Bring back root table
 -

 Key: HBASE-12233
 URL: https://issues.apache.org/jira/browse/HBASE-12233
 Project: HBase
  Issue Type: Sub-task
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12233.patch


 First step towards splitting meta.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12264) ImportTsv should fail fast if output is not specified and table does not exist

2014-10-15 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-12264:
-

 Summary: ImportTsv should fail fast if output is not specified and 
table does not exist
 Key: HBASE-12264
 URL: https://issues.apache.org/jira/browse/HBASE-12264
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor


ImportTsv should fail fast if {{importtsv.bulk.output}} is not specified and 
the specified table also does not exist
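
For context, a minimal sketch of the kind of up-front check this asks for (assuming the master-branch client API; the method name and message text are illustrative, not the actual ImportTsv change):

{code}
// Hedged sketch only: fail before submitting the MapReduce job when neither a
// bulk output directory nor an existing target table is available.
static void failFastIfMisconfigured(Configuration conf, String tableName) throws IOException {
  if (conf.get("importtsv.bulk.output") == null) {
    // Without a bulk output dir, ImportTsv writes directly to the table, so it must exist.
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      if (!admin.tableExists(TableName.valueOf(tableName))) {
        throw new TableNotFoundException("Table " + tableName
            + " does not exist and importtsv.bulk.output is not specified");
      }
    }
  }
}
{code}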



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-10-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172145#comment-14172145
 ] 

ramkrishna.s.vasudevan commented on HBASE-11992:


bq. Would it be possible for you to test again with two clusters where at least 
two RSes are replicating in each? Only upgrade the RSes, one at a time.
I did this test as suggested:
1) Started two clusters with one master and 2 RSes each.
2) Created 3 tables in each cluster with REPLICATION_SCOPE and ensured that 
at least one table is present on each RS.
3) Inserted data into all tables.
4) Scanned the peer cluster and read back all the data from all the tables.
5) Stopped one RS in the main cluster.
6) Applied the patch and issued a balance such that the newly started RS also 
has some table.
7) Inserted data into that table.
8) Scanned the peer cluster to read back all the data.
9) Stopped the RS in the peer cluster.
10) Applied the patch in the peer cluster for one RS. Read back all the data 
and did balancing such that we have tables on this RS also.
11) Inserted new data from the main cluster.
12) Scanned the data in the peer cluster and ensured all the new data is also 
available.
13) Applied the patch on the other RS in the main cluster.
14) Balanced the cluster and added new data to all the tables.
15) Scanned the peer clusters and ensured all the data is available.
I think the basic scenarios are working fine. What do you think 
[~apurtell], [~enis]?


 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-11992_0.98_1.patch, hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12264) ImportTsv should fail fast if output is not specified and table does not exist

2014-10-15 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12264:
--
Attachment: HBase-12264.patch

 ImportTsv should fail fast if output is not specified and table does not exist
 --

 Key: HBASE-12264
 URL: https://issues.apache.org/jira/browse/HBASE-12264
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBase-12264.patch


 ImportTsv should fail fast if {{importtsv.bulk.output}} is not specified and 
 the specified table also does not exist



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12264) ImportTsv should fail fast if output is not specified and table does not exist

2014-10-15 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12264:
--
Status: Patch Available  (was: Open)

Patch for the master branch.
Could someone please review?

 ImportTsv should fail fast if output is not specified and table does not exist
 --

 Key: HBASE-12264
 URL: https://issues.apache.org/jira/browse/HBASE-12264
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBase-12264.patch


 ImportTsv should fail fast if {{importtsv.bulk.output}} is not specified and 
 the specified table also does not exist



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12255) NPE in OpenRegionHandler after restart hdfs without stop hbase

2014-10-15 Thread Junyong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172161#comment-14172161
 ] 

Junyong Li commented on HBASE-12255:


The log only has the single line java.lang.NullPointerException, with no 
stack trace beyond that.

 NPE in OpenRegionHandler after restart hdfs without stop hbase
 --

 Key: HBASE-12255
 URL: https://issues.apache.org/jira/browse/HBASE-12255
 Project: HBase
  Issue Type: Bug
 Environment: hadoop-2.5.1
 hbase-0.98
 phonenix-4.1.0
Reporter: Junyong Li

 I have a phoenix table 'EVENT', and the table has an index 'IDX_DATE_HOUR_X'.
 I restarted hdfs without stopping hbase; after that, the hbase table couldn't 
 be scanned.
 I tried restarting hbase, but all the hbase tables still couldn't be scanned.
 The regionserver log has many exceptions like this:
 2014-10-13 19:33:05,287 INFO  
 [A01101303447-V1,60020,1413199890407-recovery-writer--pool4-t3] 
 client.AsyncProcess: #4, waiting for some tasks to finish. Expected max=0, 
 tasksSent=9, tasksDone=8, currentTasksDone=8, retries=8 hasError=false, 
 tableName=IDX_DATE_HOUR_X
 2014-10-13 19:33:05,298 INFO  
 [A01101303447-V1,60020,1413199890407-recovery-writer--pool4-t2] 
 client.AsyncProcess: #5, waiting for some tasks to finish. Expected max=0, 
 tasksSent=9, tasksDone=8, currentTasksDone=8, retries=8 hasError=false, 
 tableName=IDX_DATE_HOUR_X
 2014-10-13 19:33:05,311 INFO  
 [A01101303447-V1,60020,1413199890407-recovery-writer--pool4-t1] 
 client.AsyncProcess: #6, waiting for some tasks to finish. Expected max=0, 
 tasksSent=9, tasksDone=8, currentTasksDone=8, retries=8 hasError=false, 
 tableName=IDX_DATE_HOUR_X
 2014-10-13 19:33:06,452 INFO  [ReplicationExecutor-0] 
 replication.ReplicationQueuesZKImpl: Moving 
 A01101303447-V1,60020,1413199414409's hlogs to my queue
 2014-10-13 19:33:15,325 INFO  
 [A01101303447-V1,60020,1413199890407-recovery-writer--pool4-t1] 
 client.AsyncProcess: #6, waiting for some tasks to finish. Expected max=0, 
 tasksSent=10, tasksDone=9, currentTasksDone=9, retries=9 hasError=false, 
 tableName=IDX_DATE_HOUR_X
 2014-10-13 19:33:15,333 INFO  [htable-pool6-t2] client.AsyncProcess: #6, 
 table=IDX_DATE_HOUR_X, attempt=10/350 failed 12 ops, last exception: 
 org.apache.hadoop.hbase.exceptions.RegionOpeningException: 
 org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
 IDX_DATE_HOUR_X,t\x00\x00\x00\x00\x00,1413186874829.9a92abb84768b129df3faedb877f7bea.
  is opening on A01101303447-V1,60020,1413199890407
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2759)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4213)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3437)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29593)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 at java.lang.Thread.run(Thread.java:744)
 ---
 After two days of trying, I found the following:
 If I disable 'EVENT', other tables can be scanned. Then when I enable 'EVENT' 
 manually, the region server log shows that a NullPointerException occurred 
 while replaying the WAL. The following is the log:
 2014-10-13 19:25:21,043 INFO  [RS_OPEN_REGION-A01101303447-V1:60020-1] 
 regionserver.HRegion: Replaying edits from 
 hdfs://localhost/hbase-0.98/data/default/EVENT/def4a581d4ad963cbb8cad32cbfbab2e/recovered.edits/002
 2014-10-13 19:25:21,048 INFO  
 [A01101303447-V1,60020,1413199414409-recovery-writer--pool17-t2-SendThread(localhost:2182)]
  zookeeper.ClientCnxn: Opening socket connection to server 
 localhost/0:0:0:0:0:0:0:1:2182. Will not attempt to authenticate using SASL 
 (unknown error)
 2014-10-13 19:25:21,049 INFO  
 [A01101303447-V1,60020,1413199414409-recovery-writer--pool17-t2-SendThread(localhost:2182)]
  zookeeper.ClientCnxn: Socket connection established to 
 localhost/0:0:0:0:0:0:0:1:2182, initiating session
 2014-10-13 19:25:21,051 INFO  
 [A01101303447-V1,60020,1413199414409-recovery-writer--pool17-t3] 
 zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x199c8484 
 connecting to ZooKeeper ensemble=localhost:2182
 2014-10-13 19:25:21,051 INFO  
 [A01101303447-V1,60020,1413199414409-recovery-writer--pool17-t3] 
 zookeeper.ZooKeeper: Initiating client connection, 
 connectString=localhost:2182 sessionTimeout=9 
 watcher=hconnection-0x199c8484, quorum=localhost:2182, 

[jira] [Created] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-12265:


 Summary: HBase shell 'show_filters' points to internal Facebook URL
 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Priority: Minor


In the HBase shell, the output of the show_filters command starts with this:
{code}
hbase(main):001:0> show_filters
Documentation on filters mentioned below can be found at: 
https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
ColumnPrefixFilter
TimestampsFilter
...
{code}

I cannot reach this documentation link. It seems to be an internal link to a 
wiki that can only be reached by Facebook employees.

This link appears in two places in the file 
hbase-shell/src/main/ruby/shell/commands/show_filters.rb.

So far I have not been able to find the 'right' page to point to (I did a quick 
check of the apache wiki and the hbase book).

So either remove the link or add a section to the hbase book and point there. I 
think the latter (creating documentation) is the best solution.

Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172180#comment-14172180
 ] 

ramkrishna.s.vasudevan commented on HBASE-12202:


Pls give me a day's time. I will check this and get back on it.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Qiang Tian (JIRA)
Qiang Tian created HBASE-12266:
--

 Summary: Slow Scan can cause dead loop in ClientScanner 
 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor


see http://search-hadoop.com/m/DHED45SVsC1.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Qiang Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang Tian updated HBASE-12266:
---
Attachment: HBASE-12266-master.patch

Any particular purpose in setting it to true there?
Thanks.

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Qiang Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang Tian updated HBASE-12266:
---
Status: Patch Available  (was: Open)

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12264) ImportTsv should fail fast if output is not specified and table does not exist

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172240#comment-14172240
 ] 

Hadoop QA commented on HBASE-12264:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674957/HBase-12264.patch
  against trunk revision .
  ATTACHMENT ID: 12674957

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11351//console

This message is automatically generated.

 ImportTsv should fail fast if output is not specified and table does not exist
 --

 Key: HBASE-12264
 URL: https://issues.apache.org/jira/browse/HBASE-12264
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBase-12264.patch


 ImportTsv should fail fast if {{importtsv.bulk.output}} is not specified and 
 the specified table also does not exist



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-10-15 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172250#comment-14172250
 ] 

zhangduo commented on HBASE-10201:
--

Ran the same benchmark on a 3-regionserver cluster (2 * Xeon E5-2650 2.6GHz, 
3T * 11 SATA); the result is similar.

Without per CF flush:
metric_storeCount: 3,
metric_storeFileCount: 9,
metric_memStoreSize: 39965016,
metric_storeFileSize: 4460709275,
metric_compactionsCompletedCount: 46,
metric_numBytesCompactedCount: 11030906070,
metric_numFilesCompactedCount: 145,
Write amplification: 2.47

With per CF flush:
metric_storeCount: 3,
metric_storeFileCount: 7,
metric_memStoreSize: 110195648,
metric_storeFileSize: 4369570622,
metric_compactionsCompletedCount: 27,
metric_numBytesCompactedCount: 10353718691,
metric_numFilesCompactedCount: 89,
Write amplification: 2.37

The patch has a big impact on compactionsCompletedCount, but a small impact on 
numBytesCompactedCount. This is reasonable: the patch only prevents flushing 
small files for the small CFs and reduces their compaction count, while most of 
numBytesCompactedCount is contributed by the large CFs, which are not affected 
(or only very slightly) by this patch. So we only get a small improvement in 
write amplification (5%~10%).
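
For reference, the write amplification figures above are consistent with dividing numBytesCompactedCount by storeFileSize; that formula is my assumption, not stated in the comment:

{code}
// Assumed definition (not stated above): write amplification = bytes compacted / store file size.
double withoutPerCfFlush = 11030906070d / 4460709275d;  // ~= 2.47
double withPerCfFlush    = 10353718691d / 4369570622d;  // ~= 2.37
{code}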


 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172281#comment-14172281
 ] 

Hadoop QA commented on HBASE-12266:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674963/HBASE-12266-master.patch
  against trunk revision .
  ATTACHMENT ID: 12674963

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11352//console

This message is automatically generated.

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Solomon Duskis (JIRA)
Solomon Duskis created HBASE-12267:
--

 Summary: Replace HTable constructor in mapreduce.* classes with 
ConnectionFactory 
 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172343#comment-14172343
 ] 

Anoop Sam John commented on HBASE-12202:


Sure Ram.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-15 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11870:
---
Attachment: HBASE-11870.patch

Tag#fromList(List<Tag> tags)
Copied this code from Andy's per-cell TTL tag patch.

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags, so in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to the old buffer.
 It will hold a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KVs.
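
A rough sketch of the wrapper idea described above (my own illustration with hypothetical names; not the actual patch, and only a few accessors are shown rather than a full Cell implementation):

{code}
// Illustration of wrap-instead-of-copy: key/value bytes stay in the original
// cell's buffer, only the tags get a new backing array.
final class TagAppendingCellSketch {
  private final Cell cell;    // original cell, never copied
  private final byte[] tags;  // newly built tags (e.g. ACL / visibility tags)

  TagAppendingCellSketch(Cell cell, byte[] tags) {
    this.cell = cell;
    this.tags = tags;
  }

  // Non-tag parts simply delegate to the wrapped cell.
  byte[] getRowArray()    { return cell.getRowArray(); }
  int    getRowOffset()   { return cell.getRowOffset(); }
  short  getRowLength()   { return cell.getRowLength(); }
  byte[] getValueArray()  { return cell.getValueArray(); }
  int    getValueOffset() { return cell.getValueOffset(); }
  int    getValueLength() { return cell.getValueLength(); }

  // Only the tag accessors point at the new array.
  byte[] getTagsArray()   { return tags; }
  int    getTagsOffset()  { return 0; }
  int    getTagsLength()  { return tags.length; }
}
{code}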



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-15 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11870:
---
Status: Patch Available  (was: Open)

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags, so in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to the old buffer.
 It will hold a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KVs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread James Estes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172397#comment-14172397
 ] 

James Estes commented on HBASE-12266:
-

Thanks for filing.

It looks like that was set to true due to a VERY similar issue 
(https://issues.apache.org/jira/browse/HBASE-7070), so that it would only 
reset/retry for one rpc call. In my case, though, resetting/retrying for only 
one call doesn't work, because there will be a few successful rpc calls after 
the reset (the earlier ones get a tiny amount of data, and then the super slow 
rpc happens on about the 4th rpc).

Looking a bit further down, there is logic for the UnknownScannerException that 
will make sure we're not over the scannerTimeout. Should that logic maybe be 
used for all exceptions? Maybe even at the top of the while loop, but I think 
changing it only in the exception case would fix my issue...it would stop 
retrying after 60s (the default for scannerTimeout).
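
A hedged sketch of the kind of check being suggested (a check like this would live inside ClientScanner itself; the field and method names here are my own, not the actual ClientScanner internals):

{code}
// Illustration only: apply the scanner-timeout check to any retryable failure,
// not just UnknownScannerException, so a slow scan cannot retry forever.
static void checkScannerTimeout(long lastSuccessfulCallMillis, long scannerTimeoutMillis)
    throws ScannerTimeoutException {
  long elapsed = System.currentTimeMillis() - lastSuccessfulCallMillis;
  if (elapsed > scannerTimeoutMillis) {
    // Give up instead of resetting the scanner and retrying indefinitely.
    throw new ScannerTimeoutException(elapsed + "ms passed since the last invocation, "
        + "timeout is currently set to " + scannerTimeoutMillis + "ms");
  }
}
{code}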

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread James Estes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172405#comment-14172405
 ] 

James Estes commented on HBASE-12266:
-

This is a very similar issue. The fix in HBASE-7070 doesn't work in the case 
where a very selective scan hits a large region and an rpc call times out. The 
'retryAfterOutOfOrderException' flag fix assumes that only one rpc call is 
causing trouble (because it is reset on the next successful rpc). In my case, 
after the scanner is reset, several rpc calls succeed (they get a small amount 
of data), and the 4th or 5th rpc call hits an rpcTimeout because it hits a 
large section of the region with a highly selective scan.

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11542) Unit Test KeyStoreTestUtil.java compilation failure in IBM JDK

2014-10-15 Thread pascal oliva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pascal oliva updated HBASE-11542:
-
Status: Patch Available  (was: Open)

New patch HBASE-11542-6.patch added, using test key files stored in 
hbase-server/src/test/resources/.

This patch contains binary files.
I used this to create the patch: git diff --no-prefix --cached --binary master 
> ../PATCHES/HBASE-11542-6.patch
and this to apply the patch:
 git apply --binary -p0 HBASE-11542-6.patch


 Unit Test  KeyStoreTestUtil.java compilation failure in IBM JDK 
 

 Key: HBASE-11542
 URL: https://issues.apache.org/jira/browse/HBASE-11542
 Project: HBase
  Issue Type: Improvement
  Components: build, test
Affects Versions: 0.99.0
 Environment: RHEL 6.3 ,IBM JDK 6
Reporter: LinseyPang
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-11542-4.patch, HBASE-11542-5.patch, 
 HBASE_11542-1.patch, KeyStoreTestUtil.java.new1, client_crt, client_pkcs8, 
 hbase11542-0.99-v3.patch, hbase11542-0.99-v3.patch, hbase11542-0.99-v3.patch, 
 hbase_11542-v2.patch, server_crt, server_pkcs8, sslkeystore.patch


 In trunk, jira HBASE-10336 added a test utility, KeyStoreTestUtil.java, which 
 leverages the following sun classes:
    import sun.security.x509.AlgorithmId;
    import sun.security.x509.CertificateAlgorithmId;
 This causes an HBase compilation failure when using the IBM JDK. There are 
 similar classes in the IBM JDK:
 import com.ibm.security.x509.AlgorithmId;
 import com.ibm.security.x509.CertificateAlgorithmId;
 This jira is to add handling of the x509 references. 
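
One vendor-neutral way to avoid the compile-time dependency on sun.* / com.ibm.* classes altogether (a sketch of a general alternative, not the attached patch, which instead ships pre-generated key files under src/test/resources) is to generate the self-signed test certificate with the keytool CLI at test setup time:

{code}
// Sketch only: build the test keystore with keytool instead of the vendor x509 classes.
static void generateTestKeystore(File keystore) throws IOException, InterruptedException {
  ProcessBuilder pb = new ProcessBuilder(
      "keytool", "-genkeypair",
      "-alias", "test",
      "-keyalg", "RSA",
      "-dname", "CN=localhost",
      "-validity", "365",
      "-keystore", keystore.getAbsolutePath(),
      "-storepass", "changeit",
      "-keypass", "changeit");
  pb.inheritIO();
  int rc = pb.start().waitFor();
  if (rc != 0) {
    throw new IOException("keytool failed with exit code " + rc);
  }
}
{code}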
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11542) Unit Test KeyStoreTestUtil.java compilation failure in IBM JDK

2014-10-15 Thread pascal oliva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pascal oliva updated HBASE-11542:
-
Attachment: HBASE-11542-6.patch

 Unit Test  KeyStoreTestUtil.java compilation failure in IBM JDK 
 

 Key: HBASE-11542
 URL: https://issues.apache.org/jira/browse/HBASE-11542
 Project: HBase
  Issue Type: Improvement
  Components: build, test
Affects Versions: 0.99.0
 Environment: RHEL 6.3 ,IBM JDK 6
Reporter: LinseyPang
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-11542-4.patch, HBASE-11542-5.patch, 
 HBASE-11542-6.patch, HBASE_11542-1.patch, KeyStoreTestUtil.java.new1, 
 client_crt, client_pkcs8, hbase11542-0.99-v3.patch, hbase11542-0.99-v3.patch, 
 hbase11542-0.99-v3.patch, hbase_11542-v2.patch, server_crt, server_pkcs8, 
 sslkeystore.patch


 In trunk, jira HBASE-10336 added a test utility, KeyStoreTestUtil.java, which 
 leverages the following sun classes:
    import sun.security.x509.AlgorithmId;
    import sun.security.x509.CertificateAlgorithmId;
 This causes an HBase compilation failure when using the IBM JDK. There are 
 similar classes in the IBM JDK:
 import com.ibm.security.x509.AlgorithmId;
 import com.ibm.security.x509.CertificateAlgorithmId;
 This jira is to add handling of the x509 references.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11542) Unit Test KeyStoreTestUtil.java compilation failure in IBM JDK

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172412#comment-14172412
 ] 

Hadoop QA commented on HBASE-11542:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675006/HBASE-11542-6.patch
  against trunk revision .
  ATTACHMENT ID: 12675006

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11354//console

This message is automatically generated.

 Unit Test  KeyStoreTestUtil.java compilation failure in IBM JDK 
 

 Key: HBASE-11542
 URL: https://issues.apache.org/jira/browse/HBASE-11542
 Project: HBase
  Issue Type: Improvement
  Components: build, test
Affects Versions: 0.99.0
 Environment: RHEL 6.3 ,IBM JDK 6
Reporter: LinseyPang
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-11542-4.patch, HBASE-11542-5.patch, 
 HBASE-11542-6.patch, HBASE_11542-1.patch, KeyStoreTestUtil.java.new1, 
 client_crt, client_pkcs8, hbase11542-0.99-v3.patch, hbase11542-0.99-v3.patch, 
 hbase11542-0.99-v3.patch, hbase_11542-v2.patch, server_crt, server_pkcs8, 
 sslkeystore.patch


 In trunk, jira HBASE-10336 added a test utility, KeyStoreTestUtil.java, which 
 leverages the following sun classes:
    import sun.security.x509.AlgorithmId;
    import sun.security.x509.CertificateAlgorithmId;
 This causes an HBase compilation failure when using the IBM JDK. There are 
 similar classes in the IBM JDK:
 import com.ibm.security.x509.AlgorithmId;
 import com.ibm.security.x509.CertificateAlgorithmId;
 This jira is to add handling of the x509 references.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12268) Add support for Scan.setRowPrefixFilter to shell

2014-10-15 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-12268:


 Summary: Add support for Scan.setRowPrefixFilter to shell
 Key: HBASE-12268
 URL: https://issues.apache.org/jira/browse/HBASE-12268
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Niels Basjes


I think having the feature introduced in HBASE-11990 in the hbase shell would 
be very useful.
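
For context, this is the client-side Java API from HBASE-11990 that the shell would expose (the table and prefix below are just example values):

{code}
// Scan only the rows that start with a given prefix, using the HBASE-11990 API.
// 'table' is assumed to be an already-open org.apache.hadoop.hbase.client.Table instance.
Scan scan = new Scan();
scan.setRowPrefixFilter(Bytes.toBytes("user123|"));  // example prefix
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result result : scanner) {
    // process each matching row
  }
}
{code}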



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-12269:


 Summary: Add support for Scan.setRowPrefixFilter to thrift
 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes


I think having the feature introduced in HBASE-11990 in the HBase Thrift 
interface would be very useful.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172461#comment-14172461
 ] 

Hadoop QA commented on HBASE-11870:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674988/HBASE-11870.patch
  against trunk revision .
  ATTACHMENT ID: 12674988

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11353//console

This message is automatically generated.

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags, so in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to the old buffer.
 It will hold a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KVs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11542) Unit Test KeyStoreTestUtil.java compilation failure in IBM JDK

2014-10-15 Thread pascal oliva (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172468#comment-14172468
 ] 

pascal oliva commented on HBASE-11542:
--

I successfully tested the patch HBASE-11542-6.patch from the jira/hbase site 
again on my side, with:
git apply --binary -p0 ../HBASE-11542-6.patch (no error, no warning)
mvn clean ; mvn compile
mvn test -Dtest=org.apache.hadoop.hbase.http.TestSSLHttpServer -X | tee 
res.hbase.SSLHttp.log
[INFO] HBase . SUCCESS [  1.830 s]
[INFO] HBase - Annotations ... SUCCESS [  0.906 s]
[INFO] HBase - Common  SUCCESS [  5.822 s]
[INFO] HBase - Protocol .. SUCCESS [  0.095 s]
[INFO] HBase - Client  SUCCESS [  2.205 s]
[INFO] HBase - Hadoop Compatibility .. SUCCESS [  0.216 s]
[INFO] HBase - Hadoop Two Compatibility .. SUCCESS [  1.040 s]
[INFO] HBase - Prefix Tree ... SUCCESS [  0.876 s]
[INFO] HBase - Server  SUCCESS [ 14.280 s]
[INFO] HBase - Testing Util .. SUCCESS [  1.181 s]
[INFO] HBase - Thrift  SUCCESS [  1.132 s]
[INFO] HBase - Shell . SUCCESS [  0.702 s]
[INFO] HBase - Integration Tests . SUCCESS [  1.137 s]
[INFO] HBase - Examples .. SUCCESS [  0.733 s]
[INFO] HBase - Rest .. SUCCESS [  1.237 s]
[INFO] HBase - Assembly .. SUCCESS [  1.057 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 35.248 s
[INFO] Finished at: 2014-10-15T17:14:04+01:00
[INFO] Final Memory: 91M/300M
[INFO] 


Running org.apache.hadoop.hbase.http.TestSSLHttpServer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.14 sec - in 
org.apache.hadoop.hbase.http.TestSSLHttpServer

What do you suggest for pushing the patch? A pull request?



 Unit Test  KeyStoreTestUtil.java compilation failure in IBM JDK 
 

 Key: HBASE-11542
 URL: https://issues.apache.org/jira/browse/HBASE-11542
 Project: HBase
  Issue Type: Improvement
  Components: build, test
Affects Versions: 0.99.0
 Environment: RHEL 6.3 ,IBM JDK 6
Reporter: LinseyPang
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-11542-4.patch, HBASE-11542-5.patch, 
 HBASE-11542-6.patch, HBASE_11542-1.patch, KeyStoreTestUtil.java.new1, 
 client_crt, client_pkcs8, hbase11542-0.99-v3.patch, hbase11542-0.99-v3.patch, 
 hbase11542-0.99-v3.patch, hbase_11542-v2.patch, server_crt, server_pkcs8, 
 sslkeystore.patch


 In trunk, jira HBASE-10336 added a test utility, KeyStoreTestUtil.java, which 
 leverages the following sun classes:
    import sun.security.x509.AlgorithmId;
    import sun.security.x509.CertificateAlgorithmId;
 This causes an HBase compilation failure when using the IBM JDK. There are 
 similar classes in the IBM JDK:
 import com.ibm.security.x509.AlgorithmId;
 import com.ibm.security.x509.CertificateAlgorithmId;
 This jira is to add handling of the x509 references.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10974) Improve DBEs read performance by avoiding byte array deep copies for key[] and value[]

2014-10-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172499#comment-14172499
 ] 

Anoop Sam John commented on HBASE-10974:


{code}
   currentBuffer = buffer;
+  // Allocate every time we get a new block
+  // Would be great if from the block we know how much is key part and how
+  // much is for value part(the unencoded one). If this value exceeds we
+  // may need to do a copy
+  // TODO : Get the unencoded key length from the hfileblock
+  current.keyBuffer = ByteBuffer.allocate(currentBuffer.capacity() * 16);
{code}
We allocate a very big buffer for the key? 'currentBuffer' is the buffer 
containing the whole block data, and we allocate a buffer 16 times bigger! I'm 
not getting why you want this, Ram.


 Improve DBEs read performance by avoiding byte array deep copies for key[] 
 and value[]
 --

 Key: HBASE-10974
 URL: https://issues.apache.org/jira/browse/HBASE-10974
 Project: HBase
  Issue Type: Improvement
  Components: Scanners
Affects Versions: 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.2

 Attachments: HBASE-10974_1.patch


 As part of HBASE-10801, we tried to reduce the copying of the value[] when 
 forming the KV from the DBEs. 
 The keys still required copying, and that was keeping us from using Cells 
 because a copy always had to be made.
 The idea here is to replace the key byte[] with a ByteBuffer and create a 
 consecutive stream of the keys (currently the same byte[] is reused, hence the 
 copy), using an offset and length to track each key within that ByteBuffer.
 The copy from the encoded format to the normal key format is definitely needed 
 and can't be avoided, but we can avoid the deep copy of the bytes to form 
 a KV and thus use Cells effectively. Working on a patch, will post it soon.
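 Roughly, the intent could be sketched like this (illustrative names only, not 
 taken from the attached patch): the decoded keys live in one shared ByteBuffer, 
 each cell tracks only an offset and length into it, and a deep copy to byte[] 
 happens only on demand.
 {code}
 // Illustrative sketch: a key reference that avoids per-cell deep copies.
 import java.nio.ByteBuffer;

 final class KeyRef {
   private ByteBuffer keyBuffer; // shared buffer of decoded keys
   private int offset;
   private int length;

   void set(ByteBuffer keyBuffer, int offset, int length) {
     this.keyBuffer = keyBuffer;
     this.offset = offset;
     this.length = length;
   }

   // Deep copy happens only on demand, not for every cell that is formed.
   byte[] copyKey() {
     byte[] key = new byte[length];
     ByteBuffer dup = keyBuffer.duplicate();
     dup.position(offset);
     dup.get(key, 0, length);
     return key;
   }
 }
 {code}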



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12265:
---
 Priority: Trivial  (was: Minor)
Fix Version/s: 0.99.2
   0.98.8
   2.0.0
 Assignee: Andrew Purtell

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 This documentation link cannot be reached by me. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link is in this file in two places 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12265:
---
Status: Patch Available  (was: Open)

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 This documentation link cannot be reached by me. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link is in this file in two places 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172501#comment-14172501
 ] 

Andrew Purtell commented on HBASE-12265:


Just remove the mention of the FB URL

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 This documentation link cannot be reached by me. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link is in this file in two places 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12265:
---
Attachment: HBASE-12265.patch

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 This documentation link cannot be reached by me. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link is in this file in two places 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12233) Bring back root table

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172511#comment-14172511
 ] 

Andrew Purtell commented on HBASE-12233:


bq. Is this a feature branch used to determine whether it is worthwhile to 
bring back root (and split meta) or a feature branch to determine whether it is 
feasible to backport it to 0.98? 

I assume it is used to determine whether it is worthwhile to bring back root (and 
split meta), but done once where you need it for deployment. 

bq. What would be the measure of success?

If you end up deploying the feature in your production that would go quite a 
long way, I think. A compelling before-and-after characterization would also be 
important, e.g. turn off the feature, deploy a large table, shut down, turn on 
the feature, redeploy the table, enumerate relevant metrics collected during 
each table deployment, and report back on the differences here. 

 Bring back root table
 -

 Key: HBASE-12233
 URL: https://issues.apache.org/jira/browse/HBASE-12233
 Project: HBase
  Issue Type: Sub-task
Reporter: Virag Kothari
Assignee: Virag Kothari
 Attachments: HBASE-12233.patch


 First step towards splitting meta.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12269:
-
Description: 
I think having the feature introduced in HBASE-11990 in the hbase thrift 
interface would be very useful.


  was:
I think having the feature introduced in HBASE-11990 in the hbase shell would 
be very useful.



 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes

 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12260) MasterServices - remove from coprocessor API (Discuss)

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172519#comment-14172519
 ] 

Andrew Purtell commented on HBASE-12260:


No, we expose MasterServices and RegionServerServices as a way for coprocessors 
to get useful access to server internals. I'd be -1 on the proposed change. 
What we probably should do is tag them LimitedPrivate(COPROC).
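
For illustration, the tagging would look roughly like this (a sketch, assuming 
the usual HBaseInterfaceAudience.COPROC constant; not a committed change):
{code}
// Sketch: advertise the interface to coprocessor authors explicitly instead
// of leaving it fully private, without widening it to public API.
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
public interface MasterServices {
  // existing methods unchanged
}
{code}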

 MasterServices - remove from coprocessor API (Discuss)
 --

 Key: HBASE-12260
 URL: https://issues.apache.org/jira/browse/HBASE-12260
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: ryan rawson
Priority: Minor

 A major issue with MasterServices is that MasterCoprocessorEnvironment exposes 
 this class even though MasterServices is tagged with 
 @InterfaceAudience.Private.
 This means that the entire internals of the HMaster is essentially part of 
 the coprocessor API.  Many of the classes returned by the MasterServices API 
 are highly internal, extremely powerful, and subject to constant change.  
 Perhaps a new API to replace MasterServices that is use-case focused, and 
 justified based on real world co-processors would suit things better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12260) MasterServices - remove from coprocessor API (Discuss)

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172524#comment-14172524
 ] 

Andrew Purtell commented on HBASE-12260:


If you want to see real world usage of MasterServices, see the 
AccessController. 

We do have an issue open for building a higher level coprocessor API on top of 
the current one; HBASE-11125 might be a useful vehicle. Not exposing 
MasterServices and RegionServerServices there totally makes sense. However, 
HBASE-11125 specifically does not eliminate what would become the lower level 
API for power users, nor does it propose abandoning current users.

 MasterServices - remove from coprocessor API (Discuss)
 --

 Key: HBASE-12260
 URL: https://issues.apache.org/jira/browse/HBASE-12260
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: ryan rawson
Priority: Minor

 A major issue with MasterServices is that MasterCoprocessorEnvironment exposes 
 this class even though MasterServices is tagged with 
 @InterfaceAudience.Private.
 This means that the entire internals of the HMaster is essentially part of 
 the coprocessor API.  Many of the classes returned by the MasterServices API 
 are highly internal, extremely powerful, and subject to constant change.  
 Perhaps a new API to replace MasterServices that is use-case focused, and 
 justified based on real world co-processors would suit things better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172528#comment-14172528
 ] 

Anoop Sam John commented on HBASE-12266:


This is not helping?
{code}
if (retryAfterOutOfOrderException) {
  retryAfterOutOfOrderException = false;
} else {
  // TODO: Why wrap this in a DNRIOE when it already is a DNRIOE?
  throw new DoNotRetryIOException("Failed after retry of " +
      "OutOfOrderScannerNextException: was there a rpc timeout?", e);
}
{code}

The change in the patch means that, within one next() call, at most one reset 
and re-scan can happen (on OutOfOrderScannerNextException)!
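
As a conceptual sketch of that guard (with hypothetical helper names, not the 
patch itself):
{code}
// Conceptual sketch of the retry-at-most-once guard discussed above;
// fetchNextBatch() and resetScanner() are hypothetical helpers.
Result[] nextWithSingleRetry() throws IOException {
  boolean retryAfterOutOfOrderException = true;
  while (true) {
    try {
      return fetchNextBatch();        // one scanner RPC
    } catch (OutOfOrderScannerNextException e) {
      if (retryAfterOutOfOrderException) {
        retryAfterOutOfOrderException = false;
        resetScanner();               // reopen the scanner and reseek
      } else {
        throw new DoNotRetryIOException("Failed after retry of " +
            "OutOfOrderScannerNextException: was there a rpc timeout?", e);
      }
    }
  }
}
{code}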

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172530#comment-14172530
 ] 

Anoop Sam John commented on HBASE-12266:


Is this really an endless loop?

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172532#comment-14172532
 ] 

Andrew Purtell commented on HBASE-12261:


+1

I like that we can toggle individual checks on or off in the checkstyle.xml 
control file.

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: 0001-Add-checkstyle.patch


 We should add checkstyle to hadoop qa for our builds. That would free 
 committers up from checking patches for stylistic issues and leave them free 
 to check the real meat of the patches.
 Additionally we should have the check for empty try catch blocks running so 
 that we can't regress on catching exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12261:
---
Fix Version/s: 0.99.2
   0.98.8
   2.0.0

Added fix versions so we'll look at doing this for the next set of releases. 
Undo if you feel that's too aggressive.

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-checkstyle.patch


 We should add checkstyle to hadoop qa for our builds. That would free 
 committers up from checking patches for stylistic issues and leave them free 
 to check the real meat of the patches.
 Additionally we should have the check for empty try catch blocks running so 
 that we can't regress on catching exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172542#comment-14172542
 ] 

Andrew Purtell commented on HBASE-11992:


Thanks, yes that sounds better. Please consider putting the patch up on RB, 
it's a large one.

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-11992_0.98_1.patch, hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-10-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172554#comment-14172554
 ] 

Andrew Purtell commented on HBASE-11992:


When doing the test above, did you ensure/observe that the old version peer and 
the new version peer were both picking up replication work at the same time in 
the same source cluster? 

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-11992_0.98_1.patch, hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172555#comment-14172555
 ] 

Elliott Clark commented on HBASE-12261:
---

No, that seems fine. Let me break the patch into two parts, the checkstyle 
addition and the fixes, so that this is more easily backportable.

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-checkstyle.patch


 We should add checkstyle to hadoop qa for our builds. That would free 
 committers up from checking patches for stylistic issues and leave them free 
 to check the real meat of the patches.
 Additionally we should have the check for empty try catch blocks running so 
 that we can't regress on catching exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12266:
---
Attachment: 12266-v2.txt

Something like this?

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: 12266-v2.txt, HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12270) A bug in the bucket cache, with cache blocks on write enabled

2014-10-15 Thread Khaled Elmeleegy (JIRA)
Khaled Elmeleegy created HBASE-12270:


 Summary: A bug in the bucket cache, with cache blocks on write 
enabled
 Key: HBASE-12270
 URL: https://issues.apache.org/jira/browse/HBASE-12270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
 Environment: I can reproduce it on a simple 2 node cluster, one 
running the master and another running a RS. I was testing on ec2.
I used the following configurations for the cluster. 
hbase-env:HBASE_REGIONSERVER_OPTS=-Xmx2G -XX:MaxDirectMemorySize=5G 
-XX:CMSInitiatingOccupancyFraction=88 -XX:+AggressiveOpts -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/hbase-regionserver-gc.log
hbase-site:

hbase.bucketcache.ioengine=offheap
hbase.bucketcache.size=4196
hbase.rs.cacheblocksonwrite=true
hfile.block.index.cacheonwrite=true
hfile.block.bloom.cacheonwrite=true

Reporter: Khaled Elmeleegy
Priority: Critical


In my experiments, I have writers streaming their output to HBase. The reader 
powers a web page and does a scatter/gather, where it reads the 1000 most 
recently written keys and passes them to the front end. With this workload, I 
get the exception below at the region server. Again, I am using HBase (0.98.6.1). 
Any help is appreciated.

2014-10-10 15:06:44,173 ERROR [B.DefaultRpcServer.handler=62,queue=2,port=60020] ipc.RpcServer: Unexpected throwable object 
java.lang.IllegalArgumentException
  at java.nio.Buffer.position(Buffer.java:236)
  at org.apache.hadoop.hbase.util.ByteBufferUtils.skip(ByteBufferUtils.java:434)
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:849)
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:760)
  at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:248)
  at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
  at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
  at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1780)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3758)
  at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1950)
  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1936)
  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1913)
  at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3157)
  at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
  at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread James Estes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172567#comment-14172567
 ] 

James Estes commented on HBASE-12266:
-

Yes, that is what I was thinking.

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: 12266-v2.txt, HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12270) A bug in the bucket cache, with cache blocks on write enabled

2014-10-15 Thread Khaled Elmeleegy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172569#comment-14172569
 ] 

Khaled Elmeleegy commented on HBASE-12270:
--

I am attaching a simple test program to reproduce the problem. It just needs to 
run on a simple toy cluster with a single RS, configured as explained above.

 A bug in the bucket cache, with cache blocks on write enabled
 -

 Key: HBASE-12270
 URL: https://issues.apache.org/jira/browse/HBASE-12270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
 Environment: I can reproduce it on a simple 2 node cluster, one 
 running the master and another running a RS. I was testing on ec2.
 I used the following configurations for the cluster. 
 hbase-env:HBASE_REGIONSERVER_OPTS=-Xmx2G -XX:MaxDirectMemorySize=5G 
 -XX:CMSInitiatingOccupancyFraction=88 -XX:+AggressiveOpts -verbose:gc 
 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/hbase-regionserver-gc.log
 hbase-site:
 hbase.bucketcache.ioengine=offheap
 hbase.bucketcache.size=4196
 hbase.rs.cacheblocksonwrite=true
 hfile.block.index.cacheonwrite=true
 hfile.block.bloom.cacheonwrite=true
Reporter: Khaled Elmeleegy
Priority: Critical

 In my experiments, I have writers streaming their output to HBase. The reader 
 powers a web page and does a scatter/gather, where it reads the 1000 most 
 recently written keys and passes them to the front end. With this workload, I 
 get the exception below at the region server. Again, I am using HBase (0.98.6.1). 
 Any help is appreciated.
 2014-10-10 15:06:44,173 ERROR [B.DefaultRpcServer.handler=62,queue=2,port=60020] ipc.RpcServer: Unexpected throwable object 
 java.lang.IllegalArgumentException
   at java.nio.Buffer.position(Buffer.java:236)
   at org.apache.hadoop.hbase.util.ByteBufferUtils.skip(ByteBufferUtils.java:434)
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:849)
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:760)
   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:248)
   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
   at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
   at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1780)
   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3758)
   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1950)
   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1936)
   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1913)
   at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3157)
   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12270) A bug in the bucket cache, with cache blocks on write enabled

2014-10-15 Thread Khaled Elmeleegy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khaled Elmeleegy updated HBASE-12270:
-
Attachment: TestHBase.java
TestKey.java

 A bug in the bucket cache, with cache blocks on write enabled
 -

 Key: HBASE-12270
 URL: https://issues.apache.org/jira/browse/HBASE-12270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
 Environment: I can reproduce it on a simple 2 node cluster, one 
 running the master and another running a RS. I was testing on ec2.
 I used the following configurations for the cluster. 
 hbase-env:HBASE_REGIONSERVER_OPTS=-Xmx2G -XX:MaxDirectMemorySize=5G 
 -XX:CMSInitiatingOccupancyFraction=88 -XX:+AggressiveOpts -verbose:gc 
 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/hbase-regionserver-gc.log
 hbase-site:
 hbase.bucketcache.ioengine=offheap
 hbase.bucketcache.size=4196
 hbase.rs.cacheblocksonwrite=true
 hfile.block.index.cacheonwrite=true
 hfile.block.bloom.cacheonwrite=true
Reporter: Khaled Elmeleegy
Priority: Critical
 Attachments: TestHBase.java, TestKey.java


 In my experiments, I have writers streaming their output to HBase. The reader 
 powers a web page and does a scatter/gather, where it reads the 1000 most 
 recently written keys and passes them to the front end. With this workload, I 
 get the exception below at the region server. Again, I am using HBase (0.98.6.1). 
 Any help is appreciated.
 2014-10-10 15:06:44,173 ERROR [B.DefaultRpcServer.handler=62,queue=2,port=60020] ipc.RpcServer: Unexpected throwable object 
 java.lang.IllegalArgumentException
   at java.nio.Buffer.position(Buffer.java:236)
   at org.apache.hadoop.hbase.util.ByteBufferUtils.skip(ByteBufferUtils.java:434)
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:849)
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:760)
   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:248)
   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
   at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
   at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1780)
   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3758)
   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1950)
   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1936)
   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1913)
   at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3157)
   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12267:
---
Attachment: HBASE-12267.patch

Initial commit of M/R replacement of HTable() with connection.getTable().

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12267:
---
Status: Patch Available  (was: Open)

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172587#comment-14172587
 ] 

Hadoop QA commented on HBASE-12267:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675039/HBASE-12267.patch
  against trunk revision .
  ATTACHMENT ID: 12675039

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java:[79,7]
 error: RemoteHTable is not abstract and does not override abstract method 
setClearBufferOnFail(boolean) in Table
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-rest: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java:[79,7]
 error: RemoteHTable is not abstract and does not override abstract method 
setClearBufferOnFail(boolean) in Table
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hbase-rest


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11357//console

This message is automatically generated.
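
For context, the compiler is asking RemoteHTable to implement the new Table 
method; a minimal fix would be an override along these lines (a sketch only, the 
actual v2 patch may handle it differently):
{code}
// Sketch of the missing override; the REST gateway table does not buffer
// writes client-side, so a no-op is enough to satisfy the interface.
@Override
public void setClearBufferOnFail(boolean clearBufferOnFail) {
  // intentionally a no-op
}
{code}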

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Patrick White (JIRA)
Patrick White created HBASE-12271:
-

 Summary: Add counters for files skipped during snapshot export
 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2


It's incredibly handy to see the number of files skipped to know/verify delta 
backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Patrick White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick White updated HBASE-12271:
--
Attachment: 0001-Add-counters-for-skipped-files.patch

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Patrick White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick White updated HBASE-12271:
--
Status: Patch Available  (was: Open)

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12210) Avoid KeyValue in Prefix Tree

2014-10-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172601#comment-14172601
 ] 

Anoop Sam John commented on HBASE-12210:


{code}
@Override
public String toString() {
  return KeyValueUtil.copyToNewKeyValue(this).toString();
}
{code}
Please avoid the copy to a KV. See ClonedSeekerState#toString.

Please make this new class implement HeapSize so that it reports a correct heapSize value.
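
For reference, a toString() along these lines would avoid the intermediate 
KeyValue copy (a rough sketch using the standard Cell accessors, not the actual 
ClonedSeekerState code):
{code}
// Rough sketch: format the cell's key fields directly rather than copying
// the whole cell into a new KeyValue just to call its toString().
@Override
public String toString() {
  return Bytes.toStringBinary(getRowArray(), getRowOffset(), getRowLength())
      + "/" + Bytes.toStringBinary(getFamilyArray(), getFamilyOffset(), getFamilyLength())
      + ":" + Bytes.toStringBinary(getQualifierArray(), getQualifierOffset(), getQualifierLength())
      + "/" + getTimestamp() + "/" + KeyValue.Type.codeToType(getTypeByte());
}
{code}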

 Avoid KeyValue in Prefix Tree
 -

 Key: HBASE-12210
 URL: https://issues.apache.org/jira/browse/HBASE-12210
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: Performance
 Fix For: 2.0.0, 0.99.1

 Attachments: HBASE-11871.java, HBASE-11871.patch, HBASE-12210.patch


 Avoid KeyValue recreate where all possible in the PrefixTree module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172600#comment-14172600
 ] 

Elliott Clark commented on HBASE-12271:
---

+1 Running in production with it.

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172598#comment-14172598
 ] 

Matteo Bertozzi commented on HBASE-12271:
-

+1

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172603#comment-14172603
 ] 

Sean Busbey commented on HBASE-12269:
-

{quote}
1) In the thrift interface the elements are numbered.
- Is it better to put the new thing I'm building in a logical place
(i.e. somewhere in the middle causing all later fields to get a different
number)
- or to put it at the end?
{quote}

Please put it at the end. Changing field numbers will break compatibility.

{quote}
2) Apparently the generated thrift code has been committed to version
control.
- Should the changes in these classes be part of my patch?
{quote}

Yes.

{quote}
 - If so, which version of thrift should be used (0.9.0/0.9.1/... ?) and
what is the 100% accurate correct command to generate them?
{quote}

You should use the {{thrift.version}} in master's pom.xml. At the moment that 
is 0.9.0.

[Instructions on generating the classes can be found in the thrift2 package 
docs|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html]
 (and similarly for [the original thrift interface in the thrift 
package|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html])

{quote}
- For my own purpose I created a small shell script so I can repeat the
command I have now (see below). Shall I include this as a separate script
in my patch?


The script I created here is hbase-thrift/generate-thrift-classes.sh
#!/bin/bash
thrift -v --gen java -out src/main/java/ src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
thrift -v --gen java -out src/main/java/ src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
{quote}

I would leave this out, since it is not part of the current issue. I'd love to 
improve our thrift handling, so if you want to make a follow-on issue that'd be 
great. At a minimum the script would have to be expanded to verify that the 
correct version of thrift is being used. It should also be tied into the Maven 
build.

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes

 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-10-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172602#comment-14172602
 ] 

ramkrishna.s.vasudevan commented on HBASE-11992:


bq. When doing the test above, did you ensure/observe that the old version peer 
and the new version peer were both picking up replication work at the same time 
in the same source cluster?
You mean both the peers were able to get the new edits? Yes, they were. Do you 
suspect any specific behaviour?

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-11992_0.98_1.patch, hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11992) Backport HBASE-11367 (Pluggable replication endpoint) to 0.98

2014-10-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172608#comment-14172608
 ] 

ramkrishna.s.vasudevan commented on HBASE-11992:


https://reviews.apache.org/r/26756 - RB link

 Backport HBASE-11367 (Pluggable replication endpoint) to 0.98
 -

 Key: HBASE-11992
 URL: https://issues.apache.org/jira/browse/HBASE-11992
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-11992_0.98_1.patch, hbase-11367_0.98.patch


 ReplicationSource tails the logs for each peer. HBASE-11367 introduces 
 ReplicationEndpoint which is customizable per peer. ReplicationEndpoint is 
 run in the same RS process and instantiated per replication peer per region 
 server. Implementations of this interface handle the actual shipping of WAL 
 edits to the remote cluster.
 This issue is for backporting HBASE-11367 to 0.98.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12261:
--
Status: Patch Available  (was: Open)

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-checkstyle.patch, 
 0001-HBASE-12261-Add-checkstyle-to-HBase-build-process.patch


 We should add checkstyle to hadoop qa for our builds. That would free 
 committers up from checking patches for stylistic issues and leave them free 
 to check the real meat of the patches.
 Additionally we should have the check for empty try catch blocks running so 
 that we can't regress on catching exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12261:
--
Attachment: 0001-HBASE-12261-Add-checkstyle-to-HBase-build-process.patch

Here's the patch with just checkstyle. After this goes in I'll follow up with a 
jira to get our checkstyle error count into a reasonable range.

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-checkstyle.patch, 
 0001-HBASE-12261-Add-checkstyle-to-HBase-build-process.patch


 We should add checkstyle to hadoop qa for our builds. That would free 
 committers up from checking patches for stylistic issues and leave them free 
 to check the real meat of the patches.
 Additionally we should have the check for empty try catch blocks running so 
 that we can't regress on catching exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172657#comment-14172657
 ] 

Elliott Clark commented on HBASE-12265:
---

+1 lol, not sure how that sneaked in there.

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 This documentation link cannot be reached by me. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link is in this file in two places 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172654#comment-14172654
 ] 

Hadoop QA commented on HBASE-12265:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675020/HBASE-12265.patch
  against trunk revision .
  ATTACHMENT ID: 12675020

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11355//console

This message is automatically generated.

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 This documentation link cannot be reached by me. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link is in this file in two places 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12271:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Patch didn't apply quite cleanly on master and branch-1 so I had to mess with 
it a little.

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172671#comment-14172671
 ] 

Lars Hofhansl commented on HBASE-12271:
---

Can you provide a paragraph of background? I.e. how do you use this for delta 
backup? (Just curious)

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12267:
---
Attachment: HBASE-12267_v2.patch

I didn't have the rest project in my environment.  Fixed a compilation problem.

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch, HBASE-12267_v2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172690#comment-14172690
 ] 

Matteo Bertozzi commented on HBASE-12271:
-

[~larsh] ExportSnapshot sends only the difference between src and dst, so if 
two snapshots share some of the same files (e.g. no compaction happened between 
taking the two snapshots) you export just the delta. This is just a counter to 
verify that the tool really does this :)

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172693#comment-14172693
 ] 

Ted Yu commented on HBASE-12267:


{code}
 public void close(TaskAttemptContext context) throws IOException {
-  for (HTable table : tables.values()) {
+  for (Pair<Connection, Table> pair : tables.values()) {
+    Table table = pair.getSecond();
     table.flushCommits();
+    table.close();
+    pair.getFirst().close();
{code}
Why do we need to close the Connection?

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch, HBASE-12267_v2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172699#comment-14172699
 ] 

Hadoop QA commented on HBASE-12261:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675057/0001-HBASE-12261-Add-checkstyle-to-HBase-build-process.patch
  against trunk revision .
  ATTACHMENT ID: 12675057

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:red}-1 javac{color}.  The applied patch generated 55 javac compiler 
warnings (more than the trunk's current 53 warnings).

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  echo "$MVN clean package checkstyle:checkstyle-aggregate -DskipTests 
-D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunkJavacWarnings.txt 2>&1"
+  $MVN clean package checkstyle:checkstyle-aggregate -DskipTests 
-D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunkJavacWarnings.txt 2>&1
+trunkCheckstyleErrors=`$GREP '<error' $PATCH_DIR/trunkCheckstyle.xml | 
$AWK 'BEGIN {total = 0} {total += 1} END {print total}'`
+patchCheckstyleErrors=`$GREP '<error' $PATCH_DIR/patchCheckstyle.xml | 
$AWK 'BEGIN {total = 0} {total += 1} END {print total}'`
+{color:red}-1 javac{color}.  The applied patch generated 
$patchCheckstyleErrors checkstyle errors (more than the trunk's current 
$trunkCheckstyleErrors errors).
+echo "There were $patchCheckstyleErrors checkstyle errors in this patch 
compared to $trunkCheckstyleErrors on master."
+{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of checkstyle errors
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd">

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11359//console

This message is automatically generated.

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-checkstyle.patch, 
 0001-HBASE-12261-Add-checkstyle-to-HBase-build-process.patch


 We should add checkstyle to hadoop qa for our builds. 

[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172694#comment-14172694
 ] 

Hadoop QA commented on HBASE-12271:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675042/0001-Add-counters-for-skipped-files.patch
  against trunk revision .
  ATTACHMENT ID: 12675042

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.util.TestBytes.testToStringBytesBinaryReversible(TestBytes.java:296)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleHFileSplit(TestLoadIncrementalHFiles.java:151)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11358//console

This message is automatically generated.

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172702#comment-14172702
 ] 

Niels Basjes commented on HBASE-12269:
--

Thanks, this clarifies a lot.

Yes, this should be tied to maven.
Both pages say the code must be generated with the option "--gen java:hashcode".
It seems the files that are currently in trunk have been generated with 
"--gen java" instead.
The diff I get is quite large and most of the volume is taken up by differences 
like this:
{code}
   @Override
   public int hashCode() {
-    return 0;
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_row = true && (isSetRow());
+    builder.append(present_row);
+    if (present_row)
+      builder.append(row);
+
+    boolean present_mutations = true && (isSetMutations());
+    builder.append(present_mutations);
+    if (present_mutations)
+      builder.append(mutations);
+
+    return builder.toHashCode();
   }
{code}

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes

 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Patrick White (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172711#comment-14172711
 ] 

Patrick White commented on HBASE-12271:
---

ExportSnapshot will skip files that are the same (either via checksum or via 
name+length comparison). We use this to cut down on transfer time.

First we snapshot the table, then export that snapshot to the remote DFS.
Next run, we instead start by copying the previously exported snapshot into the 
current snapshot's destination (all on the remote DFS).
We then do the snapshot and export dance again, and it dutifully copies only 
the changed files.

Our (fb) HDFS has hard links (last I heard upstream does not) so we can copy 
backups around pretty cheaply. In the future we may just throw everything into 
one directory and let the manifest be the decider of what files are involved, 
but in the meantime it's copy-and-differential.
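
To make the workflow concrete, here is a sketch (not from this ticket; the 
snapshot name and destination URI are invented) of driving ExportSnapshot 
through ToolRunner. Files already present at the destination are skipped, 
which is what the new counters report:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
import org.apache.hadoop.util.ToolRunner;

public class ExportSnapshotSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Equivalent to running the tool from the command line:
    //   hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    //     -snapshot daily_backup -copy-to hdfs://backup-cluster:8020/hbase
    int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
        "-snapshot", "daily_backup",
        "-copy-to", "hdfs://backup-cluster:8020/hbase"
    });
    System.exit(rc);
  }
}
{code}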


 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Patrick White (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172712#comment-14172712
 ] 

Patrick White commented on HBASE-12271:
---

Didn't see that [~mbertozzi] beat me to it :p

 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-15 Thread Dima Spivak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dima Spivak updated HBASE-12229:

Attachment: HBASE-12229_v2.patch

Reuploading the same file to try to get Hadoop QA's attention...

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_v1.patch, 
 HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Solomon Duskis (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172739#comment-14172739
 ] 

Solomon Duskis commented on HBASE-12267:


[~ted_yu]: In the brave new world driven by Connection.getTable(), closing the 
table doesn't close the connection.
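
For illustration only (not part of the patch, and the table name is invented): 
with ConnectionFactory the caller owns the Connection, so the intended 
lifecycle closes the Table and then the Connection, e.g.:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class ConnectionLifecycleSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // try-with-resources closes the Table first, then the Connection.
    // Closing only the Table would leave the Connection's shared resources
    // open, which is why the mapreduce code above has to close both.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("example_table"))) {
      // ... use the table ...
    }
  }
}
{code}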

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch, HBASE-12267_v2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172737#comment-14172737
 ] 

Hadoop QA commented on HBASE-12266:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675034/12266-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12675034

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11356//console

This message is automatically generated.

 Slow Scan can cause dead loop in ClientScanner 
 ---

 Key: HBASE-12266
 URL: https://issues.apache.org/jira/browse/HBASE-12266
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 0.96.0
Reporter: Qiang Tian
Priority: Minor
 Attachments: 12266-v2.txt, HBASE-12266-master.patch


 see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12261) Add checkstyle to HBase build process

2014-10-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172748#comment-14172748
 ] 

Elliott Clark commented on HBASE-12261:
---

Hmmm let me look at why the site goal failed.

 Add checkstyle to HBase build process
 -

 Key: HBASE-12261
 URL: https://issues.apache.org/jira/browse/HBASE-12261
 Project: HBase
  Issue Type: Bug
  Components: build, site
Affects Versions: 2.0.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-checkstyle.patch, 
 0001-HBASE-12261-Add-checkstyle-to-HBase-build-process.patch


 We should add checkstyle to hadoop qa for our builds. That would free 
 committers up from checking patches for stylistic issues and leave them free 
 to check the real meat of the patches.
 Additionally we should have the check for empty try catch blocks running so 
 that we can't regress on catching exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172750#comment-14172750
 ] 

Hadoop QA commented on HBASE-12229:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675075/HBASE-12229_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12675075

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11361//console

This message is automatically generated.

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_v1.patch, 
 HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9157) ZKUtil.blockUntilAvailable loops forever with non-recoverable errors

2014-10-15 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-9157:
-
Summary: ZKUtil.blockUntilAvailable loops forever with non-recoverable 
errors  (was: ZKUtil.blockUntilAvailable loops forever with 
KeeperException.ConnectionLossException)

 ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
 

 Key: HBASE-9157
 URL: https://issues.apache.org/jira/browse/HBASE-9157
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Attachments: hbase-9157-v2.patch, hbase-9157.patch


 In one of the integration tests, I observed that a thread keeps spinning error 
 logs "Unexpected exception handling blockUntilAvailable" due to 
 KeeperException.ConnectionLossException. Below is the related code:
 {code}
 while (!finished) {
   try {
     data = ZKUtil.getData(zkw, znode);
   } catch (KeeperException e) {
     LOG.warn("Unexpected exception handling blockUntilAvailable", e);
   }
   if (data == null && (System.currentTimeMillis() +
       HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
     Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
   } else {
     finished = true;
   }
 }
 {code}
 Since ConnectionLossException & SessionExpiredException are not recoverable 
 errors, the while loop can't break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9157) ZKUtil.blockUntilAvailable loops forever with non-recoverable errors

2014-10-15 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-9157:
-
Description: 
In one of the integration tests, I observed that a thread keeps spinning error 
logs "Unexpected exception handling blockUntilAvailable" due to 
KeeperException.ConnectionLossException. Below is the related code:

{code}
while (!finished) {
  try {
    data = ZKUtil.getData(zkw, znode);
  } catch (KeeperException e) {
    LOG.warn("Unexpected exception handling blockUntilAvailable", e);
  }

  if (data == null && (System.currentTimeMillis() +
      HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
    Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
  } else {
    finished = true;
  }
}
{code}

ConnectionLossException might be recoverable, but SessionExpiredException and 
AuthFailed are not recoverable errors, so the while loop can't break.

  was:
In one of the integration tests, I observed that a thread keeps spinning error 
logs "Unexpected exception handling blockUntilAvailable" due to 
KeeperException.ConnectionLossException. Below is the related code:

{code}
while (!finished) {
  try {
    data = ZKUtil.getData(zkw, znode);
  } catch (KeeperException e) {
    LOG.warn("Unexpected exception handling blockUntilAvailable", e);
  }

  if (data == null && (System.currentTimeMillis() +
      HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
    Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
  } else {
    finished = true;
  }
}
{code}

Since ConnectionLossException & SessionExpiredException are not recoverable 
errors, the while loop can't break.


 ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
 

 Key: HBASE-9157
 URL: https://issues.apache.org/jira/browse/HBASE-9157
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Attachments: hbase-9157-v2.patch, hbase-9157.patch


 In one of the integration tests, I observed that a thread keeps spinning error 
 logs "Unexpected exception handling blockUntilAvailable" due to 
 KeeperException.ConnectionLossException. Below is the related code:
 {code}
 while (!finished) {
   try {
     data = ZKUtil.getData(zkw, znode);
   } catch (KeeperException e) {
     LOG.warn("Unexpected exception handling blockUntilAvailable", e);
   }
   if (data == null && (System.currentTimeMillis() +
       HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
     Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
   } else {
     finished = true;
   }
 }
 {code}
 ConnectionLossException might be recoverable, but SessionExpiredException and 
 AuthFailed are not recoverable errors, so the while loop can't break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12267) Replace HTable constructor in mapreduce.* classes with ConnectionFactory

2014-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172763#comment-14172763
 ] 

Hadoop QA commented on HBASE-12267:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675067/HBASE-12267_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12675067

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+// HBaseAdmin only waits for regions to appear in hbase:meta we should 
wait until they are assigned
+  public Table createTableAndWait(HTableDescriptor htd, byte[][] families, 
Connection conn) throws IOException {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.procedure.TestProcedureManager

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.testOpenZeroLengthReportedFileButWithDataGetsSplit(TestHLogSplit.java:459)
at 
org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad.testAtomicBulkLoad(TestHRegionServerBulkLoad.java:287)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11360//console

This message is automatically generated.

 Replace HTable constructor in mapreduce.* classes with ConnectionFactory 
 -

 Key: HBASE-12267
 URL: https://issues.apache.org/jira/browse/HBASE-12267
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12267.patch, HBASE-12267_v2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9157) ZKUtil.blockUntilAvailable loops forever with non-recoverable errors

2014-10-15 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-9157:
-
   Resolution: Fixed
Fix Version/s: 0.99.2
   0.98.8
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the reviews! I've integrated the fix into the 0.99 and 0.98 branches 
with a small modification (only breaking out of the loop for 
SessionExpiredException & AuthFailedException). A rough sketch of that 
modification follows.
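
(Reconstructed for illustration only; the method signature and surrounding 
names are assumed from the snippet in the description, not copied from the 
committed diff.)
{code}
// Illustration only -- names and signature assumed, not the actual commit.
private static byte[] blockUntilAvailable(ZooKeeperWatcher zkw, String znode, long endTime)
    throws InterruptedException {
  byte[] data = null;
  boolean finished = false;
  while (!finished) {
    try {
      data = ZKUtil.getData(zkw, znode);
    } catch (KeeperException.SessionExpiredException e) {
      LOG.warn("Non-recoverable ZooKeeper error, giving up", e);
      break;
    } catch (KeeperException.AuthFailedException e) {
      LOG.warn("Non-recoverable ZooKeeper error, giving up", e);
      break;
    } catch (KeeperException e) {
      // Possibly recoverable (e.g. ConnectionLossException): log and retry below.
      LOG.warn("Unexpected exception handling blockUntilAvailable", e);
    }
    if (data == null
        && (System.currentTimeMillis() + HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
      Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
    } else {
      finished = true;
    }
  }
  return data;
}
{code}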

 ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
 

 Key: HBASE-9157
 URL: https://issues.apache.org/jira/browse/HBASE-9157
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.98.8, 0.99.2

 Attachments: hbase-9157-v2.patch, hbase-9157.patch


 In one of the integration tests, I observed that a thread keeps spinning error 
 logs "Unexpected exception handling blockUntilAvailable" due to 
 KeeperException.ConnectionLossException. Below is the related code:
 {code}
 while (!finished) {
   try {
     data = ZKUtil.getData(zkw, znode);
   } catch (KeeperException e) {
     LOG.warn("Unexpected exception handling blockUntilAvailable", e);
   }
   if (data == null && (System.currentTimeMillis() +
       HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
     Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
   } else {
     finished = true;
   }
 }
 {code}
 ConnectionLossException might be recoverable, but SessionExpiredException and 
 AuthFailed are not recoverable errors, so the while loop can't break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12272) Generating Thrift code automatically

2014-10-15 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-12272:


 Summary: Generating Thrift code automatically
 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes


The generated thrift code is currently under source control.
This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172782#comment-14172782
 ] 

Niels Basjes commented on HBASE-12269:
--

As requested: I created https://issues.apache.org/jira/browse/HBASE-12272 to 
automate generation of the thrift classes.

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes

 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12273) Generate .tabledesc file during upgrading if missing

2014-10-15 Thread Yi Deng (JIRA)
Yi Deng created HBASE-12273:
---

 Summary: Generate .tabledesc file during upgrading if missing
 Key: HBASE-12273
 URL: https://issues.apache.org/jira/browse/HBASE-12273
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.6.1
Reporter: Yi Deng
Assignee: Yi Deng


Generate .tabledesc file during upgrading if missing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172815#comment-14172815
 ] 

Hudson commented on HBASE-12271:


SUCCESS: Integrated in HBase-TRUNK #5666 (See 
[https://builds.apache.org/job/HBase-TRUNK/5666/])
HBASE-12271 Add counters for files skipped during snapshot export (eclark: rev 
ba20d4df8ced17e175d6a1d57982559d4dce79cd)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-15 Thread Ted Yu (JIRA)
Ted Yu created HBASE-12274:
--

 Summary: Race between RegionScannerImpl#nextInternal() and 
RegionScannerImpl#close() may produce null pointer exception
 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu


I saw the following in region server log:
{code}
2014-10-15 03:28:36,976 ERROR [B.DefaultRpcServer.handler=0,queue=0,port=60020] 
ipc.RpcServer: Unexpected throwable object
java.lang.NullPointerException
  at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5023)
  at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4932)
  at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4923)
  at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
  at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
  at java.lang.Thread.run(Thread.java:745)
{code}
This is where the NPE happened:
{code}
// Let's see what we have in the storeHeap.
KeyValue current = this.storeHeap.peek();
{code}
The cause was a race between the nextInternal (called through nextRaw) and close 
methods.
nextRaw() is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172837#comment-14172837
 ] 

Hudson commented on HBASE-12271:


FAILURE: Integrated in HBase-1.0 #320 (See 
[https://builds.apache.org/job/HBase-1.0/320/])
HBASE-12271 Add counters for files skipped during snapshot export (eclark: rev 
c68c17ffec9fd83ff25928c0c0292f43f78acf02)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12269:
-
Attachment: HBASE-12269-2014-10-15-v1.patch

The actual 'main' code change is less than 10 lines.
The 'test' change is a bit bigger.
But because the generated thrift code was apparently generated previously with 
the wrong settings ('java' instead of 'java:hashcode'), this patch is really 
large.
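
For context, a small sketch (my own illustration, not part of this patch) of 
the client-side Scan.setRowPrefixFilter feature from HBASE-11990 that the new 
TScan option maps onto:
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class RowPrefixScanSketch {
  public static void main(String[] args) {
    Scan scan = new Scan();
    // Restrict the scan to rows whose key starts with the given prefix;
    // HBase derives the start and stop rows from the prefix.
    scan.setRowPrefixFilter(Bytes.toBytes("user_42|"));
    System.out.println("start=" + Bytes.toStringBinary(scan.getStartRow())
        + " stop=" + Bytes.toStringBinary(scan.getStopRow()));
  }
}
{code}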

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12269:
-
Release Note: Added new utility setting, rowPrefixFilter, to TScan to 
easily scan for a specific row prefix
  Status: Patch Available  (was: Open)

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12271) Add counters for files skipped during snapshot export

2014-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172855#comment-14172855
 ] 

Hudson commented on HBASE-12271:


FAILURE: Integrated in HBase-0.98 #604 (See 
[https://builds.apache.org/job/HBase-0.98/604/])
HBASE-12271 Add counters for files skipped during snapshot export (eclark: rev 
271d11a8592fdba9a8fd5ea5974cd934744235d3)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java


 Add counters for files skipped during snapshot export
 -

 Key: HBASE-12271
 URL: https://issues.apache.org/jira/browse/HBASE-12271
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.7
Reporter: Patrick White
Assignee: Patrick White
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 0001-Add-counters-for-skipped-files.patch


 It's incredibly handy to see the number of files skipped to know/verify delta 
 backups are doing what they should, when they should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172857#comment-14172857
 ] 

Sean Busbey commented on HBASE-12269:
-

In the interests of making this change simpler, could we make another issue to 
fix the generated thrift prior to this patch?

That will also separate fixing something that is broken (and might be needed in 
earlier branches) from the feature addition.

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12274:
---
Attachment: 12274-v1.txt

Tentative patch.

Seeking comments.
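
Purely to illustrate the shape of one possible guard (an assumption for 
discussion; this is not the attached 12274-v1.txt, and the 'closed' flag below 
is hypothetical):
{code}
// Illustrative only -- the 'closed' flag is hypothetical, not an existing field.
// Read the heap reference once and fail cleanly if the scanner was closed
// concurrently, instead of dereferencing a field that close() may have nulled.
KeyValueHeap heap = this.storeHeap;
if (this.closed || heap == null) {
  throw new UnknownScannerException("Scanner was closed while scanning");
}
// Let's see what we have in the storeHeap.
KeyValue current = heap.peek();
{code}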

 Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() 
 may produce null pointer exception
 --

 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu
 Attachments: 12274-v1.txt


 I saw the following in region server log:
 {code}
 2014-10-15 03:28:36,976 ERROR 
 [B.DefaultRpcServer.handler=0,queue=0,port=60020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5023)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4932)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4923)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 This is where the NPE happened:
 {code}
 // Let's see what we have in the storeHeap.
 KeyValue current = this.storeHeap.peek();
 {code}
 The cause was a race between the nextInternal (called through nextRaw) and close 
 methods.
 nextRaw() is not synchronized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12268) Add support for Scan.setRowPrefixFilter to shell

2014-10-15 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12268:
-
Release Note: Added new option, ROWPREFIXFILTER, to the scan command in the 
HBase shell to easily scan for a specific row prefix.

 Add support for Scan.setRowPrefixFilter to shell
 

 Key: HBASE-12268
 URL: https://issues.apache.org/jira/browse/HBASE-12268
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Niels Basjes

 I think having the feature introduced in HBASE-11990 in the hbase shell would 
 be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12268) Add support for Scan.setRowPrefixFilter to shell

2014-10-15 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12268:
-
Status: Patch Available  (was: Open)

 Add support for Scan.setRowPrefixFilter to shell
 

 Key: HBASE-12268
 URL: https://issues.apache.org/jira/browse/HBASE-12268
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Niels Basjes
 Attachments: HBASE-12268-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase shell would 
 be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

