[jira] [Created] (HBASE-10657) HBase client need to process event type 'AuthFailed'

2014-03-03 Thread Shengjun Xin (JIRA)
Shengjun Xin created HBASE-10657:


 Summary: HBase client need to process event type 'AuthFailed'
 Key: HBASE-10657
 URL: https://issues.apache.org/jira/browse/HBASE-10657
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.0
Reporter: Shengjun Xin


In a secure cluster, when I run an hbase command I see error logs such as 
the following, even though the command returns the correct result.
14/02/28 09:45:30 ERROR zookeeper.ClientCnxn: Error while calling watcher
java.lang.IllegalStateException: Received event is not valid: AuthFailed
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:410)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:319)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
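A possible direction, sketched as plain Java rather than the actual HBase code: treat AuthFailed as a connection-state event that is logged and swallowed, instead of falling through to the IllegalStateException above. The state names match ZooKeeper's KeeperState values, but the dispatch itself is a simplified model.

```java
// Plain-Java model of the dispatch in ZooKeeperWatcher.connectionEvent.
// Hypothetical sketch: the real fix would live in HBase; the point is that
// AuthFailed is logged and swallowed instead of raising IllegalStateException.
public class ConnectionEventSketch {
    public static String handle(String keeperState) {
        switch (keeperState) {
            case "SyncConnected":
                return "connected";
            case "Disconnected":
            case "Expired":
                return "reconnect";
            case "AuthFailed":
                // previously this fell through to
                // throw new IllegalStateException("Received event is not valid: ...")
                System.err.println("WARN zookeeper auth failed; ignoring event");
                return "ignored";
            default:
                return "ignored";
        }
    }
}
```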



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Attachment: 10018v3.patch

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Patch Available  (was: Open)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Fix Version/s: (was: 0.98.1)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Open  (was: Patch Available)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13917876#comment-13917876
 ] 

ramkrishna.s.vasudevan commented on HBASE-10531:


bq.Eventually we should be able to have new implementation of Cell where we 
just [use] the row/family/column/ts without copying anything (actually that is 
part of the goal).
Yes Lars.  That is mentioned in my comments.  Currently we need to make a cell 
and make changes to the code to deal with Cells.  I can do that change also.

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qual, type and ts, and 
 the caller forms this by using kv.getBuffer, which is deprecated.  
 So we should work out how this can be done once kv.getBuffer is removed.
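The no-copy direction discussed in the comments can be sketched as a flyweight view over the existing byte[]. The accessor names echo HBase's Cell style, but this is an illustrative model, not the real interface.

```java
// Hypothetical flyweight sketch of the "Cell without copying" idea from the
// comment above: expose the row as (array, offset, length) over the shared
// backing byte[]. Accessor names echo Cell style but this is not HBase's API.
public class CellView {
    private final byte[] buf;         // shared backing array, never copied
    private final int rowOff, rowLen; // where the row sits inside buf

    public CellView(byte[] buf, int rowOff, int rowLen) {
        this.buf = buf;
        this.rowOff = rowOff;
        this.rowLen = rowLen;
    }

    public byte[] getRowArray() { return buf; }    // zero-copy access
    public int getRowOffset()   { return rowOff; }
    public int getRowLength()   { return rowLen; }

    // The copying accessor, kept only for contrast: this is the allocation
    // the comments want to avoid on the seek path.
    public byte[] copyRow() {
        byte[] r = new byte[rowLen];
        System.arraycopy(buf, rowOff, r, 0, rowLen);
        return r;
    }
}
```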





[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13917875#comment-13917875
 ] 

Matteo Bertozzi commented on HBASE-10622:
-

{quote}In the run(), can we cleanup/delete snapshotTmpDir if Step 2 failed so 
that we don't ask the user to manually clean it since it comes from our Step 1 
copy?{quote}
There is an -overwrite option that already does this, but I think the general 
problem is solved by the last line of this patch, where the 
snapshotTmpDir is removed if you get an exception.

{quote}Another issue is probably more involved, and does not need to be covered 
in this JIRA. It is the overall progress reporting of the ExportSnapshot 
job.{quote}
That's for another jira; it requires a new InputFormat/RecordReader with progress 
based on file size rather than the number of lines in the input file. The only 
progress we currently track is the current file copy.


 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch


 From the ExportSnapshot logs it is not really clear what's going on.
 Add some extra information useful for debugging, and in some places throw 
 the real exception.





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13917880#comment-13917880
 ] 

Nicolas Liochon commented on HBASE-10018:
-

v3 fixes the test issue.
It seems the small-scan optimization is not there for reverse scans. We need it 
to make this patch efficient.
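The reverse-scan replacement for prefetch amounts to a floor lookup in meta: scan backwards from the requested row and take the first region whose start key sorts at or before it, one call and no speculative prefetch. A self-contained model using a sorted map (illustrative only, not the meta-scan code):

```java
import java.util.Map;
import java.util.TreeMap;

// Model of region location as a floor lookup: the TreeMap stands in for
// the meta table keyed by region start key. floorEntry(row) is the
// moral equivalent of a reverse scan from `row`: one lookup, no prefetch.
public class RegionLocatorSketch {
    private final TreeMap<String, String> meta = new TreeMap<>();

    public void addRegion(String startKey, String server) {
        meta.put(startKey, server);
    }

    // Returns the server hosting the region that contains `row`,
    // i.e. the region with the greatest start key <= row.
    public String locate(String row) {
        Map.Entry<String, String> e = meta.floorEntry(row);
        return e == null ? null : e.getValue();
    }
}
```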



 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Comment Edited] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13917876#comment-13917876
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-10531 at 3/3/14 9:47 AM:


bq.Eventually we should be able to have new implementation of Cell where we 
just [use] the row/family/column/ts without copying anything (actually that is 
part of the goal).
Yes Lars.  That is mentioned in my comments.  Currently we need to make a cell 
and make changes to the code to deal with Cells.  That is why I made a cell 
from the available key (which involves a copy).
To do this we need changes in some of the places in the code where the 
keys are created.

{Edit}: comment rephrased


was (Author: ram_krish):
bq.Eventually we should be able to have new implementation of Cell where we 
just [use] the row/family/column/ts without copying anything (actually that is 
part of the goal).
Yes Lars.  That is mentioned in my comments.  Currently we need to make a cell 
and make changes to the code to deal with Cells.  I can do that change also.

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qual, type and ts, and 
 the caller forms this by using kv.getBuffer, which is deprecated.  
 So we should work out how this can be done once kv.getBuffer is removed.





[jira] [Assigned] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon reassigned HBASE-:
--

Assignee: Nicolas Liochon

 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon

 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Updated] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10622:


Attachment: HBASE-10622-v4.patch

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 From the ExportSnapshot logs it is not really clear what's going on.
 Add some extra information useful for debugging, and in some places throw 
 the real exception.





[jira] [Updated] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10637:


Status: Open  (was: Patch Available)

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10637.v1.patch


 This helps as we can be interrupted there as well.





[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13917942#comment-13917942
 ] 

Nicolas Liochon commented on HBASE-10637:
-

TestZKSecretWatcher works locally (tried 10 times)

Reviews welcome. Thanks!

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 This helps as we can be interrupted there as well.





[jira] [Updated] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10637:


Fix Version/s: hbase-10070

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 This helps as we can be interrupted there as well.





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13917960#comment-13917960
 ] 

Hadoop QA commented on HBASE-10018:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632205/10018v3.patch
  against trunk revision .
  ATTACHMENT ID: 12632205

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestCoprocessorScanPolicy

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8861//console

This message is automatically generated.

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Created] (HBASE-10658) Show stats from different caches on RS web page

2014-03-03 Thread Biju Nair (JIRA)
Biju Nair created HBASE-10658:
-

 Summary: Show stats from different caches on RS web page
 Key: HBASE-10658
 URL: https://issues.apache.org/jira/browse/HBASE-10658
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Biju Nair


When deploying L1+L2 cache configurations with multiple cache implementations 
(e.g. LRU + BucketCache), it would help users to see the usage of each cache 
separately on the RS page. This will help them fine-tune the sizes based on 
stats like the cache hit ratio.





[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918000#comment-13918000
 ] 

Hadoop QA commented on HBASE-10622:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632218/HBASE-10622-v4.patch
  against trunk revision .
  ATTACHMENT ID: 12632218

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot
  org.apache.hadoop.hbase.snapshot.TestExportSnapshot

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8862//console

This message is automatically generated.

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 From the ExportSnapshot logs it is not really clear what's going on.
 Add some extra information useful for debugging, and in some places throw 
 the real exception.





[jira] [Commented] (HBASE-9990) HTable uses the conf for each newCaller

2014-03-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918016#comment-13918016
 ] 

Andrew Purtell commented on HBASE-9990:
---

Any reason not to move the simple changes that save some time and resources 
like this and HBASE-10080 into 0.98.1?

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 9990.v1.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On 0.96, we see the creation of java.util.regex.Matcher objects (15739712b) 
 after a few thousand calls to get.
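One plausible shape of the fix, not the actual HBASE-9990 patch: parse the configuration once when the factory is built, so a newCaller()-style call touches no conf (and therefore none of the regex parsing conf lookups trigger) on the hot path. The keys and defaults below are illustrative.

```java
import java.util.Map;

// Sketch of the fix direction, not the actual HBASE-9990 patch: read the
// configuration once in the constructor so per-call hot paths do no conf
// lookups. The keys and defaults below are illustrative.
public class RetryingCallerFactorySketch {
    private final long pause;   // cached at construction
    private final int retries;  // cached at construction

    public RetryingCallerFactorySketch(Map<String, String> conf) {
        // the only place the conf is read
        this.pause = Long.parseLong(conf.getOrDefault("client.pause", "100"));
        this.retries = Integer.parseInt(conf.getOrDefault("client.retries", "31"));
    }

    // Cheap per-call work: reuses the cached values.
    public long pauseFor(int attempt) {
        return pause * (attempt + 1);
    }

    public int maxRetries() {
        return retries;
    }
}
```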





[jira] [Commented] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-03-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918015#comment-13918015
 ] 

Andrew Purtell commented on HBASE-10080:


Any reason not to move the simple changes that save some time and resources 
like this and HBASE-9990 into 0.98.1?

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.99.0

 Attachments: 10080.v1.patch, 10080.v2.patch, 10080.v3.patch


 It more or less contradicts the objective of having lightweight 
 HTable objects, and the data may be stale by the time we use it.
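One way to avoid the constructor-time locateRegion call (an assumed sketch, not the actual patch): resolve the location lazily on first use, so an HTable that is never read from never pays for the lookup, and nothing is cached before it can go stale.

```java
import java.util.function.Supplier;

// Assumed sketch, not the actual HBASE-10080 patch: defer the region
// location lookup until the first call that needs it, instead of doing
// it in the HTable constructor.
public class LazyLocation {
    private final Supplier<String> locator;
    private String cached;

    public LazyLocation(Supplier<String> locator) {
        this.locator = locator; // nothing resolved yet
    }

    public String get() {
        if (cached == null) {
            cached = locator.get(); // resolved on first use only
        }
        return cached;
    }
}
```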





[jira] [Updated] (HBASE-10549) when there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-03 Thread yuanxinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanxinen updated HBASE-10549:
--

Fix Version/s: 0.99.0
   Status: Patch Available  (was: Open)

 when there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
 

 Key: HBASE-10549
 URL: https://issues.apache.org/jira/browse/HBASE-10549
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: yuanxinen
 Fix For: 0.99.0








[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-03 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: (was: HBASE-8304-v3.patch)

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader

 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 (where port is the hdfs namenode's default port), the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus the dest fs uri 
 from the hregion will not match the src fs uri passed from the client.
 Any suggestion on the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes 
 without port info.
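The suggestion in the last paragraph (fill in the default port before comparing filesystem URIs) can be sketched with java.net.URI. The default port 8020 here is an assumption for illustration; the real namenode port is deployment-specific.

```java
import java.net.URI;

// Sketch of the "check for default port" suggestion: normalize a missing
// port to the namenode default before comparing filesystem URIs, so
// hdfs://ip and hdfs://ip:8020 compare equal. 8020 is an assumption here;
// the real default depends on the Hadoop deployment.
public class FsUriCompare {
    static final int DEFAULT_NAMENODE_PORT = 8020; // illustrative assumption

    private static int portOf(URI u) {
        return u.getPort() == -1 ? DEFAULT_NAMENODE_PORT : u.getPort();
    }

    public static boolean sameFs(String a, String b) {
        URI ua = URI.create(a);
        URI ub = URI.create(b);
        return ua.getScheme().equals(ub.getScheme())
            && ua.getHost().equals(ub.getHost())
            && portOf(ua) == portOf(ub);
    }
}
```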





[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-03 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: (was: HBASE-8304.patch)

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader

 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 (where port is the hdfs namenode's default port), the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus the dest fs uri 
 from the hregion will not match the src fs uri passed from the client.
 Any suggestion on the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes 
 without port info.





[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-03 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: (was: HBASE-8304-v2.patch)

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader

 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 (where port is the hdfs namenode's default port), the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile will treat 
 hdfs://ip and hdfs://ip:port as different filesystems and go with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus the dest fs uri 
 from the hregion will not match the src fs uri passed from the client.
 Any suggestion on the best approach to fix this issue? 
 I kind of think that we could check for the default port if the src uri comes 
 without port info.





[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-03 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated HBASE-8304:


Attachment: HBASE-8304.patch

Couldn't reproduce the Hudson errors locally. Resubmitting the patch to get more 
details about the errors.

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304.patch


 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 (where port is the hdfs namenode's default port), the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile treats 
 hdfs://ip and hdfs://ip:port as different filesystems and goes with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus, the dest fs uri 
 from the hregion will not match the src fs uri passed from the client.
 Any suggestion what is the best approach to fix this issue?
 I kind of think that we could check for the default port if the src uri comes 
 without port info.





[jira] [Updated] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10622:


Attachment: HBASE-10622-v4.patch

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 From the logs of export snapshot it is not really clear what's going on.
 This adds some extra information useful for debugging, and in some places the 
 real exception can now be thrown.
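Throwing the real exception usually means the wrap-with-context pattern. A hypothetical sketch (the CopyStep name and messages are illustrative, not from the patch): add context in the message but keep the original exception as the cause, so the export-snapshot logs show both what failed and why:

```java
import java.io.IOException;

// Hypothetical sketch: add context when rethrowing, but keep the original
// exception as the cause instead of hiding it behind a generic message.
class CopyStep {
    static void copy(String src, String dst) throws IOException {
        try {
            doCopy(src, dst);
        } catch (IOException e) {
            // The message says which file pair failed; the cause says why.
            throw new IOException("Failed to copy " + src + " to " + dst, e);
        }
    }

    // Stand-in for the real copy work; fails for an empty source path.
    private static void doCopy(String src, String dst) throws IOException {
        if (src.isEmpty()) throw new IOException("empty source path");
    }
}
```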





[jira] [Updated] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10622:


Attachment: (was: HBASE-10622-v4.patch)

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 From the logs of export snapshot it is not really clear what's going on.
 This adds some extra information useful for debugging, and in some places the 
 real exception can now be thrown.





[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918165#comment-13918165
 ] 

Hadoop QA commented on HBASE-8304:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632246/HBASE-8304.patch
  against trunk revision .
  ATTACHMENT ID: 12632246

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  public static Set<InetSocketAddress> 
getNNAddresses(DistributedFileSystem fs, Configuration conf) {
+(Map<String, Map<String, InetSocketAddress>>) 
getNNAddressesMethod.invoke(null, conf);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8863//console

This message is automatically generated.

 Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured 
 without default port.
 ---

 Key: HBASE-8304
 URL: https://issues.apache.org/jira/browse/HBASE-8304
 Project: HBase
  Issue Type: Bug
  Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
  Labels: bulkloader
 Attachments: HBASE-8304.patch


 When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as 
 hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir 
 (where port is the hdfs namenode's default port), the bulkload operation will 
 not remove the files in the bulk output dir. Store::bulkLoadHfile treats 
 hdfs://ip and hdfs://ip:port as different filesystems and goes with the copy 
 approach instead of rename.
 The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS 
 according to hbase.rootdir when the regionserver starts; thus, the dest fs uri 
 from the hregion will not match the src fs uri passed from the client.
 Any suggestion what is the best approach to fix this issue?
 I kind of think that we could check for the default port if the src uri comes 
 without port info.





[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9999:
---

Status: Patch Available  (was: Open)

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.
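The activation rule above (both 'reversed' and 'small' set) amounts to a four-way choice of client-side scanner. A hypothetical sketch of that dispatch (the enum and method names are illustrative, not HTable's actual code):

```java
// Hypothetical sketch of the scanner dispatch described above: the small
// reversed scanner is only chosen when BOTH flags are set on the scan.
class ScannerChooser {
    enum Kind { NORMAL, REVERSED, SMALL, SMALL_REVERSED }

    static Kind choose(boolean reversed, boolean small) {
        if (reversed && small) return Kind.SMALL_REVERSED; // new in this JIRA
        if (reversed) return Kind.REVERSED;                // HBASE-4811
        if (small) return Kind.SMALL;
        return Kind.NORMAL;
    }
}
```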





[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9999:
---

Attachment: 9999.v1.patch

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918180#comment-13918180
 ] 

Hadoop QA commented on HBASE-10622:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632257/HBASE-10622-v4.patch
  against trunk revision .
  ATTACHMENT ID: 12632257

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8864//console

This message is automatically generated.

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 From the logs of export snapshot it is not really clear what's going on.
 This adds some extra information useful for debugging, and in some places the 
 real exception can now be thrown.





[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918185#comment-13918185
 ] 

Hadoop QA commented on HBASE-9999:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632281/9999.v1.patch
  against trunk revision .
  ATTACHMENT ID: 12632281

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.
Here is snippet of errors:
{code}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-client: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java:[748,19]
 cannot find symbol
[ERROR] symbol  : class ClientSmallReversedScanner
[ERROR] location: class org.apache.hadoop.hbase.client.HTable
[ERROR] - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-client: Compilation failure
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java:[748,19]
 cannot find symbol
symbol  : class ClientSmallReversedScanner
location: class org.apache.hadoop.hbase.client.HTable

at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
--
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java:[748,19]
 cannot find symbol
symbol  : class ClientSmallReversedScanner
location: class org.apache.hadoop.hbase.client.HTable

at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729){code}

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8865//console

This message is automatically generated.

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9999:
---

Status: Open  (was: Patch Available)

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9999:
---

Attachment: 9999.v2.patch

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch, 9999.v2.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Updated] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9999:
---

Status: Patch Available  (was: Open)

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch, 9999.v2.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Commented] (HBASE-9990) HTable uses the conf for each newCaller

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918202#comment-13918202
 ] 

Nicolas Liochon commented on HBASE-9990:


For this one, no reason at all. Let me check if the patch applies to the 98 
branch.

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 9990.v1.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.
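The fix direction implied by the description is to hoist the conf reads to factory construction. A hypothetical sketch using a plain Map in place of Hadoop's Configuration (the key names mirror real HBase keys, but the defaults and class names here are assumptions):

```java
import java.util.Map;

// Hypothetical sketch: parse configuration once when the factory is built,
// so newCaller() on the hot per-get path does no conf lookups or regex work.
class CachedCallerFactory {
    private final int retries;
    private final long pauseMs;

    CachedCallerFactory(Map<String, String> conf) {
        // The expensive reads happen exactly once, here.
        this.retries = Integer.parseInt(conf.getOrDefault("hbase.client.retries.number", "31"));
        this.pauseMs = Long.parseLong(conf.getOrDefault("hbase.client.pause", "100"));
    }

    Caller newCaller() {          // per-call path: allocation only
        return new Caller(retries, pauseMs);
    }

    static final class Caller {
        final int retries;
        final long pauseMs;
        Caller(int retries, long pauseMs) { this.retries = retries; this.pauseMs = pauseMs; }
    }
}
```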





[jira] [Commented] (HBASE-10080) Unnecessary call to locateRegion when creating an HTable instance

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918204#comment-13918204
 ] 

Nicolas Liochon commented on HBASE-10080:
-

It's somewhat of a behavior change, as we no longer throw an exception if the 
table does not exist. Given it's a corner case and we're still at the very 
beginning of the 98 branch, I think it's acceptable however. It's your call: I 
can do the port if there is anything to port.

 Unnecessary call to locateRegion when creating an HTable instance
 -

 Key: HBASE-10080
 URL: https://issues.apache.org/jira/browse/HBASE-10080
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 0.99.0

 Attachments: 10080.v1.patch, 10080.v2.patch, 10080.v3.patch


 It's more or less in contradiction with the objective of having lightweight 
 HTable objects and the data may be stale when we will use it 





[jira] [Updated] (HBASE-9990) HTable uses the conf for each newCaller

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9990:
---

Attachment: 9990.v2.98.patch

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 9990.v1.patch, 9990.v2.98.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.





[jira] [Commented] (HBASE-9990) HTable uses the conf for each newCaller

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918216#comment-13918216
 ] 

Nicolas Liochon commented on HBASE-9990:


9990.v2.98.patch is for the .98 branch.

 HTable uses the conf for each newCaller
 -

 Key: HBASE-9990
 URL: https://issues.apache.org/jira/browse/HBASE-9990
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 9990.v1.patch, 9990.v2.98.patch, 9990.v2.patch


 You can construct a RpcRetryingCallerFactory, but actually the conf is read 
 for each caller creation. Reading the conf is obviously expensive, and a 
 profiling session shows it. If we want to send hundreds of thousands of 
 queries per second, we should not do that.
 RpcRetryingCallerFactory.newCaller is called for each get, for example.
 This is not a regression, we have something similar in 0.94.
 On the 0.96, we see the creation of: java.util.regex.Matcher: 15739712b after 
 a few thousand calls to get.





[jira] [Updated] (HBASE-10648) Pluggable Memstore

2014-03-03 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10648:
---

Attachment: HBASE-10648.patch

 Pluggable Memstore
 --

 Key: HBASE-10648
 URL: https://issues.apache.org/jira/browse/HBASE-10648
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-10648.patch


 Make Memstore into an interface plus implementation. Also make it pluggable 
 by configuring the FQCN of the impl.
 This will allow us to have different impls and optimizations in the Memstore 
 data structure while leaving the upper layers untouched.
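A hypothetical sketch of the FQCN-based pluggability the description proposes (the interface, methods, and class names are illustrative, not the patch's actual API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a minimal Memstore interface plus reflective creation
// from a configured fully-qualified class name (FQCN).
interface Memstore {
    void add(byte[] cell);
    long size();
}

// One possible implementation; alternatives can be swapped in via config.
class DefaultMemstore implements Memstore {
    private final List<byte[]> cells = new ArrayList<>();
    public void add(byte[] cell) { cells.add(cell); }
    public long size() { return cells.size(); }
}

class MemstoreFactory {
    // The impl class is chosen by configuration, not hard-coded.
    static Memstore create(String fqcn) throws ReflectiveOperationException {
        return (Memstore) Class.forName(fqcn).getDeclaredConstructor().newInstance();
    }
}
```

The upper layers only ever see the interface, so a different data structure can be plugged in without touching them.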





[jira] [Updated] (HBASE-10648) Pluggable Memstore

2014-03-03 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10648:
---

Status: Patch Available  (was: Open)

 Pluggable Memstore
 --

 Key: HBASE-10648
 URL: https://issues.apache.org/jira/browse/HBASE-10648
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-10648.patch


 Make Memstore into an interface plus implementation. Also make it pluggable 
 by configuring the FQCN of the impl.
 This will allow us to have different impls and optimizations in the Memstore 
 data structure while leaving the upper layers untouched.





[jira] [Commented] (HBASE-10652) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in rpc

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918228#comment-13918228
 ] 

Nicolas Liochon commented on HBASE-10652:
-

Yeah, not necessary but more consistent. +1. Will commit tomorrow if nobody 
disagrees. 
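The bug class named in the issue title can be sketched hypothetically (this is illustrative, not the patch's code): restoring the interrupt flag inside a retry loop makes every subsequent blocking call throw immediately, so the loop spins; the fix is to restore the status once and exit the loop.

```java
// Hypothetical sketch of the fix pattern: on InterruptedException, restore
// the thread's interrupt status once and leave the loop, rather than
// re-setting it every iteration and continuing to spin.
class InterruptibleWorker implements Runnable {
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Thread.sleep(10);  // stands in for blocking work (RPC, queue poll)
            }
        } catch (InterruptedException e) {
            // Restore the status for callers further up the stack, then exit.
            Thread.currentThread().interrupt();
        }
    }
}
```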

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in rpc
 --

 Key: HBASE-10652
 URL: https://issues.apache.org/jira/browse/HBASE-10652
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor
 Attachments: HBASE-10652-trunk_v1.patch








[jira] [Commented] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918240#comment-13918240
 ] 

Nicolas Liochon commented on HBASE-10651:
-

I'm not totally sure here. Any opinion?

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in Replication
 --

 Key: HBASE-10651
 URL: https://issues.apache.org/jira/browse/HBASE-10651
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10651-trunk_v1.patch








[jira] [Commented] (HBASE-10650) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in RegionServer

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918239#comment-13918239
 ] 

Nicolas Liochon commented on HBASE-10650:
-

I think it's correct.  Will commit tomorrow if nobody disagrees.

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in RegionServer
 ---

 Key: HBASE-10650
 URL: https://issues.apache.org/jira/browse/HBASE-10650
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10650-trunk_v1.patch








[jira] [Commented] (HBASE-10649) TestMasterMetrics fails occasionally

2014-03-03 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918269#comment-13918269
 ] 

Jimmy Xiang commented on HBASE-10649:
-

If we can wait, HBASE-10569 should help.

 TestMasterMetrics fails occasionally
 

 Key: HBASE-10649
 URL: https://issues.apache.org/jira/browse/HBASE-10649
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu

 Latest occurrence was in https://builds.apache.org/job/HBase-TRUNK/4970
 {code}
 java.io.IOException: Shutting down
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:231)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:93)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:875)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:839)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:756)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:727)
   at 
 org.apache.hadoop.hbase.master.TestMasterMetrics.startCluster(TestMasterMetrics.java:56)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runners.Suite.runChild(Suite.java:127)
   at org.junit.runners.Suite.runChild(Suite.java:26)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
 seconds
   at 
 org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
   at 
 org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:425)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:224)
   ... 25 more
 {code}





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918287#comment-13918287
 ] 

Nicolas Liochon commented on HBASE-10018:
-

I reproduced the issue: it's a table-not-found on hbase:namespace when there is 
a master coprocessor. Looking.

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  
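The "replace it with a reverse scan" option above works because locating the region holding a row is a floor lookup on region start keys, and one reverse scan from the row returns exactly that entry. A hypothetical in-memory sketch of the same lookup using TreeMap.floorEntry (names are illustrative, not HBase's region-location code):

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: the region holding a row is the one with the largest
// start key <= row. A single reverse scan of meta (or floorEntry here)
// returns it directly, with no separate prefetch call.
class RegionLookup {
    private final TreeMap<String, String> byStartKey = new TreeMap<>();

    void addRegion(String startKey, String server) {
        byStartKey.put(startKey, server);
    }

    String locate(String row) {
        Map.Entry<String, String> e = byStartKey.floorEntry(row);
        return e == null ? null : e.getValue();
    }
}
```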





[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Open  (was: Patch Available)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Commented] (HBASE-10648) Pluggable Memstore

2014-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918288#comment-13918288
 ] 

Ted Yu commented on HBASE-10648:


Should there be an interface for Memstore so that the default implementation 
and other implementations share a common contract?

 Pluggable Memstore
 --

 Key: HBASE-10648
 URL: https://issues.apache.org/jira/browse/HBASE-10648
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-10648.patch


 Make Memstore into an interface and an implementation.  Also make it pluggable 
 by configuring the FQCN of the impl.
 This will allow us to have different impls and optimizations in the Memstore 
 data structure while leaving the upper layers untouched.
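The "pluggable by configuring the FQCN" idea can be sketched in isolation (hypothetical names, not the attached patch): the implementation class name would come from configuration and is instantiated reflectively, so alternative impls can be dropped in without touching the upper layers.

```java
// Sketch of FQCN-based pluggability (hypothetical MemStore interface and
// class names, not the actual patch): the implementation class is named
// in configuration and instantiated reflectively.
public class PluggableSketch {
    public interface MemStore {
        String name();
    }

    public static class DefaultMemStore implements MemStore {
        public String name() { return "default"; }
    }

    // In HBase the fqcn argument would be read from the site configuration.
    public static MemStore create(String fqcn) throws Exception {
        return (MemStore) Class.forName(fqcn).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        MemStore ms = create("PluggableSketch$DefaultMemStore");
        System.out.println(ms.name()); // prints default
    }
}
```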





[jira] [Commented] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918295#comment-13918295
 ] 

stack commented on HBASE-10651:
---

[~fenghh] How does going back to while to check if we need to terminate relate 
to setting interrupt on thread?  isActive doesn't seem to check thread state.

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in Replication
 --

 Key: HBASE-10651
 URL: https://issues.apache.org/jira/browse/HBASE-10651
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10651-trunk_v1.patch








[jira] [Commented] (HBASE-10652) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in rpc

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918296#comment-13918296
 ] 

stack commented on HBASE-10652:
---

lgtm

 Fix incorrect handling of IE that restores current thread's interrupt status 
 within while/for loops in rpc
 --

 Key: HBASE-10652
 URL: https://issues.apache.org/jira/browse/HBASE-10652
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
Priority: Minor
 Attachments: HBASE-10652-trunk_v1.patch








[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918298#comment-13918298
 ] 

stack commented on HBASE-9999:
--

lgtm

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch, 9999.v2.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true on 
 the Scan object.





[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918308#comment-13918308
 ] 

Hadoop QA commented on HBASE-9999:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632282/9999.v2.patch
  against trunk revision .
  ATTACHMENT ID: 12632282

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8866//console

This message is automatically generated.

 Add support for small reverse scan
 --

 Key: HBASE-9999
 URL: https://issues.apache.org/jira/browse/HBASE-9999
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: 9999.v1.patch, 9999.v2.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scans.
 This is activated when both the 'reversed' and 'small' attributes are true on 
 the Scan object.





[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918321#comment-13918321
 ] 

stack commented on HBASE-10637:
---

This is hard work.

Patch looks good but I do not follow how it relates to the subject of the 
issue.  So +1 on patch after you do a better subject on this issue (smile).

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 This helps as we can be interrupted there as well.





[jira] [Updated] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10637:


Description: Since HBASE-10525, we can write in a different thread than the 
client. This allows the client thread to be interrupted w/o any impact on the 
shared tcp connection. We should setup the iostream on the second thread as 
well, i.e. when we do the write, and not when we do the getConnection.  (was: 
This helps as we can be interrupted there as well.)

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.





[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918326#comment-13918326
 ] 

Nicolas Liochon commented on HBASE-10637:
-

:-)
I've updated the description to give more context. Thanks a lot for the review!

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.





[jira] [Commented] (HBASE-10656) high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918328#comment-13918328
 ] 

stack commented on HBASE-10656:
---

Nice work [~ikeda]

We use Unsafe in other areas of the code base too so a total purge would take 
more than just our undoing use of high-scale-lib counters.

Are we susceptible to the bug you've identified?  Do we do the write while read 
w/o protection?   Are these high-scale counters used for metrics only or for 
more critical countings?  I've not looked.

Thank you [~ikeda] for digging in here.

  high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug
 

 Key: HBASE-10656
 URL: https://issues.apache.org/jira/browse/HBASE-10656
 Project: HBase
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: MyCounter.java, MyCounterTest.java


 Cliff's high-scale-lib Counter is used in important classes (for example, 
 HRegion) in HBase, but Counter uses sun.misc.Unsafe, which is an 
 implementation detail of the Java standard library and belongs to Oracle 
 (Sun). That consequently makes HBase depend on a specific JRE implementation.
 To make matters worse, Counter has a bug: you may get a wrong result if you 
 mix a reading method into logic that calls writing methods.
 In more detail, I think the bug is caused by reading an internal array field 
 without resolving memory caching (which is intentional, according to the 
 comment) but then storing the read result into a volatile field. That field 
 may not be updated after you can see the true values of the array field, and 
 also may not be updated after the next CAT instance's values are updated in 
 some race condition when extending the CAT instance chain.
 Anyway, it is possible to create a new alternative class that depends only on 
 the standard library. I know Java 8 provides an alternative, but HBase should 
 support Java 6 and Java 7 for some time.
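As a rough illustration of the standard-library-only alternative suggested above (a sketch, not the attached MyCounter.java): a striped counter built on java.util.concurrent.atomic, where every read of a cell is a volatile read, so mixing reads into writing logic cannot observe the stale-cache problem described here.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Striped counter sketch using only the standard library (illustrative;
// this is NOT the MyCounter.java attached to this issue).
public class StripedCounter {
    private final AtomicLongArray cells;

    public StripedCounter(int stripes) {
        cells = new AtomicLongArray(stripes);
    }

    public void increment() {
        // Spread contention across stripes by hashing the calling thread.
        int i = (int) (Thread.currentThread().getId() % cells.length());
        cells.incrementAndGet(i); // atomic CAS-based update
    }

    public long get() {
        long sum = 0;
        for (int i = 0; i < cells.length(); i++) {
            sum += cells.get(i); // volatile read: never a stale cached value
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        final StripedCounter c = new StripedCounter(16);
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    for (int k = 0; k < 1000; k++) c.increment();
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(c.get()); // prints 4000
    }
}
```

Note that get() is not a point-in-time snapshot while writers are running, which is the usual trade-off for striped counters; after the writers finish, the sum is exact.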





[jira] [Commented] (HBASE-10632) Region lost in limbo after ArrayIndexOutOfBoundsException during assignment

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918333#comment-13918333
 ] 

stack commented on HBASE-10632:
---

Skimmed patch.  lgtm.  +1 for 0.96.  Thanks.

 Region lost in limbo after ArrayIndexOutOfBoundsException during assignment
 ---

 Key: HBASE-10632
 URL: https://issues.apache.org/jira/browse/HBASE-10632
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: hbase-10070
Reporter: Nick Dimiduk
Assignee: Enis Soztutar
 Fix For: 0.96.2, 0.98.1, 0.99.0, hbase-10070

 Attachments: hbase-10632_v1.patch


 Discovered while running IntegrationTestBigLinkedList. Region 
 24d68aa7239824e42390a77b7212fcbf is scheduled for move from hor13n19 to 
 hor13n13. During the process an exception is thrown.
 {noformat}
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 master.RegionStates: Transitioning {24d68aa7239824e42390a77b7212fcbf 
 state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} will be handled by SSH 
 for hor13n19.gq1.ygridcore.net,60020,1393341563552
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 handler.ServerShutdownHandler: Reassigning 7 region(s) that 
 hor13n19.gq1.ygridcore.net,60020,1393341563552 was carrying (and 0 regions(s) 
 that were opening on this server)
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 handler.ServerShutdownHandler: Reassigning region with rs = 
 {24d68aa7239824e42390a77b7212fcbf state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} and deleting zk node 
 if exists
 2014-02-25 15:30:42,623 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 master.RegionStates: Transitioned {24d68aa7239824e42390a77b7212fcbf 
 state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} to 
 {24d68aa7239824e42390a77b7212fcbf state=OFFLINE, ts=1393342242623, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 2014-02-25 15:30:42,623 DEBUG [AM.ZK.Worker-pool2-t46] 
 master.AssignmentManager: Znode 
 IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.
  deleted, state: {24d68aa7239824e42390a77b7212fcbf state=OFFLINE, 
 ts=1393342242623, server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 ...
 2014-02-25 15:30:43,993 ERROR [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 executor.EventHandler: Caught throwable while processing event 
 M_SERVER_SHUTDOWN
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.init(BaseLoadBalancer.java:250)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.createCluster(BaseLoadBalancer.java:921)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.roundRobinAssignment(BaseLoadBalancer.java:860)
   at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2482)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:282)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}
 After that, region is left in limbo and is never reassigned.
 {noformat}
 2014-02-25 15:35:11,581 INFO  [FifoRpcScheduler.handler1-thread-6] 
 master.HMaster: Client=hrt_qa//68.142.246.29 move 
 hri=IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.,
  src=hor13n19.gq1.ygridcore.net,60020,1393341563552, 
 dest=hor13n13.gq1.ygridcore.net,60020,139334275, running balancer
 2014-02-25 15:35:11,581 INFO  [FifoRpcScheduler.handler1-thread-6] 
 master.AssignmentManager: Ignored moving region not assigned: {ENCODED = 
 24d68aa7239824e42390a77b7212fcbf, NAME = 
 'IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.',
  STARTKEY = '\x80\x06\x1A', ENDKEY = ''}, {24d68aa7239824e42390a77b7212fcbf 
 state=OFFLINE, ts=1393342242623, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 ...
 2014-02-25 15:35:26,586 DEBUG 
 [hor13n12.gq1.ygridcore.net,6,1393341917402-BalancerChore] 
 master.HMaster: Not running balancer because 1 region(s) in transition: 
 {24d68aa7239824e42390a77b7212fcbf={24d68aa7239824e42390a77b7212fcbf 
 state=OFFLINE, ts=1393342242623, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552}}
 ...
 2014-02-25 15:35:51,945 DEBUG [FifoRpcScheduler.handler1-thread-16] 
 master.HMaster: 

[jira] [Commented] (HBASE-10656) high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918335#comment-13918335
 ] 

Nicolas Liochon commented on HBASE-10656:
-

Is there any reference to this bug in the high-scale-lib repo?

  high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug
 

 Key: HBASE-10656
 URL: https://issues.apache.org/jira/browse/HBASE-10656
 Project: HBase
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: MyCounter.java, MyCounterTest.java


 Cliff's high-scale-lib Counter is used in important classes (for example, 
 HRegion) in HBase, but Counter uses sun.misc.Unsafe, which is an 
 implementation detail of the Java standard library and belongs to Oracle 
 (Sun). That consequently makes HBase depend on a specific JRE implementation.
 To make matters worse, Counter has a bug: you may get a wrong result if you 
 mix a reading method into logic that calls writing methods.
 In more detail, I think the bug is caused by reading an internal array field 
 without resolving memory caching (which is intentional, according to the 
 comment) but then storing the read result into a volatile field. That field 
 may not be updated after you can see the true values of the array field, and 
 also may not be updated after the next CAT instance's values are updated in 
 some race condition when extending the CAT instance chain.
 Anyway, it is possible to create a new alternative class that depends only on 
 the standard library. I know Java 8 provides an alternative, but HBase should 
 support Java 6 and Java 7 for some time.





[jira] [Commented] (HBASE-10648) Pluggable Memstore

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918336#comment-13918336
 ] 

Hadoop QA commented on HBASE-10648:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632288/HBASE-10648.patch
  against trunk revision .
  ATTACHMENT ID: 12632288

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8867//console

This message is automatically generated.

 Pluggable Memstore
 --

 Key: HBASE-10648
 URL: https://issues.apache.org/jira/browse/HBASE-10648
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-10648.patch


 Make Memstore into an interface and an implementation.  Also make it pluggable 
 by configuring the FQCN of the impl.
 This will allow us to have different impls and optimizations in the Memstore 
 data structure while leaving the upper layers untouched.





[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918352#comment-13918352
 ] 

Devaraj Das commented on HBASE-10637:
-

[~nkeywal], quick question - if the tcp connection is shared between the 
threads, why do the setup in the thread at all? Is it because by the time the 
thread gets scheduled the connection might have closed, etc.?

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.





[jira] [Commented] (HBASE-10573) Use Netty 4

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918360#comment-13918360
 ] 

stack commented on HBASE-10573:
---

bq. I wonder if we won't have an issue, for example when we will want to pass 
the buffer from the hbase socket to hdfs.

Do we mean passing netty bytebuf from hbase to hdfs in the above?  We will want 
to go both ways -- out and in.

The list of netty(5) benefits is long, as Andrew notes, so even if we skirt 
ByteBuf -- no reuse probably simplifies the implementation, would be my guess -- 
this ticket is worthwhile.

 Use Netty 4
 ---

 Key: HBASE-10573
 URL: https://issues.apache.org/jira/browse/HBASE-10573
 Project: HBase
  Issue Type: Sub-task
Affects Versions: hbase-10191
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 10573.patch


 Pull in Netty 4 and sort out the consequences.





[jira] [Commented] (HBASE-10079) Race in TableName cache

2014-03-03 Thread Cosmin Lehene (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918362#comment-13918362
 ] 

Cosmin Lehene commented on HBASE-10079:
---

Shouldn't the affects version be 0.96.0?

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment a single col.  We 
 flush or do kills/kill-9 and data is lost.  flush and kill are likely the 
 same problem (kill would flush), kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and a single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25. on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 !=25.  
 correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.   1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  246878.!= 25.  
 Correctness failed. 1395 inc/s (including recovery)





[jira] [Commented] (HBASE-10622) Improve log and Exceptions in Export Snapshot

2014-03-03 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918364#comment-13918364
 ] 

Jerry He commented on HBASE-10622:
--

bq. other jira, it requires a new InputFormat/RecordReader with the progress 
based on the file size and not on the number of lines in the input file. The 
only progress that we track is the current file copy

Agree. Not an easy thing to do.  It seems that DistCp has the same issue.

Looks good.  Thanks.

 Improve log and Exceptions in Export Snapshot 
 --

 Key: HBASE-10622
 URL: https://issues.apache.org/jira/browse/HBASE-10622
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-10622-v0.patch, HBASE-10622-v1.patch, 
 HBASE-10622-v2.patch, HBASE-10622-v3.patch, HBASE-10622-v4.patch


 From the logs of export snapshot it is not really clear what's going on;
 this adds some extra information useful for debugging, and in some places the 
 real exception can now be thrown.





[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918367#comment-13918367
 ] 

Nicolas Liochon commented on HBASE-10637:
-

The connection is protected by a lock (a synchronized block). So, today, the first 
thread that needs a new connection sets up the streams. The code is:
clientThread: getConnection() -> setupStream -> write

Since HBASE-10525:
clientThread: getConnection() -> setupStream -> post message to writerThread
writerThread: write

With this patch:
clientThread: getConnection() -> post message to writerThread
writerThread: setupStream -> write

So you can interrupt the client thread: it does not touch the tcp connection.
The pre-10525 behavior is kept by default (the new one is enabled with the 
allowsInterrupt option).
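The flow above can be sketched standalone (hypothetical names; a StringBuilder stands in for the socket iostream): the client thread only queues the request, and the single writer thread lazily runs setupStream and then the write, so interrupting the client thread never touches the shared connection.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the patched flow (hypothetical names, not the HBase rpcClient
// code): the client thread posts to a single writer thread, which performs
// both the stream setup and the write.
public class WriterThreadSketch {
    private final ExecutorService writer = Executors.newSingleThreadExecutor();
    private final StringBuilder stream = new StringBuilder(); // stand-in for the iostream
    private boolean streamReady = false; // only touched by the writer thread

    public Future<String> send(final String message) {
        return writer.submit(new Callable<String>() {
            public String call() {
                if (!streamReady) {           // setupStream now happens here,
                    stream.append("[setup]"); // on the writer thread
                    streamReady = true;
                }
                stream.append(message);       // the actual write
                return stream.toString();
            }
        });
    }

    public void shutdown() {
        writer.shutdown();
    }

    public static void main(String[] args) throws Exception {
        WriterThreadSketch s = new WriterThreadSketch();
        s.send("ping");
        System.out.println(s.send("pong").get()); // prints [setup]pingpong
        s.shutdown();
    }
}
```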

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should setup the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.





[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Attachment: 10018.v4.patch

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Updated] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10018:


Status: Patch Available  (was: Open)

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918374#comment-13918374
 ] 

Nicolas Liochon commented on HBASE-10018:
-

I've solved the issue by filtering out the system tables in the scanner used by 
the test. It seems to be the right thing to do, but I'm not sure what the test 
is actually doing.
v4 includes HBASE-. 

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Commented] (HBASE-10637) rpcClient: Setup the iostream when doing the write

2014-03-03 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918376#comment-13918376
 ] 

Devaraj Das commented on HBASE-10637:
-

Makes sense. +1

 rpcClient: Setup the iostream when doing the write
 --

 Key: HBASE-10637
 URL: https://issues.apache.org/jira/browse/HBASE-10637
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, hbase-10070

 Attachments: 10637.v1.patch


 Since HBASE-10525, we can write in a different thread than the client. This 
 allows the client thread to be interrupted w/o any impact on the shared tcp 
 connection. We should set up the iostream on the second thread as well, i.e. 
 when we do the write, and not when we do the getConnection.





[jira] [Comment Edited] (HBASE-10079) Race in TableName cache

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918383#comment-13918383
 ] 

Nicolas Liochon edited comment on HBASE-10079 at 3/3/14 6:35 PM:
-

It does not affect 0.96.0. It was introduced during development of 0.96.1 and 
was fixed in that version before being delivered.


was (Author: nkeywal):
It does not affect 96.0. It was introduced during the dev on 0.96.1 and was 
fixed h

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment a single col.  We 
 flush or do kills/kill -9 and data is lost.  Flush and kill are likely the 
 same problem (kill would flush); kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre-cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25 on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 != 25.  
 Correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.  1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  count = 246878 != 25.  
 Correctness failed.  1395 inc/s (including recovery)





[jira] [Commented] (HBASE-10079) Race in TableName cache

2014-03-03 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918383#comment-13918383
 ] 

Nicolas Liochon commented on HBASE-10079:
-

It does not affect 96.0. It was introduced during the dev on 0.96.1 and was 
fixed h

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment a single col.  We 
 flush or do kills/kill -9 and data is lost.  Flush and kill are likely the 
 same problem (kill would flush); kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre-cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25 on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 != 25.  
 Correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.  1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  count = 246878 != 25.  
 Correctness failed.  1395 inc/s (including recovery)





[jira] [Commented] (HBASE-9999) Add support for small reverse scan

2014-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918385#comment-13918385
 ] 

Ted Yu commented on HBASE-:
---

+1
{code}
+ * scan results, unless the results cross multiple regions or the row count of
+ * results excess the caching.
{code}
'excess' - 'exceed'

Please add stability annotation to ClientSmallReverseScanner.java

 Add support for small reverse scan
 --

 Key: HBASE-
 URL: https://issues.apache.org/jira/browse/HBASE-
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Nicolas Liochon
 Attachments: .v1.patch, .v2.patch


 HBASE-4811 adds the reverse scan feature. This JIRA adds support for small 
 reverse scan.
 This is activated when both the 'reversed' and 'small' attributes are true in 
 the Scan object.





[jira] [Commented] (HBASE-10191) Move large arena storage off heap

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918407#comment-13918407
 ] 

stack commented on HBASE-10191:
---

[~mcorgan]

bq.  It's basically creating small in-memory HFiles that can be compacted 
several times in memory without going to disk, and holding on to the WAL 
entries until they do go to disk.

Pardon dumb questions: creating small in-memory HFiles... from a small 
CSLM that does the sort for us?  Or, I remember talking to Martin Thompson once 
trying to ask how he'd go about the MemStore 'problem' and I'm sure he didn't 
follow what I was on about (I was doing a crappy job explaining, I'm sure), but 
other than his usual adage of try everything and measure, he suggested just 
trying a sort on the fly... Are you thinking the same, Matt?  So we'd keep 
around Cells and then, once we had a batch or after some nanos had elapsed, 
we'd do a merge sort w/ the current set of in-memory edits, put in place 
the new sorted 'in-memory-hfile', and up the mvcc read point so it was readable? 
Once they got to a certain size we'd do like we do now with snapshot and start 
up a new foreground set of edits to merge into?


bq. ...and holding on to the WAL entries until they do go to disk

What you thinking here?  Would be good if the WAL system was not related to the 
MemStore system (though chatting w/ [~liyin] recently, he had an idea that 
would make the WAL sync more 'live' if WAL sync updated mvcc (mvcc and seqid 
being tied).

bq. Anoop, Ram, and I were throwing around ideas of making in-memory HFiles out 
of memstore snapshots

Would be sweet if the value at least was not on heap. Sounds like a nice 
experiment, Andrew.




 Move large arena storage off heap
 -

 Key: HBASE-10191
 URL: https://issues.apache.org/jira/browse/HBASE-10191
 Project: HBase
  Issue Type: Umbrella
Reporter: Andrew Purtell

 Even with the improved G1 GC in Java 7, Java processes that want to address 
 large regions of memory while also providing low high-percentile latencies 
 continue to be challenged. Fundamentally, a Java server process that has high 
 data throughput and also tight latency SLAs will be stymied by the fact that 
 the JVM does not provide a fully concurrent collector. There is simply not 
 enough throughput to copy data during GC under safepoint (all application 
 threads suspended) within available time bounds. This is increasingly an 
 issue for HBase users operating under dual pressures: 1. tight response SLAs, 
 2. the increasing amount of RAM available in commodity server 
 configurations, because GC load is roughly proportional to heap size.
 We can address this using parallel strategies. We should talk with the Java 
 platform developer community about the possibility of a fully concurrent 
 collector appearing in OpenJDK somehow. Set aside the question of if this is 
 too little too late, if one becomes available the benefit will be immediate 
 though subject to qualification for production, and transparent in terms of 
 code changes. However in the meantime we need an answer for Java versions 
 already in production. This requires we move the large arena allocations off 
 heap, those being the blockcache and memstore. On other JIRAs recently there 
 has been related discussion about combining the blockcache and memstore 
 (HBASE-9399) and on flushing memstore into blockcache (HBASE-5311), which is 
 related work. We should build off heap allocation for memstore and 
 blockcache, perhaps a unified pool for both, and plumb through zero copy 
 direct access to these allocations (via direct buffers) through the read and 
 write I/O paths. This may require the construction of classes that provide 
 object views over data contained within direct buffers. This is something 
 else we could talk with the Java platform developer community about - it 
 could be possible to provide language level object views over off heap 
 memory, on heap objects could hold references to objects backed by off heap 
 memory but not vice versa, maybe facilitated by new intrinsics in Unsafe. 
 Again we need an answer for today also. We should investigate what existing 
 libraries may be available in this regard. Key will be avoiding 
 marshalling/unmarshalling costs. At most we should be copying primitives out 
 of the direct buffers to register or stack locations until finally copying 
 data to construct protobuf Messages. A related issue there is HBASE-9794, 
 which proposes scatter-gather access to KeyValues when constructing RPC 
 messages. We should see how far we can get with that and also zero copy 
 construction of protobuf Messages backed by direct buffer allocations. Some 
 amount of native code may be required.
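The "object views over data contained within direct buffers" idea can be sketched as a flyweight that reads primitives straight out of off-heap memory, avoiding a per-record on-heap object. The class name and record layout below are invented for illustration and are not an HBase API:

```java
import java.nio.ByteBuffer;

// A view over fixed-size records stored in a direct (off-heap) buffer.
// Reads copy primitives to the stack; no per-record object is allocated.
public class OffHeapRecordView {
    static final int RECORD_SIZE = Long.BYTES + Integer.BYTES; // invented layout

    private final ByteBuffer buf;

    OffHeapRecordView(ByteBuffer buf) { this.buf = buf; }

    long timestamp(int index) { return buf.getLong(index * RECORD_SIZE); }
    int value(int index)      { return buf.getInt(index * RECORD_SIZE + Long.BYTES); }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(2 * RECORD_SIZE);
        buf.putLong(1000L).putInt(42);  // record 0
        buf.putLong(2000L).putInt(7);   // record 1
        OffHeapRecordView view = new OffHeapRecordView(buf);
        System.out.println(view.value(0)); // prints 42
    }
}
```

Absolute `get` calls keep the buffer position untouched, so many reader threads can share one view; the marshalling cost is limited to copying primitives out of the buffer, as the description above suggests.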





[jira] [Commented] (HBASE-8065) bulkload can load the hfile into hbase table,but this mechanism can't remove prior data

2014-03-03 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918412#comment-13918412
 ] 

Nick Dimiduk commented on HBASE-8065:
-

HBASE-5525 provides the {{truncate_preserve}} command that does just what I 
suggested. It's best to use that feature instead. Does it satisfy your need?

 bulkload can load the hfile into hbase table,but this mechanism can't remove 
 prior data
 ---

 Key: HBASE-8065
 URL: https://issues.apache.org/jira/browse/HBASE-8065
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC, mapreduce, regionserver
Affects Versions: 0.94.0
 Environment: hadoop-1.0.2, hbase-0.94.0
Reporter: Yuan Kang
Assignee: Yuan Kang
Priority: Critical
 Attachments: LoadIncrementalHFiles-bulkload-can-clean-olddata.patch


 This patch adds one more parameter to bulkload, 'need to refresh'. When this 
 parameter is true, bulkload cleans the old data in the HBase table and then 
 loads the new data.





[jira] [Commented] (HBASE-10514) Forward port HBASE-10464 and HBASE-10466

2014-03-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918418#comment-13918418
 ] 

Lars Hofhansl commented on HBASE-10514:
---

ping :)

 Forward port HBASE-10464 and HBASE-10466
 

 Key: HBASE-10514
 URL: https://issues.apache.org/jira/browse/HBASE-10514
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Critical
 Fix For: 0.96.2, 0.94.18


 Critical data loss issues that we need to ensure are not in branches beyond 
 0.89fb.  Assigning myself.





[jira] [Commented] (HBASE-10514) Forward port HBASE-10464 and HBASE-10466

2014-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918422#comment-13918422
 ] 

stack commented on HBASE-10514:
---

Thanks [~lhofhansl] Pinging myself now.

 Forward port HBASE-10464 and HBASE-10466
 

 Key: HBASE-10514
 URL: https://issues.apache.org/jira/browse/HBASE-10514
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Critical
 Fix For: 0.96.2, 0.94.18


 Critical data loss issues that we need to ensure are not in branches beyond 
 0.89fb.  Assigning myself.





[jira] [Commented] (HBASE-10639) Unload script displays wrong counts (off by one) when unloading regions

2014-03-03 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918436#comment-13918436
 ] 

Jonathan Hsieh commented on HBASE-10639:


looks good to me.  Thanks Srikanth!  committed to 0.96/0.98/trunk.

 Unload script displays wrong counts (off by one) when unloading regions 
 

 Key: HBASE-10639
 URL: https://issues.apache.org/jira/browse/HBASE-10639
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Priority: Trivial
 Fix For: 0.98.1, 0.99.0

 Attachments: hbase_10639.patch


 Upon running an unload command, such as:
 hbase org.jruby.Main /usr/lib/hbase/bin/region_mover.rb unload `hostname`, 
 the region counter is zero-indexed, so while regions are being moved they 
 are counted from 0 instead of 1.





[jira] [Resolved] (HBASE-10639) Unload script displays wrong counts (off by one) when unloading regions

2014-03-03 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh resolved HBASE-10639.


   Resolution: Fixed
Fix Version/s: 0.96.2
 Assignee: Srikanth Srungarapu
 Hadoop Flags: Reviewed

 Unload script displays wrong counts (off by one) when unloading regions 
 

 Key: HBASE-10639
 URL: https://issues.apache.org/jira/browse/HBASE-10639
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: hbase_10639.patch


 Upon running an unload command, such as:
 hbase org.jruby.Main /usr/lib/hbase/bin/region_mover.rb unload `hostname`, 
 the region counter is zero-indexed, so while regions are being moved they 
 are counted from 0 instead of 1.





[jira] [Commented] (HBASE-10609) Remove filterKeyValue(Cell ignored) from FilterBase

2014-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918461#comment-13918461
 ] 

Ted Yu commented on HBASE-10609:


Thanks for the review, Lars.

Will integrate later today.

 Remove filterKeyValue(Cell ignored) from FilterBase
 ---

 Key: HBASE-10609
 URL: https://issues.apache.org/jira/browse/HBASE-10609
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.99.0

 Attachments: 10609-v1.txt


 FilterBase.java has been marked @InterfaceAudience.Private since 0.96
 You can find background in HBASE-10485: PrefixFilter#filterKeyValue() should 
 perform filtering on row key
 Dropping filterKeyValue(Cell ignored) would let developers make a 
 conscientious decision on when ReturnCode.INCLUDE should be returned.
 Here is the thread on dev@ mailing list:
 http://search-hadoop.com/m/DHED4l8JBI1





[jira] [Commented] (HBASE-10635) thrift#TestThriftServer fails due to TTL validity check

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918477#comment-13918477
 ] 

Hudson commented on HBASE-10635:


FAILURE: Integrated in HBase-TRUNK #4972 (See 
[https://builds.apache.org/job/HBase-TRUNK/4972/])
HBASE-10635 thrift#TestThriftServer fails due to TTL validity check (enis: rev 
1573447)
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java


 thrift#TestThriftServer fails due to TTL validity check
 ---

 Key: HBASE-10635
 URL: https://issues.apache.org/jira/browse/HBASE-10635
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Enis Soztutar
 Fix For: 0.99.0

 Attachments: hbase-10635_v1.patch


 From 
 https://builds.apache.org/job/HBase-TRUNK/4960/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServer/testAll/
  :
 {code}
 IOError(message:org.apache.hadoop.hbase.DoNotRetryIOException: TTL for column 
 family columnA  must be positive. Set hbase.table.sanity.checks to false at 
 conf or table descriptor if you want to bypass sanity checks
   at 
 org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1824)
   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1750)
   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1876)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40470)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2016)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
   at 
 org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 )
   at 
 org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.createTable(ThriftServerRunner.java:971)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.createTestTables(TestThriftServer.java:224)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.doTestTableCreateDrop(TestThriftServer.java:140)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.doTestTableCreateDrop(TestThriftServer.java:136)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.testAll(TestThriftServer.java:115)
 {code}
 Looks like ColumnDescriptor contains TTL of -1.





[jira] [Created] (HBASE-10659) [89-fb] Optimize the threading model in HBase write path

2014-03-03 Thread Liyin Tang (JIRA)
Liyin Tang created HBASE-10659:
--

 Summary: [89-fb] Optimize the threading model in HBase write path
 Key: HBASE-10659
 URL: https://issues.apache.org/jira/browse/HBASE-10659
 Project: HBase
  Issue Type: New Feature
Reporter: Liyin Tang


Recently, we have done multiple prototypes to optimize the HBase (0.89) write 
path. Based on the simulator results, the following model is able to achieve 
much higher overall throughput with fewer threads.

IPC Writer Threads Pool: 
IPC handler threads prepare all Put requests and append the WALEdit, as one 
transaction, into a concurrent collection while holding a read lock, and then 
return immediately.

HLogSyncer Thread:
Each HLogSyncer thread corresponds to one HLog stream. It swaps the concurrent 
collection under a write lock, then iterates over all the elements in the 
previous collection, generates the sequence id for each transaction, and 
writes to the HLog. After the HLog sync is done, it appends these transactions 
as a batch into a blocking queue. 

Memstore Update Thread:
The memstore update thread polls the blocking queue and updates the memstore 
for each transaction, using the sequence id for MVCC. Once the memstore update 
is done, it dispatches to the responder thread pool to return to the client.

Responder Thread Pool: 
The responder thread pool returns the RPC calls in parallel. 

We are still evaluating this model and will share more results/numbers once it 
is ready, but we would really appreciate any comments in advance!






[jira] [Updated] (HBASE-10079) Race in TableName cache

2014-03-03 Thread Cosmin Lehene (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cosmin Lehene updated HBASE-10079:
--

Affects Version/s: (was: 0.96.1)

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment a single col.  We 
 flush or do kills/kill -9 and data is lost.  Flush and kill are likely the 
 same problem (kill would flush); kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre-cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25 on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 != 25.  
 Correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.  1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  count = 246878 != 25.  
 Correctness failed.  1395 inc/s (including recovery)





[jira] [Commented] (HBASE-10079) Race in TableName cache

2014-03-03 Thread Cosmin Lehene (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918497#comment-13918497
 ] 

Cosmin Lehene commented on HBASE-10079:
---

Thanks [~liochon], I removed the affects version. 

 Race in TableName cache
 ---

 Key: HBASE-10079
 URL: https://issues.apache.org/jira/browse/HBASE-10079
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.1, 0.99.0

 Attachments: 10079.v1.patch, hbase-10079-addendum.patch, 
 hbase-10079.v2.patch


 Testing 0.96.1rc1.
 With one process incrementing a row in a table, we increment a single col.  We 
 flush or do kills/kill -9 and data is lost.  Flush and kill are likely the 
 same problem (kill would flush); kill -9 may or may not have the same root 
 cause.
 5 nodes
 hadoop 2.1.0 (a pre-cdh5b1 hdfs).
 hbase 0.96.1 rc1 
 Test: 25 increments on a single row and single col with various numbers of 
 client threads (IncrementBlaster).  Verify we have a count of 25 after 
 the run (IncrementVerifier).
 Run 1: No fault injection.  5 runs.  count = 25 on multiple runs.  
 Correctness verified.  1638 inc/s throughput.
 Run 2: flushes table with incrementing row.  count = 246875 != 25.  
 Correctness failed.  1517 inc/s throughput.  
 Run 3: kill of rs hosting incremented row.  count = 243750 != 25. 
 Correctness failed.  1451 inc/s throughput.
 Run 4: one kill -9 of rs hosting incremented row.  count = 246878 != 25.  
 Correctness failed.  1395 inc/s (including recovery)





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918534#comment-13918534
 ] 

Hadoop QA commented on HBASE-10018:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632321/10018.v4.patch
  against trunk revision .
  ATTACHMENT ID: 12632321

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8868//console

This message is automatically generated.

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Updated] (HBASE-10632) Region lost in limbo after ArrayIndexOutOfBoundsException during assignment

2014-03-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10632:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this to trunk, 0.98, 0.96 and hbase-10070 branches. Thanks for 
reviews. 

 Region lost in limbo after ArrayIndexOutOfBoundsException during assignment
 ---

 Key: HBASE-10632
 URL: https://issues.apache.org/jira/browse/HBASE-10632
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: hbase-10070
Reporter: Nick Dimiduk
Assignee: Enis Soztutar
 Fix For: 0.96.2, 0.98.1, 0.99.0, hbase-10070

 Attachments: hbase-10632_v1.patch


 Discovered while running IntegrationTestBigLinkedList. Region 
 24d68aa7239824e42390a77b7212fcbf is scheduled for move from hor13n19 to 
 hor13n13. During the process an exception is thrown.
 {noformat}
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 master.RegionStates: Transitioning {24d68aa7239824e42390a77b7212fcbf 
 state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} will be handled by SSH 
 for hor13n19.gq1.ygridcore.net,60020,1393341563552
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 handler.ServerShutdownHandler: Reassigning 7 region(s) that 
 hor13n19.gq1.ygridcore.net,60020,1393341563552 was carrying (and 0 regions(s) 
 that were opening on this server)
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 handler.ServerShutdownHandler: Reassigning region with rs = 
 {24d68aa7239824e42390a77b7212fcbf state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} and deleting zk node 
 if exists
 2014-02-25 15:30:42,623 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 master.RegionStates: Transitioned {24d68aa7239824e42390a77b7212fcbf 
 state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} to 
 {24d68aa7239824e42390a77b7212fcbf state=OFFLINE, ts=1393342242623, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 2014-02-25 15:30:42,623 DEBUG [AM.ZK.Worker-pool2-t46] 
 master.AssignmentManager: Znode 
 IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.
  deleted, state: {24d68aa7239824e42390a77b7212fcbf state=OFFLINE, 
 ts=1393342242623, server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 ...
 2014-02-25 15:30:43,993 ERROR [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 executor.EventHandler: Caught throwable while processing event 
 M_SERVER_SHUTDOWN
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.init(BaseLoadBalancer.java:250)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.createCluster(BaseLoadBalancer.java:921)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.roundRobinAssignment(BaseLoadBalancer.java:860)
   at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2482)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:282)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}
 After that, region is left in limbo and is never reassigned.
 {noformat}
 2014-02-25 15:35:11,581 INFO  [FifoRpcScheduler.handler1-thread-6] 
 master.HMaster: Client=hrt_qa//68.142.246.29 move 
 hri=IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.,
  src=hor13n19.gq1.ygridcore.net,60020,1393341563552, 
 dest=hor13n13.gq1.ygridcore.net,60020,139334275, running balancer
 2014-02-25 15:35:11,581 INFO  [FifoRpcScheduler.handler1-thread-6] 
 master.AssignmentManager: Ignored moving region not assigned: {ENCODED = 
 24d68aa7239824e42390a77b7212fcbf, NAME = 
 'IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.',
  STARTKEY = '\x80\x06\x1A', ENDKEY = ''}, {24d68aa7239824e42390a77b7212fcbf 
 state=OFFLINE, ts=1393342242623, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 ...
 2014-02-25 15:35:26,586 DEBUG 
 [hor13n12.gq1.ygridcore.net,6,1393341917402-BalancerChore] 
 master.HMaster: Not running balancer because 1 region(s) in transition: 
 {24d68aa7239824e42390a77b7212fcbf={24d68aa7239824e42390a77b7212fcbf 
 state=OFFLINE, ts=1393342242623, 
 

[jira] [Updated] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-03 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9708:
---

Attachment: HBASE-9708.1.patch

Attached the same patch with the line lengths fixed; waiting for a Jenkins run 
before committing it.

 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Priority: Minor
 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 "User-space table names" instead of "Snapshot names". The error message 
 should say "Snapshot names can only contain..."
 Here is an example of the output:
 {noformat}
 hbase(main):001:0> snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character <32> at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}
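A sketch of the message fix being discussed: the validator names "Snapshot" or "Table" depending on what is being checked, instead of always saying "User-space table names". This is an illustrative re-implementation (class and method names invented), not the actual HBase validator.

```java
// Hypothetical qualifier check: same character rule, but the error message
// is parameterized on whether a snapshot or a table name is being validated.
public class QualifierCheck {
    public static void checkName(String name, boolean isSnapshot) {
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            boolean ok = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                      || (c >= '0' && c <= '9') || c == '_' || c == '-' || c == '.';
            if (!ok) {
                throw new IllegalArgumentException("Illegal character <" + (int) c
                    + "> at " + i + ". " + (isSnapshot ? "Snapshot" : "Table")
                    + " names can only contain 'word characters': i.e. [a-zA-Z_0-9-.]: "
                    + name);
            }
        }
    }

    public static void main(String[] args) {
        try {
            checkName("asdf asdf", true);  // space at index 4 is rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```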





[jira] [Commented] (HBASE-10191) Move large arena storage off heap

2014-03-03 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918632#comment-13918632
 ] 

Matt Corgan commented on HBASE-10191:
-

{quote}creating small in-memory HFiles... – from a small CSLM that does the 
sort for us?{quote}Yes, that is all I meant. The CSLM would remain small 
because it gets flushed more often. I don't doubt there are better ways to do 
it than the CSLM (like the deferred sorting you mention), but even just 
shrinking the size of the CSLM would be an improvement without having to 
re-think the memstore's concurrency mechanisms.

Let's say you have a 500MB memstore limit, and that encodes (not compresses) to 
100MB.  You could:
* split it into 10 stripes, each with ~50MB limit, and flush each of the 10 
stripes (to memory) individually
** you probably have a performance boost already because 10 50MB CSLMs is 
better than 1 500MB CSLM
* for a given stripe, flush the CSLM each time it reaches 25MB, which will spit 
out a 5MB encoded memory hfile to the off-heap storage
* optionally compact a stripe's memory hfiles in the background to increase 
read performance
* when a stripe has 25MB CSLM + 5 encoded snapshots, flush/compact the whole 
thing to disk
* release the WAL entries for the stripe

On the WAL entries, I was just pointing out that you can no longer release the 
WAL entries when you flush the CSLM. You have to hold on to the WAL entries 
until you flush the memory hfiles to disk.
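The stripe scheme above can be sketched in a few lines. This toy uses cell counts instead of megabytes, and all names are illustrative rather than HBase classes: each stripe flushes its small CSLM into an immutable in-memory segment at a low threshold, and flushes the whole stripe to disk once enough segments accumulate (the point at which its WAL entries could finally be released).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

// One stripe of a hypothetical striped memstore.
public class StripeSketch {
    final ConcurrentSkipListMap<String, String> cslm = new ConcurrentSkipListMap<>();
    final List<List<String>> segments = new ArrayList<>(); // encoded in-memory snapshots
    int diskFlushes = 0;

    static final int CSLM_LIMIT = 4;     // stand-in for the 25MB CSLM cap
    static final int SEGMENT_LIMIT = 3;  // stand-in for "N encoded snapshots"

    void put(String row, String value) {
        cslm.put(row, value);
        if (cslm.size() >= CSLM_LIMIT) {
            // flush the small CSLM into an immutable sorted segment (stays in memory)
            segments.add(new ArrayList<>(cslm.keySet()));
            cslm.clear();
            if (segments.size() >= SEGMENT_LIMIT) {
                // flush/compact the whole stripe to disk; only now could the
                // stripe's WAL entries be released
                segments.clear();
                diskFlushes++;
            }
        }
    }

    public static void main(String[] args) {
        StripeSketch stripe = new StripeSketch();
        for (int i = 0; i < 12; i++) {
            stripe.put("row" + i, "v" + i);
        }
        System.out.println("disk flushes: " + stripe.diskFlushes);
    }
}
```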

 Move large arena storage off heap
 -

 Key: HBASE-10191
 URL: https://issues.apache.org/jira/browse/HBASE-10191
 Project: HBase
  Issue Type: Umbrella
Reporter: Andrew Purtell

 Even with the improved G1 GC in Java 7, Java processes that want to address 
 large regions of memory while also providing low high-percentile latencies 
 continue to be challenged. Fundamentally, a Java server process that has high 
 data throughput and also tight latency SLAs will be stymied by the fact that 
 the JVM does not provide a fully concurrent collector. There is simply not 
 enough throughput to copy data during GC under safepoint (all application 
 threads suspended) within available time bounds. This is increasingly an 
 issue for HBase users operating under dual pressures: 1. tight response SLAs, 
 2. the increasing amount of RAM available in commodity server 
 configurations, because GC load is roughly proportional to heap size.
 We can address this using parallel strategies. We should talk with the Java 
 platform developer community about the possibility of a fully concurrent 
 collector appearing in OpenJDK somehow. Set aside the question of if this is 
 too little too late, if one becomes available the benefit will be immediate 
 though subject to qualification for production, and transparent in terms of 
 code changes. However in the meantime we need an answer for Java versions 
 already in production.
 This requires we move the large arena allocations off heap, those being the 
 blockcache and memstore. On other JIRAs recently there has been related 
 discussion about combining the blockcache and memstore (HBASE-9399) and on 
 flushing memstore into blockcache (HBASE-5311), which is related work. We 
 should build off heap allocation for memstore and blockcache, perhaps a 
 unified pool for both, and plumb through zero copy direct access to these 
 allocations (via direct buffers) through the read and write I/O paths. This 
 may require the construction of classes that provide object views over data 
 contained within direct buffers. This is something else we could talk with 
 the Java platform developer community about - it could be possible to provide 
 language level object views over off heap memory, on heap objects could hold 
 references to objects backed by off heap memory but not vice versa, maybe 
 facilitated by new intrinsics in Unsafe.
 Again we need an answer for today also. We should investigate what existing 
 libraries may be available in this regard. Key will be avoiding 
 marshalling/unmarshalling costs. At most we should be copying primitives out 
 of the direct buffers to register or stack locations until finally copying 
 data to construct protobuf Messages. A related issue there is HBASE-9794, 
 which proposes scatter-gather access to KeyValues when constructing RPC 
 messages. We should see how far we can get with that and also zero copy 
 construction of protobuf Messages backed by direct buffer allocations. Some 
 amount of native code may be required.
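The "object view over a direct buffer" idea can be sketched as follows: the cell's fields stay in off-heap memory, and accessors copy primitives to the stack on demand with no on-heap deserialization. The layout here (an int key length followed by a long timestamp) and the class name are invented for the example; this is not an HBase class.

```java
import java.nio.ByteBuffer;

// A view object over a cell stored in a direct (off-heap) ByteBuffer.
public class OffHeapCellView {
    private final ByteBuffer buf;  // direct buffer holding the cell bytes
    private final int offset;      // start of this cell within the buffer

    public OffHeapCellView(ByteBuffer buf, int offset) {
        this.buf = buf;
        this.offset = offset;
    }

    // Absolute gets copy a primitive out of the buffer; no object is created.
    public int keyLength()  { return buf.getInt(offset); }
    public long timestamp() { return buf.getLong(offset + 4); }

    // Build a sample cell in off-heap memory (for demonstration only).
    public static OffHeapCellView sample(int keyLen, long ts) {
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        direct.putInt(0, keyLen);
        direct.putLong(4, ts);
        return new OffHeapCellView(direct, 0);
    }

    public static void main(String[] args) {
        OffHeapCellView v = sample(3, 1393342207107L);
        System.out.println(v.keyLength() + " " + v.timestamp());
    }
}
```

The on-heap object here holds only a reference to the off-heap bytes, never the other way around, which is the ownership direction the comment above proposes.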





[jira] [Commented] (HBASE-10639) Unload script displays wrong counts (off by one) when unloading regions

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918670#comment-13918670
 ] 

Hudson commented on HBASE-10639:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #183 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/183/])
HBASE-10639 Unload script displays wrong counts (off by one) when unloading 
regions (Srikanth Srungarapu) (jmhsieh: rev 1573679)
* /hbase/branches/0.98/bin/region_mover.rb


 Unload script displays wrong counts (off by one) when unloading regions 
 

 Key: HBASE-10639
 URL: https://issues.apache.org/jira/browse/HBASE-10639
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: hbase_10639.patch


 Upon running an unload command such as 
 hbase org.jruby.Main /usr/lib/hbase/bin/region_mover.rb unload `hostname`, 
 the region counter starts at 0, so as regions are moved they are counted 
 from 0 instead of 1.
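The script itself is JRuby, but the off-by-one can be rendered in a few lines (Java here for consistency; names are illustrative, not the script's own): the loop index is 0-based, so the human-readable progress message should report index + 1.

```java
// Hypothetical rendering of the progress-message fix.
public class MoveCounter {
    public static String progress(int index, int total) {
        // index is 0-based internally; report a 1-based position to the operator
        return "Moving region " + (index + 1) + " of " + total;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(progress(i, 3)); // "Moving region 1 of 3" .. "3 of 3"
        }
    }
}
```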





[jira] [Commented] (HBASE-10639) Unload script displays wrong counts (off by one) when unloading regions

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918672#comment-13918672
 ] 

Hudson commented on HBASE-10639:


FAILURE: Integrated in HBase-0.98 #195 (See 
[https://builds.apache.org/job/HBase-0.98/195/])
HBASE-10639 Unload script displays wrong counts (off by one) when unloading 
regions (Srikanth Srungarapu) (jmhsieh: rev 1573679)
* /hbase/branches/0.98/bin/region_mover.rb


 Unload script displays wrong counts (off by one) when unloading regions 
 

 Key: HBASE-10639
 URL: https://issues.apache.org/jira/browse/HBASE-10639
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: hbase_10639.patch


 Upon running an unload command such as 
 hbase org.jruby.Main /usr/lib/hbase/bin/region_mover.rb unload `hostname`, 
 the region counter starts at 0, so as regions are moved they are counted 
 from 0 instead of 1.





[jira] [Commented] (HBASE-10451) Enable back Tag compression on HFiles

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918671#comment-13918671
 ] 

Hudson commented on HBASE-10451:


FAILURE: Integrated in HBase-0.98 #195 (See 
[https://builds.apache.org/job/HBase-0.98/195/])
HBASE-10451 Enable back Tag compression on HFiles.(Anoop) (anoopsamjohn: rev 
1573150)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
/hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileScannerWithTagCompression.java


 Enable back Tag compression on HFiles
 -

 Key: HBASE-10451
 URL: https://issues.apache.org/jira/browse/HBASE-10451
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10451.patch, HBASE-10451_V2.patch, 
 HBASE-10451_V3.patch, HBASE-10451_V4.patch, HBASE-10451_V5.patch, 
 HBASE-10451_V6.patch


 HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
 issues we have found out in HBASE-10443 and enable it back.





[jira] [Commented] (HBASE-10639) Unload script displays wrong counts (off by one) when unloading regions

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918678#comment-13918678
 ] 

Hudson commented on HBASE-10639:


SUCCESS: Integrated in hbase-0.96 #323 (See 
[https://builds.apache.org/job/hbase-0.96/323/])
HBASE-10639 Unload script displays wrong counts (off by one) when unloading 
regions (Srikanth Srungarapu) (jmhsieh: rev 1573680)
* /hbase/branches/0.96/bin/region_mover.rb


 Unload script displays wrong counts (off by one) when unloading regions 
 

 Key: HBASE-10639
 URL: https://issues.apache.org/jira/browse/HBASE-10639
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: hbase_10639.patch


 Upon running an unload command such as 
 hbase org.jruby.Main /usr/lib/hbase/bin/region_mover.rb unload `hostname`, 
 the region counter starts at 0, so as regions are moved they are counted 
 from 0 instead of 1.





[jira] [Commented] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918710#comment-13918710
 ] 

Hadoop QA commented on HBASE-9708:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12632361/HBASE-9708.1.patch
  against trunk revision .
  ATTACHMENT ID: 12632361

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  throw new IllegalArgumentException(isSnapshot ? "Snapshot" : "Table" 
+  " qualifier must not be empty");
+ " characters': i.e. [a-zA-Z_0-9]: " + 
Bytes.toString(qualifierName));

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8869//console

This message is automatically generated.

 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Priority: Minor
 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 "User-space table names" instead of "Snapshot names". The error message 
 should say "Snapshot names can only contain..."
 Here is an example of the output:
 {noformat}
 hbase(main):001:0> snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character <32> at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}





[jira] [Commented] (HBASE-10632) Region lost in limbo after ArrayIndexOutOfBoundsException during assignment

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918713#comment-13918713
 ] 

Hudson commented on HBASE-10632:


FAILURE: Integrated in HBase-TRUNK #4973 (See 
[https://builds.apache.org/job/HBase-TRUNK/4973/])
HBASE-10632 Region lost in limbo after ArrayIndexOutOfBoundsException during 
assignment (enis: rev 1573723)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java


 Region lost in limbo after ArrayIndexOutOfBoundsException during assignment
 ---

 Key: HBASE-10632
 URL: https://issues.apache.org/jira/browse/HBASE-10632
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: hbase-10070
Reporter: Nick Dimiduk
Assignee: Enis Soztutar
 Fix For: 0.96.2, 0.98.1, 0.99.0, hbase-10070

 Attachments: hbase-10632_v1.patch


 Discovered while running IntegrationTestBigLinkedList. Region 
 24d68aa7239824e42390a77b7212fcbf is scheduled for move from hor13n19 to 
 hor13n13. During the process an exception is thrown.
 {noformat}
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 master.RegionStates: Transitioning {24d68aa7239824e42390a77b7212fcbf 
 state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} will be handled by SSH 
 for hor13n19.gq1.ygridcore.net,60020,1393341563552
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 handler.ServerShutdownHandler: Reassigning 7 region(s) that 
 hor13n19.gq1.ygridcore.net,60020,1393341563552 was carrying (and 0 regions(s) 
 that were opening on this server)
 2014-02-25 15:30:42,613 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 handler.ServerShutdownHandler: Reassigning region with rs = 
 {24d68aa7239824e42390a77b7212fcbf state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} and deleting zk node 
 if exists
 2014-02-25 15:30:42,623 INFO  [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 master.RegionStates: Transitioned {24d68aa7239824e42390a77b7212fcbf 
 state=OPENING, ts=1393342207107, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552} to 
 {24d68aa7239824e42390a77b7212fcbf state=OFFLINE, ts=1393342242623, 
 server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 2014-02-25 15:30:42,623 DEBUG [AM.ZK.Worker-pool2-t46] 
 master.AssignmentManager: Znode 
 IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.
  deleted, state: {24d68aa7239824e42390a77b7212fcbf state=OFFLINE, 
 ts=1393342242623, server=hor13n19.gq1.ygridcore.net,60020,1393341563552}
 ...
 2014-02-25 15:30:43,993 ERROR [MASTER_SERVER_OPERATIONS-hor13n12:6-4] 
 executor.EventHandler: Caught throwable while processing event 
 M_SERVER_SHUTDOWN
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.init(BaseLoadBalancer.java:250)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.createCluster(BaseLoadBalancer.java:921)
   at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.roundRobinAssignment(BaseLoadBalancer.java:860)
   at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2482)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:282)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {noformat}
 After that, region is left in limbo and is never reassigned.
 {noformat}
 2014-02-25 15:35:11,581 INFO  [FifoRpcScheduler.handler1-thread-6] 
 master.HMaster: Client=hrt_qa//68.142.246.29 move 
 hri=IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.,
  src=hor13n19.gq1.ygridcore.net,60020,1393341563552, 
 dest=hor13n13.gq1.ygridcore.net,60020,139334275, running balancer
 2014-02-25 15:35:11,581 INFO  [FifoRpcScheduler.handler1-thread-6] 
 master.AssignmentManager: Ignored moving region not assigned: {ENCODED = 
 24d68aa7239824e42390a77b7212fcbf, NAME = 
 'IntegrationTestBigLinkedList,\x80\x06\x1A,1393342105093.24d68aa7239824e42390a77b7212fcbf.',
  STARTKEY = '\x80\x06\x1A', ENDKEY = ''}, {24d68aa7239824e42390a77b7212fcbf 
 state=OFFLINE, ts=1393342242623, 
 

[jira] [Commented] (HBASE-10639) Unload script displays wrong counts (off by one) when unloading regions

2014-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918712#comment-13918712
 ] 

Hudson commented on HBASE-10639:


FAILURE: Integrated in HBase-TRUNK #4973 (See 
[https://builds.apache.org/job/HBase-TRUNK/4973/])
HBASE-10639 Unload script displays wrong counts (off by one) when unloading 
regions (Srikanth Srungarapu) (jmhsieh: rev 1573678)
* /hbase/trunk/bin/region_mover.rb


 Unload script displays wrong counts (off by one) when unloading regions 
 

 Key: HBASE-10639
 URL: https://issues.apache.org/jira/browse/HBASE-10639
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: hbase_10639.patch


 Upon running an unload command such as 
 hbase org.jruby.Main /usr/lib/hbase/bin/region_mover.rb unload `hostname`, 
 the region counter starts at 0, so as regions are moved they are counted 
 from 0 instead of 1.





[jira] [Created] (HBASE-10660) MR over snapshots can OOM when alternative blockcache is enabled

2014-03-03 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-10660:


 Summary: MR over snapshots can OOM when alternative blockcache is 
enabled
 Key: HBASE-10660
 URL: https://issues.apache.org/jira/browse/HBASE-10660
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.0, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


Running {{IntegrationTestTableSnapshotInputFormat}} with the {{BucketCache}} 
enabled results in OOM. The map task is running a sequential scan over the 
region it opened, so it probably does not benefit much from the blockcache. 
Just disable the blockcache entirely for these scans, because the detected 
cache config is likely designed for an RS running on a different class of 
hardware than the machine running the map task.





[jira] [Updated] (HBASE-10660) MR over snapshots can OOM when alternative blockcache is enabled

2014-03-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10660:
-

Status: Patch Available  (was: Open)

 MR over snapshots can OOM when alternative blockcache is enabled
 

 Key: HBASE-10660
 URL: https://issues.apache.org/jira/browse/HBASE-10660
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.0, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-10660.00.patch


 Running {{IntegrationTestTableSnapshotInputFormat}} with the {{BucketCache}} 
 enabled results in OOM. The map task is running a sequential scan over the 
 region it opened, so it probably does not benefit much from the blockcache. 
 Just disable the blockcache entirely for these scans, because the detected 
 cache config is likely designed for an RS running on a different class of 
 hardware than the machine running the map task.





[jira] [Commented] (HBASE-10660) MR over snapshots can OOM when alternative blockcache is enabled

2014-03-03 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918738#comment-13918738
 ] 

Nick Dimiduk commented on HBASE-10660:
--

ping [~enis] [~lhofhansl]. Any thoughts on how the complete lack of a 
BlockCache will impact consumption of index/meta blocks for the purposes of 
this scan?

 MR over snapshots can OOM when alternative blockcache is enabled
 

 Key: HBASE-10660
 URL: https://issues.apache.org/jira/browse/HBASE-10660
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.0, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-10660.00.patch


 Running {{IntegrationTestTableSnapshotInputFormat}} with the {{BucketCache}} 
 enabled results in OOM. The map task is running a sequential scan over the 
 region it opened, so it probably does not benefit much from the blockcache. 
 Just disable the blockcache entirely for these scans, because the detected 
 cache config is likely designed for an RS running on a different class of 
 hardware than the machine running the map task.





[jira] [Updated] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-03 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9708:
---

Attachment: HBASE-9708.1.patch

 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Priority: Minor
 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 "User-space table names" instead of "Snapshot names". The error message 
 should say "Snapshot names can only contain..."
 Here is an example of the output:
 {noformat}
 hbase(main):001:0> snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character <32> at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}





[jira] [Updated] (HBASE-9708) Improve Snapshot Name Error Message

2014-03-03 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9708:
---

Attachment: (was: HBASE-9708.1.patch)

 Improve Snapshot Name Error Message
 ---

 Key: HBASE-9708
 URL: https://issues.apache.org/jira/browse/HBASE-9708
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.94.2
Reporter: Jesse Anderson
Priority: Minor
 Attachments: HBASE-9708.1.patch, HBASE-9708.1.patch


 The output for snapshots when you enter an invalid snapshot name talks about 
 "User-space table names" instead of "Snapshot names". The error message 
 should say "Snapshot names can only contain..."
 Here is an example of the output:
 {noformat}
 hbase(main):001:0> snapshot 'user', 'asdf asdf'
 ERROR: java.lang.IllegalArgumentException: Illegal character <32> at 4. 
 User-space table names can only contain 'word characters': i.e. 
 [a-zA-Z_0-9-.]: asdf asdf
 Here is some help for this command:
 Take a snapshot of specified table. Examples:
   hbase snapshot 'sourceTable', 'snapshotName'
 {noformat}





[jira] [Commented] (HBASE-10653) Incorrect table status in HBase shell Describe

2014-03-03 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918748#comment-13918748
 ] 

Nick Dimiduk commented on HBASE-10653:
--

Can you provide any context for this issue? Sequence of actions performed 
(failed or otherwise) in order to reproduce, relevant logs from master/RS, etc. 
There's not much to go on here.

 Incorrect table status in HBase shell Describe
 --

 Key: HBASE-10653
 URL: https://issues.apache.org/jira/browse/HBASE-10653
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Biju Nair
  Labels: HbaseShell, describe

 The describe output for a table that is disabled shows it as enabled.





[jira] [Updated] (HBASE-10635) thrift#TestThriftServer fails due to TTL validity check

2014-03-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10635:
--

Attachment: hbase-10635_v1-addendum.patch

Committed the addendum patch as well, which also fixes TestThriftServerCmdLine.



 thrift#TestThriftServer fails due to TTL validity check
 ---

 Key: HBASE-10635
 URL: https://issues.apache.org/jira/browse/HBASE-10635
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Enis Soztutar
 Fix For: 0.99.0

 Attachments: hbase-10635_v1-addendum.patch, hbase-10635_v1.patch


 From 
 https://builds.apache.org/job/HBase-TRUNK/4960/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServer/testAll/
  :
 {code}
 IOError(message:org.apache.hadoop.hbase.DoNotRetryIOException: TTL for column 
 family columnA  must be positive. Set hbase.table.sanity.checks to false at 
 conf or table descriptor if you want to bypass sanity checks
   at 
 org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1824)
   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1750)
   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1876)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40470)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2016)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
   at 
 org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 )
   at 
 org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.createTable(ThriftServerRunner.java:971)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.createTestTables(TestThriftServer.java:224)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.doTestTableCreateDrop(TestThriftServer.java:140)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.doTestTableCreateDrop(TestThriftServer.java:136)
   at 
 org.apache.hadoop.hbase.thrift.TestThriftServer.testAll(TestThriftServer.java:115)
 {code}
 Looks like ColumnDescriptor contains TTL of -1.
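The sanity check that trips here rejects non-positive TTLs at table creation, and the Thrift path was handing it a descriptor with TTL -1. A hypothetical sketch of that validation (names are illustrative, not HMaster's actual code):

```java
public class TtlSanityCheck {
    // Hypothetical stand-in for the master-side sanity check: a column
    // family's TTL must be positive; a default of -1 fails the check.
    static void checkTtl(String family, int ttlSeconds) {
        if (ttlSeconds <= 0) {
            throw new IllegalArgumentException(
                "TTL for column family " + family + " must be positive. "
                + "Set hbase.table.sanity.checks to false at conf or table "
                + "descriptor if you want to bypass sanity checks");
        }
    }

    public static void main(String[] args) {
        checkTtl("columnA", 86400);  // one day: passes
        checkTtl("columnA", -1);     // throws: the failing case in the test
    }
}
```

The fix in the patch is on the caller's side: create the test's column descriptors with a valid TTL rather than relaxing the check.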





[jira] [Updated] (HBASE-10635) thrift#TestThriftServer fails due to TTL validity check

2014-03-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10635:
--

Priority: Trivial  (was: Major)

 thrift#TestThriftServer fails due to TTL validity check
 ---

 Key: HBASE-10635
 URL: https://issues.apache.org/jira/browse/HBASE-10635
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Enis Soztutar
Priority: Trivial
 Fix For: 0.99.0

 Attachments: hbase-10635_v1-addendum.patch, hbase-10635_v1.patch






