[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072858#comment-14072858
 ] 

Hadoop QA commented on HBASE-11384:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657530/HBASE-11384_4.patch
  against trunk revision .
  ATTACHMENT ID: 12657530

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 44 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10172//console

This message is automatically generated.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate whether the expression has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.
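The check described here can be sketched as a standalone illustration. The class and method names and the label-extraction regex below are hypothetical, not HBase's actual VisibilityController API; it only shows the containment test the issue proposes:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: every label used in a mutation's visibility expression
// must be among the labels the user is authorized for, else the mutation fails.
public class VisibilityAuthCheck {

    // Returns true when every label in the expression is covered by userAuths.
    public static boolean coversAuthorizations(String expression, Set<String> userAuths) {
        // Split on the visibility-expression operators &, |, ! and parentheses
        // (a simplification of real expression parsing).
        for (String label : expression.split("[&|!()]+")) {
            if (!label.isEmpty() && !userAuths.contains(label)) {
                return false; // e.g. D is not associated with the user
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> auths = new HashSet<>(Arrays.asList("A", "B", "C"));
        System.out.println(coversAuthorizations("A&B", auths)); // true
        System.out.println(coversAuthorizations("A&D", auths)); // false -> fail the mutation
    }
}
```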



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11582) Fix Javadoc warning in DataInputInputStream and CacheConfig

2014-07-24 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved HBASE-11582.


Resolution: Invalid

Stack already fixed this Javadoc warning in HBASE-11573.  Reopen if needed 
again.

 Fix Javadoc warning in DataInputInputStream and CacheConfig
 ---

 Key: HBASE-11582
 URL: https://issues.apache.org/jira/browse/HBASE-11582
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.99.0


 Recent precommit builds show some javadoc warnings here:
 {code}
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/io/DataInputInputStream.java:32:
  warning - Tag @see: reference not found: DataOutputOutputStream
 [WARNING] 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java:99:
  warning - Tag @link: can't find BUCKET_CACHE_COMBINED_PERCENTAGE_KEY in 
 org.apache.hadoop.hbase.io.hfile.CacheConfig
 {code}





[jira] [Updated] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-24 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11516:


Attachment: (was: HBASE-11516.patch)

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch, 
 HBASE-11516_v3.patch, region_server_webui.png


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Updated] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-24 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11516:


Attachment: HBASE-11516.patch
region_server_webui.png

Attaching the screenshot of the region server web UI and a new patch as per 
suggestions.

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch, 
 HBASE-11516_v3.patch, region_server_webui.png


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Updated] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-24 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11516:


Attachment: HBASE-11516_v3.patch

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch, 
 HBASE-11516_v3.patch, region_server_webui.png


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Created] (HBASE-11583) Refactoring out the configuration changes for enabling VisbilityLabels in the unit tests.

2014-07-24 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-11583:
---

 Summary: Refactoring out the configuration changes for enabling 
VisbilityLabels in the unit tests.
 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch







[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisbilityLabels in the unit tests.

2014-07-24 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11583:


Attachment: HBASE-11583.patch

 Refactoring out the configuration changes for enabling VisbilityLabels in the 
 unit tests.
 -

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch








[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisbilityLabels in the unit tests.

2014-07-24 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11583:


Description: All the unit tests contain the code for enabling the 
visibility changes. Incorporating future configuration changes for Visibility 
Labels configuration can be made easier by refactoring them out to a single 
place.
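The refactoring described can be sketched as a single helper that applies every visibility-related setting in one place. The class name and the map standing in for the HBase Configuration are illustrative; the two coprocessor config keys are the standard HBase ones:

```java
import java.util.Map;

// Sketch of centralizing the visibility-labels test configuration in one
// helper, as proposed, so future configuration changes touch only this method.
public class VisibilityTestUtilSketch {
    public static void enableVisibilityLabels(Map<String, String> conf) {
        // Register the VisibilityController on both region servers and master.
        conf.put("hbase.coprocessor.region.classes",
            "org.apache.hadoop.hbase.security.visibility.VisibilityController");
        conf.put("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.security.visibility.VisibilityController");
    }
}
```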

 Refactoring out the configuration changes for enabling VisbilityLabels in the 
 unit tests.
 -

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.





[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072885#comment-14072885
 ] 

Anoop Sam John commented on HBASE-11384:


HTD#setCheckAuthsForMutation(boolean setCheckAuths)
Do we need a setter, or can it be set using a config alone?  If we have it as a 
config, then we can set it one time for all the tables in a cluster.  Also the 
config at cluster level can be overridden for a particular table by setting it 
using HTD#setValue()
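The cluster-default-with-per-table-override pattern suggested here can be sketched with plain maps standing in for the site Configuration and the table descriptor's key/value metadata (HTD#setValue); the key name and default are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the table-level value, when present, overrides the cluster default.
public class CheckAuthsSetting {
    static final String KEY = "hbase.security.visibility.mutations.checkauths";

    static boolean checkAuthsFor(Map<String, String> clusterConf,
                                 Map<String, String> tableValues) {
        String v = tableValues.get(KEY); // per-table override via HTD#setValue
        if (v == null) {
            v = clusterConf.getOrDefault(KEY, "true"); // proposed default: on
        }
        return Boolean.parseBoolean(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(KEY, "true");
        Map<String, String> table = new HashMap<>();
        table.put(KEY, "false"); // this table opts out of the check
        System.out.println(checkAuthsFor(conf, table));           // false
        System.out.println(checkAuthsFor(conf, new HashMap<>())); // true
    }
}
```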

We have to handle in IntegrationTestIngestWithVisibilityLabels?

{code}
+boolean checkAuths =
+    c.getEnvironment().getRegion().getTableDesc().getCheckAuthsForMutation();
{code}
Just have a boolean instance member in VC and init it on postOpen()?

{code}
if (!auths.contains(labelOrdinal)) {
+    throw new AccessDeniedException("Visibility label " + identifier
+        + " not associated with user " + userName);
+  }
{code}
Say "Visibility label " + identifier + " is not authorized for user " +
userName);
AccessDeniedException is okay?


{code}
+  public List<Integer> getAuthsAsOrdinals(String user) {
+    List<Integer> auths = EMPTY_INT_LIST;
+    this.lock.readLock().lock();
+    try {
+      Set<Integer> authOrdinals = userAuths.get(user);
+      if (authOrdinals != null) {
+        auths = new ArrayList<Integer>(authOrdinals.size());
+        for (Integer authOrdinal : authOrdinals) {
+          auths.add(authOrdinal);
+        }
{code}
Any reason why you want to return a List rather than a Set?  So that you will 
need a conversion here?



{code}
+  public static HTable createTable(HTableDescriptor htd, byte[] families,
+      Configuration c, HBaseTestingUtility util) throws IOException {
+    HColumnDescriptor hcd = new HColumnDescriptor(families);
+    // Disable blooms (they are on by default as of 0.95) but we disable them
+    // here because tests have hard coded counts of what to expect in block
+    // cache, etc., and blooms being on is interfering.
+    hcd.setBloomFilterType(BloomType.NONE);
{code}
Why pass Configuration when you can get the same from HBaseTestingUtility?
Are we asserting counts in block cache in Visibility tests?


{code}
+        table.put(p);
+      } catch (Throwable t) {
+        assertTrue(t.getMessage().contains("AccessDeniedException"));
+      } finally {
{code}
We should fail() after the table.put() call within the try block
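The concern can be illustrated with a minimal standalone sketch (plain Java in place of the actual JUnit test, names hypothetical): without a fail() equivalent right after the call that is expected to throw, the test passes vacuously when nothing is thrown at all:

```java
// Sketch of the expected-exception pattern being requested in the review.
public class ExpectedExceptionPattern {
    static void mustThrow() {
        throw new IllegalStateException("AccessDeniedException: denied");
    }

    static boolean runsCorrectly() {
        try {
            mustThrow();
            return false; // fail(): reaching here means no exception was thrown
        } catch (Throwable t) {
            // Only then validate the type of error, as the patch does.
            return t.getMessage().contains("AccessDeniedException");
        }
    }

    public static void main(String[] args) {
        System.out.println(runsCorrectly()); // true
    }
}
```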


By default we will have auth check for labels in Mutation visibility 
expression. Make sure to update the documentation and add this behaviour change 
in Release notes. (Mark this jira as Incompatible change?)


 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate whether the expression has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.





[jira] [Comment Edited] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072885#comment-14072885
 ] 

Anoop Sam John edited comment on HBASE-11384 at 7/24/14 6:30 AM:
-

HTD#setCheckAuthsForMutation(boolean setCheckAuths)
Do we need a setter, or can it be set using a config alone?  If we have it as a 
config, then we can set it one time for all the tables in a cluster.  Also the 
config at cluster level can be overridden for a particular table by setting it 
using HTD#setValue()

We have to handle in IntegrationTestIngestWithVisibilityLabels?

{code}
+boolean checkAuths =
+    c.getEnvironment().getRegion().getTableDesc().getCheckAuthsForMutation();
{code}
Just have a boolean instance member in VC and init it on postOpen()?

{code}
if (!auths.contains(labelOrdinal)) {
+    throw new AccessDeniedException("Visibility label " + identifier
+        + " not associated with user " + userName);
+  }
{code}
Say "Visibility label " + identifier + " is not authorized for user " +
userName);
AccessDeniedException is okay?


{code}
+  public List<Integer> getAuthsAsOrdinals(String user) {
+    List<Integer> auths = EMPTY_INT_LIST;
+    this.lock.readLock().lock();
+    try {
+      Set<Integer> authOrdinals = userAuths.get(user);
+      if (authOrdinals != null) {
+        auths = new ArrayList<Integer>(authOrdinals.size());
+        for (Integer authOrdinal : authOrdinals) {
+          auths.add(authOrdinal);
+        }
{code}
Any reason why you want to return a List rather than a Set?  So that you will 
need a conversion here?



{code}
+  public static HTable createTable(HTableDescriptor htd, byte[] families,
+      Configuration c, HBaseTestingUtility util) throws IOException {
+    HColumnDescriptor hcd = new HColumnDescriptor(families);
+    // Disable blooms (they are on by default as of 0.95) but we disable them
+    // here because tests have hard coded counts of what to expect in block
+    // cache, etc., and blooms being on is interfering.
+    hcd.setBloomFilterType(BloomType.NONE);
{code}
Why pass Configuration when you can get the same from HBaseTestingUtility?
Are we asserting counts in block cache in Visibility tests?


{code}
+        table.put(p);
+      } catch (Throwable t) {
+        assertTrue(t.getMessage().contains("AccessDeniedException"));
+      } finally {
{code}
We should fail() after the table.put() call within try block


By default we will have auth check for labels in Mutation visibility 
expression. Make sure to update the documentation and add this behavior change 
in Release notes. (Mark this jira as Incompatible change?)



was (Author: anoop.hbase):
HTD#setCheckAuthsForMutation(boolean setCheckAuths)
Do we need a setter or it can be set using a config alone?  If we have it as a 
config, then we can set it one time for all the tables in a cluster.  Also the 
config at cluster level can be overriden for a particular table by setting it 
using HTD#setValue()

We have to handle in IntegrationTestIngestWithVisibilityLabels?

+boolean checkAuths = 
c.getEnvironment().getRegion().getTableDesc().getCheckAuthsForMutation();
Just have a boolean instance member in VC and init it on postOPen()?

{code}
if (!auths.contains(labelOrdinal)) {
+    throw new AccessDeniedException("Visibility label " + identifier
+        + " not associated with user " + userName);
+  }
{code}
Say "Visibility label " + identifier + " is not authorized for user " +
userName);
AccessDeniedException is okey?


{code}
+  public List<Integer> getAuthsAsOrdinals(String user) {
+    List<Integer> auths = EMPTY_INT_LIST;
+    this.lock.readLock().lock();
+    try {
+      Set<Integer> authOrdinals = userAuths.get(user);
+      if (authOrdinals != null) {
+        auths = new ArrayList<Integer>(authOrdinals.size());
+        for (Integer authOrdinal : authOrdinals) {
+          auths.add(authOrdinal);
+        }
{code}
Any reason why you want to return List than Set?  So that u will need a 
convertion here?



{code}
+  public static HTable createTable(HTableDescriptor htd, byte[] families, 
Configuration c,
+  HBaseTestingUtility util) throws IOException {
+HColumnDescriptor hcd = new HColumnDescriptor(families);
+// Disable blooms (they are on by default as of 0.95) but we disable them
+// here because
+// tests have hard coded counts of what to expect in block cache, etc., and
+// blooms being
+// on is interfering.
+hcd.setBloomFilterType(BloomType.NONE);
{code}
Why pass Configuration when you can get the same from HBaseTestingUtility?
Are we asserting counts in block cache in Visibility tests?


{code}
+        table.put(p);
+      } catch (Throwable t) {
+        assertTrue(t.getMessage().contains("AccessDeniedException"));
+      } finally {
{code}
We shuld fail() after the table.put() call within try block


By default 

[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072893#comment-14072893
 ] 

ramkrishna.s.vasudevan commented on HBASE-11384:


bq.HTD#setCheckAuthsForMutation(boolean setCheckAuths)
We can have cluster level also, fine, but if we allow HTD.setValue() then we 
have to expose that config outside.  Making the default true would mean that 
the check is on by default. 
bq.We have to handle in IntegrationTestIngestWithVisibilityLabels?
I checked this and found that it is calling LoadTestTool.  That is why I 
changed it in LTT. Does that make sense?
bq.Just have a boolean instance member in VC and init it on postOpen()?
Okie. 
bq.AccessDeniedException is okey?
Previous comment from Andy suggested that to be AccessDenied.  Hence changed 
it. Changing to authorized is fine with me in the comment. 
bq.Why pass Configuration when you can get the same from HBaseTestingUtility?
Will remove the configuration. Initially I did not pass the TestingUtility; 
I added it later.
Will remove the copy-paste issue in the comment. 
bq.We should fail() after the table.put() call within try block
The intention was that we would definitely get an exception, so I wanted to 
validate the type of error alone. Fine with adding a fail() also.
bq.By default we will have auth check for labels in Mutation visibility 
expression
Yes. Fine with updating the documentation.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate whether the expression has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.





[jira] [Updated] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11527:
---

Fix Version/s: (was: 0.99.0)

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch








[jira] [Updated] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11527:
---

Attachment: HBASE-11527.patch

Avoid some code duplication also.

Ping [~stack]

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch








[jira] [Updated] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11527:
---

Affects Version/s: (was: 0.99.0)

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch








[jira] [Updated] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11527:
---

Status: Patch Available  (was: Open)

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch








[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072908#comment-14072908
 ] 

Anoop Sam John commented on HBASE-11384:


bq.The intention was that we would definitely get an exception, so I wanted to 
validate the type of error alone. Fine with adding a fail() also.
Ya. If we don't have the fail(), then the test won't fail even without this 
patch!!

bq.but if we allow HTD.setValue() then we have to expose that config outside.
I think mostly one wants the setting to be the same for all tables in a 
cluster. That is why I would +1 a config setting. Even if one needs a 
table-level setting, from 0.96 onwards we have a way.  So see whether we 
really want a setter in HTD

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate whether the expression has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.





[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072910#comment-14072910
 ] 

Anoop Sam John commented on HBASE-11384:


bq.Previous comment from Andy suggested that to be AccessDenied. Hence changed 
it.
NP. Was just asking.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate whether the expression has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.





[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisbilityLabels in the unit tests.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11583:
---

Issue Type: Improvement  (was: Bug)

 Refactoring out the configuration changes for enabling VisbilityLabels in the 
 unit tests.
 -

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.





[jira] [Commented] (HBASE-11583) Refactoring out the configuration changes for enabling VisbilityLabels in the unit tests.

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072929#comment-14072929
 ] 

Anoop Sam John commented on HBASE-11583:


Thanks for doing this.
VisibilityTestUtil is missing the license header!

 Refactoring out the configuration changes for enabling VisbilityLabels in the 
 unit tests.
 -

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.





[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisbilityLabels in the unit tests.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11583:
---

Status: Patch Available  (was: Open)

 Refactoring out the configuration changes for enabling VisbilityLabels in the 
 unit tests.
 -

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.





[jira] [Commented] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072949#comment-14072949
 ] 

Hadoop QA commented on HBASE-11527:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657560/HBASE-11527.patch
  against trunk revision .
  ATTACHMENT ID: 12657560

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10174//console

This message is automatically generated.

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch








[jira] [Updated] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11527:
---

Attachment: HBASE-11527.patch

Rebased patch
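The arithmetic behind the issue title can be sketched as follows. This is an illustrative model only: the exact fractions, property names, and the 0.8 threshold are assumptions, not the values HBase uses.

```java
// Illustrative check: when the L2 (bucket) cache is on heap, its fraction
// must be added to the memstore and L1 block cache fractions before
// validating against the cluster free-memory limit. An off-heap L2 cache
// does not count against the heap.
public final class HeapLimitCheck {
  private HeapLimitCheck() {}

  public static boolean withinLimit(float memstoreFrac, float l1CacheFrac,
                                    float l2CacheFrac, boolean l2OnHeap) {
    float total = memstoreFrac + l1CacheFrac + (l2OnHeap ? l2CacheFrac : 0f);
    return total <= 0.8f; // leave at least 20% of the heap free (assumed limit)
  }
}
```

Without counting an on-heap L2 cache, a configuration can pass the check yet overcommit the heap once the bucket cache is allocated.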

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch, HBASE-11527.patch








[jira] [Commented] (HBASE-10773) Make use of ByteRanges in HFileBlock instead of ByteBuffers

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073037#comment-14073037
 ] 

Anoop Sam John commented on HBASE-10773:


I think a change from BB to BR throughout the read path might be what is 
needed in one shot.  Else you will end up recreating BB objects from BR at 
some levels?

 Make use of ByteRanges in HFileBlock instead of ByteBuffers
 ---

 Key: HBASE-10773
 URL: https://issues.apache.org/jira/browse/HBASE-10773
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Replacing BBs with Byte Ranges  in block cache as part of HBASE-10772, would 
 help in replacing BBs with BRs in HFileBlock also.
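The BR-for-BB idea can be sketched with a minimal range type: a view over a byte[] with an explicit offset and length, so sub-ranges can be handed out without copying. This is a simplified stand-in that mirrors the shape of HBase's ByteRange, not the real class.

```java
// Minimal sketch of the ByteRange idea: a zero-copy window over a byte[].
public final class SimpleRange {
  private final byte[] bytes;
  private final int offset;
  private final int length;

  public SimpleRange(byte[] bytes, int offset, int length) {
    this.bytes = bytes;
    this.offset = offset;
    this.length = length;
  }

  public byte get(int index) {
    return bytes[offset + index];
  }

  public int getLength() {
    return length;
  }

  // A sub-range shares the backing array instead of copying it.
  public SimpleRange shallowCopySubRange(int innerOffset, int copyLength) {
    return new SimpleRange(bytes, offset + innerOffset, copyLength);
  }
}
```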





[jira] [Commented] (HBASE-11425) Cell/DBB end-to-end on the read-path

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073043#comment-14073043
 ] 

Anoop Sam John commented on HBASE-11425:


bq. We need BR instead of BB to work around issues with the BB API: inlining 
pessimism, range checking and index compensations that cannot be skipped for 
performance, and related. 
Yes. So we will have our own HeapBB/DirectBB implementations rather than just 
wrapping the nio objects.

 Cell/DBB end-to-end on the read-path
 

 Key: HBASE-11425
 URL: https://issues.apache.org/jira/browse/HBASE-11425
 Project: HBase
  Issue Type: Umbrella
  Components: regionserver, Scanners
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John

 Umbrella jira to make sure we can have blocks cached in offheap backed cache. 
 In the entire read path, we can refer to this offheap buffer and avoid onheap 
 copying.
 The high level items I can identify as of now are:
 1. Avoid the array() call on BB in the read path. (This is there in many 
 classes. We can handle it class by class.)
 2. Support Buffer based getter APIs in Cell. In the read path we will create 
 a new Cell backed by a BB. Will be needed in CellComparator, Filter (like 
 SCVF), CPs etc.
 3. Avoid KeyValue.ensureKeyValue() calls in the read path - this makes a 
 byte copy.
 4. Remove all CP hooks (which are already deprecated) which deal with KVs 
 (in the read path).
 Will add subtasks under this.
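Item 2 above - buffer-based getter APIs in Cell - can be sketched as below. The interface and names are illustrative assumptions, not HBase's actual Cell API; the point is that a Cell backed by a (possibly off-heap) ByteBuffer exposes a view plus position/length, so readers need no on-heap copy.

```java
import java.nio.ByteBuffer;

// Sketch of a buffer-based getter API for a Cell backed by a ByteBuffer.
interface BufferBackedCell {
  ByteBuffer getValueBuffer();   // view over the value, no copy
  int getValuePosition();        // where the value starts in the buffer
  int getValueLength();
}

final class ByteBufferCell implements BufferBackedCell {
  private final ByteBuffer buf;
  private final int pos;
  private final int len;

  ByteBufferCell(ByteBuffer buf, int pos, int len) {
    this.buf = buf;
    this.pos = pos;
    this.len = len;
  }

  public ByteBuffer getValueBuffer() { return buf; }
  public int getValuePosition() { return pos; }
  public int getValueLength() { return len; }
}
```

Comparators and filters can then read through the position/length pair instead of calling array(), which fails on direct buffers.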





[jira] [Commented] (HBASE-11558) Caching set on Scan object gets lost when using TableMapReduceUtil in 0.95+

2014-07-24 Thread Ishan Chhabra (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073049#comment-14073049
 ] 

Ishan Chhabra commented on HBASE-11558:
---

[~apurtell], If caching is set during a general scan (not MapReduce), it will 
be serialized and sent in the openScanner request even though it is not needed. 
However, it would just be 3-4 bytes more overhead, and only in the openScanner 
call and not the next call. 

If this is ok, I would be happy to put a patch up.

 Caching set on Scan object gets lost when using TableMapReduceUtil in 0.95+
 ---

 Key: HBASE-11558
 URL: https://issues.apache.org/jira/browse/HBASE-11558
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, Scanners
Reporter: Ishan Chhabra
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0


 0.94 and before, if one sets caching on the Scan object in the Job by calling 
 scan.setCaching(int) and passes it to TableMapReduceUtil, it is correctly 
 read and used by the mappers during a mapreduce job. This is because 
 Scan.write respects and serializes caching, which is used internally by 
 TableMapReduceUtil to serialize and transfer the scan object to the mappers.
 0.95+, after the move to protobuf, ProtobufUtil.toScan does not respect 
 caching anymore as ClientProtos.Scan does not have the field caching. Caching 
 is passed via the ScanRequest object to the server and so is not needed in 
 the Scan object. However, this breaks application code that relies on the 
 earlier behavior. This will lead to sudden degradation in Scan performance in 
 0.96+ for users relying on the old behavior.
 There are 2 options here:
 1. Add caching to Scan object, adding an extra int to the payload for the 
 Scan object which is really not needed in the general case.
 2. Document and preach that TableMapReduceUtil.setScannerCaching must be 
 called by the client.
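The lossy round trip described above can be modeled with toy stand-ins for Scan and ClientProtos.Scan: because the wire form has no caching field, converting to it and back silently drops the client's setting. These classes are for illustration only.

```java
// Toy model of the caching-loss bug: the wire form has no caching field.
final class ClientScan {
  int caching = -1; // -1 means "use the configured default"
}

final class WireScan {
  // deliberately no caching field, mirroring ClientProtos.Scan in 0.95+
}

final class ScanCodec {
  static WireScan toWire(ClientScan s) {
    return new WireScan(); // caching is silently dropped here
  }

  static ClientScan fromWire(WireScan w) {
    return new ClientScan(); // comes back with the default, not the client value
  }
}
```

This is exactly what happens to a Scan serialized by TableMapReduceUtil and deserialized in the mappers, hence option 1 (re-add the field) versus option 2 (set caching out of band).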





[jira] [Commented] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073053#comment-14073053
 ] 

Hadoop QA commented on HBASE-11527:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657584/HBASE-11527.patch
  against trunk revision .
  ATTACHMENT ID: 12657584

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestHeapMemoryManager
   org.apache.hadoop.hbase.io.hfile.TestCacheConfig

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10175//console

This message is automatically generated.

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch, HBASE-11527.patch








[jira] [Updated] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11527:
---

Attachment: HBASE-11527.patch

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch, HBASE-11527.patch, HBASE-11527.patch








[jira] [Updated] (HBASE-11550) Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated

2014-07-24 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11550:


Attachment: HBASE-11550-v1.patch

Hi, [~ndimiduk].

Could you please review the new patch?

Thanks

 Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated
 

 Key: HBASE-11550
 URL: https://issues.apache.org/jira/browse/HBASE-11550
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11550-v1.patch, HBASE-11550.patch


 User can pass bucket sizes through hbase.bucketcache.bucket.sizes config 
 entry.
 The sizes are supposed to be in increasing order. Validation should be added 
 in CacheConfig#getL2().
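The requested validation can be sketched as below: parse the comma-separated hbase.bucketcache.bucket.sizes value and reject it unless the sizes are strictly increasing. Where it runs (CacheConfig#getL2) is per the description; the parsing itself is an illustrative stand-in, not the patch.

```java
// Sketch of bucket-size validation for hbase.bucketcache.bucket.sizes.
public final class BucketSizeValidator {
  private BucketSizeValidator() {}

  public static int[] parseAndValidate(String configValue) {
    String[] parts = configValue.split(",");
    int[] sizes = new int[parts.length];
    for (int i = 0; i < parts.length; i++) {
      sizes[i] = Integer.parseInt(parts[i].trim());
      // Reject non-increasing sequences early, with the offending value shown.
      if (i > 0 && sizes[i] <= sizes[i - 1]) {
        throw new IllegalArgumentException(
            "Bucket sizes must be in increasing order: " + configValue);
      }
    }
    return sizes;
  }
}
```

Failing fast at config-parse time beats letting the bucket allocator misbehave later with an out-of-order size list.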





[jira] [Commented] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073146#comment-14073146
 ] 

Hadoop QA commented on HBASE-11527:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657585/HBASE-11527.patch
  against trunk revision .
  ATTACHMENT ID: 12657585

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10176//console

This message is automatically generated.

 Cluster free memory limit check should consider L2 block cache size also when 
 L2 cache is onheap.
 -

 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-11527.patch, HBASE-11527.patch, HBASE-11527.patch








[jira] [Updated] (HBASE-11550) Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated

2014-07-24 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11550:


Status: Patch Available  (was: Open)

 Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated
 

 Key: HBASE-11550
 URL: https://issues.apache.org/jira/browse/HBASE-11550
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11550-v1.patch, HBASE-11550.patch


 User can pass bucket sizes through hbase.bucketcache.bucket.sizes config 
 entry.
 The sizes are supposed to be in increasing order. Validation should be added 
 in CacheConfig#getL2().





[jira] [Commented] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-24 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073188#comment-14073188
 ] 

Nicolas Liochon commented on HBASE-11564:
-

Comparing the results with PE reads, 1m rows, 3 replicas and a 100 microseconds 
delay gives:
- w/o the patch:
 - total: 210s
 - 95%: 426us
 - 99%: 550us

- with the patch:
 - total: 174s (i.e. w/o the patch we're 20% slower)
 - 95%: 208us
 - 99%: 262us

So it's nice. I still have some variation in the results, so likely there is 
still room for improvement somewhere.

Commit is on its way...

 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 1.0.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread for canceling a 
 request. It's actually possible to rely on a callback in protobuf.
 The patch includes as well various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the last version.
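The mechanism the description points at - a cancellation callback instead of a thread interrupt - can be sketched in plain Java. Names are illustrative; the real change wires this through protobuf's RpcController rather than this stand-in class.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: cancel() flips a flag and fires a registered callback exactly once,
// so no thread needs to be interrupted to abandon an in-flight request.
final class CancellableCall {
  private final AtomicBoolean cancelled = new AtomicBoolean(false);
  private volatile Runnable onCancel;

  void notifyOnCancel(Runnable callback) {
    this.onCancel = callback;
  }

  void cancel() {
    // compareAndSet guarantees the callback runs at most once.
    if (cancelled.compareAndSet(false, true) && onCancel != null) {
      onCancel.run(); // e.g. drop the now-useless replica call
    }
  }

  boolean isCancelled() {
    return cancelled.get();
  }
}
```

This avoids the cost and fragility of Thread.interrupt(), which is one plausible source of the latency improvement reported above.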





[jira] [Updated] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-24 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11564:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to master & branch-1 (late ping to [~enis] for this, tell me if you 
want more time to review it. I can revert if necessary)

 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 1.0.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread for canceling a 
 request. It's actually possible to rely on a callback in protobuf.
 The patch includes as well various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the last version.





[jira] [Commented] (HBASE-11550) Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073230#comment-14073230
 ] 

Hadoop QA commented on HBASE-11550:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657596/HBASE-11550-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12657596

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10177//console

This message is automatically generated.

 Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated
 

 Key: HBASE-11550
 URL: https://issues.apache.org/jira/browse/HBASE-11550
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11550-v1.patch, HBASE-11550.patch


 User can pass bucket sizes through hbase.bucketcache.bucket.sizes config 
 entry.
 The sizes are supposed to be in increasing order. Validation should be added 
 in CacheConfig#getL2().





[jira] [Updated] (HBASE-11584) HBase file encryption, consistences observed and data loss

2014-07-24 Thread shankarlingayya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shankarlingayya updated HBASE-11584:


Description: 



Procedure:
1. Start the HBase services (HMaster & Region Server)
2. Enable HFile encryption and WAL file encryption as below, and perform 
'table4-0' put operations (100 records added)
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>
 
3. Machine went down, so all processes went down
4. We disabled the WAL file encryption for performance reasons, and kept 
encryption only for HFile, as below
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>

5. Start the Region Server and query the 'table4-0' data
hbase(main):003:0> count 'table4-0'
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
XX-XX-XX-XX,60020,1406209023146
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
at java.lang.Thread.run(Thread.java:662)

6. Not able to read the data, so we decided to revert the configuration back 
to the original

7. Kill/Stop the Region Server and revert all the configurations to the 
original, as below

<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>

8. Start the Region Server, and perform the 'table4-0' query
hbase(main):003:0> count 'table4-0'
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
XX-XX-XX-XX,60020,1406209023146
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
at 

[jira] [Created] (HBASE-11584) HBase file encryption, consistences observed and data loss

2014-07-24 Thread shankarlingayya (JIRA)
shankarlingayya created HBASE-11584:
---

 Summary: HBase file encryption, consistences observed and data loss
 Key: HBASE-11584
 URL: https://issues.apache.org/jira/browse/HBASE-11584
 Project: HBase
  Issue Type: Bug
  Components: hbck, HFile
Affects Versions: 0.98.3
 Environment: SuSE 11 SP3
Reporter: shankarlingayya
Priority: Critical


Procedure:
1. Start the HBase services (HMaster & Region Server)
2. Enable HFile encryption and WAL file encryption as below, and perform 
'table4-0' put operations (100 records added)
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>
 
3. Machine went down, so all processes went down
4. We disabled the WAL file encryption for performance reasons, and kept 
encryption only for HFile, as below
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>

5. Start the Region Server and query the 'table4-0' data
hbase(main):003:0> count 'table4-0'
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
XX-XX-XX-XX,60020,1406209023146
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
at java.lang.Thread.run(Thread.java:662)

6. Not able to read the data, so we decided to revert the configuration back 
to the original

7. Kill/Stop the Region Server and revert all the configurations to the 
original, as below

<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>

8. Start the Region Server, and perform the 'table4-0' query
hbase(main):003:0> count 'table4-0'
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
XX-XX-XX-XX,60020,1406209023146
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 

[jira] [Updated] (HBASE-11584) HBase file encryption, consistences observed and data loss

2014-07-24 Thread shankarlingayya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shankarlingayya updated HBASE-11584:


Description: 
Some inconsistencies were observed with HBase file encryption, and data loss
happens after running the hbck tool. The operation steps are as below.

Procedure:
1. Start the HBase services (HMaster & Region Server)
2. Enable HFile encryption and WAL file encryption as below, then perform
put operations on 'table4-0' (100 records added):
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>
 
3. The machine went down, so all processes went down.
4. We disabled WAL file encryption for performance reasons, keeping
encryption only for HFiles, as below:
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>

5. Start the Region Server and query the 'table4-0' data:
hbase(main):003:0> count 'table4-0'
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
XX-XX-XX-XX,60020,1406209023146
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
at java.lang.Thread.run(Thread.java:662)

6. Not able to read the data, so we decided to revert the configuration back
to the original.

7. Kill/stop the Region Server and revert all the configurations to the
original, as below:

<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>

8. Start the Region Server, and perform the 'table4-0' query:
hbase(main):003:0> count 'table4-0'
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
XX-XX-XX-XX,60020,1406209023146
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
at 

[jira] [Commented] (HBASE-11584) HBase file encryption, consistences observed and data loss

2014-07-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073303#comment-14073303
 ] 

Anoop Sam John commented on HBASE-11584:


In Step #4, instead of removing these configs, can you configure it this way
and check once?

{code}
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>false</value>
</property>
{code}

WAL encryption will still be disabled for new writes, while the secure reader
stays configured to replay the WALs that were already written encrypted.


 HBase file encryption, consistences observed and data loss
 --

 Key: HBASE-11584
 URL: https://issues.apache.org/jira/browse/HBASE-11584
 Project: HBase
  Issue Type: Bug
  Components: hbck, HFile
Affects Versions: 0.98.3
 Environment: SuSE 11 SP3
Reporter: shankarlingayya
Priority: Critical

 HBase file encryption some consistences observed and data loss happens after 
 running the hbck tool,
 the operation steps are as below.
 Procedure:
 1. Start the Hbase services (HMaster & Region Server)
 2. Enable HFile encryption and WAL file encryption as below, and perform 
 'table4-0' put operations (100 records added)
 <property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
 </property>
 <property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
 </property>
 <property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
 </property>
 <property>
  <name>hfile.format.version</name>
  <value>3</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 <property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
 </property>
  
 3. Machine went down, so all process went down
 4. We disabled the WAL file encryption for performance reason, and keep 
 encryption only for Hfile, as below
 <property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
 </property>
 <property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
 </property>
 <property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
 </property>
 <property>
  <name>hfile.format.version</name>
  <value>3</value>
 </property>
 5. Start the Region Server and query the 'table4-0' data
 hbase(main):003:0> count 'table4-0'
 ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
 table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
 XX-XX-XX-XX,60020,1406209023146
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
 at java.lang.Thread.run(Thread.java:662)
 6. Not able to read the data, so we decided to revert back the configuration 
 (as original)
 7. Kill/Stop the Region Server, revert all the configurations as original, as 
 below
 <property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
 </property>
 <property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
 </property>
 <property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
 </property>
 <property>
  <name>hfile.format.version</name>
  <value>3</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  
 

[jira] [Commented] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073347#comment-14073347
 ] 

Hudson commented on HBASE-11564:


SUCCESS: Integrated in HBase-TRUNK #5339 (See 
[https://builds.apache.org/job/HBase-TRUNK/5339/])
HBASE-11564 Improve cancellation management in the rpc layer (nkeywal: rev 
d8401c8e446dcef9ffb9c71f1d14413772f22c75)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/Subprocedure.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/TimeLimitedRpcController.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestRegionReplicaPerf.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ExceptionUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java


 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 1.0.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread to cancel a 
 request. It's actually possible to rely on a callback in protobuf instead.
 The patch also includes various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the latest version.
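The interrupt-vs-callback distinction can be illustrated with a minimal sketch (hypothetical names; this is not HBase's actual RpcController API): a cancel() callback wakes the blocked caller instead of interrupting its thread.

```python
import threading

class CancellableCall:
    """A blocking call cancelled via a callback rather than a thread interrupt."""

    def __init__(self):
        self._done = threading.Event()
        self._cancelled = False
        self._result = None

    def cancel(self):
        # Callback-style cancellation: wake the waiter; no interrupt needed.
        self._cancelled = True
        self._done.set()

    def complete(self, result):
        self._result = result
        self._done.set()

    def get(self, timeout=None):
        self._done.wait(timeout)
        if self._cancelled:
            raise RuntimeError("call was cancelled")
        return self._result

call = CancellableCall()
threading.Timer(0.01, call.complete, args=("row-42",)).start()
print(call.get(timeout=1.0))  # row-42
```

Because cancellation only sets a flag and an event, the waiting thread never sees an InterruptedException-style failure; it simply wakes and observes the cancelled state.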



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-11585:
---

 Summary: PE: Allows warm-up
 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0


When we measure latency, a warm-up phase helps to get repeatable and useful 
measurements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11585:


Status: Patch Available  (was: Open)

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11585:


Attachment: 11585.v1.patch

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag

2014-07-24 Thread Demai Ni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Demai Ni updated HBASE-9531:


Attachment: HBASE-9531-master-v1.patch

[~apurtell], thanks a lot for the help. I just tried out the patch again; it 
is valid for both the 0.98 and master (trunk) branches, so I am resubmitting 
it to HadoopQA. 

[~enis], how about branch-1? Thanks. 

 a command line (hbase shell) interface to retreive the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retrieve 
 replication metrics info such as ageOfLastShippedOp, 
 timeStampsOfLastShippedOp, sizeOfLogQueue, ageOfLastAppliedOp, and 
 timeStampsOfLastAppliedOp, and also to provide point-in-time info on the 
 replication lag (source only).
 Understood that HBase uses Hadoop 
 metrics (http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info; this jira is meant to serve as a lightweight client 
 interface, compared to a complete (certainly better, but heavier) GUI 
 monitoring package. I have the code working on 0.94.9 now, and would like to 
 use this jira to get opinions about whether the feature is valuable to other 
 users/workshops. If so, I will build a trunk patch. 
 All inputs are greatly appreciated. Thank you!
 The overall design is to reuse the existing logic which supports the hbase 
 shell command 'status', and introduce a new module called ReplicationLoad. In 
 HRegionServer.buildServerLoad(), use the local replication service objects 
 to get their loads, which could be wrapped in a ReplicationLoad object and 
 then simply passed to the ServerLoad. In ReplicationSourceMetrics and 
 ReplicationSinkMetrics, a few getters and setters will be created, and 
 Replication will be asked to build a ReplicationLoad. (Many thanks to 
 Jean-Daniel for his kind suggestions through the dev email list.)
 The replication lag will be calculated for the source only, using this 
 formula:
 {code:title=Replication lag|borderStyle=solid}
   if sizeOfLogQueue != 0 then lag = max(ageOfLastShippedOp, (current time - 
 timeStampsOfLastShippedOp)) // err on the large side
   else if (current time - timeStampsOfLastShippedOp) < 2 * ageOfLastShippedOp 
 then lag = ageOfLastShippedOp // last shipped happened recently 
   else lag = 0 // last shipped may have happened last night, so NO real lag 
 although ageOfLastShippedOp is non-zero
 {code}
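For readers outside JIRA, the quoted lag rule can be sketched in Python. This is an illustrative reading of the formula only, not code from the patch; the function name and millisecond units are assumptions:

```python
import time

def replication_lag(age_of_last_shipped_op, ts_of_last_shipped_op,
                    size_of_log_queue, now=None):
    """Point-in-time, source-side replication lag per the quoted rule.

    Times are assumed to be in milliseconds since the epoch.
    """
    now = time.time() * 1000 if now is None else now
    since_last_ship = now - ts_of_last_shipped_op
    if size_of_log_queue != 0:
        # Queue not drained: err on the large side.
        return max(age_of_last_shipped_op, since_last_ship)
    if since_last_ship < 2 * age_of_last_shipped_op:
        # Last shipment happened recently.
        return age_of_last_shipped_op
    # Last shipment was long ago; a stale non-zero age is not real lag.
    return 0

# Empty queue, last shipment long ago: no real lag.
print(replication_lag(14, 1_000, 0, now=10_000))  # 0
```

The third branch is what keeps an idle peer from reporting a huge lag just because ageOfLastShippedOp was left non-zero after the final shipment.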
 External will look something like:
 {code:title=status 'replication'|borderStyle=solid}
 hbase(main):001:0> status 'replication'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):002:0> status 'replication','source'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','sink'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','lag' 
 version 

[jira] [Commented] (HBASE-4624) Remove and convert @deprecated RemoteExceptionHandler.decodeRemoteException calls

2014-07-24 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073386#comment-14073386
 ] 

Jonathan Hsieh commented on HBASE-4624:
---

Patch looks good, but the bot had some failures. Not sure if they are related. 
Can you check to see if they are? I can't get to this today but can tomorrow.

 Remove and convert @deprecated RemoteExceptionHandler.decodeRemoteException 
 calls
 -

 Key: HBASE-4624
 URL: https://issues.apache.org/jira/browse/HBASE-4624
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.99.0, 0.98.4, 2.0.0
Reporter: Jonathan Hsieh
Assignee: Talat UYARER
  Labels: noob
 Attachments: HBASE-4624.patch


 Moving issue w/ no recent movement out of 0.95



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-4624) Remove and convert @deprecated RemoteExceptionHandler.decodeRemoteException calls

2014-07-24 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073386#comment-14073386
 ] 

Jonathan Hsieh edited comment on HBASE-4624 at 7/24/14 5:12 PM:


Patch looks good, but the bot had some failures. Not sure if they are related. 
Can you check to see if the errors are related? I can't get to this today but 
can tomorrow.


was (Author: jmhsieh):
Patch looks good but the bot had some failures.  Not sure if they are related.  
an you check to see if they are?  I can't get to this today but can tomorrow.

 Remove and convert @deprecated RemoteExceptionHandler.decodeRemoteException 
 calls
 -

 Key: HBASE-4624
 URL: https://issues.apache.org/jira/browse/HBASE-4624
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.99.0, 0.98.4, 2.0.0
Reporter: Jonathan Hsieh
Assignee: Talat UYARER
  Labels: noob
 Attachments: HBASE-4624.patch


 Moving issue w/ no recent movement out of 0.95



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11575) Pseudo distributed mode does not work as documented

2014-07-24 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11575:


   Resolution: Fixed
Fix Version/s: (was: 0.99.0)
   2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Yes, the patch worked for me. Pseudo distributed mode is fixed. Thanks for 
reporting the issue and reviewing the patch. Integrated in branch 1 and master.

 Pseudo distributed mode does not work as documented 
 

 Key: HBASE-11575
 URL: https://issues.apache.org/jira/browse/HBASE-11575
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Jimmy Xiang
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-11575.patch


 After master-RS colocation, pseudo-distributed mode no longer works as 
 documented, since you cannot start a region server on the same port, 16020. 
 I think we can either select a random port (and info port) for the master's 
 region server, or document how to do a pseudo-distributed setup in the book. 
 [~jxiang], wdyt? 
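The "select a random port" option can be sketched as follows (a hypothetical illustration, not the committed fix): binding to port 0 lets the OS pick a free ephemeral port.

```python
import socket

def pick_free_port():
    """Ask the OS for a free ephemeral port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 -> OS assigns an unused port
        return s.getsockname()[1]

port = pick_free_port()
print(port)  # varies per run
```

Note the small race: the port is released before the caller binds it again, so a colocated process could grab it in between; that is usually acceptable for a single-host pseudo-distributed setup.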



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073432#comment-14073432
 ] 

Hadoop QA commented on HBASE-9531:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12657637/HBASE-9531-master-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12657637

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+      result = result && (hasReplicationLoadSourceString() == 
other.hasReplicationLoadSourceString());
+      new java.lang.String[] { "NumberOfRequests", 
"TotalNumberOfRequests", "UsedHeapMB", "MaxHeapMB", "RegionLoads", 
"Coprocessors", "ReportStartTime", "ReportEndTime", "InfoServerPort", 
"ReplicationLoadSourceString", "ReplicationLoadSinkString", });
+rsource = 
org.apache.hadoop.hbase.replication.regionserver.ReplicationLoad::REPLICATIONLOADSOURCE
+rsink = 
org.apache.hadoop.hbase.replication.regionserver.ReplicationLoad::REPLICATIONLOADSINK

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestCacheConfig

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10179//console

This message is automatically generated.

 a command line (hbase shell) interface to retreive the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retreive 
 the replication metrics info such as:ageOfLastShippedOp, 
 timeStampsOfLastShippedOp, sizeOfLogQueue ageOfLastAppliedOp, and 
 timeStampsOfLastAppliedOp. And also to provide a point of time info of the 
 lag of replication(source only)
 Understand that hbase is using Hadoop 
 metrics(http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info. This Jira is to serve as a light-weight client 
 interface, comparing to a completed(certainly better, but heavier)GUI 
 monitoring package. I made the code works on 0.94.9 now, and like to use this 
 jira to get 

[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag

2014-07-24 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073454#comment-14073454
 ] 

Demai Ni commented on HBASE-9531:
-

The failure in org.apache.hadoop.hbase.io.hfile.TestCacheConfig shows up in a 
few other patch test runs; it seems unrelated to this jira/patch.

 a command line (hbase shell) interface to retreive the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retreive 
 the replication metrics info such as:ageOfLastShippedOp, 
 timeStampsOfLastShippedOp, sizeOfLogQueue ageOfLastAppliedOp, and 
 timeStampsOfLastAppliedOp. And also to provide a point of time info of the 
 lag of replication(source only)
 Understand that hbase is using Hadoop 
 metrics(http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info. This Jira is to serve as a light-weight client 
 interface, comparing to a completed(certainly better, but heavier)GUI 
 monitoring package. I made the code works on 0.94.9 now, and like to use this 
 jira to get opinions about whether the feature is valuable to other 
 users/workshop. If so, I will build a trunk patch. 
 All inputs are greatly appreciated. Thank you!
 The overall design is to reuse the existing logic which supports hbase shell 
 command 'status', and invent a new module, called ReplicationLoad.  In 
 HRegionServer.buildServerLoad() , use the local replication service objects 
 to get their loads  which could be wrapped in a ReplicationLoad object and 
 then simply pass it to the ServerLoad. In ReplicationSourceMetrics and 
 ReplicationSinkMetrics, a few getters and setters will be created, and ask 
 Replication to build a ReplicationLoad.  (many thanks to Jean-Daniel for 
 his kindly suggestions through dev email list)
 the replication lag will be calculated for source only, and use this formula: 
 {code:title=Replication lag|borderStyle=solid}
   if sizeOfLogQueue != 0 then lag = max(ageOfLastShippedOp, (current time - 
 timeStampsOfLastShippedOp)) // err on the large side
   else if (current time - timeStampsOfLastShippedOp) < 2 * ageOfLastShippedOp 
 then lag = ageOfLastShippedOp // last shipped happened recently 
   else lag = 0 // last shipped may have happened last night, so NO real lag 
 although ageOfLastShippedOp is non-zero
 {code}
 External will look something like:
 {code:title=status 'replication'|borderStyle=solid}
 hbase(main):001:0> status 'replication'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):002:0> status 'replication','source'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','sink'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','lag' 
 version 0.94.9
 3 live servers
     

[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073469#comment-14073469
 ] 

Hadoop QA commented on HBASE-11585:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657636/11585.v1.patch
  against trunk revision .
  ATTACHMENT ID: 12657636

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10178//console

This message is automatically generated.

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073516#comment-14073516
 ] 

Hudson commented on HBASE-11564:


FAILURE: Integrated in HBase-1.0 #67 (See 
[https://builds.apache.org/job/HBase-1.0/67/])
HBASE-11564 Improve cancellation management in the rpc layer (nkeywal: rev 
d8562052a4f5c956a514becf6439442763387e86)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestRegionReplicaPerf.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/TimeLimitedRpcController.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/Subprocedure.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ExceptionUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java


 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 1.0.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread for canceling a 
 request. It's actually possible to rely on a callback in protobuf.
 The patch includes as well various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the last version.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073524#comment-14073524
 ] 

Devaraj Das commented on HBASE-11585:
-

+1

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073525#comment-14073525
 ] 

Nick Dimiduk commented on HBASE-11585:
--

How much warmup do you find helps? 1k? 100k? Is this warming up the client or 
the region servers (or can you tell?)

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073526#comment-14073526
 ] 

Nick Dimiduk commented on HBASE-11585:
--

+1

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11579) CopyTable should check endtime value only if != 0

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11579:
---

   Resolution: Fixed
Fix Version/s: 2.0.0
   0.98.5
   0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.98+

Ping [~enis], small bug fix went in

 CopyTable should check endtime value only if != 0
 -

 Key: HBASE-11579
 URL: https://issues.apache.org/jira/browse/HBASE-11579
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.99.0, 0.98.4
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11579-v0-trunk.patch


 CopyTable automatically assign an endTime if startTime is specified and 
 endTime is not:
 {code}
 if (startTime != 0) {
   scan.setTimeRange(startTime,
   endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
 }
 {code}
 However, we test if endTime is == 0 and exit before we get a chance to set 
 the range:
 {code}
   if (startTime > endTime) {
     printUsage("Invalid time range filter: starttime=" + startTime +
         " endtime=" + endTime);
     return false;
   }
 {code}
 So we need to check endTime only if it's != 0.
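For illustration, a minimal plain-Java sketch of the corrected validation; the method name and boolean return are hypothetical stand-ins for the CopyTable code (which calls printUsage and returns false):

```java
public class TimeRangeCheck {
  // Hypothetical stand-in for CopyTable's argument validation:
  // endTime == 0 means "unset" (it later defaults to LATEST_TIMESTAMP),
  // so it is only compared against startTime when explicitly given.
  public static boolean isValidTimeRange(long startTime, long endTime) {
    if (endTime != 0 && startTime > endTime) {
      // invalid: an explicit end time earlier than the start time
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isValidTimeRange(100L, 0L));   // true: endTime unset
    System.out.println(isValidTimeRange(200L, 100L)); // false: end before start
  }
}
```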



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11558) Caching set on Scan object gets lost when using TableMapReduceUtil in 0.95+

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11558:
---

Assignee: Ishan Chhabra  (was: Andrew Purtell)

bq. If this is ok, I would be happy to put a patch up.

It is. Thanks! Reassigning to you

 Caching set on Scan object gets lost when using TableMapReduceUtil in 0.95+
 ---

 Key: HBASE-11558
 URL: https://issues.apache.org/jira/browse/HBASE-11558
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, Scanners
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
 Fix For: 0.99.0, 0.98.5, 2.0.0


 0.94 and before, if one sets caching on the Scan object in the Job by calling 
 scan.setCaching(int) and passes it to TableMapReduceUtil, it is correctly 
 read and used by the mappers during a mapreduce job. This is because 
 Scan.write respects and serializes caching, which is used internally by 
 TableMapReduceUtil to serialize and transfer the scan object to the mappers.
 0.95+, after the move to protobuf, ProtobufUtil.toScan does not respect 
 caching anymore as ClientProtos.Scan does not have the field caching. Caching 
 is passed via the ScanRequest object to the server and so is not needed in 
 the Scan object. However, this breaks application code that relies on the 
 earlier behavior. This will lead to a sudden degradation in Scan performance 
 in 0.96+ for users relying on the old behavior.
 There are 2 options here:
 1. Add caching to Scan object, adding an extra int to the payload for the 
 Scan object which is really not needed in the general case.
 2. Document and preach that TableMapReduceUtil.setScannerCaching must be 
 called by the client.
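A sketch of option 2 under stated assumptions: the configuration key name (`hbase.client.scanner.caching`) is the one the HBase client reads, and a plain Map stands in here for the Hadoop job Configuration:

```java
import java.util.HashMap;
import java.util.Map;

public class ScannerCachingExample {
  // Assumed key name, per the issue discussion; the real helper is
  // TableMapReduceUtil.setScannerCaching(job, n).
  static final String SCANNER_CACHING_KEY = "hbase.client.scanner.caching";

  // Records the caching value on the job configuration instead of the
  // Scan object, so it survives the protobuf serialization that drops
  // Scan.caching in 0.95+.
  public static void setScannerCaching(Map<String, String> jobConf, int rows) {
    jobConf.put(SCANNER_CACHING_KEY, Integer.toString(rows));
  }

  public static int getScannerCaching(Map<String, String> jobConf, int defaultRows) {
    String v = jobConf.get(SCANNER_CACHING_KEY);
    return v == null ? defaultRows : Integer.parseInt(v);
  }

  public static void main(String[] args) {
    Map<String, String> jobConf = new HashMap<>();
    setScannerCaching(jobConf, 500);
    System.out.println(getScannerCaching(jobConf, 100)); // prints 500
  }
}
```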



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073544#comment-14073544
 ] 

Andrew Purtell commented on HBASE-11384:


{quote}
bq. HTD#setCheckAuthsForMutation(boolean setCheckAuths)
We can have cluster level also fine, but allowing HTD.setValue() then we have 
to expose that config outside. Making it by default to true would mean that it 
is on by default.
{quote}

I think a cluster wide setting is better. We could make it a table attr but 
let's not unless we can come up with a credible use case.

Should be off by default in 0.98. Could be on by default in 0.99+

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch


 As part of discussions, it is better that every mutation, either Put or Delete, 
 with visibility expressions should validate that the expression only has labels 
 for which the user has authorization; if not, fail the mutation.
 Suppose User A is associated with A, B and C, and the put has a visibility 
 expression A&D. Then fail the mutation, as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11584) HBase file encryption, consistences observed and data loss

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-11584.


Resolution: Invalid

Please mail u...@hbase.apache.org for reporting potential problems and 
community assistance in troubleshooting. JIRA isn't the correct forum for 
reporting problems until the cause is known. Nothing reported on this issue 
indicates encryption is more than an incidental detail. We don't encrypt the 
META table. There could be many reasons why you lost your HFiles. Did you keep 
the test configuration that puts the HBase root in /tmp, for example? Anyway, 
please don't reply here; take this to u...@hbase.apache.org. 

 HBase file encryption, consistences observed and data loss
 --

 Key: HBASE-11584
 URL: https://issues.apache.org/jira/browse/HBASE-11584
 Project: HBase
  Issue Type: Bug
  Components: hbck, HFile
Affects Versions: 0.98.3
 Environment: SuSE 11 SP3
Reporter: shankarlingayya
Priority: Critical

 With HBase file encryption, some inconsistencies were observed and data loss 
 happened after running the hbck tool.
 The operation steps are as below.
 Procedure:
 1. Start the HBase services (HMaster & Region Server)
 2. Enable HFile encryption and WAL file encryption as below, and perform 
 'table4-0' put operations (100 records added)
 <property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
 </property>
 <property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
 </property>
 <property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
 </property>
 <property>
  <name>hfile.format.version</name>
  <value>3</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>
 <property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
 </property>
  
 3. Machine went down, so all processes went down
 4. We disabled the WAL file encryption for performance reasons, keeping 
 encryption only for HFiles, as below
 <property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
 </property>
 <property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
 </property>
 <property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
 </property>
 <property>
  <name>hfile.format.version</name>
  <value>3</value>
 </property>
 5. Start the Region Server and query the 'table4-0' data
 hbase(main):003:0> count 'table4-0'
 ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region 
 table4-0,,1406207815456.fc10620a3dcc14e004ab034420f7d332. is not online on 
 XX-XX-XX-XX,60020,1406209023146
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2685)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4119)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3066)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2084)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
 at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
 at java.lang.Thread.run(Thread.java:662)
 6. Not able to read the data, so we decided to revert the configuration back 
 to the original
 7. Kill/stop the Region Server and revert all the configuration to the 
 original, as below
 <property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
 </property>
 <property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=Hadoop@234</value>
 </property>
 <property>
  <name>hbase.crypto.master.key.name</name>
  <value>hdfs</value>
 </property>
 <property>
  <name>hfile.format.version</name>
  <value>3</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
 </property>
 <property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
 </property>

[jira] [Commented] (HBASE-11575) Pseudo distributed mode does not work as documented

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073549#comment-14073549
 ] 

Hudson commented on HBASE-11575:


SUCCESS: Integrated in HBase-TRUNK #5340 (See 
[https://builds.apache.org/job/HBase-TRUNK/5340/])
HBASE-11575 Pseudo distributed mode does not work as documented (jxiang: rev 
cc61cc308168af407a2f21851e9932292af8ca77)
* conf/regionservers
* bin/hbase-config.sh
* bin/local-master-backup.sh
* bin/local-regionservers.sh
* src/main/docbkx/getting_started.xml


 Pseudo distributed mode does not work as documented 
 

 Key: HBASE-11575
 URL: https://issues.apache.org/jira/browse/HBASE-11575
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Jimmy Xiang
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-11575.patch


 After master-RS colocation, the pseudo-distributed mode does not work as 
 documented, since you cannot start a region server on the same port, 16020. 
 I think we can either select a random port (and info port) for the master's 
 region server, or document how to do a pseudo-distributed setup in the book. 
 [~jxiang] wdyt? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag

2014-07-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073559#comment-14073559
 ] 

Andrew Purtell commented on HBASE-9531:
---

Ok, I'm going to commit this to 0.98+ in a few hours unless objection.

 a command line (hbase shell) interface to retreive the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retrieve 
 replication metrics info such as ageOfLastShippedOp, 
 timeStampsOfLastShippedOp, sizeOfLogQueue, ageOfLastAppliedOp, and 
 timeStampsOfLastAppliedOp, and also to provide point-in-time info on the 
 lag of replication (source only).
 Understand that hbase is using Hadoop 
 metrics (http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info. This jira is to serve as a light-weight client 
 interface, compared to a complete (certainly better, but heavier) GUI 
 monitoring package. I made the code work on 0.94.9, and would like to use 
 this jira to get opinions about whether the feature is valuable to other 
 users/workshops. If so, I will build a trunk patch. 
 All inputs are greatly appreciated. Thank you!
 The overall design is to reuse the existing logic which supports hbase shell 
 command 'status', and invent a new module, called ReplicationLoad.  In 
 HRegionServer.buildServerLoad() , use the local replication service objects 
 to get their loads  which could be wrapped in a ReplicationLoad object and 
 then simply pass it to the ServerLoad. In ReplicationSourceMetrics and 
 ReplicationSinkMetrics, a few getters and setters will be created, and ask 
 Replication to build a ReplicationLoad. (Many thanks to Jean-Daniel for 
 his kind suggestions through the dev email list.)
 The replication lag will be calculated for the source only, using this formula: 
 {code:title=Replication lag|borderStyle=solid}
   if sizeOfLogQueue != 0 then
     lag = max(ageOfLastShippedOp, (current time - timeStampsOfLastShippedOp)) // err on the large side
   else if (current time - timeStampsOfLastShippedOp) < 2 * ageOfLastShippedOp then
     lag = ageOfLastShippedOp // last shipped happened recently
   else
     lag = 0 // last shipped may have happened last night, so no real lag although ageOfLastShippedOp is non-zero
 {code}
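The same formula, translated into a small self-contained Java sketch; the parameter names follow the metrics named in the description, and the millisecond units in the examples are an assumption for illustration:

```java
public class ReplicationLag {
  // Direct translation of the lag formula above.
  public static long lag(long sizeOfLogQueue, long ageOfLastShippedOp,
                         long timeStampOfLastShippedOp, long now) {
    long sinceLastShipped = now - timeStampOfLastShippedOp;
    if (sizeOfLogQueue != 0) {
      // edits still queued: err on the large side
      return Math.max(ageOfLastShippedOp, sinceLastShipped);
    } else if (sinceLastShipped < 2 * ageOfLastShippedOp) {
      return ageOfLastShippedOp; // last shipped happened recently
    }
    return 0; // last ship may be long past; no real lag despite non-zero age
  }

  public static void main(String[] args) {
    System.out.println(lag(3, 14, 900, 1000)); // queue pending: max(14, 100) = 100
    System.out.println(lag(0, 80, 900, 1000)); // recent ship: 100 < 160, lag = 80
    System.out.println(lag(0, 14, 0, 1000));   // old ship: lag = 0
  }
}
```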
 Externally, it will look something like:
 {code:title=status 'replication'|borderStyle=solid}
 hbase(main):001:0> status 'replication'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):002:0> status 'replication','source'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','sink'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','lag'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com: lag = 0
     hdtest018.svl.ibm.com: lag = 14
   

[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests.

2014-07-24 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-11583:


Summary: Refactoring out the configuration changes for enabling 
VisibilityLabels in the unit tests.  (was: Refactoring out the configuration 
changes for enabling VisbilityLabels in the unit tests.)

 Refactoring out the configuration changes for enabling VisibilityLabels in 
 the unit tests.
 --

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073565#comment-14073565
 ] 

Nicolas Liochon commented on HBASE-11585:
-

bq. How much warmup do you find helps? 1k? 100k? Is this warming up the client 
or the region servers (or can you tell?)
For a 1M-row run, I exclude the first 200K, but you can do it differently :-)
It's more interesting for the client, as you can warm up the server by running 
the client multiple times.

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-11586:
--

 Summary: HFile's HDFS op latency sampling code is not used
 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0


HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
HFile#offerWriteLatency but the samples are never drained. There are no callers 
of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and related. The 
three ArrayBlockingQueues we are using as sample buffers in HFile will fill 
quickly and are never drained. 

There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
related, so we are incrementing a set of AtomicLong counters that will never be 
read nor reset.

We are calling System.nanoTime in block read and write paths twice but not 
utilizing the measurements. We should hook this code back up to metrics or 
remove it.

We are also not using HFile#getChecksumFailuresCount anywhere but in some unit 
test code.
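To illustrate the failure mode described above, here is a small self-contained demo of a bounded, never-drained sample buffer; the capacity is arbitrary and this is not the HFile code itself:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class SampleBufferDemo {
  // Counts how many of numOffers samples a bounded, never-drained
  // buffer actually accepts; offer() returns false instead of blocking
  // once the queue is full, so later samples are silently dropped.
  public static int countAccepted(int capacity, int numOffers) {
    ArrayBlockingQueue<Long> samples = new ArrayBlockingQueue<>(capacity);
    int accepted = 0;
    for (long latencyNanos = 1; latencyNanos <= numOffers; latencyNanos++) {
      if (samples.offer(latencyNanos)) {
        accepted++;
      }
    }
    return accepted;
  }

  public static void main(String[] args) {
    // Capacity 3 is arbitrary; HFile's real buffers are larger but
    // fill just as permanently without a drain.
    System.out.println(countAccepted(3, 10)); // prints 3
  }
}
```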



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11586:
---

Description: 
HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
HFile#offerWriteLatency but the samples are never drained. There are no callers 
of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and related. The 
three ArrayBlockingQueues we are using as sample buffers in HFile will fill 
quickly and are never drained. 

There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
related, so we are incrementing a set of AtomicLong counters that will never be 
read nor reset.

We are calling System.nanoTime in block read and write paths twice but not 
utilizing the measurements.

We should hook this code back up to metrics or remove it.

We are also not using HFile#getChecksumFailuresCount anywhere but in some unit 
test code.

  was:
HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
HFile#offerWriteLatency but the samples are never drained. There are no callers 
of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and related. The 
three ArrayBlockingQueues we are using as sample buffers in HFile will fill 
quickly and are never drained. 

There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
related, so we are incrementing a set of AtomicLong counters that will never be 
read nor reset.

We are calling System.nanoTime in block read and write paths twice but not 
utilizing the measurements. We should hook this code back up to metrics or 
remove it.

We are also not using HFile#getChecksumFailuresCount anywhere but in some unit 
test code.


 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073614#comment-14073614
 ] 

Andrew Purtell commented on HBASE-11516:


v3 patch looks pretty good. UI updates look great. 

{code}
@@ -2357,4 +2523,19 @@ public class RegionCoprocessorHost
 }
 return tracker;
   }
+
+  public Map<String, Long> getCoprocessorExecutionTimes() {
+    Map<String, Long> results = new HashMap<String, Long>();
+    for (RegionEnvironment env : coprocessors) {
+      if (env.getInstance() instanceof RegionObserver) {
+        long total = 0;
+        for (Long time : env.getExecutionLatenciesNanos()) {
+          total += time;
+        }
+        total /= 1000;
+        results.put(env.getInstance().getClass().getSimpleName(), total);
+      }
+    }
+    return results;
+  }
{code}

This only sums the sampled execution times. I think this will be misleading 
because the measure won't be the real total time executing in the coprocessor 
over the reporting period if we ran out of space in the sample queue. We can 
report useful statistics using the samples though. Consider 
DescriptiveStatistics from commons-math, we are using it elsewhere. It's not 
threadsafe so shouldn't be used in place of the sample buffer, but can be used 
in getCoprocessorExecutionTimes when iterating through the samples. With 
DescriptiveStatistics we can get min, max, avg, various percentiles, e.g. 95th, 
99th.

It might make more sense to rename getCoprocessorExecutionTimes to 
getCoprocessorExecutionStatistics or similar. 
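As a sketch of the statistics suggested above, here is a plain-Java nearest-rank percentile and mean over a snapshot of drained samples; in an actual patch, commons-math's DescriptiveStatistics would provide min, max, mean, and percentiles directly:

```java
import java.util.Arrays;

public class LatencyStats {
  // Nearest-rank percentile (one of several common definitions);
  // input is a snapshot of drained latency samples, e.g. in nanoseconds.
  public static long percentile(long[] samples, double p) {
    long[] sorted = samples.clone();
    Arrays.sort(sorted);
    int rank = (int) Math.ceil(p / 100.0 * sorted.length);
    return sorted[Math.max(0, rank - 1)];
  }

  public static double mean(long[] samples) {
    long total = 0;
    for (long s : samples) {
      total += s;
    }
    return (double) total / samples.length;
  }

  public static void main(String[] args) {
    long[] samples = {5, 1, 9, 3, 7, 2, 8, 4, 6, 10};
    System.out.println(percentile(samples, 50)); // 5
    System.out.println(percentile(samples, 95)); // 10
    System.out.println(mean(samples));           // 5.5
  }
}
```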


 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch, 
 HBASE-11516_v3.patch, region_server_webui.png


 Currently, the time spent executing coprocessors is not tracked. 
 This feature can be handy for debugging coprocessors in case of trouble.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11586:
---

Status: Patch Available  (was: Open)

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11586:
---

Attachment: HBASE-11586.patch

Attached patch removes the sample buffers, related methods, and unused 
measuring in the HFile reader and writer.

The atomic variables for op count and nanotime accumulators are still used by 
HFileReadWriteTest, in a test package of hbase-server. 

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11575) Pseudo distributed mode does not work as documented

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073640#comment-14073640
 ] 

Hudson commented on HBASE-11575:


SUCCESS: Integrated in HBase-1.0 #68 (See 
[https://builds.apache.org/job/HBase-1.0/68/])
HBASE-11575 Pseudo distributed mode does not work as documented (jxiang: rev 
147a3521f94f82c3a189a4ace2f7232c48b1ad5d)
* bin/local-master-backup.sh
* bin/local-regionservers.sh
* bin/hbase-config.sh
* conf/regionservers
* src/main/docbkx/getting_started.xml


 Pseudo distributed mode does not work as documented 
 

 Key: HBASE-11575
 URL: https://issues.apache.org/jira/browse/HBASE-11575
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Jimmy Xiang
Priority: Critical
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-11575.patch


 After master/RS colocation, pseudo-distributed mode no longer works as 
 documented, since you cannot start a region server on the same port, 16020. 
 I think we can either select a random port (and info port) for the master's 
 region server, or document how to do a pseudo-distributed setup in the book. 
 [~jxiang] wdyt? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11579) CopyTable should check endtime value only if != 0

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073663#comment-14073663
 ] 

Hudson commented on HBASE-11579:


SUCCESS: Integrated in HBase-0.98 #418 (See 
[https://builds.apache.org/job/HBase-0.98/418/])
HBASE-11579 CopyTable should check endtime value only if != 0 (Jean-Marc 
Spaggiari) (apurtell: rev 2a53add01b89f750433dfb93cdaeead0baace3a4)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java


 CopyTable should check endtime value only if != 0
 -

 Key: HBASE-11579
 URL: https://issues.apache.org/jira/browse/HBASE-11579
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.99.0, 0.98.4
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11579-v0-trunk.patch


 CopyTable automatically assigns an endTime if startTime is specified and 
 endTime is not:
 {code}
 if (startTime != 0) {
   scan.setTimeRange(startTime,
   endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
 }
 {code}
 However, we test if endTime is == 0 and exit before we get a chance to set 
 the range:
 {code}
   if (startTime > endTime) {
     printUsage("Invalid time range filter: starttime=" + startTime +
         " endtime=" + endTime);
     return false;
   }
 {code}
 So we need to check endTime only if it's != 0.
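A minimal sketch of the corrected check described above: treat endTime == 0 as "open-ended" (CopyTable later substitutes HConstants.LATEST_TIMESTAMP for it), and only reject the range when an explicit endTime precedes startTime. The class and method names are hypothetical, for illustration only.

```java
/**
 * Sketch of the fixed CopyTable argument check. endTime == 0 means
 * "use LATEST_TIMESTAMP", so it must never be treated as invalid.
 * Names are hypothetical.
 */
public class TimeRangeCheck {

  /** Returns true if the (startTime, endTime) pair is acceptable. */
  public static boolean isValidRange(long startTime, long endTime) {
    // Only an explicit, non-zero endTime can conflict with startTime.
    return endTime == 0 || startTime <= endTime;
  }

  public static void main(String[] args) {
    System.out.println(isValidRange(5, 0));  // open-ended range: valid
    System.out.println(isValidRange(5, 3));  // end before start: invalid
  }
}
```

With the original check, a caller passing only --starttime was rejected even though the range would have been completed with LATEST_TIMESTAMP.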



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11326) Use an InputFormat for ExportSnapshot

2014-07-24 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073681#comment-14073681
 ] 

Jerry He commented on HBASE-11326:
--

Hi, [~mbertozzi]

Could you put this in 0.98?

 Use an InputFormat for ExportSnapshot
 -

 Key: HBASE-11326
 URL: https://issues.apache.org/jira/browse/HBASE-11326
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11326-v0.patch


 Use an InputFormat instead of uploading a set of input files, so progress 
 can be based on the file size



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11581) Add option so CombinedBlockCache L2 can be null (fscache)

2014-07-24 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073682#comment-14073682
 ] 

Jonathan Hsieh commented on HBASE-11581:


Looks fine.  Is the idea just for specialized benchmarking?

 Is there an easy way to get unit tests to exercise this new config?

 Add option so CombinedBlockCache L2 can be null (fscache)
 -

 Key: HBASE-11581
 URL: https://issues.apache.org/jira/browse/HBASE-11581
 Project: HBase
  Issue Type: New Feature
  Components: BlockCache
Reporter: stack
Assignee: stack
Priority: Minor
 Attachments: 11581.txt


 Add option, mostly for comparison's sake, that allows a deploy orchestrated 
 by CombinedBlockCache such that its L1 is LruBlockCache for META blocks but 
 DATA blocks are fetched each time (we don't try and cache them, no blockcache 
 churn).
 In operation, I can see fscache coming around to cover the fetched DATA 
 blocks such that if the DATA blocks fit in fscache, seeks go to zero.
 This setup for sure runs slower.  Will publish numbers elsewhere.  Meantime, 
 here is a patch to enable this option.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11579) CopyTable should check endtime value only if != 0

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073685#comment-14073685
 ] 

Hudson commented on HBASE-11579:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #397 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/397/])
HBASE-11579 CopyTable should check endtime value only if != 0 (Jean-Marc 
Spaggiari) (apurtell: rev 2a53add01b89f750433dfb93cdaeead0baace3a4)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java


 CopyTable should check endtime value only if != 0
 -

 Key: HBASE-11579
 URL: https://issues.apache.org/jira/browse/HBASE-11579
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.99.0, 0.98.4
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11579-v0-trunk.patch


 CopyTable automatically assigns an endTime if startTime is specified and 
 endTime is not:
 {code}
 if (startTime != 0) {
   scan.setTimeRange(startTime,
   endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
 }
 {code}
 However, we test if endTime is == 0 and exit before we get a chance to set 
 the range:
 {code}
   if (startTime > endTime) {
     printUsage("Invalid time range filter: starttime=" + startTime +
         " endtime=" + endTime);
     return false;
   }
 {code}
 So we need to check endTime only if it's != 0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-24 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073691#comment-14073691
 ] 

Jonathan Hsieh commented on HBASE-11585:


Shouldn't we update the start time, or at least have another start time based 
on when the warm-up has completed?

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When measuring latency, a warm-up helps to get repeatable and useful 
 measurements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests.

2014-07-24 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11583:


Attachment: HBASE-11583_v2.patch

[~anoop.hbase] Thanks for taking a look. Attaching the new patch, which 
includes the license header.

 Refactoring out the configuration changes for enabling VisibilityLabels in 
 the unit tests.
 --

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch, HBASE-11583_v2.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels can be made 
 easier by refactoring this code out to a single place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11587) Improve documentation around custom filters

2014-07-24 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-11587:
---

 Summary: Improve documentation around custom filters
 Key: HBASE-11587
 URL: https://issues.apache.org/jira/browse/HBASE-11587
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Dima Spivak


The instructions in the ref guide surrounding the creation of custom filters 
by extending FilterBase are incomplete and need updating. Since 0.96 and 
protobufs, it looks like you need to implement the parseFrom method to make a 
custom filter work.
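The contract the ref guide should document can be sketched as a serialize/deserialize round trip: since 0.96 the server reconstructs a custom filter from bytes via a static parseFrom(byte[]) method paired with toByteArray(). Real filters extend FilterBase and use protobuf for the byte encoding; a plain ByteBuffer stands in here so the sketch is self-contained, and the filter class and its "limit" field are hypothetical.

```java
import java.nio.ByteBuffer;

/**
 * Sketch of the serialization contract for a custom filter: the region
 * server invokes the static parseFrom(byte[]) factory (by reflection) to
 * rebuild the filter state written by toByteArray(). Real filters extend
 * FilterBase and use protobuf; ByteBuffer is a stand-in. The class and
 * "limit" field are hypothetical.
 */
public class PrefixLimitFilterSketch {
  private final int limit;

  public PrefixLimitFilterSketch(int limit) {
    this.limit = limit;
  }

  public int getLimit() {
    return limit;
  }

  /** Serialize the filter's state (protobuf in a real filter). */
  public byte[] toByteArray() {
    return ByteBuffer.allocate(4).putInt(limit).array();
  }

  /** The static factory the server side calls to rebuild the filter. */
  public static PrefixLimitFilterSketch parseFrom(byte[] bytes) {
    return new PrefixLimitFilterSketch(ByteBuffer.wrap(bytes).getInt());
  }

  public static void main(String[] args) {
    PrefixLimitFilterSketch f =
        parseFrom(new PrefixLimitFilterSketch(42).toByteArray());
    System.out.println(f.getLimit()); // state survives the round trip
  }
}
```

Without parseFrom, the server cannot reconstruct the filter from the serialized scan, which is why filters written against the pre-0.96 docs silently fail.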



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073743#comment-14073743
 ] 

Srikanth Srungarapu commented on HBASE-11586:
-

+1 (non-binding vote).

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073747#comment-14073747
 ] 

Hadoop QA commented on HBASE-11586:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657668/HBASE-11586.patch
  against trunk revision .
  ATTACHMENT ID: 12657668

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10180//console

This message is automatically generated.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11579) CopyTable should check endtime value only if != 0

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073750#comment-14073750
 ] 

Hudson commented on HBASE-11579:


FAILURE: Integrated in HBase-TRUNK #5341 (See 
[https://builds.apache.org/job/HBase-TRUNK/5341/])
HBASE-11579 CopyTable should check endtime value only if != 0 (Jean-Marc 
Spaggiari) (apurtell: rev 7b5a309697cd1fa249e2e6a002acff724c17b1b2)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java


 CopyTable should check endtime value only if != 0
 -

 Key: HBASE-11579
 URL: https://issues.apache.org/jira/browse/HBASE-11579
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.99.0, 0.98.4
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11579-v0-trunk.patch


 CopyTable automatically assigns an endTime if startTime is specified and 
 endTime is not:
 {code}
 if (startTime != 0) {
   scan.setTimeRange(startTime,
   endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
 }
 {code}
 However, we test if endTime is == 0 and exit before we get a chance to set 
 the range:
 {code}
   if (startTime > endTime) {
     printUsage("Invalid time range filter: starttime=" + startTime +
         " endtime=" + endTime);
     return false;
   }
 {code}
 So we need to check endTime only if it's != 0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-24 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073758#comment-14073758
 ] 

Enis Soztutar commented on HBASE-11564:
---

This should be good for branch-1. But I am concerned about the change in 
ResultBoundedCompletionService: we changed from returning the first 
successful response from a replica to returning the first response (even if 
it is an exception). In testing, I think we ended up changing it to wait for 
all retries from all replicas to be consumed, because in some cases the 
replica retries will just throw RetriesExhausted and we do not wait for 
success from other replicas. I might be mis-reading the patch though. 

 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 1.0.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread for canceling a 
 request. It's actually possible to rely on a callback in protobuf.
 The patch includes as well various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the last version.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-24 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11564:
--

Fix Version/s: (was: 1.0.0)
   0.99.0

 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread for canceling a 
 request. It's actually possible to rely on a callback in protobuf.
 The patch includes as well various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the last version.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retreive the replication metrics and show replication lag

2014-07-24 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073769#comment-14073769
 ] 

Enis Soztutar commented on HBASE-9531:
--

This looks ok for branch-1. However, I would prefer not to keep strings 
around in ClusterStatus. Should we not carry the metrics as numerics inside 
ClusterStatus (probably together with an object wrapper, like a PB version of 
ReplicationLoad)? The string conversion can be done at the last step by the 
shell script.  
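The shape of that suggestion can be sketched as follows: ClusterStatus carries the replication metrics as numbers (in practice a PB-backed wrapper), and only the shell formats them into the strings shown in the status output. All field and method names here are hypothetical illustrations, not the actual patch.

```java
/**
 * Sketch: carry replication metrics through ClusterStatus as numerics
 * (a PB-backed wrapper in practice) and convert to strings only at the
 * shell. Names are hypothetical.
 */
public class ReplicationLoadSketch {
  private final long ageOfLastShippedOp;
  private final int sizeOfLogQueue;
  private final long timestampOfLastShippedOp;

  public ReplicationLoadSketch(long age, int queueSize, long ts) {
    this.ageOfLastShippedOp = age;
    this.sizeOfLogQueue = queueSize;
    this.timestampOfLastShippedOp = ts;
  }

  // Numeric accessors: what ClusterStatus would expose to callers.
  public long getAgeOfLastShippedOp() { return ageOfLastShippedOp; }
  public int getSizeOfLogQueue() { return sizeOfLogQueue; }

  /** String conversion happens at the edge (the shell), not in ClusterStatus. */
  public String formatForShell() {
    return "SOURCE: ageOfLastShippedOp=" + ageOfLastShippedOp
        + ", sizeOfLogQueue=" + sizeOfLogQueue
        + ", timeStampOfLastShippedOp=" + timestampOfLastShippedOp;
  }

  public static void main(String[] args) {
    ReplicationLoadSketch load = new ReplicationLoadSketch(14, 0, 1378331388000L);
    System.out.println(load.formatForShell());
  }
}
```

Keeping the values numeric lets other consumers (monitoring, tests) use them directly instead of parsing preformatted strings.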

 a command line (hbase shell) interface to retreive the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retrieve 
 replication metrics info such as ageOfLastShippedOp, 
 timeStampsOfLastShippedOp, sizeOfLogQueue, ageOfLastAppliedOp, and 
 timeStampsOfLastAppliedOp, and also to provide point-in-time info on the 
 replication lag (source only).
 Understand that hbase is using Hadoop 
 metrics (http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info. This jira is to serve as a light-weight client 
 interface, compared to a complete (certainly better, but heavier) GUI 
 monitoring package. I made the code work on 0.94.9, and would like to use 
 this jira to get opinions about whether the feature is valuable to other 
 users/workshops. If so, I will build a trunk patch. 
 All inputs are greatly appreciated. Thank you!
 The overall design is to reuse the existing logic which supports hbase shell 
 command 'status', and invent a new module, called ReplicationLoad.  In 
 HRegionServer.buildServerLoad() , use the local replication service objects 
 to get their loads  which could be wrapped in a ReplicationLoad object and 
 then simply pass it to the ServerLoad. In ReplicationSourceMetrics and 
 ReplicationSinkMetrics, a few getters and setters will be created, and ask 
 Replication to build a ReplicationLoad.  (many thanks to Jean-Daniel for 
 his kindly suggestions through dev email list)
 the replication lag will be calculated for source only, and use this formula: 
 {code:title=Replication lag|borderStyle=solid}
   if sizeOfLogQueue != 0 then
     lag = max(ageOfLastShippedOp, (current time - timeStampsOfLastShippedOp)) // err on the large side
   else if (current time - timeStampsOfLastShippedOp) < 2 * ageOfLastShippedOp then
     lag = ageOfLastShippedOp // last shipment happened recently
   else
     lag = 0 // last shipment may have happened last night, so no real lag although ageOfLastShippedOp is non-zero
 {code}
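The pseudocode formula above can be sketched directly in Java. Times are in milliseconds here; the class and method names are hypothetical, and the comparison in the middle branch is reconstructed from the comments (a recent last shipment means the lag is just ageOfLastShippedOp).

```java
/**
 * Sketch of the replication-lag formula described above (source side).
 * Times are milliseconds; names are hypothetical.
 */
public class ReplicationLagSketch {

  public static long lag(int sizeOfLogQueue, long ageOfLastShippedOp,
      long timestampOfLastShippedOp, long now) {
    long sinceLastShip = now - timestampOfLastShippedOp;
    if (sizeOfLogQueue != 0) {
      // Queue not empty: err on the large side.
      return Math.max(ageOfLastShippedOp, sinceLastShip);
    } else if (sinceLastShip < 2 * ageOfLastShippedOp) {
      // Last shipment happened recently.
      return ageOfLastShippedOp;
    } else {
      // Last shipment was long ago; no real lag now.
      return 0;
    }
  }

  public static void main(String[] args) {
    System.out.println(lag(2, 14, 970, 1000)); // queue backed up: max(14, 30) = 30
    System.out.println(lag(0, 10, 995, 1000)); // shipped recently: lag = 10
    System.out.println(lag(0, 10, 0, 1000));   // shipped long ago: lag = 0
  }
}
```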
 External will look something like:
 {code:title=status 'replication'|borderStyle=solid}
 hbase(main):001:0> status 'replication'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):002:0> status 'replication','source'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','sink'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     

[jira] [Commented] (HBASE-11583) Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests.

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073774#comment-14073774
 ] 

Hadoop QA commented on HBASE-11583:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657685/HBASE-11583_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12657685

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestCacheConfig

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10181//console

This message is automatically generated.

 Refactoring out the configuration changes for enabling VisibilityLabels in 
 the unit tests.
 --

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Attachments: HBASE-11583.patch, HBASE-11583_v2.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels can be made 
 easier by refactoring this code out to a single place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073780#comment-14073780
 ] 

Enis Soztutar commented on HBASE-11586:
---

+1 for branch-1+. 

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.
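The failure mode described above, a bounded sample buffer that is offered to but never drained, can be shown with a standalone sketch. This is plain Java, not HBase code, and the class and method names are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Standalone illustration (not HBase code): offer() on a bounded queue that
// nobody drains succeeds only until the queue reaches capacity, after which
// every further sample is silently dropped.
public class UndrainedQueueSketch {
    // Offer `offers` samples into a queue of the given capacity and report
    // how many were actually accepted.
    static int countAccepted(int capacity, int offers) {
        ArrayBlockingQueue<Long> samples = new ArrayBlockingQueue<>(capacity);
        int accepted = 0;
        for (long latencyNanos = 0; latencyNanos < offers; latencyNanos++) {
            if (samples.offer(latencyNanos)) {
                accepted++;
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        // Only the first 3 of 10 offers land; the remaining samples are lost.
        System.out.println(countAccepted(3, 10));
    }
}
```

Once the buffer is full, every System.nanoTime() measurement fed to offer() is wasted work, which is why the issue proposes either draining the queues into metrics or removing the code.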





[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retrieve the replication metrics and show replication lag

2014-07-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073785#comment-14073785
 ] 

Andrew Purtell commented on HBASE-9531:
---

bq. Should we not carry the metrics as numerics inside ClusterStatus (probably 
together with an object wrapper like a PB version of ReplicationLoad). The 
string conversion can be done at the last step by the shell script.

Sounds good to me too

 a command line (hbase shell) interface to retrieve the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retrieve 
 replication metrics such as ageOfLastShippedOp, timeStampsOfLastShippedOp, 
 sizeOfLogQueue, ageOfLastAppliedOp, and timeStampsOfLastAppliedOp, and also 
 to provide a point-in-time view of the replication lag (source only).
 I understand that hbase is using Hadoop 
 metrics (http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info. This jira is to serve as a lightweight client 
 interface, compared to a complete (certainly better, but heavier) GUI 
 monitoring package. I made the code work on 0.94.9, and would like to use 
 this jira to get opinions on whether the feature is valuable to other 
 users/workshops. If so, I will build a trunk patch. 
 All inputs are greatly appreciated. Thank you!
 The overall design is to reuse the existing logic which supports the hbase 
 shell command 'status', and to introduce a new module, called 
 ReplicationLoad. In HRegionServer.buildServerLoad(), use the local 
 replication service objects to get their loads, which can be wrapped in a 
 ReplicationLoad object and then simply passed to the ServerLoad. In 
 ReplicationSourceMetrics and ReplicationSinkMetrics, a few getters and 
 setters will be created, and Replication will be asked to build a 
 ReplicationLoad. (Many thanks to Jean-Daniel for his kind suggestions on 
 the dev email list.)
 The replication lag will be calculated for the source only, using this 
 formula: 
 {code:title=Replication lag|borderStyle=solid}
 if sizeOfLogQueue != 0 then
   lag = max(ageOfLastShippedOp, (current time - timeStampsOfLastShippedOp)) // err on the large side
 else if (current time - timeStampsOfLastShippedOp) < 2 * ageOfLastShippedOp then
   lag = ageOfLastShippedOp // last shipment happened recently
 else
   lag = 0 // last shipment may have happened last night, so no real lag although ageOfLastShippedOp is non-zero
 {code}
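Assuming the pseudocode above is the intended formula (with the HTML-stripped comparison read as "less than twice the age"), a direct transcription looks like this. The method and parameter names are illustrative, not the HBase API:

```java
// Direct transcription of the lag formula described above; method and
// parameter names are illustrative, not part of the HBase API.
public class ReplicationLagSketch {
    static long lag(long sizeOfLogQueue, long ageOfLastShippedOp,
                    long now, long timeStampOfLastShippedOp) {
        long sinceLastShipped = now - timeStampOfLastShippedOp;
        if (sizeOfLogQueue != 0) {
            // Queue not empty: err on the large side.
            return Math.max(ageOfLastShippedOp, sinceLastShipped);
        } else if (sinceLastShipped < 2 * ageOfLastShippedOp) {
            // Last shipment happened recently, so the age is still meaningful.
            return ageOfLastShippedOp;
        } else {
            // Last shipment was long ago; no real lag even though
            // ageOfLastShippedOp is non-zero.
            return 0;
        }
    }
}
```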
 The external output will look something like:
 {code:title=status 'replication'|borderStyle=solid}
 hbase(main):001:0> status 'replication'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):002:0> status 'replication','source'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','sink'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 

[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retrieve the replication metrics and show replication lag

2014-07-24 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073817#comment-14073817
 ] 

Demai Ni commented on HBASE-9531:
-

[~enis], good point. I will provide a revised patch accordingly. Demai

 a command line (hbase shell) interface to retrieve the replication metrics 
 and show replication lag
 ---

 Key: HBASE-9531
 URL: https://issues.apache.org/jira/browse/HBASE-9531
 Project: HBase
  Issue Type: New Feature
  Components: Replication
Affects Versions: 0.99.0
Reporter: Demai Ni
Assignee: Demai Ni
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
 HBASE-9531-master-v1.patch, HBASE-9531-trunk-v0.patch, 
 HBASE-9531-trunk-v0.patch


 This jira is to provide a command line (hbase shell) interface to retrieve 
 replication metrics such as ageOfLastShippedOp, timeStampsOfLastShippedOp, 
 sizeOfLogQueue, ageOfLastAppliedOp, and timeStampsOfLastAppliedOp, and also 
 to provide a point-in-time view of the replication lag (source only).
 I understand that hbase is using Hadoop 
 metrics (http://hbase.apache.org/metrics.html), which is a common way to 
 monitor metric info. This jira is to serve as a lightweight client 
 interface, compared to a complete (certainly better, but heavier) GUI 
 monitoring package. I made the code work on 0.94.9, and would like to use 
 this jira to get opinions on whether the feature is valuable to other 
 users/workshops. If so, I will build a trunk patch. 
 All inputs are greatly appreciated. Thank you!
 The overall design is to reuse the existing logic which supports the hbase 
 shell command 'status', and to introduce a new module, called 
 ReplicationLoad. In HRegionServer.buildServerLoad(), use the local 
 replication service objects to get their loads, which can be wrapped in a 
 ReplicationLoad object and then simply passed to the ServerLoad. In 
 ReplicationSourceMetrics and ReplicationSinkMetrics, a few getters and 
 setters will be created, and Replication will be asked to build a 
 ReplicationLoad. (Many thanks to Jean-Daniel for his kind suggestions on 
 the dev email list.)
 The replication lag will be calculated for the source only, using this 
 formula: 
 {code:title=Replication lag|borderStyle=solid}
 if sizeOfLogQueue != 0 then
   lag = max(ageOfLastShippedOp, (current time - timeStampsOfLastShippedOp)) // err on the large side
 else if (current time - timeStampsOfLastShippedOp) < 2 * ageOfLastShippedOp then
   lag = ageOfLastShippedOp // last shipment happened recently
 else
   lag = 0 // last shipment may have happened last night, so no real lag although ageOfLastShippedOp is non-zero
 {code}
 The external output will look something like:
 {code:title=status 'replication'|borderStyle=solid}
 hbase(main):001:0> status 'replication'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):002:0> status 'replication','source'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
     hdtest018.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
     hdtest015.svl.ibm.com:
     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
 timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','sink'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
     hdtest018.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:50:59 PDT 2013
     hdtest015.svl.ibm.com:
     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
 14:48:48 PDT 2013
 hbase(main):003:0> status 'replication','lag'
 version 0.94.9
 3 live servers
     hdtest017.svl.ibm.com: lag = 0
     hdtest018.svl.ibm.com: lag = 14
     

[jira] [Updated] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-24 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-11388:
---

Status: Patch Available  (was: Open)

+1 on the second patch, I'll get a Hadoop QA run for good measure.

 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.3, 0.99.0
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, clusterKey and id in the constructor 
 of the class ReplicationPeer, but the argument order is Configuration, 
 id and clusterKey when invoking the constructor of ReplicationPeer in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
   public ReplicationPeer(Configuration conf, String key, String id) throws
       ReplicationException {
     this.conf = conf;
     this.clusterKey = key;
     this.id = id;
     try {
       this.reloadZkWatcher();
     } catch (IOException e) {
       throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
     }
   }
 {code}
 ReplicationPeersZKImpl#498
 {code}
 ReplicationPeer peer =
     new ReplicationPeer(peerConf, peerId,
         ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}
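The mismatch is easy to miss because both trailing parameters are Strings, so the swapped call compiles cleanly. A minimal stand-in (simplified names, not the real HBase classes) shows the effect:

```java
// Minimal stand-in for the reported bug (simplified, not the real HBase
// classes): because clusterKey and id are both Strings, passing them in the
// wrong order compiles silently and the fields end up swapped.
public class PeerArgOrderSketch {
    static final class Peer {
        final String clusterKey;
        final String id;

        Peer(String clusterKey, String id) { // declared order: key first, then id
            this.clusterKey = clusterKey;
            this.id = id;
        }
    }

    public static void main(String[] args) {
        // Buggy call site passes (id, clusterKey), mirroring getPeer():
        Peer buggy = new Peer("peer1", "zk1:2181:/hbase");
        // Fixed call site matches the declared parameter order:
        Peer fixed = new Peer("zk1:2181:/hbase", "peer1");
        System.out.println("buggy id=" + buggy.id + ", fixed id=" + fixed.id);
    }
}
```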





[jira] [Commented] (HBASE-11579) CopyTable should check endtime value only if != 0

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073825#comment-14073825
 ] 

Hudson commented on HBASE-11579:


SUCCESS: Integrated in HBase-1.0 #69 (See 
[https://builds.apache.org/job/HBase-1.0/69/])
HBASE-11579 CopyTable should check endtime value only if != 0 (Jean-Marc 
Spaggiari) (apurtell: rev e742d88b42974d455839c58c7e604b2c2b99504c)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java


 CopyTable should check endtime value only if != 0
 -

 Key: HBASE-11579
 URL: https://issues.apache.org/jira/browse/HBASE-11579
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.99.0, 0.98.4
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11579-v0-trunk.patch


 CopyTable automatically assigns an endTime if startTime is specified and 
 endTime is not:
 {code}
 if (startTime != 0) {
   scan.setTimeRange(startTime,
       endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
 }
 {code}
 However, we compare startTime against the (still zero) endTime and exit 
 before we get a chance to set the range:
 {code}
   if (startTime > endTime) {
     printUsage("Invalid time range filter: starttime=" + startTime +
         " endtime=" + endTime);
     return false;
   }
 {code}
 So we need to check endTime only if it's != 0.
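A sketch of the proposed fix: treat endTime == 0 as "unset" and skip the comparison. The helper name here is illustrative, not CopyTable's actual method:

```java
// Sketch of the proposed fix (illustrative helper name, not CopyTable's
// actual code): endTime == 0 means "unset" and defaults to LATEST_TIMESTAMP
// later, so it must not be compared against startTime.
public class TimeRangeCheckSketch {
    static boolean isValidTimeRange(long startTime, long endTime) {
        return endTime == 0 || startTime <= endTime;
    }
}
```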





[jira] [Commented] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073829#comment-14073829
 ] 

Hadoop QA commented on HBASE-11388:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12653824/HBASE_11388_trunk_V1.patch
  against trunk revision .
  ATTACHMENT ID: 12653824

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10182//console


 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.99.0, 0.98.3
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, clusterKey and id in the constructor 
 of the class ReplicationPeer, but the argument order is Configuration, 
 id and clusterKey when invoking the constructor of ReplicationPeer in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
   public ReplicationPeer(Configuration conf, String key, String id) throws
       ReplicationException {
     this.conf = conf;
     this.clusterKey = key;
     this.id = id;
     try {
       this.reloadZkWatcher();
     } catch (IOException e) {
       throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
     }
   }
 {code}
 ReplicationPeersZKImpl#498
 {code}
 ReplicationPeer peer =
     new ReplicationPeer(peerConf, peerId,
         ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}





[jira] [Updated] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-24 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-11388:
---

Status: Open  (was: Patch Available)

Well, this needs a rebase. I'm back from vacation now, [~qianxiZhang], so if 
you resubmit a patch this week we could get this in quickly.

 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.3, 0.99.0
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, clusterKey and id in the constructor 
 of the class ReplicationPeer, but the argument order is Configuration, 
 id and clusterKey when invoking the constructor of ReplicationPeer in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
   public ReplicationPeer(Configuration conf, String key, String id) throws
       ReplicationException {
     this.conf = conf;
     this.clusterKey = key;
     this.id = id;
     try {
       this.reloadZkWatcher();
     } catch (IOException e) {
       throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
     }
   }
 {code}
 ReplicationPeersZKImpl#498
 {code}
 ReplicationPeer peer =
     new ReplicationPeer(peerConf, peerId,
         ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}





[jira] [Commented] (HBASE-11409) Add more flexibility for input directory structure to LoadIncrementalHFiles

2014-07-24 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073842#comment-14073842
 ] 

churro morales commented on HBASE-11409:


New patch fixes the bug.

 Add more flexibility for input directory structure to LoadIncrementalHFiles
 ---

 Key: HBASE-11409
 URL: https://issues.apache.org/jira/browse/HBASE-11409
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.20
Reporter: churro morales
 Attachments: HBASE-11409-0.94.patch, HBASE-11409.0.94.v1.patch


 Use case:
 We were trying to combine two very large tables into a single table.  Thus we 
 ran jobs in one datacenter that populated certain column families and another 
 datacenter which populated other column families.  Took a snapshot and 
 exported them to their respective datacenters.  Wanted to simply take the 
 hdfs restored snapshot and use LoadIncremental to merge the data.  
 It would be nice to add support where we could run LoadIncremental on a 
 directory where the depth of store files is something other than two (current 
 behavior).  
 With snapshots it would be nice if you could pass a restored hdfs snapshot's 
 directory and have the tool run.  
 I am attaching a patch where I parameterize the bulkLoad timeout as well as 
 the default store file depth.  





[jira] [Updated] (HBASE-11409) Add more flexibility for input directory structure to LoadIncrementalHFiles

2014-07-24 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-11409:
---

Attachment: HBASE-11409.0.94.v1.patch

 Add more flexibility for input directory structure to LoadIncrementalHFiles
 ---

 Key: HBASE-11409
 URL: https://issues.apache.org/jira/browse/HBASE-11409
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.20
Reporter: churro morales
 Attachments: HBASE-11409-0.94.patch, HBASE-11409.0.94.v1.patch


 Use case:
 We were trying to combine two very large tables into a single table.  Thus we 
 ran jobs in one datacenter that populated certain column families and another 
 datacenter which populated other column families.  Took a snapshot and 
 exported them to their respective datacenters.  Wanted to simply take the 
 hdfs restored snapshot and use LoadIncremental to merge the data.  
 It would be nice to add support where we could run LoadIncremental on a 
 directory where the depth of store files is something other than two (current 
 behavior).  
 With snapshots it would be nice if you could pass a restored hdfs snapshot's 
 directory and have the tool run.  
 I am attaching a patch where I parameterize the bulkLoad timeout as well as 
 the default store file depth.  





[jira] [Commented] (HBASE-11331) [blockcache] lazy block decompression

2014-07-24 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073878#comment-14073878
 ] 

Nick Dimiduk commented on HBASE-11331:
--

bq. How feasible is keeping count of how many times a block has been 
decompressed and, if over a configurable threshold, instead shoving the 
decompressed block back into the block cache in place of the compressed one? 
We already count whether a block has been accessed more than once. Could we 
leverage this fact?

I like it. Do you think that's necessary for this feature, or an acceptable 
follow-on JIRA?

 [blockcache] lazy block decompression
 -

 Key: HBASE-11331
 URL: https://issues.apache.org/jira/browse/HBASE-11331
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11331.00.patch, 
 HBASE-11331LazyBlockDecompressperfcompare.pdf


 Maintaining data in its compressed form in the block cache will greatly 
 increase our effective blockcache size and should show a meaningful 
 improvement in cache hit rates in well-designed applications. The idea here 
 is to lazily 
 decompress/decrypt blocks when they're consumed, rather than as soon as 
 they're pulled off of disk.
 This is related to but less invasive than HBASE-8894.
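The idea can be sketched outside HBase with a toy cache that stores gzip-compressed bytes and inflates them only when a reader consumes the entry. All names here are illustrative and none of this is the actual blockcache API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Toy illustration of lazy decompression (not HBase code): the cache keeps
// the compressed bytes, so the same memory holds more blocks, and inflation
// happens only when a block is actually consumed.
public class LazyBlockCacheSketch {
    private final Map<String, byte[]> cache = new HashMap<>();

    static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    void put(String key, byte[] raw) throws IOException {
        cache.put(key, gzip(raw)); // stored in compressed form
    }

    byte[] get(String key) throws IOException {
        byte[] compressed = cache.get(key);
        if (compressed == null) {
            return null;
        }
        // Inflate lazily, at read time, rather than when the block was cached.
        try (GZIPInputStream in =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }
}
```

The trade-off this sketch makes visible: hot blocks pay a decompression cost on every read, which is exactly the concern the promote-to-L1 discussion in the comments addresses.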





[jira] [Updated] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11586:
---

Attachment: HBASE-11586.patch

Updated patch. I was looking this over for commit and noticed that although 
HFileReadWriteTest wants to use the op count and nanotime accumulators to bench 
operations, those counters are not updated anywhere. grepped over all modules 
to confirm. Change is a continuation of earlier acked work without surprises. 
Going to commit to 0.98+ in a little while unless objection.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Comment Edited] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073880#comment-14073880
 ] 

Andrew Purtell edited comment on HBASE-11586 at 7/25/14 12:12 AM:
--

Updated patch. I was looking this over for commit and noticed that although 
HFileReadWriteTest wants to use the op count and nanotime accumulators to bench 
operations, those counters are not updated anywhere. grepped over all modules 
to confirm. The patch is larger because it now removes HFileReadWriteTest. 
Change is a continuation of earlier acked work without surprises. Going to 
commit to 0.98+ in a little while unless objection.


was (Author: apurtell):
Updated patch. I was looking this over for commit and noticed that although 
HFileReadWriteTest wants to use the op count and nanotime accumulators to bench 
operations, those counters are not updated anywhere. grepped over all modules 
to confirm. Change is a continuation of earlier acked work without surprises. 
Going to commit to 0.98+ in a little while unless objection.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Commented] (HBASE-11331) [blockcache] lazy block decompression

2014-07-24 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073883#comment-14073883
 ] 

Nick Dimiduk commented on HBASE-11331:
--

bq. Or, if hot, move the decompressed and decoded block up into L1?

This sounds like a feature to add to CombinedCache. Can blocks become less 
hot and be demoted back down to a compressed state in L2, or is promotion a 
one-way street? I guess regular block eviction will take care of this naturally.

 [blockcache] lazy block decompression
 -

 Key: HBASE-11331
 URL: https://issues.apache.org/jira/browse/HBASE-11331
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11331.00.patch, 
 HBASE-11331LazyBlockDecompressperfcompare.pdf


 Maintaining data in its compressed form in the block cache will greatly 
 increase our effective blockcache size and should show a meaningful 
 improvement in cache hit rates in well-designed applications. The idea here 
 is to lazily 
 decompress/decrypt blocks when they're consumed, rather than as soon as 
 they're pulled off of disk.
 This is related to but less invasive than HBASE-8894.





[jira] [Comment Edited] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073780#comment-14073780
 ] 

Lars Hofhansl edited comment on HBASE-11586 at 7/25/14 1:15 AM:


\+1 for branch-1+. 


was (Author: enis):
+1 for branch-1+. 

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Assigned] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-11586:
-

Assignee: Lars Hofhansl

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073929#comment-14073929
 ] 

Lars Hofhansl commented on HBASE-11586:
---

Nice. I'll check 0.94 as well.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Commented] (HBASE-11531) RegionStates for regions under region-in-transition znode are not updated on startup

2014-07-24 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073935#comment-14073935
 ] 

Virag Kothari commented on HBASE-11531:
---

The test passed on the cluster.

 RegionStates for regions under region-in-transition znode are not updated on 
 startup
 

 Key: HBASE-11531
 URL: https://issues.apache.org/jira/browse/HBASE-11531
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.99.0
Reporter: Virag Kothari
Assignee: Jimmy Xiang
 Attachments: hbase-11531.patch, hbase-11531_v2.patch, sample.patch


 While testing HBASE-11059, saw that if there are regions under the 
 region-in-transition znode, their states are not updated in META and master 
 memory on startup.





[jira] [Commented] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-24 Thread Qianxi Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073937#comment-14073937
 ] 

Qianxi Zhang commented on HBASE-11388:
--

[~jdcryans] ok, I will do it.

 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.99.0, 0.98.3
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, clusterKey, and id in the constructor 
 of the class ReplicationPeer, but the parameter order is Configuration, 
 id, and clusterKey when invoking the constructor of ReplicationPeer in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
  public ReplicationPeer(Configuration conf, String key, String id) throws ReplicationException {
    this.conf = conf;
    this.clusterKey = key;
    this.id = id;
    try {
      this.reloadZkWatcher();
    } catch (IOException e) {
      throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
    }
  }
 {code}
 ReplicationPeersZKImpl#498
 {code}
  ReplicationPeer peer =
      new ReplicationPeer(peerConf, peerId, ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}
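Because clusterKey and id are both plain Strings, the compiler cannot flag the swapped arguments at the call site. A minimal standalone sketch of the failure mode; the class and method names here are hypothetical stand-ins, not HBase's actual code:

```java
public class SwapDemo {
    // Hypothetical helper: both parameters are Strings, so argument order
    // is enforced only by convention, not by the type system.
    static String describePeer(String clusterKey, String id) {
        return "peer " + id + " @ " + clusterKey;
    }

    public static void main(String[] args) {
        String clusterKey = "zk1:2181:/hbase";
        String peerId = "1";

        // Correct order:
        System.out.println(describePeer(clusterKey, peerId));
        // Swapped order compiles without any warning -- the same shape of
        // bug as in ReplicationPeersZKImpl#getPeer:
        System.out.println(describePeer(peerId, clusterKey));
    }
}
```

Same-typed positional parameters like these are a common source of silent swaps; distinct wrapper types or a builder would let the compiler catch the mistake.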





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073952#comment-14073952
 ] 

Hadoop QA commented on HBASE-11586:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657725/HBASE-11586.patch
  against trunk revision .
  ATTACHMENT ID: 12657725

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10183//console

This message is automatically generated.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never drained. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Updated] (HBASE-11588) RegionServerMetricsWrapperRunnable misused the 'period' parameter

2014-07-24 Thread Victor Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Xu updated HBASE-11588:
--

Attachment: HBASE-11588.patch

Upload a small patch.

 RegionServerMetricsWrapperRunnable misused the 'period' parameter
 -

 Key: HBASE-11588
 URL: https://issues.apache.org/jira/browse/HBASE-11588
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.98.4
Reporter: Victor Xu
Priority: Minor
 Attachments: HBASE-11588.patch


 The 'period' parameter in RegionServerMetricsWrapperRunnable is in 
 milliseconds. When initializing the 'lastRan' variable, the original code 
 misused 'period' as if it were in seconds.
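A minimal sketch of the unit mix-up; the helper names and the sample period value are hypothetical, chosen only to show the effect:

```java
public class PeriodDemo {
    // Buggy initialization: multiplies a value already in milliseconds
    // by 1000, as if 'periodMillis' were in seconds.
    static long lastRanBuggy(long nowMillis, long periodMillis) {
        return nowMillis - periodMillis * 1000;
    }

    // Fixed initialization: 'periodMillis' is already in milliseconds.
    static long lastRanFixed(long nowMillis, long periodMillis) {
        return nowMillis - periodMillis;
    }

    public static void main(String[] args) {
        long now = 1_000_000_000L;   // fixed timestamp for a deterministic demo
        long period = 5000;          // e.g. a 5-second period, in milliseconds
        // The buggy value starts 1000x the period in the past.
        System.out.println(lastRanFixed(now, period) - lastRanBuggy(now, period)); // prints 4995000
    }
}
```

The practical symptom is that the first scheduled run believes far more than one period has elapsed since the metrics were last computed.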




