[jira] [Commented] (HBASE-11438) [Visibility Controller] Support UTF8 character as Visibility Labels

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074114#comment-14074114
 ] 

ramkrishna.s.vasudevan commented on HBASE-11438:


bq. can we use unicode escapes?
I am not sure I follow the question.  We could use unicode escape characters 
as well.  See the test case added in TestVisibilityLabels: it adds unicode 
characters written with unicode escape sequences.
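For illustration (my own snippet, not from the patch): in Java source a unicode escape denotes exactly the same string as the literal character, so a label added via escapes is byte-identical to one typed literally:

```java
public class UnicodeLabelDemo {
    public static void main(String[] args) {
        // "\u79d8\u5bc6" are the unicode escapes for the two CJK
        // characters in the literal below ("secret" in kanji).
        String escaped = "\u79d8\u5bc6";
        String literal = "秘密";
        if (!escaped.equals(literal)) throw new AssertionError("escape mismatch");
        // Both forms are the same two UTF-16 code units.
        if (escaped.length() != 2) throw new AssertionError();
        System.out.println(escaped);
    }
}
```

Either spelling would therefore exercise the same UTF8 label path on the server side.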

 [Visibility Controller] Support UTF8 character as Visibility Labels
 ---

 Key: HBASE-11438
 URL: https://issues.apache.org/jira/browse/HBASE-11438
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.5

 Attachments: HBASE-11438_v1.patch, HBASE-11438_v2.patch


 This is an action item to allow visibility labels to contain UTF8 
 characters.  Also allow the user to use a client-supplied API that lets the 
 visibility labels be specified inside double quotes, so that UTF8 
 characters and the operator characters &, |, ! and the double quote itself 
 can be written with proper escape sequences.  Accumulo provides a similar 
 client-side API.
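As a sketch of the quoting idea described above (a hypothetical helper, not the actual HBase or Accumulo API): wrapping a label in double quotes and backslash-escaping embedded quotes and backslashes lets operator characters appear literally inside a label:

```java
public class LabelQuoteDemo {
    // Hypothetical helper: wrap a label in double quotes, escaping
    // backslashes and embedded quotes so &, |, ! lose their operator
    // meaning inside the quoted label.
    static String quoteLabel(String label) {
        return "\"" + label.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }

    public static void main(String[] args) {
        String quoted = quoteLabel("a|b&c");
        if (!quoted.equals("\"a|b&c\"")) throw new AssertionError();
        // An embedded double quote is escaped rather than ending the label.
        String withQuote = quoteLabel("x\"y");
        if (!withQuote.equals("\"x\\\"y\"")) throw new AssertionError();
        System.out.println(quoted);
    }
}
```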



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11583) Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests.

2014-07-25 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074146#comment-14074146
 ] 

Srikanth Srungarapu commented on HBASE-11583:
-

The failed test runs successfully on my machine. The output is:
{code}
Picked up JAVA_TOOL_OPTIONS: -Djava.awt.headless=true
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] HBase
[INFO] HBase - Common
[INFO] HBase - Protocol
[INFO] HBase - Client
[INFO] HBase - Hadoop Compatibility
[INFO] HBase - Hadoop Two Compatibility
[INFO] HBase - Prefix Tree
[INFO] HBase - Server
[INFO] HBase - Testing Util
[INFO] HBase - Thrift
[INFO] HBase - Shell
[INFO] HBase - Integration Tests
[INFO] HBase - Examples
[INFO] HBase - Assembly
[INFO] 
[INFO] Using the builder 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder
 with a thread count of 1
[INFO] 
[INFO] 
[INFO] Building HBase 0.98.5-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.4:process (default) @ hbase ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.5.2:findbugs (default) @ hbase ---
[INFO] 
[INFO] 
[INFO] Building HBase - Common 0.98.5-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (generate) @ hbase-common ---
[INFO] Executing tasks

main:
 [exec] ~/gitspace/upstream/hbase/hbase-common 
~/gitspace/upstream/hbase/hbase-common
 [exec] ~/gitspace/upstream/hbase/hbase-common
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-source (versionInfo-source) @ 
hbase-common ---
[INFO] Source directory: 
/Users/ssrungarapu/gitspace/upstream/hbase/hbase-common/target/generated-sources/java
 added.
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.4:process (default) @ hbase-common 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hbase-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (default) @ hbase-common ---
[INFO] Executing tasks

main:
 [exec] tar: Error opening archive: Failed to open 
'hadoop-snappy-nativelibs.tar'
 [exec] Result: 1
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hbase-common 
---
[INFO] Compiling 2 source files to 
/Users/ssrungarapu/gitspace/upstream/hbase/hbase-common/target/classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:build-classpath 
(create-mrapp-generated-classpath) @ hbase-common ---
[INFO] Skipped writing classpath file 
'/Users/ssrungarapu/gitspace/upstream/hbase/hbase-common/target/test-classes/mrapp-generated-classpath'.
  No changes found.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hbase-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/Users/ssrungarapu/gitspace/upstream/hbase/hbase-common/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hbase-common ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (default-test) @ 
hbase-common ---
[INFO] Surefire report directory: 
/Users/ssrungarapu/gitspace/upstream/hbase/hbase-common/target/surefire-reports
[INFO] Using configured provider org.apache.maven.surefire.junit4.JUnit4Provider

---
 T E S T S
---


Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test 
(secondPartTestsExecution) @ hbase-common ---
[INFO] Tests are skipped.
[INFO] 
[INFO] 
[INFO] Building HBase - Protocol 0.98.5-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.4:process (default) @ hbase-protocol 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hbase-protocol ---
[INFO] Using 'UTF-8' encoding 
{code}

[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074155#comment-14074155
 ] 

Hudson commented on HBASE-11586:


SUCCESS: Integrated in HBase-0.98 #419 (See 
[https://builds.apache.org/job/HBase-0.98/419/])
HBASE-11586 HFile's HDFS op latency sampling code is not used (apurtell: rev 
27eef5f73960944f6cbaa3894fd58d5e5b3bfc28)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java


 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.
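As a minimal illustration (my own toy code, not the HFile internals) of the failure mode described above: an ArrayBlockingQueue with no consumer silently drops every sample once full, and a periodic drain is what a metrics hookup would need:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;

public class SampleBufferDemo {
    public static void main(String[] args) {
        ArrayBlockingQueue<Long> samples = new ArrayBlockingQueue<>(4);
        // Without a consumer, offer() starts failing once the buffer fills,
        // silently dropping every later latency sample.
        int accepted = 0;
        for (long i = 0; i < 10; i++) {
            if (samples.offer(i)) accepted++;
        }
        if (accepted != 4) throw new AssertionError();
        // The hookup the issue asks for: a metrics thread drains the queue
        // periodically so new samples keep being accepted.
        List<Long> drained = new ArrayList<>();
        samples.drainTo(drained);
        if (drained.size() != 4 || !samples.isEmpty()) throw new AssertionError();
        System.out.println("drained " + drained.size() + " samples");
    }
}
```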



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074159#comment-14074159
 ] 

Hadoop QA commented on HBASE-11384:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657783/HBASE-11384_6.patch
  against trunk revision .
  ATTACHMENT ID: 12657783

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+PrivilegedExceptionAction<VisibilityLabelsResponse> action = new 
PrivilegedExceptionAction<VisibilityLabelsResponse>() {
+PrivilegedExceptionAction<VisibilityLabelsResponse> action = new 
PrivilegedExceptionAction<VisibilityLabelsResponse>() {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10185//console

This message is automatically generated.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch


 As discussed, every mutation (Put/Delete) with visibility expressions 
 should validate that the user has authorization for every label in the 
 expression, and fail the mutation if not.
 Suppose User A is associated with labels A, B and C, and the put has the 
 visibility expression A&D.  The mutation should fail because D is not 
 associated with User A.
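The covering check in the example reduces to a set-containment test. A toy sketch (my own code, not the VisibilityController implementation):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CoveringAuthDemo {
    // Hypothetical check: every label appearing in the expression must be
    // among the user's authorizations, otherwise the mutation is rejected.
    static boolean covers(Set<String> userAuths, Set<String> exprLabels) {
        return userAuths.containsAll(exprLabels);
    }

    public static void main(String[] args) {
        Set<String> auths = new HashSet<>(Arrays.asList("A", "B", "C"));
        // Expression A&B: both labels covered, mutation allowed.
        if (!covers(auths, new HashSet<>(Arrays.asList("A", "B")))) throw new AssertionError();
        // Expression A&D: D is not associated with the user, so reject.
        if (covers(auths, new HashSet<>(Arrays.asList("A", "D")))) throw new AssertionError();
        System.out.println("ok");
    }
}
```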





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074174#comment-14074174
 ] 

Hudson commented on HBASE-11586:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #398 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/398/])
HBASE-11586 HFile's HDFS op latency sampling code is not used (apurtell: rev 
27eef5f73960944f6cbaa3894fd58d5e5b3bfc28)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java


 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch







[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2014-07-25 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074175#comment-14074175
 ] 

Mikhail Antonov commented on HBASE-11544:
-

bq.  Anything from 1k to 128k should be good as chunk size. 64k seems fine.

bq. or simply never see a cell if it is too big to fit into this size.

[~lhofhansl] since the max cell size is now 10MB IIRC, it sounds like a 
robust solution should be able to split the cell and pass a portion of the 
byte array representing the cell value?

Thinking about [~enis]'s note on the mvcc readpoint: yes, sending partial 
rows might be a much bigger change (though controlling throttling at the 
byte level would definitely be more efficient than at the row level).

To address the OOM issue, as a first cut maybe we can have two thresholds on 
the HRS side: one for the total amount of memory (a % of HRS heap size?) 
that scanner buffers may take across all clients, and a second threshold for 
the max cache size of an individual scanner?

The first threshold would be used to reject new scanners if the HRS feels it 
is about to OOM because too many clients try to connect, and the second 
would prevent one client from eating up all memory by opening scanners over 
big rows/cells. Thoughts?

[~stack] - could you give some details on the avg/max size of the rows/cells 
in your tests, just to estimate what the default values of those thresholds 
might be?
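The two-threshold idea above can be sketched as an admission check (hypothetical names and limits, my own sketch, not HRS code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class ScannerAdmissionDemo {
    // Hypothetical thresholds mirroring the two limits proposed above:
    static final long GLOBAL_LIMIT = 1_000;    // total scanner-buffer bytes per server
    static final long PER_SCANNER_LIMIT = 400; // bytes any single scanner may hold
    static final AtomicLong globalUsed = new AtomicLong();

    // Admit a new scanner buffer only if both limits would still hold.
    static boolean tryReserve(long bytes) {
        if (bytes > PER_SCANNER_LIMIT) return false; // one client asking too much
        long prev;
        do {
            prev = globalUsed.get();
            if (prev + bytes > GLOBAL_LIMIT) return false; // server near OOM
        } while (!globalUsed.compareAndSet(prev, prev + bytes));
        return true;
    }

    public static void main(String[] args) {
        if (!tryReserve(400)) throw new AssertionError();
        if (tryReserve(500)) throw new AssertionError();  // over per-scanner limit
        if (!tryReserve(400)) throw new AssertionError();
        if (!tryReserve(200)) throw new AssertionError(); // global now at 1000
        if (tryReserve(1)) throw new AssertionError();    // global limit reached
        System.out.println("ok");
    }
}
```

A matching release on scanner close would decrement `globalUsed`; the compare-and-set loop keeps the global accounting correct under concurrent scanner opens.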



 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
  Labels: noob

 Running some tests, I set hbase.client.scanner.caching=1000.  The dataset 
 has large cells.  I kept OOMEing.
 Server-side, we should measure how much we have accumulated and return 
 whatever we have gathered to the client once we pass a certain size 
 threshold, rather than keep accumulating till we OOME.
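A sketch of the server-side loop the description asks for (my own toy code with a hypothetical `nextBatch`, not the scanner implementation): the batch stops at whichever cap is hit first, the row-count cap (caching) or a byte-size cap:

```java
import java.util.ArrayList;
import java.util.List;

public class SizeBoundedScanDemo {
    // Stop filling the batch once either the count cap (caching) or the
    // byte cap is reached, so huge cells cannot force an OOME.
    static List<byte[]> nextBatch(List<byte[]> source, int caching, long maxBytes) {
        List<byte[]> batch = new ArrayList<>();
        long bytes = 0;
        for (byte[] cell : source) {
            if (batch.size() >= caching || bytes + cell.length > maxBytes) break;
            batch.add(cell);
            bytes += cell.length;
        }
        return batch;
    }

    public static void main(String[] args) {
        List<byte[]> cells = new ArrayList<>();
        for (int i = 0; i < 1000; i++) cells.add(new byte[100]); // 100-byte cells
        // caching=1000 alone would gather everything; the 450-byte cap
        // returns after four cells instead.
        List<byte[]> batch = nextBatch(cells, 1000, 450);
        if (batch.size() != 4) throw new AssertionError();
        System.out.println(batch.size());
    }
}
```

The client then issues the next scan RPC for the remainder, so the caching setting becomes an upper bound rather than a dogged target.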





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074184#comment-14074184
 ] 

Hudson commented on HBASE-11586:


SUCCESS: Integrated in HBase-1.0 #70 (See 
[https://builds.apache.org/job/HBase-1.0/70/])
HBASE-11586 HFile's HDFS op latency sampling code is not used (apurtell: rev 
13643807adffd7f5a798251594f275bc318d00eb)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java


 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch







[jira] [Commented] (HBASE-11536) Puts of region location to Meta may be out of order which causes inconsistent of region location

2014-07-25 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074201#comment-14074201
 ] 

Liu Shaohui commented on HBASE-11536:
-

[~stack]
Agreed that the one time it fails, it'd be a high-profile situation and we'd 
fix it.

Using versionOfOfflineNode has a potential risk: if we migrate an existing 
hbase cluster from one zk cluster to a new one, this method will not work.

After a discussion with [~fenghh], we agree with [~jxiang]'s suggestion: use 
the regionserver timestamp as the version. The default timeout of a meta 
update is 100s, which is far larger than the time skew between 
regionservers. And we have an alert if the time skew between an hbase server 
and the ntp server is larger than 100ms.

In the long term, I think updates to meta should be done only in one 
process, e.g. HMaster, which would decide which updates are illegal 
according to its state machine.

Another related problem is the META region location (for trunk). It's 
possible that the updates of META region locations are out of order when the 
opening of the meta region times out.

Looking forward to your suggestions. Thanks [~stack]
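The effect of using a monotonically increasing version (assign id or regionserver timestamp) as the meta cell timestamp can be simulated with a toy last-write-wins cell (my own sketch, not HBase code):

```java
import java.util.HashMap;
import java.util.Map;

public class MetaOrderingDemo {
    // Simulated versioned meta cell: the value with the highest timestamp
    // wins, as for an HBase cell read at VERSIONS=1.
    static final Map<String, Long> ts = new HashMap<>();
    static final Map<String, String> value = new HashMap<>();

    static void put(String region, long assignId, String location) {
        Long cur = ts.get(region);
        if (cur == null || assignId >= cur) { // an older assign id cannot overwrite
            ts.put(region, assignId);
            value.put(region, location);
        }
    }

    public static void main(String[] args) {
        // With the assign id as the cell timestamp, the delayed retry of
        // assignment 1 (call A) arriving after put B (assignment 2) can no
        // longer clobber the newer location.
        put("r1", 1, "10.237.12.15:11600"); // first put, assignment 1
        put("r1", 2, "10.237.12.13:11600"); // put B, assignment 2
        put("r1", 1, "10.237.12.15:11600"); // late retry of assignment 1 (call A)
        if (!"10.237.12.13:11600".equals(value.get("r1"))) throw new AssertionError();
        System.out.println(value.get("r1"));
    }
}
```

In the failing scenario from the description, all three puts used wall-clock timestamps, so the late retry carried the newest timestamp and won incorrectly.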



 Puts of region location to Meta may be out of order which causes inconsistent 
 of region location
 

 Key: HBASE-11536
 URL: https://issues.apache.org/jira/browse/HBASE-11536
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: Liu Shaohui
Priority: Critical
 Attachments: 10.237.12.13.log, 10.237.12.15.log, 
 HBASE-11536-0.94-v1.diff


 In a production hbase cluster, we found an inconsistency in region location 
 in the meta table. Region cdfa2ed711bbdf054d9733a92fd43eb5 was onlined on 
 regionserver 10.237.12.13:11600 but the region location in the meta table 
 was 10.237.12.15:11600.
 This is because of out-of-order puts to the meta table:
 # HMaster tries to assign the region to 10.237.12.15:11600.
 # RegionServer 10.237.12.15:11600: while opening the region, the put of the 
 region location (10.237.12.15:11600) to the meta table times out (60s) and 
 the htable retries a second time. (The regionserver serving meta has 
 received the put request; the timeout is because there is a bad disk in 
 this regionserver and hlog sync is very slow.)
 During the retry in the htable, the OpenRegionHandler times out (100s) and 
 the PostOpenDeployTasksThread is interrupted. Though the htable is 
 eventually closed in MetaEditor, the shared connection the htable used is 
 not closed, and the put call for the meta table is still in flight on that 
 connection. Call this in-flight put call A.
 # RegionServer 10.237.12.15:11600: because the OpenRegionHandler timed out, 
 it marks the assign state of this region as FAILED_OPEN.
 # HMaster watches this FAILED_OPEN event and assigns the region to another 
 regionserver: 10.237.12.13:11600.
 # RegionServer 10.237.12.13:11600: this regionserver opens the region 
 successfully. Call the put of the region location (10.237.12.13:11600) to 
 the meta table from this regionserver call B.
 There is no ordering guarantee between calls A and B. If call A is 
 processed after call B by the regionserver serving the meta region, the 
 region location in the meta table will be wrong.
 A raw scan of the meta table shows:
 {code}
 scan '.META.', {RAW => true, LIMIT => 1, VERSIONS => 10, STARTROW => 
 'xxx.adfa2ed711bbdf054d9733a92fd43eb5.'} 
 {code}
 {quote}
 xxx.adfa2ed711bbdf054d9733a92fd43eb5. column=info:server, 
 timestamp=1404885460553 (Wed Jul 09 13:57:40 +0800 2014), 
 value=10.237.12.15:11600 <-- retry put from 10.237.12.15
 xxx.adfa2ed711bbdf054d9733a92fd43eb5. column=info:server, 
 timestamp=1404885456731 (Wed Jul 09 13:57:36 +0800 2014), 
 value=10.237.12.13:11600 <-- put from 10.237.12.13
 
 xxx.adfa2ed711bbdf054d9733a92fd43eb5. column=info:server, 
 timestamp=1404885353122 (Wed Jul 09 13:55:53 +0800 2014), 
 value=10.237.12.15:11600 <-- first put from 10.237.12.15
 {quote}
 The related hbase logs are attached to this issue; discussion is welcome.
 Since there is no ordering guarantee for puts from different htables, one 
 solution is to give each assignment of a region an increasing id and use 
 that id as the timestamp of the region-location put to the meta table. The 
 region location with the larger assign id will then be seen by hbase 
 clients.





[jira] [Updated] (HBASE-11536) Puts of region location to Meta may be out of order which causes inconsistent of region location

2014-07-25 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-11536:


Attachment: HBASE-11536-0.94-v1.diff

A patch for 0.94 using the regionserver timestamp as the version of meta put.

 Puts of region location to Meta may be out of order which causes inconsistent 
 of region location
 

 Key: HBASE-11536
 URL: https://issues.apache.org/jira/browse/HBASE-11536
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: Liu Shaohui
Priority: Critical
 Attachments: 10.237.12.13.log, 10.237.12.15.log, 
 HBASE-11536-0.94-v1.diff







[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074210#comment-14074210
 ] 

Hudson commented on HBASE-11586:


FAILURE: Integrated in HBase-TRUNK #5342 (See 
[https://builds.apache.org/job/HBase-TRUNK/5342/])
HBASE-11586 HFile's HDFS op latency sampling code is not used (apurtell: rev 
531eee003182647e9f944a5cbcb6117555c39e44)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java


 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch







[jira] [Commented] (HBASE-4624) Remove and convert @deprecated RemoteExceptionHandler.decodeRemoteException calls

2014-07-25 Thread Talat UYARER (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074214#comment-14074214
 ] 

Talat UYARER commented on HBASE-4624:
-

Hi [~jmhsieh] 

When I try the unpatched branch in my Eclipse, it gives the same error. I 
guess this error is not related to this patch. 

 Remove and convert @deprecated RemoteExceptionHandler.decodeRemoteException 
 calls
 -

 Key: HBASE-4624
 URL: https://issues.apache.org/jira/browse/HBASE-4624
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.99.0, 0.98.4, 2.0.0
Reporter: Jonathan Hsieh
Assignee: Talat UYARER
  Labels: noob
 Attachments: HBASE-4624.patch


 Moving issue w/ no recent movement out of 0.95





[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074243#comment-14074243
 ] 

Anoop Sam John commented on HBASE-11384:


VC#postMutationBeforeWAL()
The covering-auth check should be done here as well (the Append/Increment 
case).

VisibilityLabelsManager
EMPTY_INT_LIST - not used.
getAuthsAsOrdinals() can return null, so a null check is needed wherever it 
is used.  Currently a null value is treated as "no need to check for 
covering auths"; instead the check should be based on the boolean 
VC.checkAuths.

{code}
<description>
+  This property if enabled will check if the labels in the visibility 
expression are associated
+  with the user issing the mutation
+</description>
{code}
issing - should be issuing 
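The null-safe pattern being asked for can be sketched like this (hypothetical names standing in for getAuthsAsOrdinals() and VC.checkAuths; my own sketch, not the patch):

```java
import java.util.Arrays;
import java.util.List;

public class AuthNullCheckDemo {
    // Hypothetical flag mirroring the proposed VC.checkAuths switch.
    static final boolean CHECK_AUTHS = true;

    // Stand-in for getAuthsAsOrdinals(): may return null for an unknown user.
    static List<Integer> getAuthsAsOrdinals(String user) {
        return "known".equals(user) ? Arrays.asList(1, 2, 3) : null;
    }

    static boolean authorized(String user, List<Integer> required) {
        if (!CHECK_AUTHS) return true;     // feature disabled: skip the check
        List<Integer> auths = getAuthsAsOrdinals(user);
        if (auths == null) return false;   // null-safe: no auths means reject,
                                           // not "skip the covering check"
        return auths.containsAll(required);
    }

    public static void main(String[] args) {
        if (!authorized("known", Arrays.asList(1, 2))) throw new AssertionError();
        if (authorized("unknown", Arrays.asList(1))) throw new AssertionError();
        System.out.println("ok");
    }
}
```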



 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch







[jira] [Commented] (HBASE-11564) Improve cancellation management in the rpc layer

2014-07-25 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074251#comment-14074251
 ] 

Nicolas Liochon commented on HBASE-11564:
-

bq. I might be mis-reading the patch though. 
You're reading it correctly :-)
I was not aware of this test issue. The RetriesExhausted should occur only 
after quite a lot of retries and sleeps. I went for this change because the 
performance was better with it.
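The callback-based cancellation the issue describes (instead of interrupting the calling thread) can be sketched as follows (my own toy code, not the patch or the protobuf API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelCallbackDemo {
    // A cancellable call exposes a flag plus registered callbacks; the
    // in-flight rpc checks the flag instead of relying on thread interrupts.
    static class CancellableCall {
        final AtomicBoolean cancelled = new AtomicBoolean(false);
        final List<Runnable> callbacks = new ArrayList<>();

        void onCancel(Runnable cb) { callbacks.add(cb); }

        void cancel() {
            // compareAndSet makes cancellation idempotent: callbacks fire once.
            if (cancelled.compareAndSet(false, true)) {
                callbacks.forEach(Runnable::run);
            }
        }
    }

    public static void main(String[] args) {
        CancellableCall call = new CancellableCall();
        StringBuilder log = new StringBuilder();
        call.onCancel(() -> log.append("cleanup"));
        call.cancel();
        call.cancel(); // second cancel is a no-op
        if (!call.cancelled.get() || !log.toString().equals("cleanup")) throw new AssertionError();
        System.out.println(log);
    }
}
```

The advantage over interruption is that cancellation composes: replica calls can all register against one parent call without touching thread state.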

 Improve cancellation management in the rpc layer
 

 Key: HBASE-11564
 URL: https://issues.apache.org/jira/browse/HBASE-11564
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, 2.0.0

 Attachments: 11564.v1.patch, 11564.v2.patch


 The current client code depends on interrupting the thread for canceling a 
 request. It's actually possible to rely on a callback in protobuf.
 The patch includes as well various performance improvements in replica 
 management. 
 On a version before HBASE-11492 the perf was ~35% better. I will redo the 
 test with the last version.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11339) HBase MOB

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074262#comment-14074262
 ] 

ramkrishna.s.vasudevan commented on HBASE-11339:


Bulk loading mob files, rather than using table.put() in the sweep tool, is 
what was discussed internally.  Using table.put again writes the data to the 
memstore and internally triggers flushes, thus affecting the write path of the 
system.
Bulk loading mob is possible and it should work fine considering HBASE-6630 is 
available, where the bulk loaded files are also assigned a sequence number and 
the same sequence number can be used to resolve a conflict in case the 
KeyValueHeap finds two cells with the same row and ts but different values.  
In the case of the sweep tool, one thing to note is that we are trying to 
create a new store file for the same row, cf, cq, ts cell but update it with a 
new value. Here the new value is the new path that we are generating after the 
sweep tool merges some of the mob data into one single file.
So consider in our case row1, cf, c1, ts1 = path1.  The above data is written 
in StoreFile 1.
The updated path is path2, so we try to bulk load that new info into a new 
store file: row1, cf, c1, ts1 = path2.  Now the HFile containing the new value 
is bulk loaded into the system and we try to scan for row1.
What we would expect is to get the cell with path2 as the value, and that 
should come from the bulk loaded file.
*Does this happen - Yes in case of 0.96 - No in case of 0.98+*.
In 0.96 case the compacted file will have kvs with mvcc as 0 if the kvs are 
smaller than the smallest read point. So in case where a scanner is opened 
after a set of files have been compacted all the kvs will have mvcc = 0 in it.
In 0.98+ that is not the case, because 
{code}
long oldestHFileTimeStampToKeepMVCC = System.currentTimeMillis() - 
  (1000L * 60 * 60 * 24 * this.keepSeqIdPeriod);  

for (StoreFile file : filesToCompact) {
  if (allFiles && (file.getModificationTimeStamp() < oldestHFileTimeStampToKeepMVCC)) {
    // when isAllFiles is true, all files are compacted so we can calculate the smallest 
    // MVCC value to keep
    if (fd.minSeqIdToKeep < file.getMaxMemstoreTS()) {
      fd.minSeqIdToKeep = file.getMaxMemstoreTS();
    }
  }
}
{code}
And so the performCompaction()
{code}
KeyValue kv = KeyValueUtil.ensureKeyValue(c);
if (cleanSeqId && kv.getSequenceId() <= smallestReadPoint) {
  kv.setSequenceId(0);
}
{code}
is not able to set the seqId to 0, as for at least 5 days (keepSeqIdPeriod) we 
expect the value to be retained. 
Remember that in the above case we are assigning seq numbers to bulk loaded 
files also, and when the scanner starts, the bulk loaded file has the highest 
seq id; that is ensured by using HFileOutputFormat2, which writes 
{code}
w.appendFileInfo(StoreFile.BULKLOAD_TIME_KEY,
  Bytes.toBytes(System.currentTimeMillis()));
{code}
So on opening the reader for this bulk loaded store file we are able to get the 
sequence id.
{code}
if (isBulkLoadResult()) {
  // generate the sequenceId from the fileName
  // fileName is of the form <randomName>_SeqId_<id-when-loaded>_
  String fileName = this.getPath().getName();
  int startPos = fileName.indexOf("SeqId_");
  if (startPos != -1) {
    this.sequenceid = Long.parseLong(fileName.substring(startPos + 6,
        fileName.indexOf('_', startPos + 6)));
    // Handle reference files as done above.
    if (fileInfo.isTopReference()) {
      this.sequenceid += 1;
    }
  }
}
this.reader.setSequenceID(this.sequenceid);
{code}
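The parsing logic in the snippet above can be exercised standalone (a hedged sketch: the file-name layout `<randomName>_SeqId_<id-when-loaded>_` is taken from the comment above, but this helper is illustrative, not StoreFile itself):

```java
class BulkLoadSeqId {
    // Returns the embedded sequence id, or 0 if the name carries no "SeqId_"
    // marker (mirroring the default sequenceid for non-bulk-loaded files).
    static long parseSequenceId(String fileName) {
        int startPos = fileName.indexOf("SeqId_");
        if (startPos == -1) {
            return 0L;
        }
        int from = startPos + "SeqId_".length();
        // the id runs from after "SeqId_" to the next underscore
        return Long.parseLong(fileName.substring(from, fileName.indexOf('_', from)));
    }
}
```

For example, `parseSequenceId("abc123_SeqId_42_1406300000000")` yields 42.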
Now when the scanner tries to read from the above two files, which have the 
same cell for row1, cf, c1, ts1 but with path1 and path2 as the values, the 
mvcc in the compacted store file that has path1 is a non-zero positive value in 
0.98+ (and 0 in the 0.96 case), while the mvcc for the KV in the store file 
generated by bulk load will be 0 (in both 0.98+ and 0.96).
In KeyValueHeap.java
{code}
public int compare(KeyValueScanner left, KeyValueScanner right) {
  int comparison = compare(left.peek(), right.peek());
  if (comparison != 0) {
    return comparison;
  } else {
    // Since both the keys are exactly the same, we break the tie in favor
    // of the key which came latest.
    long leftSequenceID = left.getSequenceID();
    long rightSequenceID = right.getSequenceID();
    if (leftSequenceID > rightSequenceID) {
      return -1;
    } else if (leftSequenceID < rightSequenceID) {
      return 1;
    } else {
      return 0;
    }
  }
}
{code}
In 0.96 when the scanner tries to compare the different StoreFileScanner to 
retrieve from which file the scan has to happen, the if 
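The tie-break at the heart of that comparison can be isolated as follows (an illustrative sketch, not the actual KeyValueHeap code: for equal keys, the scanner with the higher sequence id sorts first, which is why the bulk loaded file is expected to win):

```java
class ScannerOrder {
    // Mirrors the tail of the comparator above: equal keys break the tie
    // in favor of the higher sequence id, i.e. the newer data.
    static int tieBreak(long leftSequenceId, long rightSequenceId) {
        if (leftSequenceId > rightSequenceId) {
            return -1;  // left sorts first: it carries the later write
        } else if (leftSequenceId < rightSequenceId) {
            return 1;
        }
        return 0;       // indistinguishable
    }
}
```

If both files report sequence id 0 (the 0.96 situation described above), the tie-break returns 0 and the ordering falls back to scanner position, which is the ambiguity this comment is about.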

[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074276#comment-14074276
 ] 

ramkrishna.s.vasudevan commented on HBASE-11384:


bq.Here also covering auth check should be done. (Append/Increment case)
Good one.  Seeing the hook I felt it was used in WAL replay.  My bad.  I should 
have checked the actual usage of it.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate that the expression only has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10483) Provide API for retrieving info port when hbase.master.info.port is set to 0

2014-07-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074302#comment-14074302
 ] 

Ted Yu commented on HBASE-10483:


Shaohui:
If you don't have time, I can come up with a new patch. 

 Provide API for retrieving info port when hbase.master.info.port is set to 0
 

 Key: HBASE-10483
 URL: https://issues.apache.org/jira/browse/HBASE-10483
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Liu Shaohui
 Attachments: HBASE-10483-trunk-v1.diff, HBASE-10483-trunk-v2.diff


 When hbase.master.info.port is set to 0, info port is dynamically determined.
 An API should be provided so that client can retrieve the actual info port.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-25 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074310#comment-14074310
 ] 

Nicolas Liochon commented on HBASE-11585:
-

bq. shouldn't we update the start time or at least have another start time based 
on when the warmup has completed?
In my tests it was not an issue: not warming up impacts the 99+ latency 
percentiles a lot, but the mean time (and as such the global time) is not 
really impacted. 

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure the latency, warm-up helps to get repeatable and useful 
 measures.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11589) AccessControlException handling in HBase rpc server and client. AccessControlException should be a not retriable exception

2014-07-25 Thread Kashif J S (JIRA)
Kashif J S created HBASE-11589:
--

 Summary: AccessControlException handling in HBase rpc server and 
client. AccessControlException should be a not retriable exception
 Key: HBASE-11589
 URL: https://issues.apache.org/jira/browse/HBASE-11589
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Affects Versions: 0.98.3
 Environment: SLES 11 SP1
Reporter: Kashif J S


RPC server does not handle the AccessControlException thrown by 
authorizeConnection failure properly and in return sends IOException to the 
HBase client. 
Ultimately the client does retries and gets RetriesExhaustedException but does 
not get any information or stack trace about the AccessControlException.

In short summary, upon inspection of RPCServer.java, it seems 
for the Listener, the Reader read code as below does not handle 
AccessControlException

void doRead(….
…..
…..
try {
count = c.readAndProcess(); // This readAndProcess method throws 
AccessControlException from processOneRpc(byte[] buf) which is not handled ?
  } catch (InterruptedException ieo) {
throw ieo;
  } catch (Exception e) {
LOG.warn(getName() + ": count of bytes read: " + count, e);
count = -1; // so that the (count < 0) block is executed
  }
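One possible shape of a fix, as the summary suggests, is to catch the authorization failure separately so it can be surfaced as a non-retriable error rather than a bare connection close (a hedged sketch with hypothetical names, not the actual RpcServer code):

```java
import java.io.IOException;
import java.security.AccessControlException;

class ReadLoopSketch {
    interface Connection {
        int readAndProcess() throws IOException, InterruptedException;
    }

    // -2 signals "close with an explicit authorization error"; -1 keeps the
    // original generic-failure path; >= 0 is the byte count read.
    static int doRead(Connection c) throws InterruptedException {
        int count;
        try {
            count = c.readAndProcess();
        } catch (InterruptedException ie) {
            throw ie;
        } catch (AccessControlException ace) {
            // Surface the authorization failure to the caller instead of
            // letting it degenerate into an EOFException and client retries.
            count = -2;
        } catch (Exception e) {
            count = -1; // generic failure: the (count < 0) block closes the connection
        }
        return count;
    }
}
```

AccessControlException is unchecked, so it can be caught ahead of the generic Exception handler without changing readAndProcess's declared throws.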

Below is the client logs if authorizeConnection throws AccessControlException:


2014-07-24 19:40:58,768 INFO  [main] 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 of 7 
failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Call to 
host-10-18-40-101/10.18.40.101:6 failed on local exception: 
java.io.EOFException
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3302)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3329)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:605)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:496)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:430)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.IfNode.interpret(IfNode.java:117)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at 
org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:233)
at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:215)
at 

[jira] [Updated] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11384:
---

Attachment: HBASE-11384_7.patch

Addresses the comments.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch, HBASE-11384_7.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate that the expression only has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11384:
---

Status: Patch Available  (was: Open)

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch, HBASE-11384_7.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate that the expression only has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11384:
---

Status: Open  (was: Patch Available)

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch, HBASE-11384_7.patch


 As part of discussions, it is better that every mutation (Put/Delete) with 
 visibility expressions should validate that the expression only has labels for 
 which the user has authorization.  If not, fail the mutation.
 Suppose User A is associated with A, B and C.  The put has a visibility 
 expression A&D. Then fail the mutation as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11590) use a specific ThreadPoolExecutor

2014-07-25 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-11590:
---

 Summary: use a specific ThreadPoolExecutor
 Key: HBASE-11590
 URL: https://issues.apache.org/jira/browse/HBASE-11590
 Project: HBase
  Issue Type: Bug
  Components: Client, Performance
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 1.0.0, 2.0.0


The JDK TPE creates all the threads in the pool. As a consequence, we create 
(by default) 256 threads even if we just need a few.

The attached TPE creates threads only if we have something in the queue.
On a PE test with replica on, it improved the 99 latency percentile by 5%. 

Warning: there are likely some race conditions, but I'm posting it here because 
there may be an implementation available somewhere we can use, or a good 
reason not to do that. So feedback welcome as usual. 
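The eager-creation behaviour being described can be observed with the stock JDK executor (a small demo, not the attached tp.patch: each execute() below corePoolSize adds a new worker even when existing workers sit idle):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class EagerThreadDemo {
    // Submits `tasks` no-op tasks to a pool with corePoolSize 256 and
    // reports how many worker threads the pool created.
    static int poolSizeAfter(int tasks) throws InterruptedException {
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                256, 256, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < tasks; i++) {
            tpe.execute(() -> { });      // trivial task
        }
        // Per the TPE contract, a new worker is added for every task while
        // below corePoolSize, even if earlier workers are already idle.
        int size = tpe.getPoolSize();
        tpe.shutdown();
        tpe.awaitTermination(10, TimeUnit.SECONDS);
        return size;
    }
}
```

So a client that issues only a handful of requests still pays for one thread per request up to the core size, which is the waste the patch targets.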




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11590) use a specific ThreadPoolExecutor

2014-07-25 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11590:


Status: Patch Available  (was: Open)

 use a specific ThreadPoolExecutor
 -

 Key: HBASE-11590
 URL: https://issues.apache.org/jira/browse/HBASE-11590
 Project: HBase
  Issue Type: Bug
  Components: Client, Performance
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 1.0.0, 2.0.0

 Attachments: tp.patch


 The JDK TPE creates all the threads in the pool. As a consequence, we create 
 (by default) 256 threads even if we just need a few.
 The attached TPE creates threads only if we have something in the queue.
 On a PE test with replica on, it improved the 99 latency percentile by 5%. 
 Warning: there are likely some race conditions, but I'm posting it here 
 because there may be an implementation available somewhere we can use, or 
 a good reason not to do that. So feedback welcome as usual. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11590) use a specific ThreadPoolExecutor

2014-07-25 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11590:


Attachment: tp.patch

 use a specific ThreadPoolExecutor
 -

 Key: HBASE-11590
 URL: https://issues.apache.org/jira/browse/HBASE-11590
 Project: HBase
  Issue Type: Bug
  Components: Client, Performance
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 1.0.0, 2.0.0

 Attachments: tp.patch


 The JDK TPE creates all the threads in the pool. As a consequence, we create 
 (by default) 256 threads even if we just need a few.
 The attached TPE creates threads only if we have something in the queue.
 On a PE test with replica on, it improved the 99 latency percentile by 5%. 
 Warning: there are likely some race conditions, but I'm posting it here 
 because there may be an implementation available somewhere we can use, or 
 a good reason not to do that. So feedback welcome as usual. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11438) [Visibility Controller] Support UTF8 character as Visibility Labels

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11438:
---

Status: Patch Available  (was: Open)

 [Visibility Controller] Support UTF8 character as Visibility Labels
 ---

 Key: HBASE-11438
 URL: https://issues.apache.org/jira/browse/HBASE-11438
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.5

 Attachments: HBASE-11438_v1.patch, HBASE-11438_v2.patch, 
 HBASE-11438_v3.patch


 This would be an action item that we would be addressing so that the 
 visibility labels could have UTF8 characters in them.  Also allow the user to 
 use a client supplied API that allows specifying the visibility labels inside 
 double quotes such that UTF8 characters and cases like &, |, ! and double 
 quotes themselves could be specified with a proper escape sequence.  Accumulo 
 too provides one such API on the client side.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11589) AccessControlException handling in HBase rpc server and client. AccessControlException should be a not retriable exception

2014-07-25 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074321#comment-14074321
 ] 

Nicolas Liochon commented on HBASE-11589:
-

Likely an issue in the 1.0 & master branches as well.  Do you plan to submit a 
unit test or a fix, Kashif?

 AccessControlException handling in HBase rpc server and client. 
 AccessControlException should be a not retriable exception
 --

 Key: HBASE-11589
 URL: https://issues.apache.org/jira/browse/HBASE-11589
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Affects Versions: 0.98.3
 Environment: SLES 11 SP1
Reporter: Kashif J S

 RPC server does not handle the AccessControlException thrown by 
 authorizeConnection failure properly and in return sends IOException to the 
 HBase client. 
 Ultimately the client does retries and gets RetriesExhaustedException but 
 does not get any information or stack trace about the 
 AccessControlException.
 In short summary, upon inspection of RPCServer.java, it seems 
 for the Listener, the Reader read code as below does not handle 
 AccessControlException
 void doRead(….
 …..
 …..
 try {
 count = c.readAndProcess(); // This readAndProcess method throws 
 AccessControlException from processOneRpc(byte[] buf) which is not handled ?
   } catch (InterruptedException ieo) {
 throw ieo;
   } catch (Exception e) {
 LOG.warn(getName() + ": count of bytes read: " + count, e);
 count = -1; // so that the (count < 0) block is executed
   }
 Below is the client logs if authorizeConnection throws AccessControlException:
 2014-07-24 19:40:58,768 INFO  [main] 
 client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 of 7 
 failed; no more retrying.
 com.google.protobuf.ServiceException: java.io.IOException: Call to 
 host-10-18-40-101/10.18.40.101:6 failed on local exception: 
 java.io.EOFException
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
 at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3302)
 at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
 at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3329)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:605)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:496)
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:430)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
 at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
 at 
 org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
 at 
 org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
 at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
 at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
 at org.jruby.ast.IfNode.interpret(IfNode.java:117)
 

[jira] [Updated] (HBASE-11438) [Visibility Controller] Support UTF8 character as Visibility Labels

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11438:
---

Status: Open  (was: Patch Available)

 [Visibility Controller] Support UTF8 character as Visibility Labels
 ---

 Key: HBASE-11438
 URL: https://issues.apache.org/jira/browse/HBASE-11438
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.5

 Attachments: HBASE-11438_v1.patch, HBASE-11438_v2.patch, 
 HBASE-11438_v3.patch


 This would be an action item that we would be addressing so that the 
 visibility labels could have UTF8 characters in them.  Also allow the user to 
 use a client supplied API that allows specifying the visibility labels inside 
 double quotes such that UTF8 characters and cases like &, |, ! and double 
 quotes themselves could be specified with a proper escape sequence.  Accumulo 
 too provides one such API on the client side.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11438) [Visibility Controller] Support UTF8 character as Visibility Labels

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11438:
---

Attachment: HBASE-11438_v3.patch

Updated patch that uses unicode escape in the test cases instead of string 
literals.

 [Visibility Controller] Support UTF8 character as Visibility Labels
 ---

 Key: HBASE-11438
 URL: https://issues.apache.org/jira/browse/HBASE-11438
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.5

 Attachments: HBASE-11438_v1.patch, HBASE-11438_v2.patch, 
 HBASE-11438_v3.patch


 This would be an action item that we would be addressing so that the 
 visibility labels could have UTF8 characters in them.  Also allow the user to 
 use a client supplied API that allows specifying the visibility labels inside 
 double quotes such that UTF8 characters and cases like &, |, ! and double 
 quotes themselves could be specified with a proper escape sequence.  Accumulo 
 too provides one such API on the client side.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11589) AccessControlException handling in HBase rpc server and client. AccessControlException should be a not retriable exception

2014-07-25 Thread Priyank Rastogi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyank Rastogi updated HBASE-11589:


Description: 
RPC server does not handle the AccessControlException thrown by 
authorizeConnection failure properly and in return sends IOException to the 
HBase client. 
Ultimately the client does retries and gets RetriesExhaustedException but does 
not get any information or stack trace about the AccessControlException.

In short summary, upon inspection of RPCServer.java, it seems 
for the Listener, the Reader read code as below does not handle 
AccessControlException
{code:title=Bar.java|borderStyle=solid}
void doRead(….
…..
…..
try {
count = c.readAndProcess(); // This readAndProcess method throws 
AccessControlException from processOneRpc(byte[] buf) which is not handled ?
  } catch (InterruptedException ieo) {
throw ieo;
  } catch (Exception e) {
LOG.warn(getName() + ": count of bytes read: " + count, e);
count = -1; // so that the (count < 0) block is executed
  }
{code}

Below is the client logs if authorizeConnection throws AccessControlException:


2014-07-24 19:40:58,768 INFO  [main] 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 of 7 
failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Call to 
host-10-18-40-101/10.18.40.101:6 failed on local exception: 
java.io.EOFException
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3302)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3329)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:605)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:496)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:430)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.IfNode.interpret(IfNode.java:117)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at 
org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:233)
at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:215)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:203)
at 
org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:69)
at 

[jira] [Updated] (HBASE-11589) AccessControlException handling in HBase rpc server and client. AccessControlException should be a not retriable exception

2014-07-25 Thread Priyank Rastogi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyank Rastogi updated HBASE-11589:


Description: 
The RPC server does not properly handle the AccessControlException thrown on an 
authorizeConnection failure; instead it sends a plain IOException to the 
HBase client. 
The client then retries until it gets a RetriesExhaustedException, with no 
information or stack trace pointing to the underlying AccessControlException.

In short: upon inspection of RPCServer.java, it seems that for the Listener, 
the Reader's read code below does not handle 
AccessControlException
{noformat}
void doRead(….
…..
…..
  try {
    count = c.readAndProcess(); // This readAndProcess method throws
        // AccessControlException from processOneRpc(byte[] buf), which is not handled?
  } catch (InterruptedException ieo) {
    throw ieo;
  } catch (Exception e) {
    LOG.warn(getName() + ": count of bytes read: " + count, e);
    count = -1; // so that the (count < 0) block is executed
  }
{noformat}

Below are the client logs when authorizeConnection throws AccessControlException:


2014-07-24 19:40:58,768 INFO  [main] 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 of 7 
failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Call to 
host-10-18-40-101/10.18.40.101:6 failed on local exception: 
java.io.EOFException
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3302)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3329)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:605)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:496)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:430)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.IfNode.interpret(IfNode.java:117)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at 
org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:233)
at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:215)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:203)
at 
org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:69)
at org.jruby.ast.DAsgnNode.interpret(DAsgnNode.java:110)
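The retry loop shown in the logs above could be short-circuited by catching the authorization failure before the generic catch and surfacing it as a non-retriable error. A hedged sketch follows; `AccessControlException` and `NonRetriableException` here are local stand-ins for the Hadoop/HBase classes (in HBase, `DoNotRetryIOException` would play that role), and `readAndProcess()` is simulated:

```java
import java.io.IOException;

// Sketch only, not the actual RpcServer code: illustrates handling the
// authorization failure in its own catch clause, ahead of the generic one,
// so the client sees a non-retriable error instead of a bare EOF/IOException.
public class DoReadSketch {
  static class AccessControlException extends IOException {
    AccessControlException(String msg) { super(msg); }
  }
  // Stand-in for HBase's DoNotRetryIOException: clients seeing this stop retrying.
  static class NonRetriableException extends IOException {
    NonRetriableException(Throwable cause) { super(cause); }
  }

  // Simulates Connection.readAndProcess() failing the authorization check.
  static int readAndProcess() throws AccessControlException {
    throw new AccessControlException("user foo is not authorized");
  }

  static int doRead() throws NonRetriableException {
    int count = 0;
    try {
      count = readAndProcess();
    } catch (AccessControlException ace) {
      // Handle the authorization failure explicitly instead of letting the
      // generic branch swallow it: propagate a non-retriable error.
      throw new NonRetriableException(ace);
    } catch (Exception e) {
      count = -1; // generic failures still take the (count < 0) path
    }
    return count;
  }

  public static void main(String[] args) {
    try {
      doRead();
      System.out.println("no exception");
    } catch (NonRetriableException e) {
      System.out.println("non-retriable: " + e.getCause().getMessage());
    }
  }
}
```

The essential point is only the catch ordering: a more specific catch clause placed before `catch (Exception e)` lets the server distinguish authorization failures from ordinary read errors.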

[jira] [Updated] (HBASE-11589) AccessControlException handling in HBase rpc server and client. AccessControlException should be a not retriable exception

2014-07-25 Thread Priyank Rastogi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Priyank Rastogi updated HBASE-11589:


Description: 
The RPC server does not properly handle the AccessControlException thrown on an 
authorizeConnection failure; instead it sends a plain IOException to the 
HBase client. 
The client then retries until it gets a RetriesExhaustedException, with no 
information or stack trace pointing to the underlying AccessControlException.

In short: upon inspection of RPCServer.java, it seems that for the Listener, 
the Reader's read code below does not handle 
AccessControlException
{code:title=Bar.java|borderStyle=solid}
void doRead(….
…..
…..
  try {
    count = c.readAndProcess(); // This readAndProcess method throws
        // AccessControlException from processOneRpc(byte[] buf), which is not handled?
  } catch (InterruptedException ieo) {
    throw ieo;
  } catch (Exception e) {
    LOG.warn(getName() + ": count of bytes read: " + count, e);
    count = -1; // so that the (count < 0) block is executed
  }
{code}

Below are the client logs when authorizeConnection throws AccessControlException:


2014-07-24 19:40:58,768 INFO  [main] 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 of 7 
failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Call to 
host-10-18-40-101/10.18.40.101:6 failed on local exception: 
java.io.EOFException
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3302)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3329)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:605)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:496)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:430)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.IfNode.interpret(IfNode.java:117)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at 
org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:233)
at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:215)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:203)
at 
org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:69)
at 

[jira] [Commented] (HBASE-11438) [Visibility Controller] Support UTF8 character as Visibility Labels

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074354#comment-14074354
 ] 

Hadoop QA commented on HBASE-11438:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657821/HBASE-11438_v3.patch
  against trunk revision .
  ATTACHMENT ID: 12657821

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.visibility.TestExpressionParser

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10188//console

This message is automatically generated.

 [Visibility Controller] Support UTF8 character as Visibility Labels
 ---

 Key: HBASE-11438
 URL: https://issues.apache.org/jira/browse/HBASE-11438
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.5

 Attachments: HBASE-11438_v1.patch, HBASE-11438_v2.patch, 
 HBASE-11438_v3.patch


 This would be an action item that we would be addressing so that the 
 visibility labels could have UTF8 characters in them.  Also allow the user to 
 use a client supplied API that allows specifying the visibility labels inside 
 double quotes such that UTF8 characters and cases like &, |, ! and double 
 quotes itself could be specified with a proper escape sequence.  Accumulo too 
 provides one such API on the client side.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11590) use a specific ThreadPoolExecutor

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074355#comment-14074355
 ] 

Hadoop QA commented on HBASE-11590:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657819/tp.patch
  against trunk revision .
  ATTACHMENT ID: 12657819

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+this.batchPool = new 
ExecutorServiceWithQueue(Threads.newDaemonThreadFactory(toString() + 
"-shared-"), maxThreads, keepAliveTime * 1000,
+ maxThreads * 
conf.getInt(HConstants.HBASE_CLIENT_MAX_TOTAL_TASKS, 
HConstants.DEFAULT_HBASE_CLIENT_MAX_TOTAL_TASKS)));
+  private final ConcurrentSkipListSet<Thread> availableThreads = new 
ConcurrentSkipListSet<Thread>(THREAD_COMPARAROR);
+  public ExecutorServiceWithQueue(ThreadFactory threadFactory, int maxThread, 
long threadTimeout, BlockingQueue<Runnable> tasks) {
+    public T get(long timeout, TimeUnit unit) throws InterruptedException, 
ExecutionException, TimeoutException {
+  public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> 
tasks) throws InterruptedException {
+  public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> 
tasks, long timeout, TimeUnit unit) throws InterruptedException {
+  public <T> T invokeAny(Collection<? extends Callable<T>> tasks) throws 
InterruptedException, ExecutionException {
+  public <T> T invokeAny(Collection<? extends Callable<T>> tasks, long 
timeout, TimeUnit unit) throws InterruptedException, ExecutionException, 
TimeoutException {
+  while ((!isShutdown || !tasks.isEmpty()) && 
(EnvironmentEdgeManager.currentTimeMillis() < nextTimeout)) {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestCacheConfig

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10187//console

This message is automatically generated.

 use a specific ThreadPoolExecutor
 -

 Key: HBASE-11590
 URL: https://issues.apache.org/jira/browse/HBASE-11590
 Project: HBase
  Issue Type: Bug
  Components: Client, Performance
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 1.0.0, 2.0.0

 Attachments: tp.patch


 The JDK TPE creates all the threads in the pool. As a consequence, we create 
 (by default) 256 threads even if we just need a few.
 The attached TPE creates threads only if we have 
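The issue description above can be reproduced with the stock JDK `ThreadPoolExecutor`: until `corePoolSize` threads exist, each submitted task starts a new worker thread, even when earlier workers are already idle. A small illustrative sketch (the pool sizes here are arbitrary, not the HBase client's configuration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Demonstrates that a JDK ThreadPoolExecutor grows to corePoolSize even when
// tasks are submitted strictly one at a time, so one thread would suffice.
public class TpeDemo {
  public static int threadsAfterSequentialTasks(int coreSize, int tasks) throws Exception {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        coreSize, coreSize, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    for (int i = 0; i < tasks; i++) {
      // Wait for each task to finish before submitting the next, so there is
      // always an idle worker available when the next task arrives.
      pool.submit(() -> { }).get();
    }
    int size = pool.getPoolSize();
    pool.shutdown();
    return size;
  }

  public static void main(String[] args) throws Exception {
    // Grows to min(coreSize, tasks) threads despite zero concurrency.
    System.out.println(threadsAfterSequentialTasks(8, 8)); // prints 8, not 1
  }
}
```

A pool that reuses an idle worker before spawning a new one (as the attached patch proposes) avoids this over-allocation for mostly idle clients.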

[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074365#comment-14074365
 ] 

Anoop Sam John commented on HBASE-11384:


I think we must bypass the covering auth check for the super user, in order to 
make sure that distributed log replay and replication work even when the config 
is ON in the cluster.

nit:
{code}
+  if (auths != null) {
+    if (!auths.contains(labelOrdinal)) {
+      throw new AccessDeniedException("Visibility label " + identifier
+          + " not authorized for the user " + userName);
+    }
+  } else {
+    throw new AccessDeniedException("Visibility label " + identifier
+        + " not authorized for the user " + userName);
+  }
{code}
Can be 
{code}
+  if (auths == null || !auths.contains(labelOrdinal)) {
+    throw new AccessDeniedException("Visibility label " + identifier
+        + " not authorized for the user " + userName);
+  }
{code}
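The two variants in the review comment above are behaviorally identical; a minimal sketch with illustrative stand-in methods (returning a boolean instead of throwing, purely to make the equivalence checkable):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Shows that the merged null-safe condition suggested in the review is
// equivalent to the original two-branch version: both deny access when the
// auths set is null or does not contain the label ordinal.
public class AuthCheckSketch {
  static boolean deniedTwoBranch(Set<Integer> auths, int labelOrdinal) {
    if (auths != null) {
      if (!auths.contains(labelOrdinal)) return true; // would throw AccessDeniedException
    } else {
      return true; // would throw AccessDeniedException
    }
    return false;
  }

  static boolean deniedMerged(Set<Integer> auths, int labelOrdinal) {
    return auths == null || !auths.contains(labelOrdinal);
  }

  public static void main(String[] args) {
    Set<Integer> auths = new HashSet<>(Arrays.asList(1, 2));
    List<Set<Integer>> cases = Arrays.asList(auths, null);
    for (Set<Integer> a : cases) {
      for (int ord : new int[] {1, 3}) {
        if (deniedTwoBranch(a, ord) != deniedMerged(a, ord)) {
          throw new AssertionError("variants disagree");
        }
      }
    }
    System.out.println("equivalent");
  }
}
```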


 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch, HBASE-11384_7.patch


 As part of discussions: every mutation (Put/Delete) with a visibility 
 expression should validate that the expression only has labels for which the 
 user has authorization; if not, fail the mutation.
 Suppose User A is associated with A, B and C, and the put has the visibility 
 expression A&D. Then fail the mutation, as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11384) [Visibility Controller]Check for users covering authorizations for every mutation

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074378#comment-14074378
 ] 

Hadoop QA commented on HBASE-11384:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657817/HBASE-11384_7.patch
  against trunk revision .
  ATTACHMENT ID: 12657817

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10186//console

This message is automatically generated.

 [Visibility Controller]Check for users covering authorizations for every 
 mutation
 -

 Key: HBASE-11384
 URL: https://issues.apache.org/jira/browse/HBASE-11384
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.3
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: HBASE-11384.patch, HBASE-11384_1.patch, 
 HBASE-11384_2.patch, HBASE-11384_3.patch, HBASE-11384_4.patch, 
 HBASE-11384_6.patch, HBASE-11384_7.patch


 As part of discussions: every mutation (Put/Delete) with a visibility 
 expression should validate that the expression only has labels for which the 
 user has authorization; if not, fail the mutation.
 Suppose User A is associated with A, B and C, and the put has the visibility 
 expression A&D. Then fail the mutation, as D is not associated with User A.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11531) RegionStates for regions under region-in-transition znode are not updated on startup

2014-07-25 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11531:


   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Cool, thanks a lot. Integrated into branch 1 and master.

 RegionStates for regions under region-in-transition znode are not updated on 
 startup
 

 Key: HBASE-11531
 URL: https://issues.apache.org/jira/browse/HBASE-11531
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.99.0
Reporter: Virag Kothari
Assignee: Jimmy Xiang
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-11531.patch, hbase-11531_v2.patch, sample.patch


 While testing HBASE-11059, saw that if there are regions under 
 region-in-transition znode their states are not updated in META and master 
 memory on startup.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11578) AssignmentManager should delete children znodes of region-in-transition on migrating from zk to non-zk

2014-07-25 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari resolved HBASE-11578.
---

Resolution: Fixed

 AssignmentManager should delete children znodes of region-in-transition on 
 migrating from zk to non-zk
 --

 Key: HBASE-11578
 URL: https://issues.apache.org/jira/browse/HBASE-11578
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: Virag Kothari
Assignee: Virag Kothari

 During the final phase of migration from zk to non-zk region assignment, if 
 there are znodes under region-in-transition, hmaster would abort. We need to 
 remove all children znodes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11591) Scanner fails to retrieve KV from bulk loaded file with highest sequence id than the cell's mvcc in a non-bulk loaded file

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-11591:
--

 Summary: Scanner fails to retrieve KV  from bulk loaded file with 
highest sequence id than the cell's mvcc in a non-bulk loaded file
 Key: HBASE-11591
 URL: https://issues.apache.org/jira/browse/HBASE-11591
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5


See discussion in HBASE-11339.
Consider the case where the same KVs are present in two files: one produced by 
flush/compaction and the other through bulk load.
Both files have some identical kvs, matching even in timestamp.
Steps:
Add some rows with a specific timestamp and flush the same.  
Bulk load a file with the same data. Ensure that the assign-seqnum property is 
set.
The bulk load should use HFileOutputFormat2 (or ensure that we write the 
bulk_time_output key).
This ensures that the bulk loaded file has the highest seq num.
Assume the cell in the flushed/compacted store file is 
row1,cf,cq,ts1,value1 and the cell in the bulk loaded file is 
row1,cf,cq,ts1,value2 
(there are no parallel scans).
Issue a scan on the table in 0.96. The retrieved value is row1,cf1,cq,ts1,value2
But the same in 0.98 will retrieve row1,cf1,cq,ts2,value1. 
This is a behaviour change.  This is because of this code 
{code}
public int compare(KeyValueScanner left, KeyValueScanner right) {
  int comparison = compare(left.peek(), right.peek());
  if (comparison != 0) {
    return comparison;
  } else {
    // Since both the keys are exactly the same, we break the tie in favor
    // of the key which came latest.
    long leftSequenceID = left.getSequenceID();
    long rightSequenceID = right.getSequenceID();
    if (leftSequenceID > rightSequenceID) {
      return -1;
    } else if (leftSequenceID < rightSequenceID) {
      return 1;
    } else {
      return 0;
    }
  }
}
{code}
Here in the 0.96 case the mvcc of the cell in both files is 0, so the 
comparison falls through to the else branch, where the seq id of the bulk 
loaded file is greater; that file sorts first, ensuring that the scan happens 
from the bulk loaded file.
In the 0.98+ case, since we retain mvcc+seqid, the mvcc is not reset to 0 (it 
remains a non-zero positive value). Hence compare() sorts the cell in the 
flushed/compacted file first, which means that although we know the latest 
file is the bulk loaded one, we don't scan its data.
This seems to be a behaviour change. Will check other corner cases also, but we 
are trying to understand the bulk load behaviour because we are evaluating 
whether it can be used for the MOB design.
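The tie-breaking logic discussed in this issue can be sketched in isolation. `FakeScanner` below is an illustrative stand-in for HBase's `KeyValueScanner` (string keys instead of KeyValues, no mvcc); it only demonstrates that when two scanners peek at identical keys, the one whose file carries the higher sequence id, e.g. a freshly bulk-loaded HFile, must sort first:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal model of the scanner tie-break: identical keys are ordered so that
// the scanner backed by the file with the higher sequence id comes first.
public class SeqIdTieBreak {
  static class FakeScanner {
    final String peekKey;
    final long sequenceId;
    FakeScanner(String key, long seqId) { peekKey = key; sequenceId = seqId; }
  }

  static int compare(FakeScanner left, FakeScanner right) {
    int comparison = left.peekKey.compareTo(right.peekKey);
    if (comparison != 0) return comparison;
    // Identical keys: break the tie in favor of the scanner whose file
    // carries the higher sequence id (i.e. the latest file).
    return Long.compare(right.sequenceId, left.sequenceId);
  }

  public static void main(String[] args) {
    FakeScanner flushed = new FakeScanner("row1/cf/cq/ts1", 5);
    FakeScanner bulkLoaded = new FakeScanner("row1/cf/cq/ts1", 9); // higher seq id
    List<FakeScanner> heap = new ArrayList<>(Arrays.asList(flushed, bulkLoaded));
    heap.sort(SeqIdTieBreak::compare);
    System.out.println(heap.get(0) == bulkLoaded); // bulk-loaded scanner wins the tie
  }
}
```

The 0.98 problem described above is that with non-zero mvcc the key comparison itself no longer returns 0 for these cells, so this tie-break is never reached.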




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11591) Scanner fails to retrieve KV from bulk loaded file with highest sequence id than the cell's mvcc in a non-bulk loaded file

2014-07-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11591:
---

Attachment: TestBulkload.java

Use this testcase in 0.98/trunk and 0.96. For running in 0.96, please comment 
out the line
{code}
HFileContext context = new HFileContext();
{code}
and change 
{code}
HFile.Writer writer = wf.withPath(fs, path).withFileContext(context).create();
{code}
to 
{code}
HFile.Writer writer = wf.withPath(fs, path).create();
{code}

 Scanner fails to retrieve KV  from bulk loaded file with highest sequence id 
 than the cell's mvcc in a non-bulk loaded file
 ---

 Key: HBASE-11591
 URL: https://issues.apache.org/jira/browse/HBASE-11591
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0, 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.5

 Attachments: TestBulkload.java


 See discussion in HBASE-11339.
 Consider the case where the same KVs are present in two files, one produced by 
 flush/compaction and the other through bulk load.
 Both files contain some identical KVs that match even on timestamp.
 Steps:
 Add some rows with a specific timestamp and flush them.
 Bulk load a file with the same data. Ensure that the assign-seqnum property is 
 set.
 The bulk load should use HFileOutputFormat2 (or ensure that we write the 
 bulk_time_output key).
 This ensures that the bulk loaded file has the highest seq num.
 Assume the cell in the flushed/compacted store file is 
 row1,cf,cq,ts1,value1 and the cell in the bulk loaded file is 
 row1,cf,cq,ts1,value2 
 (there are no parallel scans).
 Issue a scan on the table in 0.96. The retrieved value is 
 row1,cf1,cq,ts1,value2.
 But the same scan in 0.98 will retrieve row1,cf1,cq,ts2,value1. 
 This is a behaviour change, caused by this code: 
 {code}
 public int compare(KeyValueScanner left, KeyValueScanner right) {
   int comparison = compare(left.peek(), right.peek());
   if (comparison != 0) {
 return comparison;
   } else {
 // Since both the keys are exactly the same, we break the tie in favor
 // of the key which came latest.
 long leftSequenceID = left.getSequenceID();
 long rightSequenceID = right.getSequenceID();
 if (leftSequenceID > rightSequenceID) {
   return -1;
 } else if (leftSequenceID < rightSequenceID) {
   return 1;
 } else {
   return 0;
 }
   }
 }
 {code}
 Here, in the 0.96 case, the mvcc of the cell in both files is 0, so the 
 comparison falls through to the else branch, where the seq id of the 
 bulk loaded file is greater; that file sorts first, ensuring that the scan 
 happens from the bulk loaded file.
 In 0.98+, since we retain the mvcc+seqid, the mvcc is not reset to 0 
 (it remains a non-zero positive value). Hence compare() sorts the cell 
 from the flushed/compacted file first, which means that even though we know 
 the latest file is the bulk loaded one, we don't scan its data.
 This seems to be a behaviour change. I will check other corner cases too, but 
 we are trying to understand the behaviour of bulk load because we are 
 evaluating whether it can be used for the MOB design.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11531) RegionStates for regions under region-in-transition znode are not updated on startup

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074563#comment-14074563
 ] 

Hudson commented on HBASE-11531:


FAILURE: Integrated in HBase-1.0 #71 (See 
[https://builds.apache.org/job/HBase-1.0/71/])
HBASE-11531 RegionStates for regions under region-in-transition znode are not 
updated on startup (jxiang: rev b7e0bde3469695480bc58c06d4c568886a637cc0)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


 RegionStates for regions under region-in-transition znode are not updated on 
 startup
 

 Key: HBASE-11531
 URL: https://issues.apache.org/jira/browse/HBASE-11531
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.99.0
Reporter: Virag Kothari
Assignee: Jimmy Xiang
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-11531.patch, hbase-11531_v2.patch, sample.patch


 While testing HBASE-11059, saw that if there are regions under 
 region-in-transition znode their states are not updated in META and master 
 memory on startup.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11550) Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated

2014-07-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074585#comment-14074585
 ] 

Nick Dimiduk commented on HBASE-11550:
--

Patch is looking good, but I'm not convinced the ticket is valid.

From the description

bq. The sizes are supposed to be in increasing order.

and yet

{noformat}
+Collections.sort(this.bucketSizes, Collections.reverseOrder());
{noformat}

Reverse is opposite the description. Which is correct?

nit: Why switch to List everywhere?

Can you add a test exercising the lack of sort as a problem? Something that 
fails on master but passes with your patch. I tried this naive change but 
nothing fails. Skimming the code, I would expect the cache to be underutilized 
-- if this is a problem at all.

{noformat}
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
index c526834..38a9a43 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
@@ -57,8 +57,8 @@ public class TestBucketCache {
 return Arrays.asList(new Object[][] {
   { 8192, null }, // TODO: why is 8k the default blocksize for these tests?
   { 16 * 1024, new int[] {
-2 * 1024 + 1024, 4 * 1024 + 1024, 8 * 1024 + 1024, 16 * 1024 + 1024,
-28 * 1024 + 1024, 32 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024,
+64 * 1024 + 1024, 32 * 1024 + 1024, 8 * 1024 + 1024, 16 * 1024 + 1024,
+28 * 1024 + 1024, 4 * 1024 + 1024, 2 * 1024 + 1024, 96 * 1024 + 1024,
 128 * 1024 + 1024 } }
 });
   }
{noformat}

 Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated
 

 Key: HBASE-11550
 URL: https://issues.apache.org/jira/browse/HBASE-11550
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11550-v1.patch, HBASE-11550.patch


 User can pass bucket sizes through hbase.bucketcache.bucket.sizes config 
 entry.
 The sizes are supposed to be in increasing order. Validation should be added 
 in CacheConfig#getL2().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11550) Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated

2014-07-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074609#comment-14074609
 ] 

Ted Yu commented on HBASE-11550:


Using Nick's sample change above, the first item is 64 * 1024 + 1024.
In BucketAllocator#roundUpToBucketSizeInfo() :
{code}
for (int i = 0; i < bucketSizes.length; ++i)
  if (blockSize <= bucketSizes[i])
    return bucketSizeInfos[i];
{code}
If we search for a bucket which is supposed to fit 32 * 1024 + 1024, the bucket 
for 64 * 1024 + 1024 would be returned. This would result in wasted storage.
{code}
+Collections.sort(this.bucketSizes, Collections.reverseOrder());
{code}
Order should not be reversed.
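To see the wasted storage concretely, here is a small sketch of the first-fit loop above with illustrative sizes (hypothetical names, not the real BucketAllocator API):

```java
public class BucketFit {
    // First-fit, as in roundUpToBucketSizeInfo(): return the first size that
    // can hold the block, or -1 if none can.
    static int firstFit(int[] bucketSizes, int blockSize) {
        for (int size : bucketSizes) {
            if (blockSize <= size) return size;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] ascending  = {5 * 1024, 9 * 1024, 17 * 1024, 33 * 1024, 65 * 1024};
        int[] descending = {65 * 1024, 33 * 1024, 17 * 1024, 9 * 1024, 5 * 1024};
        int block = 33 * 1024;
        // Ascending order: the tightest bucket that fits is found.
        System.out.println(firstFit(ascending, block));  // 33792
        // Descending order: the first (largest) bucket already fits,
        // so the block lands in a far bigger bucket than necessary.
        System.out.println(firstFit(descending, block)); // 66560
    }
}
```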

 Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated
 

 Key: HBASE-11550
 URL: https://issues.apache.org/jira/browse/HBASE-11550
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11550-v1.patch, HBASE-11550.patch


 User can pass bucket sizes through hbase.bucketcache.bucket.sizes config 
 entry.
 The sizes are supposed to be in increasing order. Validation should be added 
 in CacheConfig#getL2().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11592) [0.89-fb] Make the number of SplitLogWorkers online configurable

2014-07-25 Thread Gaurav Menghani (JIRA)
Gaurav Menghani created HBASE-11592:
---

 Summary: [0.89-fb] Make the number of SplitLogWorkers online 
configurable
 Key: HBASE-11592
 URL: https://issues.apache.org/jira/browse/HBASE-11592
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.89-fb
Reporter: Gaurav Menghani
Assignee: Gaurav Menghani
 Fix For: 0.89-fb


We would like to make the number of SplitLogWorkers online configurable with 
this JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11550) Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074687#comment-14074687
 ] 

stack commented on HBASE-11550:
---

bq. This would result in wasted storage.

Prove it.  Add tests to assert wastage.

 Bucket sizes passed through BUCKET_CACHE_BUCKETS_KEY should be validated
 

 Key: HBASE-11550
 URL: https://issues.apache.org/jira/browse/HBASE-11550
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Trivial
 Attachments: HBASE-11550-v1.patch, HBASE-11550.patch


 User can pass bucket sizes through hbase.bucketcache.bucket.sizes config 
 entry.
 The sizes are supposed to be in increasing order. Validation should be added 
 in CacheConfig#getL2().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074786#comment-14074786
 ] 

Lars Hofhansl commented on HBASE-11586:
---

Sorry, I changed the assignee to me... That was unintended.
Lemme check the 0.94 code. Every bit of memory barriers we can remove is a win.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2014-07-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074806#comment-14074806
 ] 

Lars Hofhansl commented on HBASE-11544:
---

bq. as the max cell size now is 10mb IIRC, for the robust solution sounds like 
we should be able to split the cell and pass the portion of byte array, 
representing the cell value?

I think so. We need to decouple the optimal RPC size from the response size.

bq. mvcc readpoint i think yeah, sending partial rows might be much bigger 
change

Why? The readpoint is known/fixed by/for the scanner. We already allow sending 
partial rows (see Scan.setBatch(...) or the mentioned Scan.setMaxResultSize()). 
The only new part is that we'd assemble on the client what is expected by the 
API. Of course that means that we can OOM the client when a *single* row gets 
really large.

So what I am saying is that we:
# get rid of scanner caching
# get rid of max result size
# define an rpcChunkSize (or something). We'd default that to a useful value 
(maybe 128k), the user can then optimize that depending on the network topology 
(faster networks would need a larger value)
# the server would gather data until at least one chunk is filled and then 
sends the chunk to the client
# the client would gather chunks until it has enough data to send a row up to 
the caller

It would be better to even have a full streaming protocol, but that'd be a 
bigger change and would not fit well into RPC/Protobuf.

As usual... Just my $0.02 :)
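The chunking scheme in the numbered list can be sketched roughly as follows; every name here is hypothetical, and "cells" are plain strings formatted as "row:qualifier", just to illustrate server-side chunking and client-side row reassembly:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedScan {
    static final int CHUNK_SIZE = 4; // cells per chunk; a real value would be bytes

    // Server side: slice the result stream into fixed-size chunks,
    // ignoring row boundaries entirely.
    static List<List<String>> toChunks(List<String> cells) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < cells.size(); i += CHUNK_SIZE) {
            chunks.add(cells.subList(i, Math.min(i + CHUNK_SIZE, cells.size())));
        }
        return chunks;
    }

    // Client side: buffer chunks and emit a row only once all of its cells
    // have arrived (i.e. when a cell from a different row shows up).
    static List<String> assembleRows(List<List<String>> chunks) {
        List<String> rows = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        String currentRow = null;
        for (List<String> chunk : chunks) {
            for (String cell : chunk) {
                String row = cell.split(":")[0];
                if (!row.equals(currentRow)) {
                    if (currentRow != null) rows.add(current.toString().trim());
                    current.setLength(0);
                    currentRow = row;
                }
                current.append(cell).append(' ');
            }
        }
        if (currentRow != null) rows.add(current.toString().trim());
        return rows;
    }

    public static void main(String[] args) {
        List<String> cells = List.of("r1:a", "r1:b", "r1:c", "r2:a", "r2:b", "r3:a");
        // Chunks split r2 across a boundary, yet three complete rows come out.
        System.out.println(assembleRows(toChunks(cells)).size()); // 3
    }
}
```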

 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
  Labels: noob

 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Serverside, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold, 
 rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11585) PE: Allows warm-up

2014-07-25 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074810#comment-14074810
 ] 

Jonathan Hsieh commented on HBASE-11585:


got it.  +1

 PE: Allows warm-up
 --

 Key: HBASE-11585
 URL: https://issues.apache.org/jira/browse/HBASE-11585
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Trivial
 Fix For: 1.0.0, 2.0.0

 Attachments: 11585.v1.patch


 When we measure latency, warm-up helps to get repeatable and useful 
 measurements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-25 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-11388.


   Resolution: Fixed
Fix Version/s: (was: 0.99.0)
 Hadoop Flags: Reviewed

You're right, so I just pushed the patch to 0.98. Thanks!

 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.99.0, 0.98.3
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, ClusterKey and id in the constructor 
 of the class ReplicationPeer, but the order of the parameters is Configuration, 
 id and ClusterKey when the constructor of ReplicationPeer is invoked in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
   public ReplicationPeer(Configuration conf, String key, String id) throws 
 ReplicationException {
 this.conf = conf;
 this.clusterKey = key;
 this.id = id;
 try {
   this.reloadZkWatcher();
 } catch (IOException e) {
    throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
 }
   }
 {code}
 ReplicationPeersZKImpl#498
 {code}
 ReplicationPeer peer =
 new ReplicationPeer(peerConf, peerId, 
 ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2014-07-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074812#comment-14074812
 ] 

Lars Hofhansl commented on HBASE-7336:
--

[~vrodionov], curious about how you will find good splitpoints *inside* a 
region. Regions can be assumed to be roughly of equal size (in terms of bytes, 
not rows), but inside a region the distribution of keys can be arbitrarily 
skewed, and hence unless you have more state you cannot find good splits inside 
a region.
(the region split points actually are a very rough histogram for data 
distribution)

 HFileBlock.readAtOffset does not work well with multiple threads
 

 Key: HBASE-7336
 URL: https://issues.apache.org/jira/browse/HBASE-7336
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.4, 0.95.0

 Attachments: 7336-0.94.txt, 7336-0.96.txt


 HBase grinds to a halt when many threads scan along the same set of blocks 
 and neither read short circuit is nor block caching is enabled for the dfs 
 client ... disabling the block cache makes sense on very large scans.
 It turns out that synchronizing in istream in HFileBlock.readAtOffset is the 
 culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11590) use a specific ThreadPoolExecutor

2014-07-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074822#comment-14074822
 ] 

Lars Hofhansl commented on HBASE-11590:
---

Is that not something we can control in ThreadPoolExecutor with corePoolSize 
and maximumPoolSize?
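For context on the corePoolSize question: a stock JDK ThreadPoolExecutor starts a new thread for every submitted task until corePoolSize is reached, even when existing idle threads could serve the task, so core/max sizing alone does not give lazy creation. A small demonstration (the parameters are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerThreads {
    // Submit 'tasks' tiny tasks one at a time, each finishing long before the
    // next arrives, and report how many pool threads were created.
    static int poolSizeAfter(int tasks) throws InterruptedException {
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
            256, 256, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < tasks; i++) {
            tpe.execute(() -> { });
            Thread.sleep(20); // the previous task is done well before the next
        }
        int size = tpe.getPoolSize();
        tpe.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        // One idle thread could have served all ten tasks in turn, yet ten
        // threads exist, because execute() adds a new worker whenever the
        // pool size is below corePoolSize.
        System.out.println(poolSizeAfter(10)); // 10
    }
}
```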

 use a specific ThreadPoolExecutor
 -

 Key: HBASE-11590
 URL: https://issues.apache.org/jira/browse/HBASE-11590
 Project: HBase
  Issue Type: Bug
  Components: Client, Performance
Affects Versions: 1.0.0, 2.0.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 1.0.0, 2.0.0

 Attachments: tp.patch


 The JDK TPE creates all the threads in the pool. As a consequence, we create 
 (by default) 256 threads even if we just need a few.
 The attached TPE creates threads only if there is something in the queue.
 On a PE test with replicas on, it improved the 99th latency percentile by 5%. 
 Warning: there are likely some race conditions, but I'm posting it here 
 because there may be an implementation available somewhere we can use, or 
 a good reason not to do this. So feedback is welcome as usual. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11531) RegionStates for regions under region-in-transition znode are not updated on startup

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074831#comment-14074831
 ] 

Hudson commented on HBASE-11531:


FAILURE: Integrated in HBase-TRUNK #5343 (See 
[https://builds.apache.org/job/HBase-TRUNK/5343/])
HBASE-11531 RegionStates for regions under region-in-transition znode are not 
updated on startup (jxiang: rev 28c771a306b91452d0bd2ad605fcfe14128da60d)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


 RegionStates for regions under region-in-transition znode are not updated on 
 startup
 

 Key: HBASE-11531
 URL: https://issues.apache.org/jira/browse/HBASE-11531
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.99.0
Reporter: Virag Kothari
Assignee: Jimmy Xiang
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-11531.patch, hbase-11531_v2.patch, sample.patch


 While testing HBASE-11059, saw that if there are regions under 
 region-in-transition znode their states are not updated in META and master 
 memory on startup.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2014-07-25 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074849#comment-14074849
 ] 

Vladimir Rodionov commented on HBASE-7336:
--

[~lhofhansl]

The major priority right now is to improve compaction and normal operations 
during compaction. Sure we need to track region stats to make optimal 
inter-region splits, but even without such stats we can decrease data skew 
significantly. 

 HFileBlock.readAtOffset does not work well with multiple threads
 

 Key: HBASE-7336
 URL: https://issues.apache.org/jira/browse/HBASE-7336
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.4, 0.95.0

 Attachments: 7336-0.94.txt, 7336-0.96.txt


 HBase grinds to a halt when many threads scan along the same set of blocks 
 and neither read short circuit is nor block caching is enabled for the dfs 
 client ... disabling the block cache makes sense on very large scans.
 It turns out that synchronizing in istream in HFileBlock.readAtOffset is the 
 culprit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074868#comment-14074868
 ] 

Andrew Purtell commented on HBASE-11586:


Do we need the dynamic schema metrics that these block read and write path 
measurements feed? If not we could open a 0.94 specific JIRA to remove them. 
Then this change can be backported. We could do the work with two commits like 
that perhaps. 

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11339) HBase MOB

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074874#comment-14074874
 ] 

Andrew Purtell commented on HBASE-11339:


bq. If this is an issue we can raise a JIRA and find a soln for it. 

That is HBASE-11591

 HBase MOB
 -

 Key: HBASE-11339
 URL: https://issues.apache.org/jira/browse/HBASE-11339
 Project: HBase
  Issue Type: New Feature
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Attachments: HBase MOB Design-v2.pdf, HBase MOB Design.pdf, MOB user 
 guide.docx, hbase-11339-in-dev.patch


   It's quite useful to save medium-sized binary data like images and documents 
 in Apache HBase. Unfortunately, directly saving binary MOBs (medium 
 objects) to HBase leads to worse performance because of the frequent splits 
 and compactions.
   In this design, the MOB data are stored in a more efficient way, which 
 keeps high write/read performance and guarantees data consistency in 
 Apache HBase.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11591) Scanner fails to retrieve KV from bulk loaded file with highest sequence id than the cell's mvcc in a non-bulk loaded file

2014-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11591:
---

Priority: Critical  (was: Major)

 Scanner fails to retrieve KV  from bulk loaded file with highest sequence id 
 than the cell's mvcc in a non-bulk loaded file
 ---

 Key: HBASE-11591
 URL: https://issues.apache.org/jira/browse/HBASE-11591
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0, 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.99.0, 0.98.5

 Attachments: TestBulkload.java


 See discussion in HBASE-11339.
 Consider the case where the same KVs are present in two files, one produced by 
 flush/compaction and the other through bulk load.
 Both files contain some identical KVs that match even on timestamp.
 Steps:
 Add some rows with a specific timestamp and flush them.
 Bulk load a file with the same data. Ensure that the assign-seqnum property is 
 set.
 The bulk load should use HFileOutputFormat2 (or ensure that we write the 
 bulk_time_output key).
 This would ensure that the bulk loaded file has the highest seq num.
 Assume the cell in the flushed/compacted store file is 
 row1,cf,cq,ts1, value1  and the cell in the bulk loaded file is
 row1,cf,cq,ts1,value2 
 (There are no parallel scans).
 Issue a scan on the table in 0.96. The retrieved value is 
 row1,cf1,cq,ts1,value2
 But the same scan in 0.98 will retrieve row1,cf1,cq,ts2,value1. 
 This is a behaviour change, caused by this code: 
 {code}
 public int compare(KeyValueScanner left, KeyValueScanner right) {
   int comparison = compare(left.peek(), right.peek());
   if (comparison != 0) {
 return comparison;
   } else {
 // Since both the keys are exactly the same, we break the tie in favor
 // of the key which came latest.
 long leftSequenceID = left.getSequenceID();
 long rightSequenceID = right.getSequenceID();
 if (leftSequenceID > rightSequenceID) {
   return -1;
 } else if (leftSequenceID < rightSequenceID) {
   return 1;
 } else {
   return 0;
 }
   }
 }
 {code}
 Here, in the 0.96 case, the mvcc of the cell in both files is 0, so the 
 comparison falls through to the else branch, where the seq id of the 
 bulk loaded file is greater; that file sorts first, ensuring that the scan 
 happens from the bulk loaded file.
 In 0.98+, since we retain the mvcc+seqid, the mvcc is not reset to 0 
 (it remains a non-zero positive value). Hence compare() sorts the cell 
 from the flushed/compacted file first, which means that even though we know 
 the latest file is the bulk loaded one, we don't scan its data.
 This seems to be a behaviour change. I will check other corner cases too, but 
 we are trying to understand the behaviour of bulk load because we are 
 evaluating whether it can be used for the MOB design.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11591) Scanner fails to retrieve KV from bulk loaded file with highest sequence id than the cell's mvcc in a non-bulk loaded file

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074880#comment-14074880
 ] 

Andrew Purtell commented on HBASE-11591:


Making critical for .5. It seems to me we should be respecting the file level 
sequence in 0.98 as we did in 0.96, and not doing so is a bulk loading bug. 
Feel free to adjust priority downward if you disagree.

 Scanner fails to retrieve KV  from bulk loaded file with highest sequence id 
 than the cell's mvcc in a non-bulk loaded file
 ---

 Key: HBASE-11591
 URL: https://issues.apache.org/jira/browse/HBASE-11591
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0, 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.99.0, 0.98.5

 Attachments: TestBulkload.java


 See discussion in HBASE-11339.
 Consider the case where the same KVs are present in two files, one produced by 
 flush/compaction and the other through bulk load.
 Both files contain some identical KVs that match even on timestamp.
 Steps:
 Add some rows with a specific timestamp and flush them.
 Bulk load a file with the same data. Ensure that the assign-seqnum property is 
 set.
 The bulk load should use HFileOutputFormat2 (or ensure that we write the 
 bulk_time_output key).
 This would ensure that the bulk loaded file has the highest seq num.
 Assume the cell in the flushed/compacted store file is 
 row1,cf,cq,ts1, value1  and the cell in the bulk loaded file is
 row1,cf,cq,ts1,value2 
 (There are no parallel scans).
 Issue a scan on the table in 0.96. The retrieved value is 
 row1,cf1,cq,ts1,value2
 But the same scan in 0.98 will retrieve row1,cf1,cq,ts2,value1. 
 This is a behaviour change, caused by this code: 
 {code}
 public int compare(KeyValueScanner left, KeyValueScanner right) {
   int comparison = compare(left.peek(), right.peek());
   if (comparison != 0) {
 return comparison;
   } else {
 // Since both the keys are exactly the same, we break the tie in favor
 // of the key which came latest.
 long leftSequenceID = left.getSequenceID();
 long rightSequenceID = right.getSequenceID();
 if (leftSequenceID > rightSequenceID) {
   return -1;
 } else if (leftSequenceID < rightSequenceID) {
   return 1;
 } else {
   return 0;
 }
   }
 }
 {code}
 Here, in the 0.96 case, the mvcc of the cell in both files is 0, so the 
 comparison falls through to the else branch, where the seq id of the 
 bulk loaded file is greater; that file sorts first, ensuring that the scan 
 happens from the bulk loaded file.
 In 0.98+, since we retain the mvcc+seqid, the mvcc is not reset to 0 
 (it remains a non-zero positive value). Hence compare() sorts the cell 
 from the flushed/compacted file first, which means that even though we know 
 the latest file is the bulk loaded one, we don't scan its data.
 This seems to be a behaviour change. I will check other corner cases too, but 
 we are trying to understand the behaviour of bulk load because we are 
 evaluating whether it can be used for the MOB design.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11072) Abstract WAL splitting from ZK

2014-07-25 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074889#comment-14074889
 ] 

Mikhail Antonov commented on HBASE-11072:
-

[~sergey.soldatov] TestRegionRebalancing - does this test pass on your local?

 Abstract WAL splitting from ZK
 --

 Key: HBASE-11072
 URL: https://issues.apache.org/jira/browse/HBASE-11072
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Mikhail Antonov
Assignee: Sergey Soldatov
 Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
 HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
 HBASE_11072-1.patch


 HM side:
  - SplitLogManager
 RS side:
  - SplitLogWorker
  - HLogSplitter and a few handler classes.
 This jira may need to be split further apart into smaller ones.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11588) RegionServerMetricsWrapperRunnable misused the 'period' parameter

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074894#comment-14074894
 ] 

Andrew Purtell commented on HBASE-11588:


+1

 RegionServerMetricsWrapperRunnable misused the 'period' parameter
 -

 Key: HBASE-11588
 URL: https://issues.apache.org/jira/browse/HBASE-11588
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.98.4
Reporter: Victor Xu
Assignee: Victor Xu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11588.patch


 The 'period' parameter in RegionServerMetricsWrapperRunnable is in 
 MILLISECONDS. When initializing the 'lastRan' parameter, the original code 
 misused 'period' as if it were in SECONDS.
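A minimal sketch of the unit mix-up (names are illustrative, not the actual patch):

```java
public class PeriodUnitsSketch {
    // 'period' is already in milliseconds; multiplying by 1000 again
    // treats it as seconds and pushes 'lastRan' far into the past.
    public static long buggyLastRan(long nowMs, long periodMs) {
        return nowMs - periodMs * 1000; // wrong: double unit conversion
    }
    public static long fixedLastRan(long nowMs, long periodMs) {
        return nowMs - periodMs; // correct: keep milliseconds throughout
    }
}
```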





[jira] [Updated] (HBASE-11588) RegionServerMetricsWrapperRunnable misused the 'period' parameter

2014-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11588:
---

Fix Version/s: 2.0.0
   0.98.5
   0.99.0

 RegionServerMetricsWrapperRunnable misused the 'period' parameter
 -

 Key: HBASE-11588
 URL: https://issues.apache.org/jira/browse/HBASE-11588
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.98.4
Reporter: Victor Xu
Assignee: Victor Xu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11588.patch


 The 'period' parameter in RegionServerMetricsWrapperRunnable is in 
 MILLISECONDS. When initializing the 'lastRan' parameter, the original code 
 misused 'period' as if it were in SECONDS.





[jira] [Updated] (HBASE-11583) Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests.

2014-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11583:
---

Fix Version/s: 2.0.0
   0.98.5
   0.99.0

 Refactoring out the configuration changes for enabling VisibilityLabels in 
 the unit tests.
 --

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11583.patch, HBASE-11583_v2.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.





[jira] [Commented] (HBASE-11583) Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests.

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074897#comment-14074897
 ] 

Andrew Purtell commented on HBASE-11583:


That test failure is showing up in other precommit builds so is probably not 
related.

+1

 Refactoring out the configuration changes for enabling VisibilityLabels in 
 the unit tests.
 --

 Key: HBASE-11583
 URL: https://issues.apache.org/jira/browse/HBASE-11583
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11583.patch, HBASE-11583_v2.patch


 All the unit tests contain the code for enabling the visibility changes. 
 Incorporating future configuration changes for Visibility Labels 
 configuration can be made easier by refactoring them out to a single place.





[jira] [Created] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-11593:
--

 Summary: TestCacheConfig failing consistently in precommit builds
 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell


As stated in description





[jira] [Commented] (HBASE-11438) [Visibility Controller] Support UTF8 character as Visibility Labels

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074914#comment-14074914
 ] 

Andrew Purtell commented on HBASE-11438:


{quote}
bq.can we use unicode escapes?
Am not sure on the question here?  We could use unicode escape character also.  
See the test case added in TestVisibilityLabels.  It tries to add unicode 
characters with unicode escape character.
{quote}

Don't embed unicode characters in source files directly. 

Use unicode escape sequences in the string constants instead to achieve the 
same effect.
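For example, a label containing non-ASCII characters can be written with unicode escapes so the source file itself stays pure ASCII (the label value here is illustrative, not from the patch):

```java
public class UnicodeEscapeSketch {
    // "\u00e7\u00f4t\u00e9" compiles to the same string as the literal
    // "çôté", but the .java source file contains only ASCII characters.
    public static final String LABEL = "\u00e7\u00f4t\u00e9";
}
```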


 [Visibility Controller] Support UTF8 character as Visibility Labels
 ---

 Key: HBASE-11438
 URL: https://issues.apache.org/jira/browse/HBASE-11438
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.5

 Attachments: HBASE-11438_v1.patch, HBASE-11438_v2.patch, 
 HBASE-11438_v3.patch


 This would be an action item that we would be addressing so that the 
 visibility labels could have UTF8 characters in them.  Also allow the user to 
 use a client supplied API that allows to specify the visibility labels inside 
 double quotes such that UTF8 characters and cases like , |, ! and double 
 quotes itself could be specified with proper escape sequence.  Accumulo too 
 provides one such API in the client side.





[jira] [Updated] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11593:
--

Assignee: stack

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack

 As stated in description





[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074947#comment-14074947
 ] 

Andrew Purtell commented on HBASE-11593:


One precommit build log where this test failed shows the kernel version as:
{noformat}
Linux asf901 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 
x86_64 x86_64 x86_64 GNU/Linux
{noformat}
and the Java version as:
{noformat}
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
{noformat}

I ran TestCacheConfig in a loop 10 times locally on an Ubuntu system with 
kernel 3.13.0-29-generic #53-Ubuntu and 64-bit Java 7u51 and 7u60. It passed 
every time. 


 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack

 As stated in description





[jira] [Comment Edited] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074947#comment-14074947
 ] 

Andrew Purtell edited comment on HBASE-11593 at 7/25/14 9:25 PM:
-

One precommit build log where this test failed shows the kernel version as:
{noformat}
Linux asf901 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 
x86_64 x86_64 x86_64 GNU/Linux
{noformat}

and the Java version as:
{noformat}
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
{noformat}

I ran TestCacheConfig in a loop 10 times locally on an Ubuntu system with 
kernel 3.13.0-29-generic #53-Ubuntu and 64-bit Java 7u51 and 7u60. It passed 
every time. 



was (Author: apurtell):
One precommit build log where this test failed shows the kernel version as:
{noformat]
Linux asf901 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 
x86_64 x86_64 x86_64 GNU/Linux
{noformat}
and the Java version as:
{noformat}
java version 1.7.0_51
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
{noformat}

I ran TestCacheConfig in a loop 10 times locally on an Ubuntu system with 
kernel 3.13.0-29-generic #53-Ubuntu and 64-bit Java 7u51 and 7u60. It passed 
every time. 


 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack

 As stated in description





[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074950#comment-14074950
 ] 

Andrew Purtell commented on HBASE-11593:


Ok, thanks Stack, cannot reproduce here

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack

 As stated in description





[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074946#comment-14074946
 ] 

stack commented on HBASE-11593:
---

I'm on this one. It is my test. Keeping the block cache instance in a static 
global for the tests' sake seems to be messing us up.  Undoing it...

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack

 As stated in description





[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074965#comment-14074965
 ] 

stack commented on HBASE-11593:
---

Thanks [~apurtell] I've been watching it and can't repro locally.  My guess is 
that the issue is that it is a small test, and all small tests run in the one JVM 
in parallel and are interfering with each other since the blockcache is in a 
static (for tests).
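A minimal illustration of why static state and parallel tests in one JVM do not mix (hypothetical names, not the real CacheConfig):

```java
public class StaticCacheSketch {
    // Shared by every test class loaded into the same JVM.
    static String globalBlockCache;

    public static void testA() {
        globalBlockCache = "lru"; // testA installs its cache...
    }

    public static String testB() {
        // ...and testB, running in parallel, observes whatever
        // testA (or any other test) happened to install.
        return globalBlockCache;
    }
}
```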

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack

 As stated in description





[jira] [Commented] (HBASE-11295) Long running scan produces OutOfOrderScannerNextException

2014-07-25 Thread Mark Baumgarten (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074999#comment-14074999
 ] 

Mark Baumgarten commented on HBASE-11295:
-

Thanks for your reply Yifu. I tried increasing (multiplying default values by 
four) different timeout settings (not really sure which - but altogether I 
fiddled with four different timeout values in my CDH cluster). My issue 
persists.

I am very new to hadoop and I don't know what an acceptable max timeout setting 
might be (I guess I could try setting it to several hours instead of minutes and 
just see what happens). I also feel a bit uncertain where the specific RPC 
timeout setting is found in my CDH manager interface - maybe the error message 
could point to this specific setting?

I managed to get my table created by using hive instead of impala - so I 
stopped worrying about it too much. I guess I just have to fiddle some more - 
but thanks for replying.

/Mark  

 Long running scan produces OutOfOrderScannerNextException
 -

 Key: HBASE-11295
 URL: https://issues.apache.org/jira/browse/HBASE-11295
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.96.0
Reporter: Jeff Cunningham
 Attachments: OutOfOrderScannerNextException.tar.gz


 Attached Files:
 HRegionServer.java - instrumented from 0.96.1.1-cdh5.0.0
 HBaseLeaseTimeoutIT.java - reproducing JUnit 4 test
 WaitFilter.java - Scan filter (extends FilterBase) that overrides 
 filterRowKey() to sleep during invocation
 SpliceFilter.proto - Protobuf definition for WaitFilter.java
 OutOfOrderScann_InstramentedServer.log - instrumented server log
 Steps.txt - this note
 Set up:
 In HBaseLeaseTimeoutIT, create a scan, set the given filter (which sleeps in 
 overridden filterRowKey() method) and set it on the scan, and scan the table.
 This is done in test client_0x0_server_15x10().
 Here's what I'm seeing (see also attached log):
 A new request comes into server (ID 1940798815214593802 - 
 RpcServer.handler=96) and a RegionScanner is created for it, cached by ID, 
 immediately looked up again, and the cached RegionScannerHolder's nextCallSeq 
 incremented (now at 1).
 The RegionScan thread goes to sleep in WaitFilter#filterRowKey().
 A short (variable) period later, another request comes into the server (ID 
 8946109289649235722 - RpcServer.handler=98) and the same series of events 
 happen to this request.
 At this point both RegionScanner threads are sleeping in 
 WaitFilter.filterRowKey(). After another period, the client retries another 
 scan request which thinks its next_call_seq is 0.  However, HRegionServer's 
 cached RegionScannerHolder thinks the matching RegionScanner's nextCallSeq 
 should be 1.
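The mismatch can be sketched as follows (illustrative names; the real check lives in the region server's scan handling):

```java
public class ScanSeqSketch {
    // The server rejects a next() call whose sequence number does not
    // match its cached value for that scanner, which surfaces to the
    // client as OutOfOrderScannerNextException.
    public static boolean isOutOfOrder(long clientNextCallSeq,
                                       long serverNextCallSeq) {
        return clientNextCallSeq != serverNextCallSeq;
    }
}
```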





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075010#comment-14075010
 ] 

Lars Hofhansl commented on HBASE-11586:
---

Honestly I am confused about where we use/display these metrics. Are these the 
global fs{read|write}Latency histograms that are shown on the region server UI 
and via JMX, or something else altogether? I'll check 0.94 in a bit.

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.
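The "fill quickly and are never drained" behaviour follows from ArrayBlockingQueue#offer, which simply rejects the element once the queue is full; a minimal demonstration:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class SampleBufferSketch {
    public static boolean offerThirdIntoCapacityTwo() {
        ArrayBlockingQueue<Long> samples = new ArrayBlockingQueue<>(2);
        samples.offer(1L);
        samples.offer(2L);
        // With no consumer draining the queue, every further sample
        // is silently rejected (offer returns false, does not block).
        return samples.offer(3L);
    }
}
```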





[jira] [Updated] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11593:
--

Attachment: 11593.txt

Removes static globals used for hosting block cache instances.  Let me see what 
it breaks.

Patch is a little large because I had to do cleanup in a few tests where cache 
settings are being done against the global cache and then a local cache, and it 
all gets hard to follow which cache is expected where.

Also set size bounds on the TestMapPool tests.  They could be any size and 
OOME'd (might be because we are retaining BlockCache instances; will see).

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack
 Attachments: 11593.txt


 As stated in description





[jira] [Updated] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11593:
--

Status: Patch Available  (was: Open)

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack
 Attachments: 11593.txt


 As stated in description





[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075031#comment-14075031
 ] 

Hadoop QA commented on HBASE-11593:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657915/11593.txt
  against trunk revision .
  ATTACHMENT ID: 12657915

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[46,24]
 AlgorithmId is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[47,24]
 CertificateAlgorithmId is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[48,24]
 CertificateIssuerName is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[49,24]
 CertificateSerialNumber is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[50,24]
 CertificateSubjectName is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[51,24]
 CertificateValidity is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[52,24]
 CertificateVersion is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[53,24]
 CertificateX509Key is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[54,24]
 X500Name is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[55,24]
 X509CertImpl is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[56,24]
 X509CertInfo is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[85,4]
 X509CertInfo is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[85,28]
 X509CertInfo is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[88,4]
 CertificateValidity is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[88,39]
 CertificateValidity is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[90,4]
 X500Name is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[90,25]
 X500Name is internal proprietary API and may be removed in a future release
[ERROR] 

[jira] [Commented] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075032#comment-14075032
 ] 

Hudson commented on HBASE-11388:


FAILURE: Integrated in HBase-0.98 #420 (See 
[https://builds.apache.org/job/HBase-0.98/420/])
HBASE-11388 The order parameter is wrong when invoking the constructor of the 
(jdcryans: rev abce9ecb4ccc9a797971cf922b316c573d42d92e)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java


 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.99.0, 0.98.3
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, ClusterKey and id in the constructor 
 of the class ReplicationPeer. But the order of the parameters is Configuration, 
 id and ClusterKey when invoking the constructor of the ReplicationPeer in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
   public ReplicationPeer(Configuration conf, String key, String id) throws 
 ReplicationException {
 this.conf = conf;
 this.clusterKey = key;
 this.id = id;
 try {
   this.reloadZkWatcher();
 } catch (IOException e) {
   throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
 }
   }
 {code}
 ReplicationPeersZKImpl#498
 {code}
 ReplicationPeer peer =
 new ReplicationPeer(peerConf, peerId, 
 ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}
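The effect of the swapped arguments can be shown with a stripped-down stand-in for ReplicationPeer (an illustrative class, not HBase code):

```java
public class SwappedArgsSketch {
    public final String clusterKey;
    public final String id;

    // Same shape as ReplicationPeer(Configuration, key, id), minus the conf.
    SwappedArgsSketch(String key, String id) {
        this.clusterKey = key;
        this.id = id;
    }

    public static SwappedArgsSketch buggyCall(String peerId, String zkKey) {
        // Caller passes (id, key): the fields end up silently reversed,
        // which is exactly the bug in getPeer.
        return new SwappedArgsSketch(peerId, zkKey);
    }
}
```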





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075043#comment-14075043
 ] 

Andrew Purtell commented on HBASE-11586:


bq. Are these the global fs{read|write}Latency histograms that are shown on the 
region server UI and via JMX

Yes. The same measurement is also used when updating one of the dynamic schema 
metrics. 

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075081#comment-14075081
 ] 

stack commented on HBASE-11586:
---

Seems like useful metrics.  How hard would it be to hook them up?  I could do it 
in a new issue?

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Commented] (HBASE-11586) HFile's HDFS op latency sampling code is not used

2014-07-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075103#comment-14075103
 ] 

Andrew Purtell commented on HBASE-11586:


Sure, we can revert the current committed change and add back the regionserver 
metrics for this that were removed earlier. I assume the removal of the metric 
was an intended and reviewed change though. 

 HFile's HDFS op latency sampling code is not used
 -

 Key: HBASE-11586
 URL: https://issues.apache.org/jira/browse/HBASE-11586
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.4
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11586.patch, HBASE-11586.patch


 HFileReaderV2 calls HFile#offerReadLatency and HFileWriterV2 calls 
 HFile#offerWriteLatency but the samples are never retrieved. There are no 
 callers of HFile#getReadLatenciesNanos, HFile#getWriteLatenciesNanos, and 
 related. The three ArrayBlockingQueues we are using as sample buffers in 
 HFile will fill quickly and are never drained. 
 There are also no callers of HFile#getReadTimeMs or HFile#getWriteTimeMs, and 
 related, so we are incrementing a set of AtomicLong counters that will never 
 be read nor reset.
 We are calling System.nanoTime in block read and write paths twice but not 
 utilizing the measurements.
 We should hook this code back up to metrics or remove it.
 We are also not using HFile#getChecksumFailuresCount anywhere but in some 
 unit test code.





[jira] [Updated] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11593:
--

Attachment: 11593v2.txt

Rebase

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack
 Attachments: 11593.txt, 11593v2.txt


 As stated in description





[jira] [Created] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)
Jeffrey Zhong created HBASE-11594:
-

 Summary: Unhandled NoNodeException in distributed log replay mode 
 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor


This issue happens when a RS doesn't have any WAL to be replayed. Master 
immediately finishes recovery for the RS while a region in recovery is still 
opening. 

Below is the exception in region server log:
{noformat}
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for 
/hbase/recovering-regions/20fcfad9746b3d83fff84fb773af6c80/h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:407)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:878)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:928)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:922)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.updateRecoveringRegionLastFlushedSequenceId(HRegionServer.java:4560)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1780)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:325)
{noformat} 

The following is related master log:
{noformat}
2014-03-18 11:27:14,192 DEBUG 
[MASTER_SERVER_OPERATIONS-h2-suse-uns-1395117052-hbase-4:6-3] 
master.DeadServer: Finished processing 
h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633
2014-03-18 11:27:14,199 INFO  
[M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
master.MasterFileSystem: Log dir for server 
h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633 does not 
exist
2014-03-18 11:27:14,203 INFO  
[M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
master.SplitLogManager: dead splitlog workers 
[h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633]
2014-03-18 11:27:14,204 DEBUG 
[M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
master.SplitLogManager: Scheduling batch of logs to split
2014-03-18 11:27:14,206 INFO  
[M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
master.SplitLogManager: started splitting 0 logs in []
{noformat}
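
A fix along these lines would treat the missing recovering-region znode as a benign race (the master already finished recovery and deleted it) rather than an unhandled error. The sketch below mimics that shape with stand-in types; the class, interface, and exception names are illustrative, not the actual patch.

```java
// Sketch: swallow a "node already gone" condition when updating the
// last-flushed sequence id, since the master may have finished recovery
// (and removed the recovering-region znode) before the region opened.
public class RecoveringRegionUpdater {
  static class NoNodeException extends Exception {}

  interface Zk {  // stand-in for the ZooKeeper client
    void setData(String path) throws NoNodeException;
  }

  static boolean updateLastFlushedSeqId(Zk zk, String path) {
    try {
      zk.setData(path);
      return true;              // znode updated normally
    } catch (NoNodeException e) {
      // Recovery already completed and removed the znode: nothing to do.
      return false;
    }
  }
}
```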





[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-11594:
--

Fix Version/s: 0.98.4
   0.99.0

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.4







[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-11594:
--

Attachment: hbase-11594.patch

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.4

 Attachments: hbase-11594.patch







[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-11594:
--

Status: Patch Available  (was: Open)

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.4

 Attachments: hbase-11594.patch







[jira] [Updated] (HBASE-11331) [blockcache] lazy block decompression

2014-07-25 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11331:
-

Attachment: HBASE-11331.01.patch

Rebased to master, fixed most tests in io.hfile.*

 [blockcache] lazy block decompression
 -

 Key: HBASE-11331
 URL: https://issues.apache.org/jira/browse/HBASE-11331
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11331.00.patch, HBASE-11331.01.patch, 
 HBASE-11331LazyBlockDecompressperfcompare.pdf


 Maintaining data in its compressed form in the block cache will greatly 
 increase our effective blockcache size and should show a meaning improvement 
 in cache hit rates in well designed applications. The idea here is to lazily 
 decompress/decrypt blocks when they're consumed, rather than as soon as 
 they're pulled off of disk.
 This is related to but less invasive than HBASE-8894.
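
The core idea can be sketched in a few lines: the cache stores blocks in compressed form and only decompresses on read. The sketch below uses `java.util.zip` as a stand-in for HBase's block codecs; the `LazyBlockCache` class and its methods are illustrative names, not the patch's API.

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Sketch: keep blocks compressed while cached; decompress lazily on get().
public class LazyBlockCache {
  private final Map<String, byte[]> compressed = new HashMap<>();

  void put(String key, byte[] block) {
    Deflater d = new Deflater();
    d.setInput(block);
    d.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    while (!d.finished()) out.write(buf, 0, d.deflate(buf));
    compressed.put(key, out.toByteArray());  // cache holds compressed bytes
  }

  byte[] get(String key) throws Exception {
    byte[] c = compressed.get(key);
    if (c == null) return null;
    Inflater inf = new Inflater();           // decompress only when consumed
    inf.setInput(c);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    while (!inf.finished()) out.write(buf, 0, inf.inflate(buf));
    return out.toByteArray();
  }
}
```

With compressible data, many more blocks fit in the same heap at the cost of an inflate on each cache hit, which is the benefit-vs-cost trade-off debated below.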





[jira] [Commented] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl

2014-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075168#comment-14075168
 ] 

Hudson commented on HBASE-11388:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #399 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/399/])
HBASE-11388 The order parameter is wrong when invoking the constructor of the 
(jdcryans: rev abce9ecb4ccc9a797971cf922b316c573d42d92e)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java


 The order parameter is wrong when invoking the constructor of the 
 ReplicationPeer In the method getPeer of the class ReplicationPeersZKImpl
 -

 Key: HBASE-11388
 URL: https://issues.apache.org/jira/browse/HBASE-11388
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.99.0, 0.98.3
Reporter: Qianxi Zhang
Assignee: Qianxi Zhang
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE_11388.patch, HBASE_11388_trunk_V1.patch


 The parameters are Configuration, clusterKey and id in the constructor 
 of the class ReplicationPeer, but the parameter order is Configuration, 
 id and clusterKey when invoking the constructor of ReplicationPeer in 
 the method getPeer of the class ReplicationPeersZKImpl.
 ReplicationPeer#76
 {code}
   public ReplicationPeer(Configuration conf, String key, String id) throws
       ReplicationException {
     this.conf = conf;
     this.clusterKey = key;
     this.id = id;
     try {
       this.reloadZkWatcher();
     } catch (IOException e) {
       throw new ReplicationException("Error connecting to peer cluster with peerId=" + id, e);
     }
   }
 {code}
 ReplicationPeersZKImpl#498
 {code}
 ReplicationPeer peer =
 new ReplicationPeer(peerConf, peerId, 
 ZKUtil.getZooKeeperClusterKey(peerConf));
 {code}
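
The bug is easy to reproduce in miniature: when two parameters share the same type (here both Strings), swapping them at a call site compiles cleanly and fails only at runtime. A simplified illustration (not the HBase classes; the field names mirror the description above):

```java
// Both parameters are Strings, so a swapped call site compiles but stores
// the peer id in the clusterKey field and vice versa.
public class Peer {
  final String clusterKey;
  final String id;

  Peer(String key, String id) {  // declared order: clusterKey, then id
    this.clusterKey = key;
    this.id = id;
  }
}
```

The fix is simply to pass the arguments in the declared order, e.g. `new Peer(clusterKey, peerId)` instead of `new Peer(peerId, clusterKey)`.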





[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075195#comment-14075195
 ] 

Hadoop QA commented on HBASE-11593:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657945/11593v2.txt
  against trunk revision .
  ATTACHMENT ID: 12657945

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java:[230,0]
 error: class, interface, or enum expected
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java:[230,0]
 error: class, interface, or enum expected
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10190//console

This message is automatically generated.

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack
 Attachments: 11593.txt, 11593v2.txt


 As stated in description





[jira] [Updated] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11593:
--

Attachment: 11593v3.txt

Really rebase. Set down the default block cache size in tests now that each 
server has its own rather than sharing what is in the global static. The fact 
that each server now has its own blockcache is causing memory pressure. Let's 
see how this does. Also set down the thread stack size.

Watching the tests run, it seems to use less memory in general. I see an issue 
locally with TestRestoreSnapshotHelper when using a 1G heap. Let's see what 
hadoopqa says.

 TestCacheConfig failing consistently in precommit builds
 

 Key: HBASE-11593
 URL: https://issues.apache.org/jira/browse/HBASE-11593
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: stack
 Attachments: 11593.txt, 11593v2.txt, 11593v3.txt


 As stated in description





[jira] [Commented] (HBASE-11331) [blockcache] lazy block decompression

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075225#comment-14075225
 ] 

stack commented on HBASE-11331:
---

bq. Do you think that's necessary for this feature, or an acceptable follow-on 
JIRA?

Follow-on.

Trying your latest patch.  Will make a new report with more variety to it.

 [blockcache] lazy block decompression
 -

 Key: HBASE-11331
 URL: https://issues.apache.org/jira/browse/HBASE-11331
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11331.00.patch, HBASE-11331.01.patch, 
 HBASE-11331LazyBlockDecompressperfcompare.pdf







[jira] [Commented] (HBASE-11593) TestCacheConfig failing consistently in precommit builds

2014-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075226#comment-14075226
 ] 

Hadoop QA commented on HBASE-11593:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657965/11593v3.txt
  against trunk revision .
  ATTACHMENT ID: 12657965

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[46,24]
 AlgorithmId is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[47,24]
 CertificateAlgorithmId is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[48,24]
 CertificateIssuerName is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[49,24]
 CertificateSerialNumber is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[50,24]
 CertificateSubjectName is internal proprietary API and may be removed in a 
future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[51,24]
 CertificateValidity is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[52,24]
 CertificateVersion is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[53,24]
 CertificateX509Key is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[54,24]
 X500Name is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[55,24]
 X509CertImpl is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[56,24]
 X509CertInfo is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[85,4]
 X509CertInfo is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[85,28]
 X509CertInfo is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[88,4]
 CertificateValidity is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[88,39]
 CertificateValidity is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[90,4]
 X500Name is internal proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/http/ssl/KeyStoreTestUtil.java:[90,25]
 X500Name is internal proprietary API and may be removed in a future release
[ERROR] 

[jira] [Commented] (HBASE-11331) [blockcache] lazy block decompression

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075234#comment-14075234
 ] 

stack commented on HBASE-11331:
---

bq. tilted against.

... it being on by default.  For some use cases enabling it will make sense, 
but not in the general case.

 [blockcache] lazy block decompression
 -

 Key: HBASE-11331
 URL: https://issues.apache.org/jira/browse/HBASE-11331
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11331.00.patch, HBASE-11331.01.patch, 
 HBASE-11331LazyBlockDecompressperfcompare.pdf







[jira] [Commented] (HBASE-11331) [blockcache] lazy block decompression

2014-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14075233#comment-14075233
 ] 

stack commented on HBASE-11331:
---

[~ndimiduk] IMO, this can't be on by default given the previous report.  The 
benefit is not enough.  Will post a new report in the next few days, but I 
think the benefit vs. cost will be about the same; tilted against.

 [blockcache] lazy block decompression
 -

 Key: HBASE-11331
 URL: https://issues.apache.org/jira/browse/HBASE-11331
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-11331.00.patch, HBASE-11331.01.patch, 
 HBASE-11331LazyBlockDecompressperfcompare.pdf







[jira] [Updated] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11516:
---

Fix Version/s: 2.0.0
   0.99.0

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch, 
 HBASE-11516_v3.patch, region_server_webui.png


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Updated] (HBASE-11588) RegionServerMetricsWrapperRunnable misused the 'period' parameter

2014-07-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11588:
---

Hadoop Flags: Reviewed

 RegionServerMetricsWrapperRunnable misused the 'period' parameter
 -

 Key: HBASE-11588
 URL: https://issues.apache.org/jira/browse/HBASE-11588
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.98.4
Reporter: Victor Xu
Assignee: Victor Xu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11588.patch


 The 'period' parameter in RegionServerMetricsWrapperRunnable is in 
 milliseconds. When initializing the 'lastRan' variable, the original code 
 treated 'period' as if it were in seconds.
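
The unit mix-up can be shown in a few lines. This is a simplified sketch of one plausible shape of the bug, not the actual class: `period` is already in milliseconds, so initializing `lastRan` must not scale it again.

```java
// period is in milliseconds throughout.
public class MetricsRunnableSketch {
  static long initLastRanBuggy(long nowMs, long periodMs) {
    return nowMs - periodMs * 1000;  // bug: scales as if periodMs were seconds
  }

  static long initLastRanFixed(long nowMs, long periodMs) {
    return nowMs - periodMs;         // correct: both values already in ms
  }
}
```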





[jira] [Updated] (HBASE-11588) RegionServerMetricsWrapperRunnable misused the 'period' parameter

2014-07-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11588:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch, Victor.

 RegionServerMetricsWrapperRunnable misused the 'period' parameter
 -

 Key: HBASE-11588
 URL: https://issues.apache.org/jira/browse/HBASE-11588
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.98.4
Reporter: Victor Xu
Assignee: Victor Xu
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: HBASE-11588.patch




