[jira] [Updated] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14089:
---
Fix Version/s: (was: 1.2.0)
   1.2.1

Pushed to 0.98 and up. Fully backwards-compatible change. Ran all the ZK tests 
locally on every modified branch; all pass. 

 Remove unnecessary draw of system entropy from RecoverableZooKeeper
 ---

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at the places where we use SecureRandom, which can block if 
 system entropy is insufficient, in the 0.98 and master branch code. (Random, 
 in contrast, is a PRNG seeded by System#nanoTime; it doesn't draw from system 
 entropy.) Most uses are in encryption-related code, our native encryption and 
 SSL, but we also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do a setData. 
 Conceivably, if entropy runs too low, we could block unexpectedly while 
 constructing data to write out to a znode, until more entropy is available. 
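The fix presumably swaps the blocking SecureRandom for a non-blocking PRNG when generating the salt. A minimal self-contained sketch of that idea (hypothetical method shape and layout, not the actual HBase code):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

class SaltSketch {
    // Hypothetical stand-in for RecoverableZooKeeper#appendMetadata: prefix
    // the payload with a 4-byte salt. The salt only needs to make retried
    // setData payloads distinguishable, not be cryptographically strong, so
    // a non-blocking PRNG suffices and never waits on system entropy.
    static byte[] appendMetadata(byte[] id, byte[] data) {
        ByteBuffer out = ByteBuffer.allocate(4 + id.length + data.length);
        out.putInt(ThreadLocalRandom.current().nextInt()); // never blocks
        out.put(id);
        out.put(data);
        return out.array();
    }

    public static void main(String[] args) {
        byte[] wrapped = appendMetadata(new byte[] {1, 2}, new byte[] {3, 4, 5});
        System.out.println(wrapped.length); // 4-byte salt + 2 + 3 = 9
    }
}
```

Since the salt never leaves the cluster's own metadata path, losing cryptographic strength here costs nothing, which is why the change is fully compatible.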



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14089:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)



[jira] [Commented] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629169#comment-14629169
 ] 

Hudson commented on HBASE-14089:


FAILURE: Integrated in HBase-1.2 #70 (See 
[https://builds.apache.org/job/HBase-1.2/70/])
HBASE-14089 Remove unnecessary draw of system entropy from RecoverableZooKeeper 
(apurtell: rev 0ed03c287a443c7c85ddd76bfca2bffbcfa8de6c)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java




[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-07-15 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629200#comment-14629200
 ] 

Ashish Singhi commented on HBASE-8642:
--

Thanks [~andrew.purt...@gmail.com].

bq. We're missing shell tests for the new list_table_snapshots and 
delete_table_snapshots shell commands
Actually we are missing shell tests for the entire snapshot feature. I will 
create a JIRA soon to address this.

{quote}
It's a bit odd that the delete_table_snapshots command asks for confirmation 
where others do not. That is reasonable given how destructive it could be. We 
can adjust this minor detail with a follow on issue if need be
{quote}
AFAIK all the commands that perform an operation on a list of entities ask for 
confirmation, e.g. drop_all, delete_all_snapshot, etc.

 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642-0.98.patch, HBASE-8642-v1.patch, 
 HBASE-8642-v2.patch, HBASE-8642-v3.patch, HBASE-8642-v4.patch, 
 HBASE-8642.patch


 Support listing and deleting snapshots by table name.
 User scenario:
 A user wants to delete, for a table 't', all the snapshots taken in January, 
 where the snapshot names start with 'Jan'.
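The selection logic such a command needs can be sketched as a simple pattern filter over the table's snapshot names (hypothetical helper; the real shell command drives the Admin API rather than this standalone method):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

class SnapshotFilterSketch {
    // Given all snapshot names for one table, keep only those matching the
    // user's pattern (e.g. "Jan.*" for the January scenario above).
    static List<String> matching(List<String> snapshotNames, String regex) {
        Pattern p = Pattern.compile(regex);
        return snapshotNames.stream()
                .filter(n -> p.matcher(n).matches())
                .collect(Collectors.toList());
    }
}
```

The delete command would then iterate the matched names and remove each snapshot, which is why a confirmation prompt is reasonable.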





[jira] [Commented] (HBASE-12296) Filters should work with ByteBufferedCell

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629215#comment-14629215
 ] 

Hudson commented on HBASE-12296:


FAILURE: Integrated in HBase-TRUNK #6654 (See 
[https://builds.apache.org/job/HBase-TRUNK/6654/])
HBASE-12296 Filters should work with ByteBufferedCell. (anoopsamjohn: rev 
ebdac4b52e67614db70b59be8cd8143efe701911)
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
* hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestComparators.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultiRowRangeFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestLongComparator.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/LongComparator.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java


 Filters should work with ByteBufferedCell
 -

 Key: HBASE-12296
 URL: https://issues.apache.org/jira/browse/HBASE-12296
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12296_v1.patch, HBASE-12296_v1.patch


 We have now added a server-side extension of Cell, ByteBufferedCell, where 
 Cells are backed by a ByteBuffer (on heap or off heap). When the Cell is 
 backed by an off-heap buffer, the getXXXArray() APIs have to create a temp 
 byte[], copy the data into it, and return that, which is a bit costly. We 
 have avoided this in areas like CellComparator/SQM etc., but the Filter area 
 was not touched in that patch. This JIRA aims at doing the same in the Filter 
 area. 
 E.g.: SingleColumnValueFilter checks the cell value against the given value 
 condition. It uses getValueArray() to get the cell value bytes. When the cell 
 is ByteBuffer backed, it has to use the getValueByteBuffer() API instead.
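The cost difference can be illustrated with plain JDK ByteBuffers (a hedged sketch, not the actual HBase Cell API): the copying path allocates a temporary array per cell, while the in-place path compares directly against the buffer.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

class BBCompareSketch {
    // Copying path: materialize a temp byte[] from the buffer, then compare.
    // This is what a getValueArray()-style API must do for off-heap cells.
    static boolean equalsViaCopy(ByteBuffer value, byte[] expected) {
        byte[] tmp = new byte[value.remaining()];
        value.duplicate().get(tmp); // extra allocation + copy per cell
        return Arrays.equals(tmp, expected);
    }

    // Copy-free path: compare positionally against the buffer itself,
    // the kind of access a getValueByteBuffer()-style API enables.
    static boolean equalsInPlace(ByteBuffer value, byte[] expected) {
        if (value.remaining() != expected.length) return false;
        for (int i = 0; i < expected.length; i++) {
            if (value.get(value.position() + i) != expected[i]) return false;
        }
        return true;
    }
}
```

In a scan touching millions of cells, avoiding the per-cell allocation in the first path is the whole point of the patch.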





[jira] [Commented] (HBASE-14069) Add the ability for RegionSplitter to rolling split without using a SplitAlgorithm

2015-07-15 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629098#comment-14629098
 ] 

Heng Chen commented on HBASE-14069:
---

Does this mean that if the SplitAlgorithm class is not set on the command line, 
a default one (UniformSplit?) will be used?

 Add the ability for RegionSplitter to rolling split without using a 
 SplitAlgorithm
 --

 Key: HBASE-14069
 URL: https://issues.apache.org/jira/browse/HBASE-14069
 Project: HBase
  Issue Type: New Feature
Reporter: Elliott Clark
Assignee: Abhilash

 RegionSplitter is the utility that can rolling-split regions. It would be 
 nice to be able to split regions and have the normal split points get 
 computed for me, so that I'm not reliant on knowing the data distribution.





[jira] [Updated] (HBASE-13838) Fix shared TaskStatusTmpl.jamon issues (coloring, content, etc.)

2015-07-15 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig updated HBASE-13838:
---
Attachment: hbase-13838-v1.patch

 Fix shared TaskStatusTmpl.jamon issues (coloring, content, etc.)
 

 Key: HBASE-13838
 URL: https://issues.apache.org/jira/browse/HBASE-13838
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.1.0
Reporter: Lars George
Assignee: Matt Warhaftig
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: hbase-13838-v1.patch


 There are a few issues with the shared TaskStatusTmpl:
 - Client operations tab is always empty 
 For Master this is expected, but for RegionServers there is never anything 
 listed either. Fix for RS status page (probably caused by params not 
 containing Operation subclass anymore, but some PB generated classes?)
 - Hide “Client Operations” tab for master UI
 Since operations are RS only. Or we fix this and make other calls show here.
 - The alert-error for aborted tasks is not set in CSS at all
 When a task was aborted it should be amber or red, but the assigned style is 
 not in any of the linked stylesheets (abort-error). Add.





[jira] [Updated] (HBASE-13838) Fix shared TaskStatusTmpl.jamon issues (coloring, content, etc.)

2015-07-15 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig updated HBASE-13838:
---
Assignee: Matt Warhaftig
  Status: Patch Available  (was: Open)

Submitted patch 'hbase-13838-v1.patch'. Here are the details on the patch 
versus ticket request:

* Client operations tab is always empty
For Master this is expected, but for RegionServers there is never anything 
listed either. Fix for RS status page (probably caused by params not containing 
Operation subclass anymore, but some PB generated classes?)
{color:red}MW - Replaced {{Operations}} subclass requirement with 
{{ClientProtos}} declaring class requirement.{color}

* Hide “Client Operations” tab for master UI
Since operations are RS only. Or we fix this and make other calls show here.
{color:red}MW - Hid 'Client Operations' from Master UI.{color}

* The alert-error for aborted tasks is not set in CSS at all
When a task was aborted it should be amber or red, but the assigned style is 
not in any of the linked stylesheets (abort-error). Add.
{color:red}MW - Replaced 'alert-error' with a valid Bootstrap alert type. 
{color}



[jira] [Commented] (HBASE-13971) Flushes stuck since 6 hours on a regionserver.

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629081#comment-14629081
 ] 

Hadoop QA commented on HBASE-13971:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745550/13971-v1.txt
  against master branch at commit 5315f0f11ffa0f750e5615617424baa9271611af.
  ATTACHMENT ID: 12745550

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14791//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14791//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14791//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14791//console

This message is automatically generated.

 Flushes stuck since 6 hours on a regionserver.
 --

 Key: HBASE-13971
 URL: https://issues.apache.org/jira/browse/HBASE-13971
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.3.0
 Environment: Caused while running IntegrationTestLoadAndVerify for 20 
 M rows on cluster with 32 region servers each with max heap size of 24GBs.
Reporter: Abhilash
Assignee: Ted Yu
Priority: Critical
 Attachments: 13971-v1.txt, 13971-v1.txt, jstack.1, jstack.2, 
 jstack.3, jstack.4, jstack.5, rsDebugDump.txt, screenshot-1.png


 One region server is stuck while flushing (possible deadlock). It has been 
 trying to flush two regions for the last 6 hours (see the screenshot).
 This happened while running IntegrationTestLoadAndVerify for 20 M rows with 
 600 mapper jobs and 100 back references. There have been ~37 million writes 
 on each regionserver so far, but no writes have happened on any regionserver 
 in the past 6 hours and their memstore size is zero (I don't know if this is 
 related). But this particular regionserver has had a memstore size of 9 GB 
 for the past 6 hours.
 Relevant snaps from debug dump:
 Tasks:
 ===
 Task: Flushing 
 IntegrationTestLoadAndVerify,R\x9B\x1B\xBF\xAE\x08\xD1\xA2,1435179555993.8e2d075f94ce7699f416ec4ced9873cd.
 Status: RUNNING:Preparing to flush by snapshotting stores in 
 8e2d075f94ce7699f416ec4ced9873cd
 Running for 22034s
 Task: Flushing 
 IntegrationTestLoadAndVerify,\x93\xA385\x81Z\x11\xE6,1435179555993.9f8d0e01a40405b835bf6e5a22a86390.
 Status: RUNNING:Preparing to flush by snapshotting stores in 
 9f8d0e01a40405b835bf6e5a22a86390
 Running for 22033s
 Executors:
 ===
 ...
 Thread 139 (MemStoreFlusher.1):
   State: WAITING
   Blocked count: 139711
   Waited count: 239212
   Waiting on java.util.concurrent.CountDownLatch$Sync@b9c094a
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
 
 

[jira] [Commented] (HBASE-14069) Add the ability for RegionSplitter to rolling split without using a SplitAlgorithm

2015-07-15 Thread Abhilash (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629117#comment-14629117
 ] 

Abhilash commented on HBASE-14069:
--

No. It won't use any of the algorithms defined in RegionSplitter. It simply 
calls splitRegion(regionName) for each region, one by one (the same function 
that is called when manually splitting a region). It keeps splitting regions 
until we have the given number of regions or all regions are smaller than a 
given size (in a BFS kind of order).
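That loop can be sketched as a size-driven simulation (hypothetical helper; the real utility would call the Admin split API per region and wait for the daughters to come online):

```java
import java.util.ArrayDeque;
import java.util.Queue;

class RollingSplitSketch {
    // Split regions in FIFO (BFS-like) order until either the target region
    // count is reached or every region is below the size threshold.
    // Region sizes stand in for regions; splitting one region yields two
    // daughters of roughly half the size each.
    static int rollingSplit(long[] regionSizes, int targetCount, long maxSize) {
        Queue<Long> regions = new ArrayDeque<>();
        for (long s : regionSizes) regions.add(s);
        while (regions.size() < targetCount) {
            Long big = null;
            for (Long s : regions) if (s > maxSize) { big = s; break; }
            if (big == null) break;      // all regions under the threshold
            regions.remove(big);         // stand-in for splitRegion(name)
            regions.add(big / 2);        // each daughter gets ~half the data
            regions.add(big - big / 2);
        }
        return regions.size();
    }
}
```

Because the split point is chosen by the server from the actual data, no SplitAlgorithm (and no knowledge of the key distribution) is needed.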



[jira] [Commented] (HBASE-14048) isPBMagicPrefix should always check for null data

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629183#comment-14629183
 ] 

Hadoop QA commented on HBASE-14048:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12745562/HBASE-14048-0.98.patch
  against 0.98 branch at commit 6c6c7c51f6bd31af1fa99e3d76ab54a7613c4086.
  ATTACHMENT ID: 12745562

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
23 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat.testWithMapReduceImpl(TestTableSnapshotInputFormat.java:241)
at 
org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduce(TableSnapshotInputFormatTestBase.java:111)
at 
org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduceSingleRegion(TableSnapshotInputFormatTestBase.java:90)
at 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:184)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testRegionCrossingHFileSplit(TestLoadIncrementalHFiles.java:195)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testRegionCrossingHFileSplit(TestLoadIncrementalHFiles.java:173)
at 
org.apache.hadoop.hbase.mapreduce.TestImportTsv.testDryModeWithoutBulkOutputAndTableExists(TestImportTsv.java:293)
at 
org.apache.hadoop.hbase.mapreduce.TestImportExport.testWithFilter(TestImportExport.java:447)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14793//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14793//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14793//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14793//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14793//console

This message is automatically generated.


[jira] [Commented] (HBASE-14048) isPBMagicPrefix should always check for null data

2015-07-15 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629076#comment-14629076
 ] 

Matteo Bertozzi commented on HBASE-14048:
-

+1

 isPBMagicPrefix should always check for null data
 -

 Key: HBASE-14048
 URL: https://issues.apache.org/jira/browse/HBASE-14048
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.13
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.14

 Attachments: HBASE-14048-0.98.patch


 Example:
 {noformat}
 2015-07-09 04:20:30,649 ERROR [ver60020-EventThread] zookeeper.ClientCnxn - 
 Error while calling watcher 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.isPBMagicPrefix(ProtobufUtil.java:241)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.startNewSubprocedure(ZKProcedureMemberRpcs.java:203)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.waitForNewProcedures(ZKProcedureMemberRpcs.java:172)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.access$100(ZKProcedureMemberRpcs.java:55)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs$1.nodeChildrenChanged(ZKProcedureMemberRpcs.java:107)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:358)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
 at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
 {noformat}
 This is observed with 0.98.
 There may be a deeper cause, but let's start by fixing the obvious problem. 
 Also audit ProcedureV2 on later branches.
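The obvious fix can be sketched as a null-safe prefix check (a hedged stand-in for ProtobufUtil#isPBMagicPrefix; the exact magic bytes and signature here are assumptions, not the actual HBase source):

```java
class PbMagicSketch {
    // Assumed PB magic marker prepended to protobuf-serialized znode data.
    static final byte[] PB_MAGIC = new byte[] {'P', 'B', 'U', 'F'};

    // Returns false for null or too-short data instead of throwing the NPE
    // seen in the stack trace above.
    static boolean isPBMagicPrefix(byte[] data) {
        if (data == null || data.length < PB_MAGIC.length) return false;
        for (int i = 0; i < PB_MAGIC.length; i++) {
            if (data[i] != PB_MAGIC[i]) return false;
        }
        return true;
    }
}
```

Callers watching znodes can then treat a deleted or empty node as "not PB data" rather than crashing the ZooKeeper event thread.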





[jira] [Updated] (HBASE-12296) Filters should work with ByteBufferedCell

2015-07-15 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12296:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master with a TODO as mentioned in the comment.
Thanks for the reviews Ram and Stack.



[jira] [Commented] (HBASE-13971) Flushes stuck since 6 hours on a regionserver.

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629195#comment-14629195
 ] 

Hadoop QA commented on HBASE-13971:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745564/13971-v1.txt
  against master branch at commit 6c6c7c51f6bd31af1fa99e3d76ab54a7613c4086.
  ATTACHMENT ID: 12745564

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14794//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14794//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14794//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14794//console

This message is automatically generated.

 
 

[jira] [Commented] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629212#comment-14629212
 ] 

Hudson commented on HBASE-14089:


SUCCESS: Integrated in HBase-1.2-IT #54 (See 
[https://builds.apache.org/job/HBase-1.2-IT/54/])
HBASE-14089 Remove unnecessary draw of system entropy from RecoverableZooKeeper 
(apurtell: rev 0ed03c287a443c7c85ddd76bfca2bffbcfa8de6c)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java


 Remove unnecessary draw of system entropy from RecoverableZooKeeper
 ---

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 
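
The idea behind the patch can be sketched as below. The salt only has to disambiguate repeated writes of the same payload, not resist prediction, so a non-blocking PRNG suffices. The class name, wire layout, and use of ThreadLocalRandom here are illustrative assumptions, not the committed RecoverableZooKeeper code:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

public class ZNodeMetadata {
    // Hypothetical marker byte for salted metadata; not the real constant.
    static final byte MAGIC = (byte) 0xFF;

    // Prefix znode data with a magic byte, a random salt, and the writer id.
    // ThreadLocalRandom never draws from the system entropy pool, so this
    // cannot block under entropy starvation the way SecureRandom can.
    public static byte[] appendMetadata(byte[] id, byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.write(MAGIC);
        out.writeInt(ThreadLocalRandom.current().nextInt()); // non-blocking salt
        out.writeInt(id.length);
        out.write(id);
        out.write(data);
        return bos.toByteArray();
    }
}
```

Unlike SecureRandom, a plain PRNG never touches the kernel entropy pool, so a setData call can no longer stall waiting for entropy.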



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14048) isPBMagicPrefix should always check for null data

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14048:
---
Attachment: HBASE-14048-0.98.patch

I checked all branches. Only an issue on 0.98. Trivial patch attached

 isPBMagicPrefix should always check for null data
 -

 Key: HBASE-14048
 URL: https://issues.apache.org/jira/browse/HBASE-14048
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.13
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.14

 Attachments: HBASE-14048-0.98.patch


 Example:
 {noformat}
 2015-07-09 04:20:30,649 ERROR [ver60020-EventThread] zookeeper.ClientCnxn - 
 Error while calling watcher 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.isPBMagicPrefix(ProtobufUtil.java:241)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.startNewSubprocedure(ZKProcedureMemberRpcs.java:203)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.waitForNewProcedures(ZKProcedureMemberRpcs.java:172)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.access$100(ZKProcedureMemberRpcs.java:55)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs$1.nodeChildrenChanged(ZKProcedureMemberRpcs.java:107)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:358)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
 at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
 {noformat}
 This is observed with 0.98.
 There may be a deeper cause but let's start by fixing the obvious problem. 
 Audit ProcedureV2 also on later branches.
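
The null check the patch needs can be sketched as follows; PB_MAGIC's value ("PBUF") is reproduced here for illustration, and this helper is a stand-in rather than the actual ProtobufUtil code:

```java
public class PbMagic {
    // The 4-byte prefix HBase writes ahead of protobuf-encoded content.
    private static final byte[] PB_MAGIC = new byte[]{'P', 'B', 'U', 'F'};

    // Null-safe prefix check: a znode can legitimately hold null data
    // (e.g. a bare coordination node), so the test must tolerate null
    // instead of throwing NullPointerException as in the stack trace above.
    public static boolean isPBMagicPrefix(byte[] bytes) {
        if (bytes == null || bytes.length < PB_MAGIC.length) {
            return false;
        }
        for (int i = 0; i < PB_MAGIC.length; i++) {
            if (bytes[i] != PB_MAGIC[i]) {
                return false;
            }
        }
        return true;
    }
}
```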



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14048) isPBMagicPrefix should always check for null data

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14048:
---
Status: Patch Available  (was: Open)

 isPBMagicPrefix should always check for null data
 -

 Key: HBASE-14048
 URL: https://issues.apache.org/jira/browse/HBASE-14048
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.13
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.14

 Attachments: HBASE-14048-0.98.patch


 Example:
 {noformat}
 2015-07-09 04:20:30,649 ERROR [ver60020-EventThread] zookeeper.ClientCnxn - 
 Error while calling watcher 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.isPBMagicPrefix(ProtobufUtil.java:241)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.startNewSubprocedure(ZKProcedureMemberRpcs.java:203)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.waitForNewProcedures(ZKProcedureMemberRpcs.java:172)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs.access$100(ZKProcedureMemberRpcs.java:55)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureMemberRpcs$1.nodeChildrenChanged(ZKProcedureMemberRpcs.java:107)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:358)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
 at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
 {noformat}
 This is observed with 0.98.
 There may be a deeper cause but let's start by fixing the obvious problem. 
 Audit ProcedureV2 also on later branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629234#comment-14629234
 ] 

Hudson commented on HBASE-14089:


FAILURE: Integrated in HBase-1.3 #59 (See 
[https://builds.apache.org/job/HBase-1.3/59/])
HBASE-14089 Remove unnecessary draw of system entropy from RecoverableZooKeeper 
(apurtell: rev c42fc1dd56a651a52b00d03833b00fb800fa5cd9)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java


 Remove unnecessary draw of system entropy from RecoverableZooKeeper
 ---

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629235#comment-14629235
 ] 

Hudson commented on HBASE-8642:
---

FAILURE: Integrated in HBase-1.3 #59 (See 
[https://builds.apache.org/job/HBase-1.3/59/])
HBASE-8642 [Snapshot] List and delete snapshot by table (apurtell: rev 
789d2a94b7e8c2d97c1b52f4f1f0d47922b711a2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClient.java
* hbase-shell/src/main/ruby/shell/commands/delete_table_snapshots.rb
* hbase-shell/src/main/ruby/shell/commands/list_table_snapshots.rb
* hbase-shell/src/main/ruby/hbase/admin.rb
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* hbase-shell/src/main/ruby/shell.rb


 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642-0.98.patch, HBASE-8642-v1.patch, 
 HBASE-8642-v2.patch, HBASE-8642-v3.patch, HBASE-8642-v4.patch, 
 HBASE-8642.patch


 Support listing and deleting snapshots by table name.
 User scenario:
 A user wants to delete all the snapshots taken in January 
 for a table 't', where the snapshot names start with 'Jan'.
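
The new shell commands boil down to selecting snapshots by table name plus a snapshot-name regex. A self-contained sketch of that selection logic — the String[] {table, name} pairs are stand-ins for illustration, not HBase's snapshot descriptors:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class SnapshotFilter {
    // Keep only snapshots of the given table whose name matches the regex,
    // mirroring what list_table_snapshots / delete_table_snapshots do.
    public static List<String> select(List<String[]> snapshots,
                                      String table, String nameRegex) {
        Pattern p = Pattern.compile(nameRegex);
        List<String> out = new ArrayList<>();
        for (String[] s : snapshots) {           // s = {tableName, snapshotName}
            if (s[0].equals(table) && p.matcher(s[1]).matches()) {
                out.add(s[1]);
            }
        }
        return out;
    }
}
```

With this selection in hand, the user scenario above is `select(all, "t", "Jan.*")` followed by deleting each match.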



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14097) Log link to client scan troubleshooting section when scanner exceptions happen.

2015-07-15 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-14097:
---

 Summary: Log link to client scan troubleshooting section when 
scanner exceptions happen.
 Key: HBASE-14097
 URL: https://issues.apache.org/jira/browse/HBASE-14097
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13838) Fix shared TaskStatusTmpl.jamon issues (coloring, content, etc.)

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629232#comment-14629232
 ] 

Hadoop QA commented on HBASE-13838:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745568/hbase-13838-v1.patch
  against master branch at commit 6c6c7c51f6bd31af1fa99e3d76ab54a7613c4086.
  ATTACHMENT ID: 12745568

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1877 checkstyle errors (more than the master's current 1873 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14795//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14795//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14795//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14795//console

This message is automatically generated.

 Fix shared TaskStatusTmpl.jamon issues (coloring, content, etc.)
 

 Key: HBASE-13838
 URL: https://issues.apache.org/jira/browse/HBASE-13838
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.1.0
Reporter: Lars George
Assignee: Matt Warhaftig
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: hbase-13838-v1.patch


 There are a few issues with the shared TaskStatusTmpl:
 - Client operations tab is always empty 
 For Master this is expected, but for RegionServers there is never anything 
 listed either. Fix for RS status page (probably caused by params not 
 containing Operation subclass anymore, but some PB generated classes?)
 - Hide “Client Operations” tab for master UI
 Since operations are RS only. Or we fix this and make other calls show here.
 - The alert-error style for aborted tasks is not set in CSS at all
 When a task is aborted it should be amber or red, but the assigned style 
 (abort-error) is not in any of the linked stylesheets. Add it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629084#comment-14629084
 ] 

Hudson commented on HBASE-14089:


FAILURE: Integrated in HBase-TRUNK #6653 (See 
[https://builds.apache.org/job/HBase-TRUNK/6653/])
HBASE-14089 Remove unnecessary draw of system entropy from RecoverableZooKeeper 
(apurtell: rev 6c6c7c51f6bd31af1fa99e3d76ab54a7613c4086)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java


 Remove unnecessary draw of system entropy from RecoverableZooKeeper
 ---

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13971) Flushes stuck since 6 hours on a regionserver.

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13971:
---
Attachment: 13971-v1.txt

 Flushes stuck since 6 hours on a regionserver.
 --

 Key: HBASE-13971
 URL: https://issues.apache.org/jira/browse/HBASE-13971
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.3.0
 Environment: Caused while running IntegrationTestLoadAndVerify for 20M 
 rows on a cluster with 32 region servers, each with a max heap size of 24 GB.
Reporter: Abhilash
Assignee: Ted Yu
Priority: Critical
 Attachments: 13971-v1.txt, 13971-v1.txt, 13971-v1.txt, jstack.1, 
 jstack.2, jstack.3, jstack.4, jstack.5, rsDebugDump.txt, screenshot-1.png


 One region server is stuck while flushing (possible deadlock). It has been trying to 
 flush two regions for the last 6 hours (see the screenshot).
 This happened while running IntegrationTestLoadAndVerify for 20M rows with 600 
 mapper jobs and 100 back references. There have been ~37 million writes on each regionserver 
 so far, but no writes are happening on any regionserver for the past 6 hours, and 
 their memstore size is zero (I don't know if this is related). But this 
 particular regionserver has had a memstore size of 9 GB for the past 6 hours.
 Relevant snaps from debug dump:
 Tasks:
 ===
 Task: Flushing 
 IntegrationTestLoadAndVerify,R\x9B\x1B\xBF\xAE\x08\xD1\xA2,1435179555993.8e2d075f94ce7699f416ec4ced9873cd.
 Status: RUNNING:Preparing to flush by snapshotting stores in 
 8e2d075f94ce7699f416ec4ced9873cd
 Running for 22034s
 Task: Flushing 
 IntegrationTestLoadAndVerify,\x93\xA385\x81Z\x11\xE6,1435179555993.9f8d0e01a40405b835bf6e5a22a86390.
 Status: RUNNING:Preparing to flush by snapshotting stores in 
 9f8d0e01a40405b835bf6e5a22a86390
 Running for 22033s
 Executors:
 ===
 ...
 Thread 139 (MemStoreFlusher.1):
   State: WAITING
   Blocked count: 139711
   Waited count: 239212
   Waiting on java.util.concurrent.CountDownLatch$Sync@b9c094a
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
 java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
 org.apache.hadoop.hbase.wal.WALKey.getSequenceId(WALKey.java:305)
 
 org.apache.hadoop.hbase.regionserver.HRegion.getNextSequenceId(HRegion.java:2422)
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2168)
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2047)
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2011)
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1902)
 org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1828)
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:510)
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:75)
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
 java.lang.Thread.run(Thread.java:745)
 Thread 137 (MemStoreFlusher.0):
   State: WAITING
   Blocked count: 138931
   Waited count: 237448
   Waiting on java.util.concurrent.CountDownLatch$Sync@53f41f76
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
 java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
 org.apache.hadoop.hbase.wal.WALKey.getSequenceId(WALKey.java:305)
 
 org.apache.hadoop.hbase.regionserver.HRegion.getNextSequenceId(HRegion.java:2422)
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2168)
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2047)
 
 

[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629085#comment-14629085
 ] 

Hudson commented on HBASE-8642:
---

FAILURE: Integrated in HBase-TRUNK #6653 (See 
[https://builds.apache.org/job/HBase-TRUNK/6653/])
HBASE-8642 [Snapshot] List and delete snapshot by table (apurtell: rev 
e6bd0c8c155eaf08bb6fe65932fad79fe345c88c)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClient.java
* hbase-shell/src/main/ruby/hbase/admin.rb
* hbase-shell/src/main/ruby/shell/commands/list_table_snapshots.rb
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* hbase-shell/src/main/ruby/shell/commands/delete_table_snapshots.rb
* hbase-shell/src/main/ruby/shell.rb
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642-0.98.patch, HBASE-8642-v1.patch, 
 HBASE-8642-v2.patch, HBASE-8642-v3.patch, HBASE-8642-v4.patch, 
 HBASE-8642.patch


 Support listing and deleting snapshots by table name.
 User scenario:
 A user wants to delete all the snapshots taken in January 
 for a table 't', where the snapshot names start with 'Jan'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14097) Log link to client scan troubleshooting section when scanner exceptions happen.

2015-07-15 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14097:

Status: Patch Available  (was: Open)

 Log link to client scan troubleshooting section when scanner exceptions 
 happen.
 ---

 Key: HBASE-14097
 URL: https://issues.apache.org/jira/browse/HBASE-14097
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Attachments: HBASE-14097.patch


 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14076) ResultSerialization and MutationSerialization can throw InvalidProtocolBufferException when serializing a cell larger than 64MB

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629158#comment-14629158
 ] 

Hadoop QA commented on HBASE-14076:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/1274/HBASE-14076.hbase-11339.patch
  against hbase-11339 branch at commit 5315f0f11ffa0f750e5615617424baa9271611af.
  ATTACHMENT ID: 1274

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1923 checkstyle errors (more than the master's current 1922 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestProcessBasedCluster
  org.apache.hadoop.hbase.mapreduce.TestImportExport

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.TestHBaseTestingUtility.testMiniClusterBindToWildcard(TestHBaseTestingUtility.java:136)
at 
org.apache.hadoop.hbase.security.access.TestAccessController.testPermissionList(TestAccessController.java:1543)
at 
org.apache.hadoop.hbase.TestZooKeeper.testSanity(TestZooKeeper.java:258)
at 
org.apache.hadoop.hbase.TestZooKeeper.testRegionServerSessionExpired(TestZooKeeper.java:220)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14792//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14792//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14792//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14792//console

This message is automatically generated.

 ResultSerialization and MutationSerialization can throw 
 InvalidProtocolBufferException when serializing a cell larger than 64MB
 ---

 Key: HBASE-14076
 URL: https://issues.apache.org/jira/browse/HBASE-14076
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, hbase-11339, 1.2.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
 Attachments: HBASE-14076.hbase-11339.patch


 This was reported in CRUNCH-534 but is a problem with how we handle 
 deserialization of large Cells (> 64MB) in ResultSerialization and 
 MutationSerialization.
 The fix just re-uses what was done in HBASE-13230.
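
For context, protobuf's CodedInputStream enforces a 64 MB message size limit by default, which is what makes deserializing a larger cell throw InvalidProtocolBufferException. A minimal sketch of the kind of fix HBASE-13230 applied — raising the limit before parsing. This is an illustration, not the actual patch, and it needs protobuf-java on the classpath:

```java
import java.io.InputStream;

import com.google.protobuf.CodedInputStream;

public class LargeCellStreams {
    // Wrap an input stream for protobuf parsing without the default
    // 64 MB cap: CodedInputStream rejects any message over its size
    // limit, so the limit is raised up front before any field is read.
    public static CodedInputStream unlimited(InputStream in) {
        CodedInputStream cis = CodedInputStream.newInstance(in);
        cis.setSizeLimit(Integer.MAX_VALUE); // allow cells larger than 64 MB
        return cis;
    }
}
```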



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14074) HBase cluster crashed on-the-hour

2015-07-15 Thread JoneZhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629077#comment-14629077
 ] 

JoneZhang commented on HBASE-14074:
---

Got it.
Thank you very much.


 HBase cluster crashed on-the-hour 
 --

 Key: HBASE-14074
 URL: https://issues.apache.org/jira/browse/HBASE-14074
 Project: HBase
  Issue Type: Bug
  Components: Admin
Affects Versions: 0.96.2
 Environment: Hadoop 2.5.1
 HBase 0.96.2
Reporter: JoneZhang

 I found the HBase cluster crashed on-the-hour.
 HBase master running log as follows
 2015-07-14 14:41:49,832 DEBUG [master:10.240.131.18:6.oldLogCleaner] 
 master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: 
 10-241-125-46%2C60020%2C1436841063572.1436851865226
 2015-07-14 14:45:49,822 DEBUG [master:10.240.131.18:6.oldLogCleaner] 
 master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: 
 10-241-85-137%2C60020%2C1436841341086.1436852143141
 2015-07-14 15:00:03,481 INFO  [main] util.VersionInfo: HBase 0.96.2-hadoop2
 2015-07-14 15:00:03,481 INFO  [main] util.VersionInfo: Subversion 
 https://svn.apache.org/repos/asf/hbase/tags/0.96.2RC2 -r 1581096
 2015-07-14 15:00:03,481 INFO  [main] util.VersionInfo: Compiled by stack on 
 Mon Mar 24 16:03:18 PDT 2014
 2015-07-14 15:00:03,729 INFO  [main] zookeeper.ZooKeeper: Client 
 environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
 2015-07-14 15:00:03,730 INFO  [main] zookeeper.ZooKeeper: Client 
 environment:host.name=10-240-131-18
 2015-07-14 15:00:03,730 INFO  [main] zookeeper.ZooKeeper: Client 
 environment:java.version=1.7.0_72
 ...
 2015-07-14 15:00:03,749 INFO  [main] zookeeper.RecoverableZooKeeper: Process 
 identifier=clean znode for master connecting to ZooKeeper 
 ensemble=10.240.131.17:2200,10.240.131.16:2200,10.240.131.15:2200,10.240.131.14:2200,10.240.131.18:2200
 2015-07-14 15:00:03,751 INFO  [main-SendThread(10-240-131-18:2200)] 
 zookeeper.ClientCnxn: Opening socket connection to server 
 10-240-131-18/10.240.131.18:2200. Will not attempt to authenticate using SASL 
 (unknown error)
 2015-07-14 15:00:03,757 INFO  [main-SendThread(10-240-131-18:2200)] 
 zookeeper.ClientCnxn: Socket connection established to 
 10-240-131-18/10.240.131.18:2200, initiating session
 2015-07-14 15:00:03,764 INFO  [main-SendThread(10-240-131-18:2200)] 
 zookeeper.ClientCnxn: Session establishment complete on server 
 10-240-131-18/10.240.131.18:2200, sessionid = 0x34e8a64b453024a, negotiated 
 timeout = 4
 2015-07-14 15:00:04,835 INFO  [main] zookeeper.ZooKeeper: Session: 
 0x34e8a64b453024a closed
 2015-07-14 15:00:04,835 INFO  [main-EventThread] zookeeper.ClientCnxn: 
 EventThread shut down
 After printing Didn't find this log in ZK... once every hour, 
 the master died.
 Zookeeper  running log as follows
 2015-07-14 15:00:03,756 [myid:3] - INFO  
 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2200:NIOServerCnxnFactory@197] - 
 Accepted socket connection from /10.240.131.18:52733
 2015-07-14 15:00:03,761 [myid:3] - INFO  
 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2200:ZooKeeperServer@868] - Client 
 attempting to establish new session at /10.240.131.18:52733
 2015-07-14 15:00:03,762 [myid:3] - INFO  
 [CommitProcessor:3:ZooKeeperServer@617] - Established session 
 0x34e8a64b453024a with negotiated timeout 4 for client 
 /10.240.131.18:52733
 2015-07-14 15:00:04,836 [myid:3] - INFO  
 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2200:NIOServerCnxn@1007] - Closed 
 socket connection for client /10.240.131.18:52733 which had sessionid 
 0x34e8a64b453024a



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14094) Procedure.proto can't be compiled to C++

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629125#comment-14629125
 ] 

Hudson commented on HBASE-14094:


SUCCESS: Integrated in HBase-1.3 #58 (See 
[https://builds.apache.org/job/HBase-1.3/58/])
HBASE-14094 Procedure.proto can't be compiled to C++ (eclark: rev 
2446da054527e7fc5087bd7fbe8b6c84d9620f61)
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ProcedureProtos.java
* hbase-protocol/src/main/protobuf/Procedure.proto
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormat.java


 Procedure.proto can't be compiled to C++
 

 Key: HBASE-14094
 URL: https://issues.apache.org/jira/browse/HBASE-14094
 Project: HBase
  Issue Type: Bug
  Components: proc-v2, Protobufs
Affects Versions: 2.0.0, 1.2.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14094-v1.patch, HBASE-14094.patch


 EOF is a defined symbol in C and C++.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629163#comment-14629163
 ] 

Hudson commented on HBASE-14089:


FAILURE: Integrated in HBase-1.1 #583 (See 
[https://builds.apache.org/job/HBase-1.1/583/])
HBASE-14089 Remove unnecessary draw of system entropy from RecoverableZooKeeper 
(apurtell: rev 7ff9ba2c0f94f66c4c14962c676ad81986daec23)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java


 Remove unnecessary draw of system entropy from RecoverableZooKeeper
 ---

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 
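The change described above can be sketched in plain JDK code. This is an illustrative sketch, not the actual HBase patch: the MAGIC marker and salt width below are made-up values, not RecoverableZooKeeper's real wire layout. The point is only that a PRNG such as ThreadLocalRandom never draws from the system entropy pool, so salting znode metadata with it cannot block a setData call.

```java
import java.util.concurrent.ThreadLocalRandom;

public class ZkMetadata {
    static final byte MAGIC = (byte) 0xff;   // hypothetical marker byte, not HBase's real one
    static final int SALT_LENGTH = 8;        // hypothetical fixed salt width

    // Prepend a marker byte and a random salt to the payload. ThreadLocalRandom
    // is a PRNG seeded without a blocking read of system entropy, so unlike
    // SecureRandom it can never stall waiting for the entropy pool to refill.
    static byte[] appendMetadata(byte[] data) {
        byte[] salt = new byte[SALT_LENGTH];
        ThreadLocalRandom.current().nextBytes(salt);
        byte[] out = new byte[1 + SALT_LENGTH + data.length];
        out[0] = MAGIC;
        System.arraycopy(salt, 0, out, 1, SALT_LENGTH);
        System.arraycopy(data, 0, out, 1 + SALT_LENGTH, data.length);
        return out;
    }
}
```

The salt only needs to make repeated writes of identical payloads distinguishable; it carries no security property, which is why a cryptographically strong source is unnecessary here.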



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14097) Log link to client scan troubleshooting section when scanner exceptions happen.

2015-07-15 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14097:

Description: As per description.

 Log link to client scan troubleshooting section when scanner exceptions 
 happen.
 ---

 Key: HBASE-14097
 URL: https://issues.apache.org/jira/browse/HBASE-14097
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial

 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14097) Log link to client scan troubleshooting section when scanner exceptions happen.

2015-07-15 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14097:

Attachment: HBASE-14097.patch

 Log link to client scan troubleshooting section when scanner exceptions 
 happen.
 ---

 Key: HBASE-14097
 URL: https://issues.apache.org/jira/browse/HBASE-14097
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Attachments: HBASE-14097.patch


 As per description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14089) Remove unnecessary draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629148#comment-14629148
 ] 

Hudson commented on HBASE-14089:


SUCCESS: Integrated in HBase-1.0 #992 (See 
[https://builds.apache.org/job/HBase-1.0/992/])
HBASE-14089 Remove unnecessary draw of system entropy from RecoverableZooKeeper 
(apurtell: rev 31ad8fb24f8582a295acae8d74c2277b02e5490d)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java


 Remove unnecessary draw of system entropy from RecoverableZooKeeper
 ---

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14096) add license information to images

2015-07-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629032#comment-14629032
 ] 

Andrew Purtell commented on HBASE-14096:


Wonder if there's a RAT enhancement request somewhere in here

 add license information to images
 -

 Key: HBASE-14096
 URL: https://issues.apache.org/jira/browse/HBASE-14096
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey

 we should include a license header for images we ship
 * jpg: exif ImageDescription or UserComment
 * png: disclaimer text field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-8642:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to 0.98, branch-1, and master.

I also tested the new commands by creating three snapshots, listing them, then 
deleting them in one go with delete_table_snapshots. 

It's a bit odd that the delete_table_snapshots command asks for confirmation 
where others do not. That is reasonable given how destructive it could be. We 
can adjust this minor detail with a follow-on issue if need be.

 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.14, 1.3.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642-0.98.patch, HBASE-8642-v1.patch, 
 HBASE-8642-v2.patch, HBASE-8642-v3.patch, HBASE-8642-v4.patch, 
 HBASE-8642.patch


 Support listing and deleting snapshots by table name.
 User scenario:
 A user wants to delete all the snapshots which were taken in January 
 for a table 't', where the snapshot names start with 'Jan'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14094) Procedure.proto can't be compiled to C++

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629051#comment-14629051
 ] 

Hudson commented on HBASE-14094:


FAILURE: Integrated in HBase-TRUNK #6652 (See 
[https://builds.apache.org/job/HBase-TRUNK/6652/])
HBASE-14094 Procedure.proto can't be compiled to C++ (eclark: rev 
5315f0f11ffa0f750e5615617424baa9271611af)
* hbase-protocol/src/main/protobuf/Procedure.proto
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ProcedureProtos.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormat.java


 Procedure.proto can't be compiled to C++
 

 Key: HBASE-14094
 URL: https://issues.apache.org/jira/browse/HBASE-14094
 Project: HBase
  Issue Type: Bug
  Components: proc-v2, Protobufs
Affects Versions: 2.0.0, 1.2.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14094-v1.patch, HBASE-14094.patch


 EOF is a defined symbol in C and C++.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14094) Procedure.proto can't be compiled to C++

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629049#comment-14629049
 ] 

Hudson commented on HBASE-14094:


SUCCESS: Integrated in HBase-1.3-IT #42 (See 
[https://builds.apache.org/job/HBase-1.3-IT/42/])
HBASE-14094 Procedure.proto can't be compiled to C++ (eclark: rev 
2446da054527e7fc5087bd7fbe8b6c84d9620f61)
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ProcedureProtos.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormat.java
* 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
* hbase-protocol/src/main/protobuf/Procedure.proto


 Procedure.proto can't be compiled to C++
 

 Key: HBASE-14094
 URL: https://issues.apache.org/jira/browse/HBASE-14094
 Project: HBase
  Issue Type: Bug
  Components: proc-v2, Protobufs
Affects Versions: 2.0.0, 1.2.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14094-v1.patch, HBASE-14094.patch


 EOF is a defined symbol in C and C++.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14045) Bumping thrift version to 0.9.2.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629050#comment-14629050
 ] 

Hudson commented on HBASE-14045:


SUCCESS: Integrated in HBase-1.3-IT #42 (See 
[https://builds.apache.org/job/HBase-1.3-IT/42/])
HBASE-14045 Bumping thrift version to 0.9.2. (ssrungarapu: rev 
d13e597c7339e69de48712b8dced1698c52487bb)
* pom.xml


 Bumping thrift version to 0.9.2.
 

 Key: HBASE-14045
 URL: https://issues.apache.org/jira/browse/HBASE-14045
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14045-branch-1.patch, HBASE-14045.patch, 
 compat_report.html


 From mailing list conversation:
 {quote}
 Currently, HBase is using Thrift 0.9.0, with the latest version being 0.9.2. 
 The HBase Thrift gateway is vulnerable to crashes due 
 to THRIFT-2660 when used with the default transport, and the workaround for this 
 problem is switching to the framed transport. Unfortunately, the recently added 
 impersonation support \[1\] doesn't work with the framed transport, leaving any thrift 
 gateway using this feature susceptible to crashes. Updating the thrift version 
 to 0.9.2 will help us mitigate this problem. Given that security is one 
 of the key requirements for production clusters, it would be good to assure 
 our users that the security features in the thrift gateway can be used without any 
 major concerns. Aside from this, there are also some nice fixes for 
 leaky resources in 0.9.2, like \[2\] and \[3\].
 As far as compatibility guarantees are concerned, thrift assures 100% wire 
 compatibility. However, it is my understanding that there were some minor 
 additions (new APIs) in 0.9.2 \[4\] which won't work in 0.9.0, but that won't 
 affect us since we are not using those features. I tried running the test 
 suite and did manual testing with the thrift version set to 0.9.2, and things 
 ran smoothly. If there are no objections to this change, I would be more 
 than happy to file a jira and follow this up.
 \[1\] https://issues.apache.org/jira/browse/HBASE-11349
 \[2\] https://issues.apache.org/jira/browse/THRIFT-2274
 \[3\] https://issues.apache.org/jira/browse/THRIFT-2359
 \[4\] 
 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310800&version=12324954
 {quote}
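The framed transport mentioned in the quote length-prefixes every Thrift message, which is roughly what lets a server read a bounded, well-delimited frame instead of parsing an unbounded raw stream. A minimal sketch of that wire layout in plain Java (not libthrift itself; the class and method names are illustrative):

```java
import java.nio.ByteBuffer;

public class Framing {
    // Wrap a message in a 4-byte big-endian length prefix, the layout
    // Thrift's framed transport uses on the wire.
    static byte[] frame(byte[] msg) {
        ByteBuffer buf = ByteBuffer.allocate(4 + msg.length);
        buf.putInt(msg.length);
        buf.put(msg);
        return buf.array();
    }

    // Read one frame back out: the length first, then exactly that many bytes.
    static byte[] unframe(byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        byte[] msg = new byte[buf.getInt()];
        buf.get(msg);
        return msg;
    }
}
```

Because the receiver learns the frame size up front, it can sanity-check it (e.g. against a maximum) before allocating, which is the class of protection the unframed transport lacks.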



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14027) Clean up netty dependencies

2015-07-15 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14027:

  Resolution: Fixed
Release Note: HBase's convenience binary artifact no longer contains the 
netty 3.2.4 jar. This jar was not directly used by HBase, but may have been 
relied on by downstream applications.
  Status: Resolved  (was: Patch Available)

Verified ITTAG, ITBLL generate/verify, and ITImportTSV on a cluster.

Thanks for the review, Stack!

 Clean up netty dependencies
 ---

 Key: HBASE-14027
 URL: https://issues.apache.org/jira/browse/HBASE-14027
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-14027.1.patch, HBASE-14027.2.patch, 
 HBASE-14027.3.patch


 We have multiple copies of Netty (3?) getting shipped around. Clean some up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14041) Client MetaCache is cleared if a ThrottlingException is thrown

2015-07-15 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14041:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch. Pushed to all target fix versions.

[~ndimiduk] let me know if you'd like this in branch-1.1.

 Client MetaCache is cleared if a ThrottlingException is thrown
 --

 Key: HBASE-14041
 URL: https://issues.apache.org/jira/browse/HBASE-14041
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0
Reporter: Eungsop Yoo
Assignee: Eungsop Yoo
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t-v2.patch, 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t-v3.patch, 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t.patch


 During performance testing with request throttling, I saw that the hbase:meta 
 table was being read a lot. Currently the client's MetaCache is cleared 
 if a ThrottlingException is thrown. This does not seem to be needed.
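The gist of the fix can be sketched with a hypothetical predicate (the names below are illustrative, not the actual ConnectionManager code): a throttle rejection says nothing about where a region lives, so the cached location is still valid and should be kept.

```java
public class CacheOnError {
    // Stand-in for HBase's quota-rejection exception.
    static class ThrottlingException extends Exception {}

    // Decide whether a client-side error should invalidate cached region
    // locations. Most exceptions may mean a region moved, so the cache is
    // cleared; a throttling rejection is purely a quota decision and leaves
    // the cached location trustworthy.
    static boolean shouldClearMetaCache(Throwable error) {
        return !(error instanceof ThrottlingException);
    }
}
```

Skipping the cache clear avoids the flood of hbase:meta lookups the reporter observed, since every cleared entry forces a fresh meta scan on the next request.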



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627776#comment-14627776
 ] 

Hadoop QA commented on HBASE-12213:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12745396/HBASE-12213_13_withBBI.patch
  against master branch at commit 2f327c911056d02813f642503db9a4383e8b4a2f.
  ATTACHMENT ID: 12745396

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 70 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:red}-1 javac{color}.  The applied patch generated 24 javac compiler 
warnings (more than the master's current 20 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1874 checkstyle errors (more than the master's current 1873 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   *a single ByteBuffer and offset in that Buffer where the bytes 
starts. Since this API gets
+   *called in a loop we are passing a pair to it which could be 
created outside the loop and 
+   *the method would set the values on the pair that is passed in by 
the caller. This it avoid
+   *more object creations that would happen if the pair that is 
returned is created by this method
+  searcher = DecoderFactory.checkOut(block.asSubByteBuffer(block.limit() - 
block.position()), true);

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.cxf.jaxb.DocLiteralInInterceptorTest.testInterceptorInboundWrapped(DocLiteralInInterceptorTest.java:95)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14778//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14778//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14778//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14778//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14778//console

This message is automatically generated.

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
 HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 (offheap) cache an HFile block might be cached across multiple 
 chunks of buffers. If HFileBlock needs a single BB, we end up recreating a 
 bigger BB and copying. Instead we can make HFileBlock serve data from 
 an array of BBs.
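The idea can be sketched in plain java.nio (illustrative only; the real HFileBlock and multi-buffer code is far more involved, and the equal-chunk-size assumption is mine): serve reads directly from the chunk that owns the offset, instead of coalescing all chunks into one big buffer.

```java
import java.nio.ByteBuffer;

public class ChunkedBlock {
    private final ByteBuffer[] chunks;
    private final int chunkSize; // assumption: all chunks the same size

    ChunkedBlock(ByteBuffer[] chunks, int chunkSize) {
        this.chunks = chunks;
        this.chunkSize = chunkSize;
    }

    // Serve the byte at a logical offset straight from the owning chunk,
    // using an absolute get so no buffer positions move. This avoids the
    // allocate-and-copy a single coalesced ByteBuffer would require.
    byte get(int offset) {
        return chunks[offset / chunkSize].get(offset % chunkSize);
    }
}
```

The same index arithmetic extends to multi-byte reads that may straddle a chunk boundary, which is where most of the real implementation's complexity lives.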



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-15 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-14085:
---

 Summary: Correct LICENSE and NOTICE files in artifacts
 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker


+Problems:

* checked LICENSE/NOTICE on binary
** binary artifact LICENSE file has not been updated to include the additional 
license terms for contained third party dependencies
** binary artifact NOTICE file does not include a copyright line
** binary artifact NOTICE file does not appear to propagate appropriate info 
from the NOTICE files from bundled dependencies
* checked NOTICE on source
** source artifact NOTICE file does not include a copyright line
** source NOTICE file includes notices for third party dependencies not 
included in the artifact
* checked NOTICE files shipped in maven jars
** copyright line only says 2015 when it's very likely the contents are under 
copyright prior to this year
* nit: the NOTICE file on jars in maven says "HBase - ${module}" rather than 
"Apache HBase - ${module}" as required 

refs:

http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
http://www.apache.org/dev/licensing-howto.html#binary
http://www.apache.org/dev/licensing-howto.html#simple




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14041) Client MetaCache is cleared if a ThrottlingException is thrown

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627817#comment-14627817
 ] 

Hudson commented on HBASE-14041:


SUCCESS: Integrated in HBase-1.3-IT #40 (See 
[https://builds.apache.org/job/HBase-1.3-IT/40/])
HBASE-14041 Do not clear MetaCache if a ThrottlingException is thrown (busbey: 
rev 173f343aea56285f1d83ea3346a797113c02c12e)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


 Client MetaCache is cleared if a ThrottlingException is thrown
 --

 Key: HBASE-14041
 URL: https://issues.apache.org/jira/browse/HBASE-14041
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0
Reporter: Eungsop Yoo
Assignee: Eungsop Yoo
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t-v2.patch, 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t-v3.patch, 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t.patch


 During performance testing with request throttling, I saw that the hbase:meta 
 table was being read a lot. Currently the client's MetaCache is cleared 
 if a ThrottlingException is thrown. This does not seem to be needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14027) Clean up netty dependencies

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627818#comment-14627818
 ] 

Hudson commented on HBASE-14027:


SUCCESS: Integrated in HBase-1.3-IT #40 (See 
[https://builds.apache.org/job/HBase-1.3-IT/40/])
HBASE-14027 clean up multiple netty jars. (busbey: rev 
93e26ce550b5585710c8a9aa386b10f89011ed31)
* hbase-server/pom.xml
* hbase-it/pom.xml
* pom.xml


 Clean up netty dependencies
 ---

 Key: HBASE-14027
 URL: https://issues.apache.org/jira/browse/HBASE-14027
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-14027.1.patch, HBASE-14027.2.patch, 
 HBASE-14027.3.patch


 We have multiple copies of Netty (3?) getting shipped around. Clean some up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14075) HBaseClusterManager should use port(if given) to find pid

2015-07-15 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-14075:
--
Attachment: HBASE-14075-master_v5.patch

 HBaseClusterManager should use port(if given) to find pid
 -

 Key: HBASE-14075
 URL: https://issues.apache.org/jira/browse/HBASE-14075
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-14075-master_v2.patch, 
 HBASE-14075-master_v3.patch, HBASE-14075-master_v4.patch, 
 HBASE-14075-master_v5.patch, HBASE-14075.patch


 This issue was found while running ITBLL on a distributed cluster. Our testing 
 env is kind of special in that we run multiple regionserver instances on a single 
 physical machine, so {noformat}ps -ef | grep proc_regionserver{noformat} will 
 return more than one line, which can cause the tool to check or kill the wrong 
 process.
 Actually, in HBaseClusterManager we already introduce port as a parameter for 
 methods like isRunning, kill, etc. So the only thing to do here is to get the pid 
 through the port when one is given.
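A hypothetical sketch of the command-selection logic (the class name and the lsof-based lookup are assumptions for illustration, not the actual HBaseClusterManager patch): a known port yields exactly one match even with several region servers on the host, while the name-based grep can match many.

```java
public class PidLookup {
    // Build the shell command used to locate a process id. When a port is
    // known, match on the port; otherwise fall back to matching the process
    // name, which is ambiguous with multiple instances per machine.
    static String pidCommand(String processName, int port) {
        if (port > 0) {
            // lsof: -t prints bare pids, -i filters by network port.
            return String.format("lsof -t -i:%d", port);
        }
        return String.format("ps -ef | grep %s | grep -v grep | awk '{print $2}'",
            processName);
    }
}
```

The command is only assembled here, not executed, so the sketch stays portable; the real tool would run it over SSH on the target host.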



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14083) Fix separator width in Backup Masters of WebUI

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627702#comment-14627702
 ] 

Hadoop QA commented on HBASE-14083:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745395/HBASE-14083.patch
  against master branch at commit 2f327c911056d02813f642503db9a4383e8b4a2f.
  ATTACHMENT ID: 12745395

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14779//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14779//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14779//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14779//console

This message is automatically generated.

 Fix separator width in Backup Masters of WebUI
 --

 Key: HBASE-14083
 URL: https://issues.apache.org/jira/browse/HBASE-14083
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Yuhao Bi
Assignee: Yuhao Bi
Priority: Minor
 Attachments: After.PNG, Before.PNG, HBASE-14083.patch


 The horizontal line separator above the total count in the backup masters table 
 is shorter than the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-15 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-14085 started by Sean Busbey.
---
 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker

 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: the NOTICE file on jars in maven says "HBase - ${module}" rather than 
 "Apache HBase - ${module}" as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12296) Filters should work with ByteBufferedCell

2015-07-15 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12296:
---
Attachment: HBASE-12296_v1.patch

Attaching the same patch for a new QA run.

 Filters should work with ByteBufferedCell
 -

 Key: HBASE-12296
 URL: https://issues.apache.org/jira/browse/HBASE-12296
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12296_v1.patch, HBASE-12296_v1.patch


 Now we have added an extension of Cell on the server side, ByteBufferedCell, 
 where Cells are backed by a BB (on heap or off heap). When the Cell is backed 
 by an off-heap buffer, the getXXXArray() APIs have to create a temp byte[], do a 
 data copy, and return that. This can be a bit costly. We have avoided this in 
 areas like CellComparator/SQM etc.; the Filter area was not touched in that patch. 
 This Jira aims at doing the same in the Filter area. 
 Eg: SCVF checks the cell value against the given value condition. It uses 
 getValueArray() to get the cell value bytes. When the cell is BB backed, it has 
 to use the getValueByteBuffer() API instead.
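The kind of change involved can be sketched with plain java.nio (illustrative; valueEquals is a made-up helper, not the HBase Filter API): compare against the buffer contents directly rather than materializing a temp byte[] the way getValueArray() would force for buffer-backed cells.

```java
import java.nio.ByteBuffer;

public class BufferCompare {
    // Compare a cell value held in a (possibly off-heap) ByteBuffer with an
    // expected byte[] using absolute gets, so no temp array is allocated and
    // no buffer positions are disturbed.
    static boolean valueEquals(ByteBuffer value, int offset, int length,
            byte[] expected) {
        if (length != expected.length) {
            return false;
        }
        for (int i = 0; i < length; i++) {
            if (value.get(offset + i) != expected[i]) {
                return false;
            }
        }
        return true;
    }
}
```

For an off-heap cell this saves one allocation and one copy per comparison, which adds up on filter-heavy scans.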



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14027) Clean up netty dependencies

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627782#comment-14627782
 ] 

Hudson commented on HBASE-14027:


FAILURE: Integrated in HBase-1.2 #68 (See 
[https://builds.apache.org/job/HBase-1.2/68/])
HBASE-14027 clean up multiple netty jars. (busbey: rev 
37e273b8bb7b895a1e143ecf9dfdff29e03e4837)
* hbase-server/pom.xml
* hbase-it/pom.xml
* pom.xml


 Clean up netty dependencies
 ---

 Key: HBASE-14027
 URL: https://issues.apache.org/jira/browse/HBASE-14027
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-14027.1.patch, HBASE-14027.2.patch, 
 HBASE-14027.3.patch


 We have multiple copies of Netty (3?) getting shipped around. Clean some up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14075) HBaseClusterManager should use port(if given) to find pid

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627679#comment-14627679
 ] 

Hadoop QA commented on HBASE-14075:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12745385/HBASE-14075-master_v4.patch
  against master branch at commit 2f327c911056d02813f642503db9a4383e8b4a2f.
  ATTACHMENT ID: 12745385

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 53 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14777//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14777//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14777//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14777//console

This message is automatically generated.

 HBaseClusterManager should use port(if given) to find pid
 -

 Key: HBASE-14075
 URL: https://issues.apache.org/jira/browse/HBASE-14075
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-14075-master_v2.patch, 
 HBASE-14075-master_v3.patch, HBASE-14075-master_v4.patch, HBASE-14075.patch


 This issue was found while running ITBLL on a distributed cluster. Our testing 
 env is special in that we run multiple regionserver instances on a single 
 physical machine, so {noformat}ps -ef | grep proc_regionserver{noformat} 
 returns more than one line, which could cause the tool to check/kill the wrong 
 process.
 Actually, in HBaseClusterManager we already introduce port as a parameter for 
 methods like isRunning, kill, etc. So the only thing to do here is to get the 
 pid through the port, if the port is given.
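The port-to-pid lookup the description proposes can be sketched as below. This is a hypothetical illustration, not the HBaseClusterManager code: the netstat-style sample lines and port 16020 are assumptions made for the example.

```java
// Illustrative sketch: given 'netstat -nltp'-style output, resolve the pid
// that owns a specific listening port, instead of grepping process names,
// which matches every regionserver instance on the host.
class PidByPortSketch {
    static String pidForPort(String netstatOutput, int port) {
        for (String line : netstatOutput.split("\n")) {
            String[] f = line.trim().split("\\s+");
            // Column 4 is the local address ("0.0.0.0:16020"),
            // column 7 is "pid/program" ("12345/java").
            if (f.length >= 7 && f[3].endsWith(":" + port)) {
                return f[6].split("/")[0];
            }
        }
        return null;  // no process listening on that port
    }

    public static void main(String[] args) {
        String sample =
            "tcp 0 0 0.0.0.0:16020 0.0.0.0:* LISTEN 12345/java\n"
          + "tcp 0 0 0.0.0.0:16030 0.0.0.0:* LISTEN 67890/java";
        System.out.println(pidForPort(sample, 16020));  // prints 12345
    }
}
```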



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14041) Client MetaCache is cleared if a ThrottlingException is thrown

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627754#comment-14627754
 ] 

Hudson commented on HBASE-14041:


FAILURE: Integrated in HBase-TRUNK #6649 (See 
[https://builds.apache.org/job/HBase-TRUNK/6649/])
HBASE-14041 Do not clear MetaCache if a ThrottlingException is thrown (busbey: 
rev a63e3ac83ffb91948f464e4f62111d29adc02812)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java


 Client MetaCache is cleared if a ThrottlingException is thrown
 --

 Key: HBASE-14041
 URL: https://issues.apache.org/jira/browse/HBASE-14041
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0
Reporter: Eungsop Yoo
Assignee: Eungsop Yoo
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t-v2.patch, 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t-v3.patch, 
 0001-Do-not-clear-MetaCache-if-a-ThrottlingException-is-t.patch


 During a performance test with request throttling, I saw that the hbase:meta 
 table was being read a lot. Currently the client's MetaCache is cleared 
 if a ThrottlingException is thrown. This seems unnecessary.
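The guard this fix introduces can be sketched as below. This is an illustrative snippet only; the class and method names are assumptions, not the actual ConnectionImplementation code.

```java
// Illustrative sketch: only clear the client's cached region locations for
// errors that imply the cached location is stale, not for throttling.
class MetaCacheSketch {
    // Stand-in for the real quota exception class; an assumption for this sketch.
    static class ThrottlingException extends RuntimeException {}

    static boolean shouldClearCache(Throwable error) {
        // A throttled request says nothing about region placement, so the
        // cached region locations are still valid and need not be dropped.
        return !(error instanceof ThrottlingException);
    }

    public static void main(String[] args) {
        System.out.println(shouldClearCache(new ThrottlingException()));  // false
        System.out.println(shouldClearCache(new RuntimeException()));     // true
    }
}
```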



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14027) Clean up netty dependencies

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627755#comment-14627755
 ] 

Hudson commented on HBASE-14027:


FAILURE: Integrated in HBase-TRUNK #6649 (See 
[https://builds.apache.org/job/HBase-TRUNK/6649/])
HBASE-14027 clean up multiple netty jars. (busbey: rev 
25f7e804cd9e39e829fa02b476fa63ce7099ba46)
* hbase-server/pom.xml
* hbase-it/pom.xml
* pom.xml


 Clean up netty dependencies
 ---

 Key: HBASE-14027
 URL: https://issues.apache.org/jira/browse/HBASE-14027
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-14027.1.patch, HBASE-14027.2.patch, 
 HBASE-14027.3.patch


 We have multiple copies of Netty (3?) getting shipped around. Clean some up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14027) Clean up netty dependencies

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627797#comment-14627797
 ] 

Hudson commented on HBASE-14027:


SUCCESS: Integrated in HBase-1.2-IT #52 (See 
[https://builds.apache.org/job/HBase-1.2-IT/52/])
HBASE-14027 clean up multiple netty jars. (busbey: rev 
37e273b8bb7b895a1e143ecf9dfdff29e03e4837)
* hbase-it/pom.xml
* hbase-server/pom.xml
* pom.xml


 Clean up netty dependencies
 ---

 Key: HBASE-14027
 URL: https://issues.apache.org/jira/browse/HBASE-14027
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-14027.1.patch, HBASE-14027.2.patch, 
 HBASE-14027.3.patch


 We have multiple copies of Netty (3?) getting shipped around. Clean some up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12213:
---
Status: Open  (was: Patch Available)

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_2.patch, HBASE-12213_4.patch, 
 HBASE-12213_8_withBBI.patch, HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 cache (offheap), an HFile block might have been cached in multiple 
 chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
 a bigger BB and copying. Instead, we can make HFileBlock serve data from an 
 array of BBs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12296) Filters should work with ByteBufferedCell

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627634#comment-14627634
 ] 

ramkrishna.s.vasudevan commented on HBASE-12296:


+1 for patch. Discussed with Anoop internally on the ByteArrayComparable thing. 
Later if needed we can think of deprecating it and adding a new one. 

 Filters should work with ByteBufferedCell
 -

 Key: HBASE-12296
 URL: https://issues.apache.org/jira/browse/HBASE-12296
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12296_v1.patch


 Now we have added an extension of Cell on the server side, ByteBufferedCell, 
 where Cells are backed by a BB (on heap or off heap). When the Cell is backed 
 by an off-heap buffer, the getXXXArray() APIs have to create a temp byte[], 
 do a data copy, and return that. This will be a bit costly. We have avoided 
 this in areas like CellComparator/SQM etc. The Filter area was not touched in 
 that patch; this JIRA aims at doing it in the Filter area. 
 E.g.: SCVF checks the cell value against the given value condition. It uses 
 getValueArray() to get the cell value bytes.  When the cell is BB backed, it 
 has to use the getValueByteBuffer() API instead.
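The getValueArray()/getValueByteBuffer() dispatch described above can be sketched as below. The interfaces are heavily simplified stand-ins for the real HBase ones; all names here are illustrative.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: a filter comparing a cell's value without forcing an
// off-heap cell to copy into a temporary byte[].
class FilterDispatchSketch {
    interface Cell { byte[] getValueArray(); }
    // Server-side extension: value readable through a ByteBuffer view.
    interface ByteBufferedCell extends Cell { ByteBuffer getValueByteBuffer(); }

    static boolean valueEquals(Cell cell, byte[] expected) {
        if (cell instanceof ByteBufferedCell) {
            // BB-backed path: compare in place using absolute gets, no copy.
            ByteBuffer bb = ((ByteBufferedCell) cell).getValueByteBuffer();
            if (bb.remaining() != expected.length) return false;
            for (int i = 0; i < expected.length; i++) {
                if (bb.get(bb.position() + i) != expected[i]) return false;
            }
            return true;
        }
        // On-heap path: the backing byte[] is already available.
        return java.util.Arrays.equals(cell.getValueArray(), expected);
    }

    public static void main(String[] args) {
        ByteBuffer v = ByteBuffer.wrap(new byte[] {10, 20});
        ByteBufferedCell cell = new ByteBufferedCell() {
            public ByteBuffer getValueByteBuffer() { return v.duplicate(); }
            public byte[] getValueArray() { throw new UnsupportedOperationException(); }
        };
        System.out.println(valueEquals(cell, new byte[] {10, 20}));  // true
    }
}
```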



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14083) Fix separator width in Backup Masters of WebUI

2015-07-15 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HBASE-14083:
-
Attachment: HBASE-14083.patch

 Fix separator width in Backup Masters of WebUI
 --

 Key: HBASE-14083
 URL: https://issues.apache.org/jira/browse/HBASE-14083
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Yuhao Bi
Assignee: Yuhao Bi
Priority: Minor
 Attachments: After.PNG, Before.PNG, HBASE-14083.patch


 The horizontal line separator above the total count in Backup Masters is 
 shorter than the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627654#comment-14627654
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-12213 at 7/15/15 7:20 AM:
-

Consolidating the changes done in this patch as per the discussions/comments 
over in RB:
- This patch now allows the read path to work with ByteBuff, a new abstract 
class (added since we cannot subclass ByteBuffer). The name ByteBuff was 
selected to avoid a conflict with netty's ByteBuf, and ByteBuffer is already 
taken by nio.
- This abstract class can have a SingleByteBuff impl or a MultiByteBuff impl. 
Blocks coming out of the L1 cache or HDFS will always be a SingleByteBuff, 
which wraps the incoming BB from HDFS or the L1 cache.
- In the BucketCache case, we will create a MultiByteBuff (an array of BBs) 
and the read path will work on it using the API in the ByteBuff interface. For 
now, even from the BucketCache we copy the buckets to a single onheap BB. This 
can be changed only after HBASE-12295 goes in: once it does, we will not copy 
the buckets but instead serve them directly using ByteBuff's APIs, ensuring 
that an offheap bucket cache serves reads from offheap.
- After this change and HBASE-12295 go in, we need to ensure that we use the 
buffer-backed cells in the read path for both the non-DBE and DBE cases.
- There are some changes in HFileReaderImpl#blockSeek that use the ByteBuff 
APIs in a more optimized, performance-oriented way, like getIntStrictlyFwd() 
and getLongStrictlyFwd() (the naming of these APIs is under discussion, and we 
are also considering passing a delta position from the current position). The 
point is that these APIs use the position-based ByteBufferUtils Unsafe access 
to the ByteBuffers, bypassing the bookkeeping that ByteBuffer does in its read 
APIs.
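The chunked-read idea can be sketched as below. This is a minimal illustration of serving absolute reads from an array of ByteBuffers; it is not the real ByteBuff/MultiByteBuff API.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: serve absolute reads from an array of ByteBuffers
// without first copying the chunks into one big buffer.
class MultiBufferSketch {
    private final ByteBuffer[] items;

    MultiBufferSketch(ByteBuffer... items) {
        this.items = items;
    }

    // Absolute get: walk the chunks to find the one covering 'index'.
    byte get(int index) {
        for (ByteBuffer b : items) {
            if (index < b.limit()) {
                return b.get(index);
            }
            index -= b.limit();  // skip past this chunk
        }
        throw new IndexOutOfBoundsException();
    }

    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.wrap(new byte[] {1, 2, 3});
        ByteBuffer b = ByteBuffer.wrap(new byte[] {4, 5});
        MultiBufferSketch mb = new MultiBufferSketch(a, b);
        System.out.println(mb.get(4));  // index 4 falls in the second chunk: 5
    }
}
```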




was (Author: ram_krish):
Consolidating the changes done in this patch as per the discussions/comments 
over in RB
- This patch now allows the read path to work with ByteBuff a new abstract 
class added (since we cannot subclass ByteBuffers).
The name ByteBuff was selected to avoid conflict with netty's ByteBuf and also 
that ByteBuffer is already used by nio.
- This abstract class can have a SingleByteBuffer impl or MultipleByteBuffer 
impl.  In case of the blocks coming out of L1 cache HDFS it will always be 
singleByteBuffer.  This SingleByteBuffer wraps the incoming BB from the HDFS 
and L1 cache.
- In case of BucketCache, we will create a the MultiByteBuffs (an array of 
BBs) and the read path would work on this MultiByteBuffs using the API in the 
ByteBuff interface.  For now, even from the BucketCAche we copy the buckets to 
a single onheap BB. This can be changed only after HBASE-12295 goes in. Once 
HBASE-12295 we will not copy the buckets and instead serve them directly from 
the buckets using the ByteBuff's APIs thus ensuring that an offheap bucket 
cache will serve the reads from the offheap.
- After this change goes in and HBASE-12295, we need to ensure that we use the 
BufferBacked cells in the read path both for the non DBE case and DBE case.
- There are some changes done in the HFileReaderImpl blockSeek that tries to 
use the ByteBuff APIs such that they are more optimized and performance 
oriented, like getIntStrictlyFwd(), getLongStrictlyFwd() ( the naming of this 
API is under discussion and also thinking if we could pass a delta position 
from the current postion).  But the point is that these APIs try to utilize the 
position based BBUtils Unsafe accessing of the Bytebuffers and thus bypassing 
the ByteBuffer's bookkeeping that it does on the read APIs.



 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
 HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 cache (offheap), an HFile block might have been cached in multiple 
 chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
 a bigger BB and copying. Instead, we can make HFileBlock serve data from an 
 array of BBs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14083) Fix separator width in Backup Masters of WebUI

2015-07-15 Thread Yuhao Bi (JIRA)
Yuhao Bi created HBASE-14083:


 Summary: Fix separator width in Backup Masters of WebUI
 Key: HBASE-14083
 URL: https://issues.apache.org/jira/browse/HBASE-14083
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Yuhao Bi
Assignee: Yuhao Bi
Priority: Minor


The horizontal line separator above the total count in Backup Masters is 
shorter than the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12213:
---
Status: Patch Available  (was: Open)

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
 HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 cache (offheap), an HFile block might have been cached in multiple 
 chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
 a bigger BB and copying. Instead, we can make HFileBlock serve data from an 
 array of BBs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14084) Observe some out-of-date doc on Integration Tests part

2015-07-15 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627663#comment-14627663
 ] 

Dima Spivak commented on HBASE-14084:
-

FWIW, I have a long-neglected JIRA (HBASE-11276) that I do plan on getting to, 
to add back standalone support for ChaosMonkey, but good point about it being 
worthwhile to update the docs on setting up environments for running the ITs. +1

 Observe some out-of-date doc on Integration Tests part
 

 Key: HBASE-14084
 URL: https://issues.apache.org/jira/browse/HBASE-14084
 Project: HBase
  Issue Type: Task
  Components: documentation
Affects Versions: 1.1.0
Reporter: Yu Li

 As titled, I have checked src/main/asciidoc/_chapters/developer.adoc and 
 confirmed some out-of-date parts; for example, the doc still refers to 
 org.apache.hadoop.hbase.util.ChaosMonkey, which doesn't exist anymore.
 On the other hand, I think running ITBLL against a distributed cluster is a 
 really good way to do real-world integration/system testing, but the existing 
 documentation about this is not explicit enough. It actually took me quite a 
 while to set up the env and make the testing run smoothly (I encountered 
 issues like always launching a minicluster when run with the bin/hbase 
 script; I finally got it running using the bin/hadoop script, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13706) CoprocessorClassLoader should not exempt Hive classes

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627587#comment-14627587
 ] 

Hadoop QA commented on HBASE-13706:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745367/HBASE-13706.patch
  against master branch at commit 2f327c911056d02813f642503db9a4383e8b4a2f.
  ATTACHMENT ID: 12745367

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14775//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14775//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14775//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14775//console

This message is automatically generated.

 CoprocessorClassLoader should not exempt Hive classes
 -

 Key: HBASE-13706
 URL: https://issues.apache.org/jira/browse/HBASE-13706
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.12
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.1.2

 Attachments: HBASE-13706.patch


 CoprocessorClassLoader is used to load classes from the coprocessor jar.
 Certain classes are exempt from being loaded by this ClassLoader, which means 
 they will be ignored in the coprocessor jar, but loaded from parent classpath 
 instead.
 One problem is that we categorically exempt org.apache.hadoop.
 But it happens that Hive package names also start with org.apache.hadoop.
 There is no reason to exclude Hive classes from the CoprocessorClassLoader;
 HBase does not even include Hive jars.
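The prefix-based exemption described above can be sketched as below. The patterns are illustrative assumptions, not the actual CoprocessorClassLoader exemption lists.

```java
// Illustrative sketch: exempt org.apache.hadoop. classes (load them from the
// parent classpath, not the coprocessor jar), but no longer Hive's classes,
// which happen to share the org.apache.hadoop. package prefix.
class ExemptSketch {
    static boolean isExempt(String className) {
        return className.startsWith("org.apache.hadoop.")
            && !className.startsWith("org.apache.hadoop.hive.");
    }

    public static void main(String[] args) {
        System.out.println(isExempt("org.apache.hadoop.hbase.Cell"));       // true
        System.out.println(isExempt("org.apache.hadoop.hive.ql.udf.UDF"));  // false
    }
}
```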



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14075) HBaseClusterManager should use port(if given) to find pid

2015-07-15 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627603#comment-14627603
 ] 

Yu Li commented on HBASE-14075:
---

Sure, let me post it on RB. Thanks for the kind reminder :-)

 HBaseClusterManager should use port(if given) to find pid
 -

 Key: HBASE-14075
 URL: https://issues.apache.org/jira/browse/HBASE-14075
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-14075-master_v2.patch, 
 HBASE-14075-master_v3.patch, HBASE-14075-master_v4.patch, HBASE-14075.patch


 This issue was found while running ITBLL on a distributed cluster. Our testing 
 env is special in that we run multiple regionserver instances on a single 
 physical machine, so {noformat}ps -ef | grep proc_regionserver{noformat} 
 returns more than one line, which could cause the tool to check/kill the wrong 
 process.
 Actually, in HBaseClusterManager we already introduce port as a parameter for 
 methods like isRunning, kill, etc. So the only thing to do here is to get the 
 pid through the port, if the port is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627654#comment-14627654
 ] 

ramkrishna.s.vasudevan commented on HBASE-12213:


Consolidating the changes done in this patch as per the discussions/comments 
over in RB
- This patch now allows the read path to work with ByteBuff a new abstract 
class added (since we cannot subclass ByteBuffers).
The name ByteBuff was selected to avoid conflict with netty's ByteBuf and also 
that ByteBuffer is already used by nio.
- This abstract class can have a SingleByteBuffer impl or MultipleByteBuffer 
impl.  In case of the blocks coming out of L1 cache HDFS it will always be 
singleByteBuffer.  This SingleByteBuffer wraps the incoming BB from the HDFS 
and L1 cache.
- In case of BucketCache, we will create the MultiByteBuffs (an array of 
BBs) and the read path would work on these MultiByteBuffs using the API in the 
ByteBuff interface.  For now, even from the BucketCache we copy the buckets to 
a single onheap BB. This can be changed only after HBASE-12295 goes in. Once 
HBASE-12295 we will not copy the buckets and instead serve them directly from 
the buckets using the ByteBuff's APIs thus ensuring that an offheap bucket 
cache will serve the reads from the offheap.
- After this change goes in and HBASE-12295, we need to ensure that we use the 
BufferBacked cells in the read path both for the non DBE case and DBE case.
- There are some changes done in the HFileReaderImpl blockSeek that tries to 
use the ByteBuff APIs such that they are more optimized and performance 
oriented, like getIntStrictlyFwd(), getLongStrictlyFwd() ( the naming of this 
API is under discussion and also thinking if we could pass a delta position 
from the current position).  But the point is that these APIs try to utilize the 
position-based BBUtils Unsafe accessing of the ByteBuffers and thus bypassing 
the ByteBuffer's bookkeeping that it does on the read APIs.



 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
 HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 cache (offheap), an HFile block might have been cached in multiple 
 chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
 a bigger BB and copying. Instead, we can make HFileBlock serve data from an 
 array of BBs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14075) HBaseClusterManager should use port(if given) to find pid

2015-07-15 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627628#comment-14627628
 ] 

Yu Li commented on HBASE-14075:
---

I have created https://reviews.apache.org/r/36502 and added you as a reviewer, thanks.

 HBaseClusterManager should use port(if given) to find pid
 -

 Key: HBASE-14075
 URL: https://issues.apache.org/jira/browse/HBASE-14075
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-14075-master_v2.patch, 
 HBASE-14075-master_v3.patch, HBASE-14075-master_v4.patch, HBASE-14075.patch


 This issue was found while running ITBLL on a distributed cluster. Our testing 
 env is special in that we run multiple regionserver instances on a single 
 physical machine, so {noformat}ps -ef | grep proc_regionserver{noformat} 
 returns more than one line, which could cause the tool to check/kill the wrong 
 process.
 Actually, in HBaseClusterManager we already introduce port as a parameter for 
 methods like isRunning, kill, etc. So the only thing to do here is to get the 
 pid through the port, if the port is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14083) Fix separator width in Backup Masters of WebUI

2015-07-15 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HBASE-14083:
-
Status: Patch Available  (was: Open)

 Fix separator width in Backup Masters of WebUI
 --

 Key: HBASE-14083
 URL: https://issues.apache.org/jira/browse/HBASE-14083
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Yuhao Bi
Assignee: Yuhao Bi
Priority: Minor
 Attachments: After.PNG, Before.PNG, HBASE-14083.patch


 The horizontal line separator above the total count in Backup Masters is 
 shorter than the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12213:
---
Attachment: HBASE-12213_13_withBBI.patch

Updated patch addressing the review comments over in RB. Submitting for a QA report.

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
 HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 cache (offheap), an HFile block might have been cached in multiple 
 chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
 a bigger BB and copying. Instead, we can make HFileBlock serve data from an 
 array of BBs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12295) Prevent block eviction under us if reads are in progress from the BBs

2015-07-15 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627586#comment-14627586
 ] 

Anoop Sam John commented on HBASE-12295:


Just adding to what Ram said about the different Cell impls:

Write path 
---
KeyValue  and NoTagsKeyValue

Read path
--
SizeCachedKeyValue, SizeCachedNoTagsKeyValue
OffheapCell, OffheapNoTagsCell (these will have the size cached, will extend 
ByteBufferedCell, and will also be marked as shared memory using the new 
interface)
DBE already has an impl (ClonedSeekerState); we will add an offheap-backed 
counterpart. We need a rename: ClonedSeekerState does not look like a Cell at 
all.
The prefix tree already has an impl (ClonedPrefixCell); we will add an 
offheap-backed counterpart. We need a rename here too?

The read path will also have the KeyOnlyKeyValue stuff, as well as the fake 
Cell impls for FirstOnRow, LastOnRow, etc.

 Prevent block eviction under us if reads are in progress from the BBs
 -

 Key: HBASE-12295
 URL: https://issues.apache.org/jira/browse/HBASE-12295
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-12295.pdf, HBASE-12295_1.patch, HBASE-12295_1.pdf, 
 HBASE-12295_10.patch, HBASE-12295_12.patch, HBASE-12295_2.patch, 
 HBASE-12295_4.patch, HBASE-12295_4.pdf, HBASE-12295_5.pdf, 
 HBASE-12295_9.patch, HBASE-12295_trunk.patch


 While we try to serve reads from the BBs directly from the block cache, 
 we need to ensure that the blocks do not get evicted under us while 
 reading.  This JIRA is to discuss and implement a strategy for that.
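One common strategy for this is reference counting, sketched below. This is an illustrative toy, not the HBase implementation; a real version would also have to close the race between the reader count check and the eviction itself.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a cached block may only be evicted when no
// in-progress read still holds a reference to its buffers.
class RefCountedBlock {
    private final AtomicInteger refCount = new AtomicInteger(0);
    private volatile boolean evicted = false;

    // Called by a reader before accessing the block's BBs.
    boolean retain() {
        if (evicted) return false;
        refCount.incrementAndGet();
        return true;
    }

    // Called by the reader once it is done with the buffers.
    void release() {
        refCount.decrementAndGet();
    }

    // Eviction succeeds only when no readers remain (toy version: the
    // check-then-evict step is not atomic here).
    boolean tryEvict() {
        if (refCount.get() == 0) {
            evicted = true;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        RefCountedBlock block = new RefCountedBlock();
        block.retain();
        System.out.println(block.tryEvict());  // false: a read is in progress
        block.release();
        System.out.println(block.tryEvict());  // true: no readers remain
    }
}
```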



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14084) Observe some out-of-date doc on Integration Tests part

2015-07-15 Thread Yu Li (JIRA)
Yu Li created HBASE-14084:
-

 Summary: Observe some out-of-date doc on Integration Tests part
 Key: HBASE-14084
 URL: https://issues.apache.org/jira/browse/HBASE-14084
 Project: HBase
  Issue Type: Task
  Components: documentation
Affects Versions: 1.1.0
Reporter: Yu Li


As titled, I have checked src/main/asciidoc/_chapters/developer.adoc and 
confirmed some out-of-date parts; for example, the doc still refers to 
org.apache.hadoop.hbase.util.ChaosMonkey, which doesn't exist anymore.

On the other hand, I think running ITBLL against a distributed cluster is a 
really good way to do real-world integration/system testing, but the existing 
documentation about this is not explicit enough. It actually took me quite a 
while to set up the env and make the testing run smoothly (I encountered 
issues like always launching a minicluster when run with the bin/hbase 
script; I finally got it running using the bin/hadoop script, etc.).





[jira] [Updated] (HBASE-14083) Fix separator width in Backup Masters of WebUI

2015-07-15 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HBASE-14083:
-
Attachment: Before.PNG

 Fix separator width in Backup Masters of WebUI
 --

 Key: HBASE-14083
 URL: https://issues.apache.org/jira/browse/HBASE-14083
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Yuhao Bi
Assignee: Yuhao Bi
Priority: Minor
 Attachments: Before.PNG


 The horizontal line separator above total count in backup master is shorter 
 than the others.





[jira] [Updated] (HBASE-14083) Fix separator width in Backup Masters of WebUI

2015-07-15 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HBASE-14083:
-
Attachment: After.PNG

 Fix separator width in Backup Masters of WebUI
 --

 Key: HBASE-14083
 URL: https://issues.apache.org/jira/browse/HBASE-14083
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Yuhao Bi
Assignee: Yuhao Bi
Priority: Minor
 Attachments: After.PNG, Before.PNG


 The horizontal line separator above total count in backup master is shorter 
 than the others.





[jira] [Updated] (HBASE-14089) Remove unused draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14089:
---
Description: I had a look at instances where we use SecureRandom, which 
could block if insufficient entropy, in the 0.98 and master branch code. 
(Random in contrast is a PRNG seeded by System#nanoTime, it doesn't draw from 
system entropy.) Most uses are in encryption related code, our native 
encryption and SSL, but we do also use SecureRandom for salting znode metadata 
in RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
Conceivably we could block unexpectedly when constructing data to write out to 
a znode if entropy gets too low until more is available.   (was: I had a look 
at instances where we use SecureRandom, which could block if insufficient 
entropy, in the 0.98 and master branch code. (Random in contrast is a PRNG 
seeded by System#nanoTime, it doesn't draw from system entropy.) Most uses are 
in encryption related code, our native encryption and SSL, but we do also use 
SecureRandom for salting znode metadata in RecoverableZooKeeper#appendMetadata, 
which is called whenever we do setData. Conceivably we could block unexpectedly 
when constructing data to write out to a znode if entropy gets too low until 
more is available. Those salt values are never used and so appear to serve no 
purpose. We should remove the use of SecureRandom here and just pad with zeros 
for backwards compatibility.)

 Remove unused draw of system entropy from RecoverableZooKeeper
 --

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 





[jira] [Commented] (HBASE-14089) Remove unused draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628361#comment-14628361
 ] 

Andrew Purtell commented on HBASE-14089:


Initially I thought the salt values were never used, but then realized they 
contribute to the uniqueness of the identifier. However, we don't need strong 
randomness here, so let me make a patch that replaces SecureRandom with Random.
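A hedged sketch of what such a patch could look like (the helper shape and names below are assumptions for illustration, not the actual RecoverableZooKeeper code): java.util.Random never blocks on the kernel entropy pool, and uniqueness, not unpredictability, is all the salt needs.

```java
import java.util.Random;

// Illustrative sketch only -- not the HBase patch. Salt a payload for
// uniqueness with java.util.Random (a PRNG that never blocks) instead of
// SecureRandom (which may block while the entropy pool refills).
public class SaltSketch {
    private static final Random RNG = new Random();

    // Hypothetical helper: append saltLength random bytes to data.
    static byte[] appendSalt(byte[] data, int saltLength) {
        byte[] salt = new byte[saltLength];
        RNG.nextBytes(salt); // uniqueness only; no cryptographic strength needed
        byte[] out = new byte[data.length + saltLength];
        System.arraycopy(data, 0, out, 0, data.length);
        System.arraycopy(salt, 0, out, data.length, saltLength);
        return out;
    }

    public static void main(String[] args) {
        // 7 payload bytes + 8 salt bytes
        System.out.println(appendSalt("payload".getBytes(), 8).length);
    }
}
```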

 Remove unused draw of system entropy from RecoverableZooKeeper
 --

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 





[jira] [Commented] (HBASE-14084) Observe some out-of-date doc on Integration Tests part

2015-07-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628472#comment-14628472
 ] 

Sean Busbey commented on HBASE-14084:
-

I was just complaining to [~stack] about the lack of docs on how to run the ITs 
against a real cluster last night!

Happy to review if anyone has time to tackle this near-term. Won't have a 
chance to do it myself until probably August.

 Observe some out-of-date doc on Integration Tests part
 

 Key: HBASE-14084
 URL: https://issues.apache.org/jira/browse/HBASE-14084
 Project: HBase
  Issue Type: Task
  Components: documentation
Affects Versions: 1.1.0
Reporter: Yu Li

 As titled: I have checked src/main/asciidoc/_chapters/developer.adoc and 
 confirmed some out-of-date parts; for example, the doc still refers to 
 org.apache.hadoop.hbase.util.ChaosMonkey, which doesn't exist anymore.
 On the other hand, I think running ITBLL against a distributed cluster is a 
 really good way to do real-world integration/system testing, but the existing 
 documentation about this is not explicit enough. It actually took me quite a 
 while to set up the environment and get the test running smoothly (I hit 
 issues like always launching a minicluster when run via the bin/hbase script; 
 I finally got it to run using the bin/hadoop script, etc.).





[jira] [Commented] (HBASE-14077) Add package to hbase-protocol protobuf files.

2015-07-15 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628486#comment-14628486
 ] 

Elliott Clark commented on HBASE-14077:
---

K, pushing to branch-1 and master.

 Add package to hbase-protocol protobuf files.
 -

 Key: HBASE-14077
 URL: https://issues.apache.org/jira/browse/HBASE-14077
 Project: HBase
  Issue Type: Bug
  Components: Protobufs
Affects Versions: 2.0.0, 1.2.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14077.patch


 The C++ generated code is currently in the default namespace. That's bad 
 practice, so let's fix it.





[jira] [Created] (HBASE-14090) Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS

2015-07-15 Thread stack (JIRA)
stack created HBASE-14090:
-

 Summary: Redo FS layout; let go of tables/regions/stores directory 
hierarchy in DFS
 Key: HBASE-14090
 URL: https://issues.apache.org/jira/browse/HBASE-14090
 Project: HBase
  Issue Type: Sub-task
Reporter: stack


Our layout as-is won't work with 1M regions; HDFS will fall over if 
directories contain hundreds of thousands of files. HBASE-13991 (Humongous 
Tables) would address this specific directory problem only, by adding subdirs 
under the table dir, but there are other issues with our current layout:

 * Our table/regions/column family 'facade' has to be maintained in two 
locations -- in master memory and in the HDFS directory layout -- and the 
facade needs to be kept synced or, worse, the model management is split 
between master memory and DFS layout. 'Syncing' in HDFS has us dropping 
constructs such as 'Reference' and 'HalfHFiles' on split, 'HFileLinks' when 
archiving, and so on. This 'tie' makes it hard to make changes.
 * While HDFS has atomic rename, useful for fencing and for having files added 
atomically, if the model were solely owned by hbase there are hbase primitives 
we could make use of -- changes in a row are atomic, and coprocessors -- to 
simplify table transactions and provide more consistent views of our model to 
clients; file 'moves' could be a memory-only operation rather than an HDFS 
call; sharing files between tables/snapshots, and deciding when it is safe to 
remove them, would be simplified with one owner only; and so on.

This is an umbrella blue-sky issue to discuss what a new layout would look 
like and how we might get there. I'll follow up with some sketches of what a 
new layout could look like that have come out of chats a few of us have been 
having. We are also under the 'delusion' that a move to a new layout could be 
done as part of a rolling upgrade and that the amount of work involved is not 
gargantuan.





[jira] [Created] (HBASE-14089) Remove unused draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-14089:
--

 Summary: Remove unused draw of system entropy from 
RecoverableZooKeeper
 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3


I had a look at instances where we use SecureRandom, which can block if there 
is insufficient entropy, in the 0.98 and master branch code. (Random, in 
contrast, is a PRNG seeded from System#nanoTime; it doesn't draw from system 
entropy.) Most uses are in encryption-related code, our native encryption and 
SSL, but we also use SecureRandom for salting znode metadata in 
RecoverableZooKeeper#appendMetadata, which is called whenever we do a setData. 
Conceivably we could block unexpectedly when constructing data to write out to 
a znode if entropy gets too low, until more is available. Those salt values 
are never used and so appear to serve no purpose. We should remove the use of 
SecureRandom here and just pad with zeros for backwards compatibility.





[jira] [Commented] (HBASE-14090) Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS

2015-07-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628360#comment-14628360
 ] 

stack commented on HBASE-14090:
---

See pushback in HBASE-13991 for arguments for doing more than just patching 
current layout. In particular see [~mbertozzi]'s list here: 
https://issues.apache.org/jira/browse/HBASE-13991?focusedCommentId=14608540page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14608540



 Redo FS layout; let go of tables/regions/stores directory hierarchy in DFS
 --

 Key: HBASE-14090
 URL: https://issues.apache.org/jira/browse/HBASE-14090
 Project: HBase
  Issue Type: Sub-task
Reporter: stack

 Our layout as-is won't work with 1M regions; HDFS will fall over if 
 directories contain hundreds of thousands of files. HBASE-13991 (Humongous 
 Tables) would address this specific directory problem only, by adding subdirs 
 under the table dir, but there are other issues with our current layout:
  * Our table/regions/column family 'facade' has to be maintained in two 
 locations -- in master memory and in the HDFS directory layout -- and the 
 facade needs to be kept synced or, worse, the model management is split 
 between master memory and DFS layout. 'Syncing' in HDFS has us dropping 
 constructs such as 'Reference' and 'HalfHFiles' on split, 'HFileLinks' when 
 archiving, and so on. This 'tie' makes it hard to make changes.
  * While HDFS has atomic rename, useful for fencing and for having files 
 added atomically, if the model were solely owned by hbase there are hbase 
 primitives we could make use of -- changes in a row are atomic, and 
 coprocessors -- to simplify table transactions and provide more consistent 
 views of our model to clients; file 'moves' could be a memory-only operation 
 rather than an HDFS call; sharing files between tables/snapshots, and 
 deciding when it is safe to remove them, would be simplified with one owner 
 only; and so on.
 This is an umbrella blue-sky issue to discuss what a new layout would look 
 like and how we might get there. I'll follow up with some sketches of what a 
 new layout could look like that have come out of chats a few of us have been 
 having. We are also under the 'delusion' that a move to a new layout could be 
 done as part of a rolling upgrade and that the amount of work involved is not 
 gargantuan.





[jira] [Updated] (HBASE-14077) Add package to hbase-protocol protobuf files.

2015-07-15 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14077:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed thanks for the reviews.

 Add package to hbase-protocol protobuf files.
 -

 Key: HBASE-14077
 URL: https://issues.apache.org/jira/browse/HBASE-14077
 Project: HBase
  Issue Type: Bug
  Components: Protobufs
Affects Versions: 2.0.0, 1.2.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14077.patch


 The C++ generated code is currently in the default namespace. That's bad 
 practice, so let's fix it.





[jira] [Commented] (HBASE-14088) Close Connection in LoadTestTool#applyColumnFamilyOptions

2015-07-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628346#comment-14628346
 ] 

Ted Yu commented on HBASE-14088:


{code}
try (Connection conn = ConnectionFactory.createConnection(conf);
Admin admin = conn.getAdmin()) {
{code}
conn would be closed by the try-with-resources construct.
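The behavior Ted relies on can be shown with a self-contained sketch (a hypothetical Resource class standing in for the Connection/Admin pair; this is not the HBase API):

```java
// Minimal demonstration of try-with-resources: every resource declared in
// the header is closed automatically when the block exits, in reverse
// declaration order, even if an exception is thrown.
public class TryWithResourcesDemo {
    static final StringBuilder LOG = new StringBuilder();

    static class Resource implements AutoCloseable {
        private final String name;

        Resource(String name) {
            this.name = name;
            LOG.append("open:").append(name).append(' ');
        }

        @Override
        public void close() {
            LOG.append("close:").append(name).append(' ');
        }
    }

    public static void main(String[] args) {
        // "admin" is closed first, then "conn" -- reverse of declaration.
        try (Resource conn = new Resource("conn");
             Resource admin = new Resource("admin")) {
            LOG.append("work ");
        }
        System.out.println(LOG.toString().trim());
    }
}
```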

 Close Connection in LoadTestTool#applyColumnFamilyOptions
 -

 Key: HBASE-14088
 URL: https://issues.apache.org/jira/browse/HBASE-14088
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14088.patch


 We never close the connection in LoadTestTool#applyColumnFamilyOptions.





[jira] [Commented] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628345#comment-14628345
 ] 

ramkrishna.s.vasudevan commented on HBASE-12213:


The test failure seems to be unrelated.
{code}
org.apache.hadoop.hbase.master.TestDistributedLogSplitting.xml : XML document 
structures must start and end within the same entity. Nested exception: XML 
document structures must start and end within the same entity.
at org.dom4j.io.SAXReader.read(SAXReader.java:482)
at org.dom4j.io.SAXReader.read(SAXReader.java:264)
at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:123)
at hudson.tasks.junit.TestResult.parse(TestResult.java:273)
at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:229)
at hudson.tasks.junit.TestResult.parse(TestResult.java:164)
at hudson.tasks.junit.TestResult.parse(TestResult.java:147)
at hudson.tasks.junit.TestResult.init(TestResult.java:123)
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:117)
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:90)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2474)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}
Sometimes this happens. The test case passes locally.

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_14_withBBI.patch, HBASE-12213_2.patch, HBASE-12213_4.patch, 
 HBASE-12213_8_withBBI.patch, HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 (offheap) cache an HFile block might have been cached into multiple 
 chunks of buffers. If HFileBlock needs a single BB, we end up recreating a 
 bigger BB and copying. Instead we can make HFileBlock serve data from an 
 array of BBs.
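The idea can be sketched with a toy chunked reader (class and method names here are assumptions, not the patch's API): serve a byte at a logical offset from an array of ByteBuffer chunks without first copying them into one big buffer.

```java
import java.nio.ByteBuffer;

// Toy sketch, not the HBase patch: an HFile-block-like container backed by
// several ByteBuffer chunks, read in place with no consolidating copy.
public class ChunkedBlock {
    private final ByteBuffer[] chunks;

    public ChunkedBlock(ByteBuffer[] chunks) {
        this.chunks = chunks;
    }

    // Translate a block-relative offset into (chunk, offset-within-chunk).
    public byte get(int offset) {
        for (ByteBuffer b : chunks) {
            if (offset < b.limit()) {
                return b.get(offset);
            }
            offset -= b.limit(); // skip past this chunk
        }
        throw new IndexOutOfBoundsException("past end of block");
    }

    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.wrap(new byte[]{1, 2, 3});
        ByteBuffer b = ByteBuffer.wrap(new byte[]{4, 5});
        ChunkedBlock blk = new ChunkedBlock(new ByteBuffer[]{a, b});
        System.out.println(blk.get(3)); // offset 3 falls in the second chunk
    }
}
```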





[jira] [Commented] (HBASE-14025) Update CHANGES.txt for 1.2

2015-07-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628399#comment-14628399
 ] 

Sean Busbey commented on HBASE-14025:
-

We're saying that the committer does still set the Fix Version/s for all 
affected branches. It's the RM who later alters things.

{quote}
if (say) 1.2.0 was not released, yet, but 1.2 was already branched the 
committer has to remember that when committing to branch-1 which will be (say) 
1.3.0
{quote}

Yes. And they can then set 1.2.0 and 1.3.0 as fix versions.

{quote}
* one cannot just copy the (again, say) CHANGES.txt from 1.2.0 (once it's 
released) into 1.3.0 and fill in the 1.3.0 changes from jira. If jira is the 
source of truth that should be possible
* I'd still want to see 1.2.0 changes, followed by the new 1.3.x changes, no?
* since we branch 1.2 before 1.2.0 is released (and hence branch-1 becomes 
1.3.0) we can simply copy the CHANGES.txt from 1.2.0 and fill in the 1.3.x 
changes, instead.
{quote}

This should be possible starting with 1.3.0. For 1.2.0 I had to go copy the 
notes from 1.1.0 and 1.0.0 to build the CHANGES.txt file with all minor version 
information.

If we left 1.3.0 as a fix version rather than have the RM clean it up as a part 
of releasing 1.2.0, then we wouldn't be able to do this iterative building 
because some of the jiras would be repeated in both the 1.2.0 section and the 
1.3.0 section.


 Update CHANGES.txt for 1.2
 --

 Key: HBASE-14025
 URL: https://issues.apache.org/jira/browse/HBASE-14025
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 1.2.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 1.2.0


 Since it's more effort than I expected, making a ticket to track actually 
 updating CHANGES.txt so that new RMs have an idea what to expect.
 Maybe will make doc changes if there's enough here.





[jira] [Commented] (HBASE-14077) Add package to hbase-protocol protobuf files.

2015-07-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628460#comment-14628460
 ] 

Andrew Purtell commented on HBASE-14077:


+1

 Add package to hbase-protocol protobuf files.
 -

 Key: HBASE-14077
 URL: https://issues.apache.org/jira/browse/HBASE-14077
 Project: HBase
  Issue Type: Bug
  Components: Protobufs
Affects Versions: 2.0.0, 1.2.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14077.patch


 The C++ generated code is currently in the default namespace. That's bad 
 practice, so let's fix it.





[jira] [Updated] (HBASE-14089) Remove unused draw of system entropy from RecoverableZooKeeper

2015-07-15 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14089:
---
Attachment: HBASE-14089.patch

Trivial patch.

 Remove unused draw of system entropy from RecoverableZooKeeper
 --

 Key: HBASE-14089
 URL: https://issues.apache.org/jira/browse/HBASE-14089
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3

 Attachments: HBASE-14089.patch


 I had a look at instances where we use SecureRandom, which could block if 
 insufficient entropy, in the 0.98 and master branch code. (Random in contrast 
 is a PRNG seeded by System#nanoTime, it doesn't draw from system entropy.) 
 Most uses are in encryption related code, our native encryption and SSL, but 
 we do also use SecureRandom for salting znode metadata in 
 RecoverableZooKeeper#appendMetadata, which is called whenever we do setData. 
 Conceivably we could block unexpectedly when constructing data to write out 
 to a znode if entropy gets too low until more is available. 





[jira] [Created] (HBASE-14092) Add --no-lock and --no-balancer options to hbck

2015-07-15 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14092:
-

 Summary: Add --no-lock and --no-balancer options to hbck
 Key: HBASE-14092
 URL: https://issues.apache.org/jira/browse/HBASE-14092
 Project: HBase
  Issue Type: Bug
  Components: hbck, util
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0


HBCK is sometimes used as a way to check the health of the cluster. When doing 
that, it's not necessary to turn off the balancer, so there's no need to lock 
other runs of hbck out.

We should add --no-lock and --no-balancer command line flags.





[jira] [Created] (HBASE-14087) Add missing ASL headers

2015-07-15 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-14087:
---

 Summary: Add missing ASL headers
 Key: HBASE-14087
 URL: https://issues.apache.org/jira/browse/HBASE-14087
 Project: HBase
  Issue Type: Sub-task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker


We have a couple of files that are missing their ASL headers.





[jira] [Updated] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12213:
---
Status: Open  (was: Patch Available)

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
 HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 (offheap) cache an HFile block might have been cached into multiple 
 chunks of buffers. If HFileBlock needs a single BB, we end up recreating a 
 bigger BB and copying. Instead we can make HFileBlock serve data from an 
 array of BBs.





[jira] [Updated] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12213:
---
Attachment: HBASE-12213_14_withBBI.patch

Updated patch fixing the checkstyle and javadoc issues. It also removes the 
static ByteBuff.copy() API, moving it to an abstract method 
put(offset, src, srcOffset, srcLength) on ByteBuff, as per Anoop's 
suggestion.
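A hedged sketch of the API move described above (the signatures are inferred from the comment, not copied from the patch; `byte[]` as the source type is an assumption): a static copy helper becomes an abstract instance method on the ByteBuff abstraction, so each backing implementation can copy efficiently.

```java
// Illustrative sketch only, not the HBase ByteBuff class.
abstract class ByteBuffSketch {
    // Absolute put: copy srcLength bytes of src, starting at srcOffset,
    // into this buffer starting at offset. Each subclass implements the
    // copy against its own backing storage.
    public abstract ByteBuffSketch put(int offset, byte[] src,
                                       int srcOffset, int srcLength);
}

// A trivial single-array implementation to make the sketch runnable.
class SimpleByteBuff extends ByteBuffSketch {
    private final byte[] backing;

    SimpleByteBuff(int capacity) {
        this.backing = new byte[capacity];
    }

    @Override
    public ByteBuffSketch put(int offset, byte[] src,
                              int srcOffset, int srcLength) {
        System.arraycopy(src, srcOffset, backing, offset, srcLength);
        return this;
    }

    byte get(int i) {
        return backing[i];
    }

    public static void main(String[] args) {
        SimpleByteBuff bb = new SimpleByteBuff(8);
        bb.put(2, new byte[]{9, 8, 7}, 1, 2); // copies bytes 8 and 7
        System.out.println(bb.get(2) + " " + bb.get(3));
    }
}
```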

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_14_withBBI.patch, HBASE-12213_2.patch, HBASE-12213_4.patch, 
 HBASE-12213_8_withBBI.patch, HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 (offheap) cache an HFile block might have been cached into multiple 
 chunks of buffers. If HFileBlock needs a single BB, we end up recreating a 
 bigger BB and copying. Instead we can make HFileBlock serve data from an 
 array of BBs.





[jira] [Updated] (HBASE-12295) Prevent block eviction under us if reads are in progress from the BBs

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12295:
---
Status: Patch Available  (was: Open)

 Prevent block eviction under us if reads are in progress from the BBs
 -

 Key: HBASE-12295
 URL: https://issues.apache.org/jira/browse/HBASE-12295
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-12295.pdf, HBASE-12295_1.patch, HBASE-12295_1.pdf, 
 HBASE-12295_10.patch, HBASE-12295_12.patch, HBASE-12295_14.patch, 
 HBASE-12295_2.patch, HBASE-12295_4.patch, HBASE-12295_4.pdf, 
 HBASE-12295_5.pdf, HBASE-12295_9.patch, HBASE-12295_trunk.patch


 While we try to serve reads from the BBs directly from the block cache, 
 we need to ensure that the blocks do not get evicted from under us while 
 reading. This JIRA is to discuss and implement a strategy for the same.





[jira] [Updated] (HBASE-14088) Close Connection in LoadTestTool#applyColumnFamilyOptions

2015-07-15 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-14088:

Status: Patch Available  (was: Open)

 Close Connection in LoadTestTool#applyColumnFamilyOptions
 -

 Key: HBASE-14088
 URL: https://issues.apache.org/jira/browse/HBASE-14088
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14088.patch


 We never close the connection in LoadTestTool#applyColumnFamilyOptions.





[jira] [Updated] (HBASE-12295) Prevent block eviction under us if reads are in progress from the BBs

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12295:
---
Attachment: HBASE-12295_14.patch

Patch for QA.

 Prevent block eviction under us if reads are in progress from the BBs
 -

 Key: HBASE-12295
 URL: https://issues.apache.org/jira/browse/HBASE-12295
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-12295.pdf, HBASE-12295_1.patch, HBASE-12295_1.pdf, 
 HBASE-12295_10.patch, HBASE-12295_12.patch, HBASE-12295_14.patch, 
 HBASE-12295_2.patch, HBASE-12295_4.patch, HBASE-12295_4.pdf, 
 HBASE-12295_5.pdf, HBASE-12295_9.patch, HBASE-12295_trunk.patch


 While we try to serve reads from the BBs directly from the block cache, 
 we need to ensure that the blocks do not get evicted from under us while 
 reading. This JIRA is to discuss and implement a strategy for the same.





[jira] [Issue Comment Deleted] (HBASE-13329) ArrayIndexOutOfBoundsException in CellComparator#getMinimumMidpointArray

2015-07-15 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-13329:
--
Comment: was deleted

(was: I see. Makes sense. Thanks.)

 ArrayIndexOutOfBoundsException in CellComparator#getMinimumMidpointArray
 

 Key: HBASE-13329
 URL: https://issues.apache.org/jira/browse/HBASE-13329
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.0.1
 Environment: linux-debian-jessie
 ec2 - t2.micro instances
Reporter: Ruben Aguiar
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2

 Attachments: 13329-asserts.patch, 13329-v1.patch, 13329.txt, 
 HBASE-13329.test.00.branch-1.1.patch


 While trying to benchmark my opentsdb cluster, I've created a script that 
 sends to hbase always the same value (in this case 1). After a few minutes, 
 the whole region server crashes and the region itself becomes impossible to 
 open again (cannot assign or unassign). After some investigation, what I saw 
 on the logs is that when a Memstore flush is called on a large region (128mb) 
 the process errors, killing the regionserver. On restart, replaying the edits 
 generates the same error, making the region unavailable. Tried to manually 
 unassign, assign or close_region. That didn't work because the code that 
 reads/replays it crashes.
 From my investigation this looks like an overflow issue. The logs show that 
 the function getMinimumMidpointArray tried to access index -32743 of an 
 array, extremely close to Java's minimum short value. The source code uses a 
 short index that is incremented for as long as the two vectors match, so it 
 can overflow on large vectors with equal data. Changing it to an int should 
 solve the problem.
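The overflow can be reproduced in isolation. A minimal sketch (hypothetical names, not the actual CellComparator code): walking two identical large byte arrays with a short index silently wraps to -32768 once it passes Short.MAX_VALUE, producing exactly this kind of negative-index ArrayIndexOutOfBoundsException.

```java
public class ShortIndexOverflow {
    // Hypothetical simplification of the midpoint scan in the pre-fix
    // comparator: walk two arrays with a short index while they match.
    public static int commonPrefix(byte[] a, byte[] b) {
        short i = 0; // bug: wraps to -32768 after 32767
        while (i < a.length && i < b.length && a[i] == b[i]) {
            i++; // implicit narrowing back to short: overflows silently
        }
        return i;
    }

    public static boolean triggersOverflow() {
        byte[] big = new byte[40_000]; // > Short.MAX_VALUE, all zeros
        try {
            commonPrefix(big, big.clone());
            return false;
        } catch (ArrayIndexOutOfBoundsException e) {
            return true; // negative index, as in the reported stack trace
        }
    }
}
```

With an int index the loop simply stops at the shorter array length instead of wrapping.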
 Here follow the hadoop logs from when the regionserver went down. Any help is 
 appreciated; if you need any other information, please do tell me:
 2015-03-24 18:00:56,187 INFO  [regionserver//10.2.0.73:16020.logRoller] 
 wal.FSHLog: Rolled WAL 
 /hbase/WALs/10.2.0.73,16020,1427216382590/10.2.0.73%2C16020%2C1427216382590.default.1427220018516
  with entries=143, filesize=134.70 MB; new WAL 
 /hbase/WALs/10.2.0.73,16020,1427216382590/10.2.0.73%2C16020%2C1427216382590.default.1427220056140
 2015-03-24 18:00:56,188 INFO  [regionserver//10.2.0.73:16020.logRoller] 
 wal.FSHLog: Archiving 
 hdfs://10.2.0.74:8020/hbase/WALs/10.2.0.73,16020,1427216382590/10.2.0.73%2C16020%2C1427216382590.default.1427219987709
  to 
 hdfs://10.2.0.74:8020/hbase/oldWALs/10.2.0.73%2C16020%2C1427216382590.default.1427219987709
 2015-03-24 18:04:35,722 INFO  [MemStoreFlusher.0] regionserver.HRegion: 
 Started memstore flush for 
 tsdb,,1427133969325.52bc1994da0fea97563a4a656a58bec2., current region 
 memstore size 128.04 MB
 2015-03-24 18:04:36,154 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: 
 ABORTING region server 10.2.0.73,16020,1427216382590: Replay of WAL required. 
 Forcing server shutdown
 org.apache.hadoop.hbase.DroppedSnapshotException: region: 
 tsdb,,1427133969325.52bc1994da0fea97563a4a656a58bec2.
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1999)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1770)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1702)
   at 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:445)
   at 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:407)
   at 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:69)
   at 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:225)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: -32743
   at 
 org.apache.hadoop.hbase.CellComparator.getMinimumMidpointArray(CellComparator.java:478)
   at 
 org.apache.hadoop.hbase.CellComparator.getMidpoint(CellComparator.java:448)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishBlock(HFileWriterV2.java:165)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileWriterV2.checkBlockBoundary(HFileWriterV2.java:146)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:263)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:932)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:121)
   at 
 


[jira] [Updated] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12213:
---
Status: Patch Available  (was: Open)

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
 HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
 HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
 HBASE-12213_14_withBBI.patch, HBASE-12213_2.patch, HBASE-12213_4.patch, 
 HBASE-12213_8_withBBI.patch, HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip


 In the L2 (offheap) cache, an HFile block might be cached across multiple 
 buffer chunks. If HFileBlock needs a single BB, we end up recreating a bigger 
 BB and copying into it. Instead we can make HFileBlock serve data from an 
 array of BBs.
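The copy-free access the description calls for can be sketched like this (hypothetical class and method names; the real HBase type has a much richer API): locate the chunk that covers a global offset and read from it in place.

```java
import java.nio.ByteBuffer;

// Sketch of serving reads from an array of buffer chunks without merging
// them into one big ByteBuffer. Names are illustrative, not HBase APIs.
public class MultiChunk {
    private final ByteBuffer[] chunks;
    private final int[] starts; // global start offset of each chunk

    public MultiChunk(ByteBuffer[] chunks) {
        this.chunks = chunks;
        this.starts = new int[chunks.length];
        int off = 0;
        for (int i = 0; i < chunks.length; i++) {
            starts[i] = off;
            off += chunks[i].remaining();
        }
    }

    // Absolute read at a global offset: find the covering chunk, no copy.
    public byte get(int globalOffset) {
        int i = chunks.length - 1;
        while (starts[i] > globalOffset) {
            i--; // linear scan is enough for a sketch; real code would binary-search
        }
        return chunks[i].get(chunks[i].position() + (globalOffset - starts[i]));
    }
}
```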



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-12374:
--

Assignee: Anoop Sam John  (was: ramkrishna.s.vasudevan)

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John

 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells.  Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14086) remove unused bundled dependencies

2015-07-15 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-14086:
---

 Summary: remove unused bundled dependencies
 Key: HBASE-14086
 URL: https://issues.apache.org/jira/browse/HBASE-14086
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker


We have some files with compatible non-ASL licenses that don't appear to be 
used, so remove them.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14027) Clean up netty dependencies

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627869#comment-14627869
 ] 

Hudson commented on HBASE-14027:


FAILURE: Integrated in HBase-1.3 #54 (See 
[https://builds.apache.org/job/HBase-1.3/54/])
HBASE-14027 clean up multiple netty jars. (busbey: rev 
93e26ce550b5585710c8a9aa386b10f89011ed31)
* pom.xml
* hbase-server/pom.xml
* hbase-it/pom.xml


 Clean up netty dependencies
 ---

 Key: HBASE-14027
 URL: https://issues.apache.org/jira/browse/HBASE-14027
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-14027.1.patch, HBASE-14027.2.patch, 
 HBASE-14027.3.patch


 We have multiple copies of Netty (3?) getting shipped around. Clean some up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14088) Close Connection in LoadTestTool#applyColumnFamilyOptions

2015-07-15 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-14088:

Attachment: HBASE-14088.patch

Here is a simple patch.

 Close Connection in LoadTestTool#applyColumnFamilyOptions
 -

 Key: HBASE-14088
 URL: https://issues.apache.org/jira/browse/HBASE-14088
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14088.patch


 We never close the Connection in LoadTestTool#applyColumnFamilyOptions.
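The shape of the fix can be sketched generically (illustrative only, using a plain AutoCloseable rather than the real HBase Connection/Admin classes, and not the actual patch): open the resource in try-with-resources so it is closed even on an exception path.

```java
// Illustrative shape of the fix: a resource opened in try-with-resources
// is always closed. Generic AutoCloseable stands in for HBase Connection.
public class CloseDemo {
    static class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static Resource useAndClose() {
        Resource r = new Resource();
        try (Resource held = r) {
            // ... apply column family options with the held resource ...
        }
        return r; // closed by try-with-resources, even if the body throws
    }
}
```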



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12296) Filters should work with ByteBufferedCell

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627944#comment-14627944
 ] 

Hadoop QA commented on HBASE-12296:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12745418/HBASE-12296_v1.patch
  against master branch at commit a63e3ac83ffb91948f464e4f62111d29adc02812.
  ATTACHMENT ID: 12745418

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1874 checkstyle errors (more than the master's current 1873 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14780//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14780//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14780//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14780//console

This message is automatically generated.

 Filters should work with ByteBufferedCell
 -

 Key: HBASE-12296
 URL: https://issues.apache.org/jira/browse/HBASE-12296
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12296_v1.patch, HBASE-12296_v1.patch


 We have now added a server-side extension of Cell, ByteBufferedCell, where 
 Cells are backed by a BB (on heap or off heap). When the Cell is backed by an 
 off heap buffer, the getXXXArray() APIs have to create a temp byte[], copy the 
 data into it, and return that, which is a bit costly.  We have avoided this in 
 areas like CellComparator/SQM etc., but the Filter area was not touched in 
 that patch.  This JIRA aims at doing the same in the Filter area. 
 Eg: SCVF checks the cell value against the given value condition. It uses 
 getValueArray() to get the cell value bytes.  When the cell is BB backed, it 
 should use the getValueByteBuffer() API instead.
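The cost difference can be sketched as two comparison paths (hypothetical helper names; the real Filter/ByteBufferedCell APIs differ): the array path materializes a temp byte[] on every call, while the buffer path compares in place with absolute gets.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Two ways a filter like SCVF could compare a cell value; the names are
// illustrative, not the actual HBase Filter/ByteBufferedCell API.
public class ValueCompare {
    // getValueArray()-style path: temp byte[] copy on every call.
    public static boolean matchesViaArray(ByteBuffer value, byte[] expected) {
        byte[] tmp = new byte[value.remaining()];
        value.duplicate().get(tmp); // duplicate() keeps the caller's position
        return Arrays.equals(tmp, expected);
    }

    // getValueByteBuffer()-style path: absolute gets, no allocation.
    public static boolean matchesViaBuffer(ByteBuffer value, byte[] expected) {
        if (value.remaining() != expected.length) {
            return false;
        }
        for (int i = 0; i < expected.length; i++) {
            if (value.get(value.position() + i) != expected[i]) {
                return false;
            }
        }
        return true;
    }
}
```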



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13709) Updates to meta table server columns may be eclipsed

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627844#comment-14627844
 ] 

Hudson commented on HBASE-13709:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1011 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1011/])
HBASE-13743 Backport HBASE-13709 (Updates to meta table server columns may be 
eclipsed) to 0.98 (apurtell: rev 063e1b2bd91ad173fa2714df99fb66f6330ec55a)
* hbase-server/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* hbase-protocol/src/main/protobuf/Admin.proto
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestOpenRegionHandler.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java


 Updates to meta table server columns may be eclipsed
 

 Key: HBASE-13709
 URL: https://issues.apache.org/jira/browse/HBASE-13709
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: hbase-13709_v1.patch, hbase-13709_v1.patch, 
 hbase-13709_v2.patch


 HBASE-11536 fixes a case where, on a very rare occasion, the meta updates may 
 be processed out of order. The fix is to use the RS's timestamp for the 
 server column in the meta update, but that actually opens up a vulnerability 
 to clock skew (see the discussion in the jira). 
 In the region replicas case, we can reproduce a problem where the server 
 name field is eclipsed by the master's earlier update because the RS's clock 
 is lagging behind. However, this is not specific to replicas; it just occurs 
 more frequently with them. 
 One option that was discussed was to send the master's ts with the open 
 region RPC and use it. 
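To see why a lagging RS clock lets the master's older write win, consider a toy last-write-wins column keyed by timestamp (illustrative only; this is not the meta table schema): the RS's later update carries a smaller timestamp, so the master's value stays visible.

```java
import java.util.TreeMap;

// Toy last-write-wins column: the version with the highest timestamp is
// visible. The RS's clock lags the master's, so its *later* update gets
// a *smaller* ts and is eclipsed. Names are illustrative only.
public class EclipseDemo {
    public static String visible(TreeMap<Long, String> versions) {
        return versions.lastEntry().getValue(); // highest timestamp wins
    }

    public static TreeMap<Long, String> scenario() {
        TreeMap<Long, String> serverColumn = new TreeMap<>();
        serverColumn.put(1000L, "master: region opening"); // master's clock
        serverColumn.put(990L, "rs1: region opened");      // RS clock lags
        return serverColumn;
    }
}
```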



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14086) remove unused bundled dependencies

2015-07-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627847#comment-14627847
 ] 

Sean Busbey commented on HBASE-14086:
-

src/main/site/resources/css/freebsd_docbook.css

 remove unused bundled dependencies
 --

 Key: HBASE-14086
 URL: https://issues.apache.org/jira/browse/HBASE-14086
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker

 We have some files with compatible non-ASL licenses that don't appear to be 
 used, so remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

