[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641853#comment-14641853
 ] 

ramkrishna.s.vasudevan commented on HBASE-14155:


From Phoenix, this problem seems to happen once we flush the data, I think. 
Will take this up and see.

 StackOverflowError in reverse scan
 --

 Key: HBASE-14155
 URL: https://issues.apache.org/jira/browse/HBASE-14155
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: James Taylor
  Labels: Phoenix
 Attachments: ReproReverseScanStackOverflow.java, 
 ReproReverseScanStackOverflowCoprocessor.java


 A stack overflow may occur when a reverse scan is done. To reproduce (on a 
 Mac), use the following steps:
 - Download the Phoenix 4.5.0 RC here: 
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
 - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
 (removing any earlier Phoenix version if there was one installed)
 - Stop and restart HBase
 - From the bin directory of the Phoenix binary distribution, start sqlline 
 like this: ./sqlline.py localhost
 - Create a new table and populate it like this:
 {code}
 create table desctest (k varchar primary key desc);
 upsert into desctest values ('a');
 upsert into desctest values ('ab');
 upsert into desctest values ('b');
 {code}
 - Note that the following query works fine at this point:
 {code}
 select * from desctest order by k;
 +--+
 |K |
 +--+
 | a|
 | ab   |
 | b|
 +--+
 {code}
 - Stop and start HBase
 - Rerun the above query and you'll get a StackOverflowError at 
 StoreFileScanner.seekToPreviousRow()
 {code}
 select * from desctest order by k;
 java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
 org.apache.phoenix.exception.PhoenixIOException: 
 org.apache.hadoop.hbase.DoNotRetryIOException: 
 DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
   at 
 org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
   at 
 org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
   at 
 org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
   at 
 org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.StackOverflowError
   at 
 org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
   at 
 org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
 {code}
 I've attempted to reproduce this in a standalone HBase unit test but have 
 not been able to; I'll attach my attempt, which mimics what Phoenix is 
 doing.
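 For context, a reverse scan maps to the plain HBase 1.1 client API shown in 
 the sketch below (connection setup is illustrative, not taken from this 
 report; the table name matches the repro above). This is the call path that 
 ends in StoreFileScanner.seekToPreviousRow():
 {code}
 // Minimal sketch: issue a reverse scan with the HBase 1.1 client API.
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;

 public class ReverseScanSketch {
   public static void main(String[] args) throws Exception {
     Configuration conf = HBaseConfiguration.create();
     try (Connection conn = ConnectionFactory.createConnection(conf);
          Table table = conn.getTable(TableName.valueOf("DESCTEST"))) {
       Scan scan = new Scan();
       scan.setReversed(true); // walk rows in descending order; this drives
                               // StoreFileScanner.seekToPreviousRow()
       try (ResultScanner scanner = table.getScanner(scan)) {
         for (Result r : scanner) {
           System.out.println(Bytes.toString(r.getRow()));
         }
       }
     }
   }
 }
 {code}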



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641429#comment-14641429
 ] 

Hadoop QA commented on HBASE-13992:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747159/HBASE-13992.11.patch
  against master branch at commit dad4cad30e5b0c69694ee90908ad8e74c592d821.
  ATTACHMENT ID: 12747159

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:red}-1 javac{color}.  The applied patch generated 26 javac compiler 
warnings (more than the master's current 24 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportFileSystemState(TestMobExportSnapshot.java:285)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportFileSystemState(TestMobExportSnapshot.java:259)
at 
org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot.testExportWithTargetName(TestMobExportSnapshot.java:217)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:262)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testEmptyExportFileSystemState(TestExportSnapshot.java:206)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:288)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:262)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testSnapshotWithRefsExportFileSystemState(TestExportSnapshot.java:256)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testSnapshotWithRefsExportFileSystemState(TestExportSnapshot.java:236)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14891//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14891//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14891//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14891//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14891//console

This message is automatically generated.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.10.patch, HBASE-13992.11.patch, 
 HBASE-13992.5.patch, HBASE-13992.6.patch, HBASE-13992.7.patch, 
 HBASE-13992.8.patch, HBASE-13992.9.patch, HBASE-13992.patch, 
 HBASE-13992.patch.3, HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home inside HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project, and the license is Apache 2.0.
 A blog post explaining this project is here: 
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A Spark Streaming example is here: 
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in production is blogged here:
 

[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641773#comment-14641773
 ] 

James Taylor commented on HBASE-14155:
--

One more piece of information: this does not occur with 0.98 (at least not 
with 0.98.12, which is what I tested with).

 StackOverflowError in reverse scan
 --

 Key: HBASE-14155
 URL: https://issues.apache.org/jira/browse/HBASE-14155
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: James Taylor
  Labels: Phoenix
 Attachments: ReproReverseScanStackOverflow.java, 
 ReproReverseScanStackOverflowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14155) StackOverflowError in reverse scan

2015-07-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated HBASE-14155:
-
Labels: Phoenix  (was: )

 StackOverflowError in reverse scan
 --

 Key: HBASE-14155
 URL: https://issues.apache.org/jira/browse/HBASE-14155
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: James Taylor
  Labels: Phoenix
 Attachments: ReproReverseScanStackOverflow.java, 
 ReproReverseScanStackOverflowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14155) StackOverflowError in reverse scan

2015-07-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated HBASE-14155:
-
Attachment: ReproReverseScanStackOverflowCoprocessor.java
ReproReverseScanStackOverflow.java

This is my failed attempt to reproduce the issue in a standalone unit test.

 StackOverflowError in reverse scan
 --

 Key: HBASE-14155
 URL: https://issues.apache.org/jira/browse/HBASE-14155
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: James Taylor
  Labels: Phoenix
 Attachments: ReproReverseScanStackOverflow.java, 
 ReproReverseScanStackOverflowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14155) StackOverflowError in reverse scan

2015-07-25 Thread James Taylor (JIRA)
James Taylor created HBASE-14155:


 Summary: StackOverflowError in reverse scan
 Key: HBASE-14155
 URL: https://issues.apache.org/jira/browse/HBASE-14155
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: James Taylor


A stack overflow may occur when a reverse scan is done. (Full description and 
repro steps as quoted in the first HBASE-14155 message above.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641799#comment-14641799
 ] 

stack commented on HBASE-12751:
---

Should we read with the same mvcc in these two locations below (could the 
edit we write to the WAL disagree with the file sequenceid we write out)?
{code}
  RegionEventDescriptor regionEventDesc = ProtobufUtil.toRegionEventDescriptor(
-   RegionEventDescriptor.EventType.REGION_CLOSE, getRegionInfo(), getSequenceId().get(),
+   RegionEventDescriptor.EventType.REGION_CLOSE, getRegionInfo(), mvcc.memstoreRead.get(),
    getRegionServerServices().getServerName(), storeFiles);
- WALUtil.writeRegionEventMarker(wal, getTableDesc(), getRegionInfo(), regionEventDesc,
-   getSequenceId());
+ WALUtil.writeRegionEventMarker(wal, getTableDesc(), getRegionInfo(), regionEventDesc);

  // Store SeqId in HDFS when a region closes
  // checking region folder exists is due to many tests which delete
  // the table folder while a table is still online
  if (this.fs.getFileSystem().exists(this.fs.getRegionDir())) {
    WALSplitter.writeRegionSequenceIdFile(this.fs.getFileSystem(), this.fs.getRegionDir(),
-     getSequenceId().get(), 0);
+     mvcc.memstoreReadPoint(), 0);
{code}

... and later on, so the log matches the condition check.

bq. That allows a creating the sequence id/mvcc number for real.

So mvcc has its own running sequenceid-like thing now, and it runs independent 
of the WAL sequenceid. Previously we were just using the WAL sequenceid 
everywhere (though we had that +1B hackery going on). It took some work to get 
us down to one sequence only, and the patch adds back another. If we're just 
fixing the ordering problem (ordering in memstore vs WAL), could we instead 
add to MemStore in the WAL appender/ringbuffer consuming thread? Would that be 
too slow? You'd still have to roll back on failure, but with fewer moving 
parts?

bq. mvcc.completeMemstoreInsert(mvcc.beginMemstoreInsert());

The above seems odd. Make a method on mvcc to do the above? It could return 
the current read point (going by what is needed later in the patch).
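A sketch of what such a helper might look like (the method name and body are 
hypothetical, not from the patch):
{code}
// Hypothetical helper on the mvcc class: it bundles the begin/complete pair
// quoted above and hands back the advanced read point. Illustrative only.
public long completeAndGetReadPoint() {
  WriteEntry we = beginMemstoreInsert();  // open a no-op transaction
  completeMemstoreInsert(we);             // completing it waits out all
                                          // earlier in-flight transactions
  return memstoreReadPoint();             // read point is now advanced
}
{code}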

And the mvcc initialize semantics have changed? It used to be 
advanceMemstoreReadPointIfNeeded, but now we override initialize and call it 
often. It is a bit hard to follow what initialize does now, given how 
frequently it is called.

Why does WALKey take an mvcc? And then, how come you can get the mvcc from 
WALKey? I'd think WALKey would be made once on construction and would not be 
concerned with stuff like mvccs and writeEntries. Or if it were, that it'd be 
internal to the WALKey implementation (you'd not be able to get the mvcc from 
WALKey). WALKey has an mvcc, but then you can call setWriteEntry? Would be 
cool if WALKey knew nought of mvccs.

In the below:

{code}
we = mvcc.beginMemstoreInsert();
regionSequenceId = we.getWriteNumber();
{code}

... is this the region sequenceid? Or is it the mvcc number?

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
 HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
 HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, 
 HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v2.patch, 
 HBASE-12751-v3.patch, HBASE-12751-v4.patch, HBASE-12751-v5.patch, 
 HBASE-12751-v6.patch, HBASE-12751-v7.patch, HBASE-12751-v8.patch, 
 HBASE-12751-v9.patch, HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read-modify-write operation (increment or check and 
 put). However, it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores, 
 then this is very limiting.
 If there are puts to the same column, then the mvcc number should ensure a 
 consistent ordering, so locking is not needed.
 However, locking for check and put or increment is still needed; see the 
 sketch below.
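 As a rough illustration of the idea only (not the actual HBASE-12751 patch), 
 the per-row lock could be a reader-writer lock: plain puts share it, while 
 read-modify-write operations take it exclusively.
 {code}
 // Illustrative sketch; class and method names are hypothetical and not
 // from the HBASE-12751 patch.
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;

 class RowLockSketch {
   private final ReadWriteLock lock = new ReentrantReadWriteLock();

   // Plain puts only take the shared side: mvcc ordering keeps concurrent
   // puts to the same column consistent, so they can run in parallel.
   void put(Runnable put) {
     lock.readLock().lock();
     try { put.run(); } finally { lock.readLock().unlock(); }
   }

   // Increment and check-and-put still need exclusive access to the row.
   void readModifyWrite(Runnable rmw) {
     lock.writeLock().lock();
     try { rmw.run(); } finally { lock.writeLock().unlock(); }
   }
 }
 {code}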



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
Comment: was deleted

(was: Hi [~mantonov], we don't have hbase.table.max.rowsize set but we also 
don't see any RowTooBigExceptions being thrown either in the region server logs 
(which I cannot send out unfortunately))

 Get operations on large objects fail with protocol errors
 -

 Key: HBASE-13825
 URL: https://issues.apache.org/jira/browse/HBASE-13825
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1
Reporter: Dev Lakhani

 When performing a get operation on a column family with more than 64MB of 
 data, the operation fails with:
 Caused by: Portable(java.io.IOException): Call to host:port failed on local 
 exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
 message was too large.  May be malicious.  Use 
 CodedInputStream.setSizeLimit() to increase the size limit.
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
 at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
 at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
 at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
 at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
 at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
 at 
 org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
 This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but 
 that issue is related to cluster status. 
 Scan and put operations on the same data work fine. 
 Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
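 For what it's worth, the limit in the message is protobuf's default 64MB 
 CodedInputStream size cap. At the protobuf level, raising it looks like the 
 sketch below (illustrative only; an actual fix would have to live in HBase's 
 RpcClient, which owns the stream):
 {code}
 // Sketch of raising protobuf's default 64MB message size limit via
 // CodedInputStream.setSizeLimit(). Illustrative; not a patch to HBase.
 import com.google.protobuf.CodedInputStream;
 import java.io.InputStream;

 public class SizeLimitSketch {
   static CodedInputStream newLargeMessageStream(InputStream in) {
     CodedInputStream cis = CodedInputStream.newInstance(in);
     cis.setSizeLimit(256 * 1024 * 1024); // allow messages up to 256MB
     return cis;
   }
 }
 {code}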



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
Comment: was deleted

(was: Sorry for the multiple postings, slow internet connection so I retried 
adding the comment a few too many times.)

 Get operations on large objects fail with protocol errors
 -

 Key: HBASE-13825
 URL: https://issues.apache.org/jira/browse/HBASE-13825
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1
Reporter: Dev Lakhani

 When performing a get operation on a column family with more than 64MB of 
 data, the operation fails with a protobuf InvalidProtocolBufferException. 
 (Full description and stack trace as quoted in the first HBASE-13825 message 
 above.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-07-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
Comment: was deleted

(was: Hi [~mantonov], we don't have hbase.table.max.rowsize set but we also 
don't see any RowTooBigExceptions being thrown either in the region server logs 
(which I cannot send out unfortunately))

 Get operations on large objects fail with protocol errors
 -

 Key: HBASE-13825
 URL: https://issues.apache.org/jira/browse/HBASE-13825
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1
Reporter: Dev Lakhani

 When performing a get operation on a column family with more than 64MB of 
 data, the operation fails with a protobuf InvalidProtocolBufferException. 
 (Full description and stack trace as quoted in the first HBASE-13825 message 
 above.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)