[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14654372#comment-14654372
 ] 

Esteban Gutierrez commented on HBASE-13825:
---

+1 [~apurtell]. I think you also addressed some of [~anoopsamjohn]'s comments 
from HBASE-14076. I'm going to open a JIRA to port the changes to master as 
well. Thanks!

 Get operations on large objects fail with protocol errors
 ----------------------------------------------------------

 Key: HBASE-13825
 URL: https://issues.apache.org/jira/browse/HBASE-13825
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1
Reporter: Dev Lakhani
Assignee: Andrew Purtell
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, 
 HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch


 When performing a get operation on a column family with more than 64MB of 
 data, the operation fails with:
 Caused by: Portable(java.io.IOException): Call to host:port failed on local 
 exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
 message was too large.  May be malicious.  Use 
 CodedInputStream.setSizeLimit() to increase the size limit.
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
 at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
 at 
 org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
 at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
 at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
 at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
 at 
 org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
 at 
 org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
 This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but 
 that issue concerns cluster status. 
 Scan and put operations on the same data work fine.
 Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
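The 64 MB threshold in the report matches protobuf's default CodedInputStream size limit. As an illustrative, stdlib-only sketch (not HBase or protobuf source), the guard behaves roughly like this: once a message exceeds the configured limit, parsing aborts with the error quoted above, and CodedInputStream.setSizeLimit() is the per-stream knob for raising it.

```java
import java.io.IOException;

// Illustrative sketch of protobuf's CodedInputStream size check (not the real
// implementation): when a message exceeds the configured limit (64 MiB by
// default), parsing aborts with the error message quoted in this issue.
public class SizeLimitSketch {
    static final int DEFAULT_SIZE_LIMIT = 64 * 1024 * 1024; // 64 MiB

    static void checkSize(long messageBytes, int sizeLimit) throws IOException {
        if (messageBytes > sizeLimit) {
            throw new IOException("Protocol message was too large.  May be malicious.  "
                + "Use CodedInputStream.setSizeLimit() to increase the size limit.");
        }
    }

    public static void main(String[] args) throws IOException {
        checkSize(10L * 1024 * 1024, DEFAULT_SIZE_LIMIT);     // 10 MiB: accepted
        try {
            checkSize(65L * 1024 * 1024, DEFAULT_SIZE_LIMIT); // 65 MiB: rejected
        } catch (IOException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```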



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653898#comment-14653898
 ] 

Andrew Purtell commented on HBASE-13825:


If you have a sec [~esteban], the branch-1 and 0.98 patches here incorporate 
your work on HBASE-14076, what do you think?



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14654180#comment-14654180
 ] 

Andrew Purtell commented on HBASE-13825:


The precommit test was bad because someone killed our test JVM externally:
{noformat}
ExecutionException: java.lang.RuntimeException: The forked VM terminated without 
properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server  
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51/jre/bin/java 
-enableassertions -XX:MaxDirectMemorySize=1G -Xmx2800m -XX:MaxPermSize=256m 
-Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -jar 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/surefire/surefirebooter8543005017696418773.jar
 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/surefire/surefire2508603723119457542tmp
 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/surefire/surefire_9171447714369068258025tmp
{noformat}

The zombie was AmbariManagementControllerTest; that's not one of ours. 

Tests pass for me locally. 

Let me check on that checkstyle issue.



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14654218#comment-14654218
 ] 

Andrew Purtell commented on HBASE-13825:


Valid checkstyle issues:
- ClusterID: unused import of com.google.protobuf.InvalidProtocolBufferException. Missed that one.
- HColumnDescriptor: unused import of com.google.protobuf.InvalidProtocolBufferException. Also missed this one.

Nothing else jumps out as relevant or related. New patches coming up.




[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-08-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14654166#comment-14654166
 ] 

Hadoop QA commented on HBASE-13825:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12748597/HBASE-13825-branch-1.patch
  against branch-1 branch at commit 931e77d4507e1650c452cefadda450e0bf3f0528.
  ATTACHMENT ID: 12748597

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3828 checkstyle errors (more than the master's current 3825 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14970//console

This message is automatically generated.



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-07-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14642732#comment-14642732
 ] 

Anoop Sam John commented on HBASE-13825:


HBASE-14076 related.



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-07-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14642140#comment-14642140
 ] 

Andrew Purtell commented on HBASE-13825:


Last week on the mailing list someone else wrote in with a problem similar to 
this one, where the common issue is hitting the static CodedInputStream limit in 
the client. The new case was deserializing HBase PB types in a MapReduce 
worker, something that can't be helped by server-side response limit options. 
I am going to pick up this issue next week and plan to address it with a site 
configuration option for adjusting the static CodedInputStream limit. 
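A site configuration option along those lines would presumably be read at client startup and applied via CodedInputStream.setSizeLimit(). A hypothetical hbase-site.xml fragment to make the idea concrete; the property name and value here are illustrative placeholders, not keys defined by any patch on this issue:

```xml
<!-- Hypothetical property name, for illustration only -->
<property>
  <name>hbase.ipc.protobuf.max.message.size</name>
  <!-- 256 MiB; protobuf's built-in default limit is 64 MiB -->
  <value>268435456</value>
</property>
```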



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-17 Thread Dev Lakhani (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589712#comment-14589712
 ] 

Dev Lakhani commented on HBASE-13825:
-

Hi [~mantonov], we don't have hbase.table.max.rowsize set, but we also don't see 
any RowTooBigExceptions thrown in the region server logs (which, unfortunately, 
I cannot send out).



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-17 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589611#comment-14589611
 ] 

Mikhail Antonov commented on HBASE-13825:
-

Wondering if you have regionserver logs for that event, by chance? Also curious 
whether you have the hbase.table.max.rowsize property set in the config.
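For context, hbase.table.max.rowsize caps the total size a single row may return to a Get or Scan before the region server throws RowTooBigException. If set, it would appear in hbase-site.xml along these lines (the 1 GB value shown is my understanding of the default, so treat it as an assumption):

```xml
<property>
  <name>hbase.table.max.rowsize</name>
  <!-- bytes; rows larger than this cause RowTooBigException on Get/Scan -->
  <value>1073741824</value>
</property>
```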



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-17 Thread Dev Lakhani (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589715#comment-14589715
 ] 

Dev Lakhani commented on HBASE-13825:
-

Sorry for the multiple postings; my internet connection was slow, so I retried 
adding the comment a few too many times.



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-06 Thread Dev Lakhani (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575960#comment-14575960
 ] 

Dev Lakhani commented on HBASE-13825:
-

This probably needs to be configurable as an HBase option, since we cannot 
change the client code.
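If such an option existed, it might be wired up through hbase-site.xml along 
the lines of the fragment below. To be clear, this is a hypothetical sketch: 
the property name shown is illustrative only and is not an actual HBase 
configuration key from this issue or any shipped patch.

```xml
<!-- Hypothetical example only: the property name below is illustrative
     and not an actual HBase configuration key. -->
<property>
  <name>hbase.ipc.max.response.size</name>
  <!-- 256 MB, up from protobuf's 64 MB default size limit -->
  <value>268435456</value>
</property>
```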



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-03 Thread Dev Lakhani (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14570472#comment-14570472
 ] 

Dev Lakhani commented on HBASE-13825:
-

Thanks for the suggestion [~apurtell]; this is what the stack trace suggests, 
but could you help with a code snippet? When you say change it in the client, 
do you mean the HBase client or the application client calling the get? I am 
only able/permitted to use pre-built HBase jars from Maven, so I cannot change 
HBase code in any way.

The error message's mention of CodedInputStream.setSizeLimit() suggests a 
static method, which does not exist. Furthermore, I have no instances of 
CodedInputStream in my application client, so where should I set this size 
limit?

Is it worth adding an HBase parameter for this?



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572025#comment-14572025
 ] 

Andrew Purtell commented on HBASE-13825:


bq. When you say change it in the client do you mean the Hbase client or...

The HBase client



[jira] [Commented] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-02 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14570288#comment-14570288
 ] 

Andrew Purtell commented on HBASE-13825:


One option is to use CodedInputStream#setSizeLimit in the client to effectively 
disable this check by setting it to Integer.MAX_VALUE.
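Since the call has to happen inside HBase's RPC layer rather than in 
application code, a small illustration of the mechanism may help. The sketch 
below is not HBase or protobuf code; it only models the pattern that 
CodedInputStream uses: an instance-level byte cap that parsing must not 
exceed, with a setter to raise it. Note that setSizeLimit is an instance 
method, which is why the static call hinted at by the error message does not 
exist.

```java
// Minimal sketch (not HBase/protobuf code) of the size-limit pattern behind
// "Protocol message was too large": a parser tracks how many bytes it has
// consumed and refuses to go past a configurable cap.
public class SizeLimitedParser {
    private final byte[] message;
    private int sizeLimit;
    private int position;

    public SizeLimitedParser(byte[] message, int sizeLimit) {
        this.message = message;
        this.sizeLimit = sizeLimit;
    }

    // Analogous to CodedInputStream#setSizeLimit: raise the cap before
    // parsing a message larger than the default (64 MB in protobuf).
    public void setSizeLimit(int limit) {
        this.sizeLimit = limit;
    }

    // Returns the next byte, or -1 at end of message; throws once the
    // cap is exceeded, mirroring the InvalidProtocolBufferException.
    public int read() {
        if (position >= message.length) {
            return -1;
        }
        if (position >= sizeLimit) {
            throw new IllegalStateException(
                "Message too large; use setSizeLimit() to raise the cap");
        }
        return message[position++] & 0xFF;
    }

    public static void main(String[] args) {
        byte[] large = new byte[128];
        SizeLimitedParser parser = new SizeLimitedParser(large, 64);
        // Effectively disable the check, as suggested above:
        parser.setSizeLimit(Integer.MAX_VALUE);
        int count = 0;
        while (parser.read() >= 0) {
            count++;
        }
        System.out.println(count); // prints 128
    }
}
```

In the real client the equivalent setter would have to be invoked on the 
CodedInputStream instance that RpcClient constructs for each response, which 
is why this cannot be done from application code using pre-built jars.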
