[ 
https://issues.apache.org/jira/browse/HBASE-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12783312#action_12783312
 ] 

Lars George commented on HBASE-1995:
------------------------------------

I just realized that the tests are partially broken. Looking at code constructs 
like this:

{code}
    // Null row (should NOT work)
    try {
      Put put = new Put((byte[])null);
      put.add(FAMILY, QUALIFIER, VALUE);
      ht.put(put);
      throw new IOException("Inserting a null row worked, should throw 
exception");
    } catch(Exception e) {}
{code}

I believe this is not right. I tried it in a small test app:

{code}
package org.apache.hadoop.hbase.client;

import java.io.IOException;

public class TestException {
  public static void main(String[] args) {
    System.out.println("start");
    try {
      System.out.println("in try/catch");
      throw new IOException("this is an IOE");
    } catch (Exception e) {}
    System.out.println("done");
  }
}
{code}

It prints

{code}
LarsMacBookPro:hbase-trunk-ro larsgeorge$ java -cp build/test org.apache.hadoop.hbase.client.TestException
start
in try/catch
done
{code}

which makes sense to me: the catch(Exception e) also swallows the IOException we 
throw to signal that the expected exception never occurred, so the test can never 
fail. To fix this we need to use the proper Assert.fail() method, like so:

{code}
    // Null row (should NOT work)
    try {
      Put put = new Put((byte[])null);
      put.add(FAMILY, QUALIFIER, VALUE);
      ht.put(put);
      fail("Inserting a null row worked, should throw exception");
    } catch(Exception e) {}
{code}
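
For reference, the reason fail() is not swallowed by the same catch block: JUnit's 
Assert.fail() throws an AssertionError (AssertionFailedError in JUnit 3), and both 
extend java.lang.Error rather than Exception, so catch(Exception e) does not catch 
it. A minimal standalone demonstration of that distinction:

{code}
public class TestFail {
  public static void main(String[] args) {
    try {
      // AssertionError extends java.lang.Error, not Exception,
      // so the catch block below cannot swallow it.
      throw new AssertionError("expected exception never occurred");
    } catch (Exception e) {
      System.out.println("swallowed"); // never reached
    }
  }
}
{code}

Running this terminates with the uncaught AssertionError instead of printing 
"swallowed".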

I have changed that in the TestFromClientSide class and in doing so found that I 
actually had a check wrong. I amended my test to set up a proper second table so 
that the lower maximum KV size is actually exercised (see the sketch below). 
Creating a new patch.
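
For illustration, a minimal sketch of what that amended test could look like. The 
table handle, the sizes, and the configuration key are my assumptions here, not 
necessarily what the patch ends up using:

{code}
    // Second table whose client configuration enforces a smaller
    // maximum KeyValue size (config key assumed for illustration).
    HBaseConfiguration conf2 = new HBaseConfiguration(conf);
    conf2.setInt("hbase.client.keyvalue.maxsize", 10 * 1024);
    HTable ht2 = new HTable(conf2, TABLE2);

    // A value well above the assumed 10K limit must be rejected client side.
    byte[] largeValue = new byte[128 * 1024];
    try {
      Put put = new Put(ROW);
      put.add(FAMILY, QUALIFIER, largeValue);
      ht2.put(put);
      fail("Inserting a value above the maximum KV size should throw exception");
    } catch (Exception e) {}
{code}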

> Add configurable max value size check
> -------------------------------------
>
>                 Key: HBASE-1995
>                 URL: https://issues.apache.org/jira/browse/HBASE-1995
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: client
>            Reporter: Lars George
>            Assignee: Lars George
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: HBASE-1995.patch
>
>
> After discussing with Michael, the issue was that if you have a region size 
> of, for example, 256M and you add a 1G value into a column, then that region 
> cannot split and the cluster can become lopsided. It also means that you have 
> a really large block and the memstore would need to handle the 1G of data. I 
> proposed to add a configurable upper boundary check that throws an exception 
> on the client side if the user tries to add a value larger than that limit. 
> Ideally it would be set to the maximum region size or less. 
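
For context, a minimal sketch of the kind of client-side check the description 
proposes. The class, method, and field names are illustrative assumptions, not 
the committed patch:

{code}
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;

public class KeyValueSizeCheck {
  // Hypothetical fixed limit; the actual patch would read this
  // from the client configuration instead.
  private static final int MAX_KEYVALUE_SIZE = 10 * 1024 * 1024;

  // Throws on the client side if any KeyValue in the Put exceeds the limit,
  // before the data is ever sent to a region server.
  public static void validatePut(final Put put) {
    for (Map.Entry<byte[], List<KeyValue>> entry :
        put.getFamilyMap().entrySet()) {
      for (KeyValue kv : entry.getValue()) {
        if (kv.getLength() > MAX_KEYVALUE_SIZE) {
          throw new IllegalArgumentException("KeyValue size too large: "
              + kv.getLength() + " > " + MAX_KEYVALUE_SIZE);
        }
      }
    }
  }
}
{code}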

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
