[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898902#comment-13898902
 ] 

Hudson commented on HBASE-10505:


SUCCESS: Integrated in hbase-0.96 #291 (See 
[https://builds.apache.org/job/hbase-0.96/291/])
HBASE-10505 Import.filterKv does not call Filter.filterRowKey. (larsh: rev 
1567524)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java


> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter and, more recently, 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.
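The contract described above can be illustrated with a minimal model. This is a hedged sketch, not the actual HBase Filter API: MiniFilter and shouldSkip are hypothetical stand-ins showing why filterRowKey must be consulted before filterKeyValue, since row-oriented filters set their per-row state in filterRowKey.

```java
// Minimal sketch of the Filter contract discussed above. MiniFilter and
// shouldSkip are hypothetical stand-ins, not the real HBase API: filters such
// as RowFilter, PrefixFilter, and InclusiveStopFilter decide per-row state in
// filterRowKey, so skipping that call leaves filterKeyValue misinformed.
interface MiniFilter {
    boolean filterRowKey(byte[] row);    // true = exclude the whole row
    boolean filterKeyValue(byte[] row);  // true = exclude this cell
}

public class ImportFilterSketch {
    // Correct ordering: consult filterRowKey first, then filterKeyValue.
    public static boolean shouldSkip(MiniFilter f, byte[] row) {
        if (f.filterRowKey(row)) {
            return true;                 // entire row filtered out
        }
        return f.filterKeyValue(row);
    }

    public static void main(String[] args) {
        // A toy prefix filter: keep only rows starting with 'a'.
        MiniFilter prefix = new MiniFilter() {
            public boolean filterRowKey(byte[] row) {
                return row.length == 0 || row[0] != 'a';
            }
            public boolean filterKeyValue(byte[] row) {
                return false;            // relies on filterRowKey having run
            }
        };
        System.out.println(shouldSkip(prefix, "abc".getBytes())); // false
        System.out.println(shouldSkip(prefix, "xyz".getBytes())); // true
    }
}
```

A caller that invokes only filterKeyValue (as the pre-fix Import did, per the description) would keep the "xyz" row that the toy prefix filter meant to drop.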



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898899#comment-13898899
 ] 

Hudson commented on HBASE-10505:


FAILURE: Integrated in HBase-0.94-JDK7 #47 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/47/])
HBASE-10505 Import.filterKv does not call Filter.filterRowKey. (larsh: rev 
1567522)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java


> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter and, more recently, 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Commented] (HBASE-10252) Don't write back to WAL/memstore when Increment amount is zero (mostly for query rather than update intention)

2014-02-11 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898896#comment-13898896
 ] 

Feng Honghua commented on HBASE-10252:
--

bq.The test presumes that even though the increment value is 0, if the cell 
does not exist yet, then the cell is created (with a value of 0). That is how 
it worked in 0.96 and previous. My guess is that you did not intend to remove 
this behavior? If that is the case, I'll make a small patch in a new issue to 
restore cell creation though the value is zero. Thanks boss.
This patch is meant to save the write to WAL/memstore when the increment value 
is 0, so 'if the increment value is 0 and the cell does not exist yet, then 
the cell won't be created' is a natural consequence of this patch.
I'm not familiar with asynchbase itself, but I wonder why it cares whether a 
cell exists after an increment operation. From a 'client' perspective, what 
really matters is whether the value it reads back after some increment is 
correct. For example, if a read immediately after a first increment with 
value=0 returns 0, the result is deemed correct; the client shouldn't care 
whether that cell exists or not. The value read is 0 under both scenarios: 
1) a non-existing cell and 2) an existing cell with value=0. If my 
understanding is correct, it seems the corresponding asynchbase test, rather 
than the HBase code, should be corrected here. :-)

> Don't write back to WAL/memstore when Increment amount is zero (mostly for 
> query rather than update intention)
> --
>
> Key: HBASE-10252
> URL: https://issues.apache.org/jira/browse/HBASE-10252
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10252-trunk-v0.patch, HBASE-10252-trunk-v1.patch
>
>
> When a user calls Increment with amount=0, we don't write the value back to 
> WAL or memstore: adding 0 yields a 'new' value identical to the original one.
> 1. If the user provides a 0 amount as a query rather than an update, this 
> fix is fine; that intention is the most likely case.
> 2. If the user provides a 0 amount as an update, this fix is also fine: 
> there is no need to touch the back-end value if it isn't changed.
> 3. In either case we return the correct value and keep subsequent query 
> results correct: if the 0-amount Increment is the first update, a query 
> retrieving a 0 value is equivalent to one retrieving nothing.
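The semantics under discussion can be sketched with a toy key-value store. This is a hedged illustration of the patch's intended behavior under stated assumptions, not the actual HRegion.increment code; ZeroIncrementSketch and its methods are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the behavior discussed above: a zero-amount increment
// skips the write path, so a previously absent cell stays absent. This is a
// toy model, not the HRegion.increment implementation.
public class ZeroIncrementSketch {
    private final Map<String, Long> store = new HashMap<>();

    public long increment(String key, long amount) {
        long current = store.getOrDefault(key, 0L);
        long next = current + amount;
        if (amount != 0) {
            store.put(key, next);   // write back only when the value changed
        }
        return next;                // returned value is correct either way
    }

    public boolean exists(String key) {
        return store.containsKey(key);
    }
}
```

Under these semantics, increment("k", 0) returns the correct value 0 while leaving the cell uncreated, which is exactly the behavior change at issue in the asynchbase test discussed in this thread.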





[jira] [Updated] (HBASE-10499) In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException

2014-02-11 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10499:
---

Attachment: HBASE-10499.patch

Attaching a patch that adds a log message when the memstore size is 0.

> In write heavy scenario one of the regions does not get flushed causing 
> RegionTooBusyException
> --
>
> Key: HBASE-10499
> URL: https://issues.apache.org/jira/browse/HBASE-10499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.98.1
>
> Attachments: HBASE-10499.patch, 
> hbase-root-regionserver-ip-10-93-128-92.zip, t1.dump, t2.dump
>
>
> I got this while testing the 0.98 RC, but I am not sure it is specific to 
> this version; it doesn't seem so to me.
> It is also similar to HBASE-5312 and HBASE-5568.
> Using 10 threads, I write to 4 RSs using YCSB. The created table has 200 
> regions. In one run with a 0.98 server and 0.98 client, the hlog count grew 
> and the system requested flushes for that many regions.
> One by one everything was flushed except one region, which remained 
> unflushed. The ripple effect of this on the client side:
> {code}
> com.yahoo.ycsb.DBException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 54 actions: RegionTooBusyException: 54 times,
> at com.yahoo.ycsb.db.HBaseClient.cleanup(HBaseClient.java:245)
> at com.yahoo.ycsb.DBWrapper.cleanup(DBWrapper.java:73)
> at com.yahoo.ycsb.ClientThread.run(Client.java:307)
> Caused by: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 54 actions: RegionTooBusyException: 54 times,
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:897)
> at 
> org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:961)
> at 
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1225)
> at com.yahoo.ycsb.db.HBaseClient.cleanup(HBaseClient.java:232)
> ... 2 more
> {code}
> On one of the RS
> {code}
> 2014-02-11 08:45:58,714 INFO  [regionserver60020.logRoller] wal.FSHLog: Too 
> many hlogs: logs=38, maxlogs=32; forcing flush of 23 regions(s): 
> 97d8ae2f78910cc5ded5fbb1ddad8492, d396b8a1da05c871edcb68a15608fdf2, 
> 01a68742a1be3a9705d574ad68fec1d7, 1250381046301e7465b6cf398759378e, 
> 127c133f47d0419bd5ab66675aff76d4, 9f01c5d25ddc6675f750968873721253, 
> 29c055b5690839c2fa357cd8e871741e, ca4e33e3eb0d5f8314ff9a870fc43463, 
> acfc6ae756e193b58d956cb71ccf0aa3, 187ea304069bc2a3c825bc10a59c7e84, 
> 0ea411edc32d5c924d04bf126fa52d1e, e2f9331fc7208b1b230a24045f3c869e, 
> d9309ca864055eddf766a330352efc7a, 1a71bdf457288d449050141b5ff00c69, 
> 0ba9089db28e977f86a27f90bbab9717, fdbb3242d3b673bbe4790a47bc30576f, 
> bbadaa1f0e62d8a8650080b824187850, b1a5de30d8603bd5d9022e09c574501b, 
> cc6a9fabe44347ed65e7c325faa72030, 313b17dbff2497f5041b57fe13fa651e, 
> 6b788c498503ddd3e1433a4cd3fb4e39, 3d71274fe4f815882e9626e1cfa050d1, 
> acc43e4b42c1a041078774f4f20a3ff5
> ..
> 2014-02-11 08:47:49,580 INFO  [regionserver60020.logRoller] wal.FSHLog: Too 
> many hlogs: logs=53, maxlogs=32; forcing flush of 2 regions(s): 
> fdbb3242d3b673bbe4790a47bc30576f, 6b788c498503ddd3e1433a4cd3fb4e39
> {code}
> {code}
> 2014-02-11 09:42:44,237 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a 
> delay of 16689
> 2014-02-11 09:42:44,237 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a 
> delay of 15868
> 2014-02-11 09:42:54,238 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a 
> delay of 20847
> 2014-02-11 09:42:54,238 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a 
> delay of 20099
> 2014-02-11 09:43:04,238 INFO  [regions

[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898887#comment-13898887
 ] 

Hudson commented on HBASE-10487:


FAILURE: Integrated in HBase-0.98 #151 (See 
[https://builds.apache.org/job/HBase-0.98/151/])
HBASE-10487 Avoid allocating new KeyValue and according bytes-copying for 
appended kvs which don't have existing values (Honghua) (tedyu: rev 1567514)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated and the corresponding bytes 
> copied regardless of whether an existing kv is present for the appended 
> cells. We can improve this by avoiding the allocation of a new KeyValue and 
> the corresponding bytes-copying for kvs which don't have existing (old) 
> values: reuse the passed-in kv and only update its timestamp to 'now' (its 
> original timestamp is the latest, so it can be updated).
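The optimization described above can be sketched as follows. This is a hedged, simplified illustration: SimpleCell is a hypothetical stand-in, not the KeyValue class, and the real HRegion.append logic differs in detail.

```java
// Hedged sketch of the optimization described above: when no existing cell is
// found for an appended value, reuse the caller's cell and just stamp it with
// the current time, instead of allocating and copying a new one. SimpleCell
// is a stand-in; the real code works with KeyValue inside HRegion.append.
public class AppendReuseSketch {
    public static class SimpleCell {
        public final byte[] value;
        public long timestamp;
        public SimpleCell(byte[] value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    public static SimpleCell append(SimpleCell existing, SimpleCell incoming, long now) {
        if (existing == null) {
            incoming.timestamp = now;        // reuse: no allocation, no copy
            return incoming;
        }
        // Existing value present: concatenation requires a fresh buffer.
        byte[] merged = new byte[existing.value.length + incoming.value.length];
        System.arraycopy(existing.value, 0, merged, 0, existing.value.length);
        System.arraycopy(incoming.value, 0, merged, existing.value.length,
                incoming.value.length);
        return new SimpleCell(merged, now);  // allocate only when merging
    }
}
```

The allocation and copy happen only on the merge path; the first-write path hands back the caller's own cell with an updated timestamp.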





[jira] [Commented] (HBASE-10495) upgrade script is printing usage two times with help option.

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898886#comment-13898886
 ] 

Hudson commented on HBASE-10495:


FAILURE: Integrated in HBase-0.98 #151 (See 
[https://builds.apache.org/job/HBase-0.98/151/])
HBASE-10495 upgrade script is printing usage two times with help 
option.(Rajesh) (rajeshbabu: rev 1567495)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java


> upgrade script is printing usage two times with help option.
> 
>
> Key: HBASE-10495
> URL: https://issues.apache.org/jira/browse/HBASE-10495
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Affects Versions: 0.96.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10495.patch
>
>
> While testing the 0.98 RC, I found that usage is printed twice with the help 
> option.
> {code}
> HOST-10-18-91-14:/home/rajeshbabu/98RC3/hbase-0.98.0-hadoop2/bin # ./hbase 
> upgrade -h
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> {code}
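A plausible shape of this bug, as a hedged sketch: the help branch prints usage and then falls through to a common exit path that prints it again, and the fix is to stop after the first print. Class and method names here are illustrative, not the actual UpgradeTo96 code.

```java
// Hedged sketch of the double-print pattern seen above: -h triggers the help
// branch and then falls through to a path that prints usage again. The fix is
// the early return. Names are illustrative, not the actual UpgradeTo96 code.
public class UsageOnceSketch {
    static int usagePrints = 0;

    static void printUsage() {
        usagePrints++;
        System.out.println("usage: $bin/hbase upgrade -check [-dir DIR]|-execute");
    }

    public static void run(String arg) {
        if (arg.equals("-h") || arg.equals("--help")) {
            printUsage();
            return;              // the fix: stop here instead of falling through
        }
        if (!arg.equals("-check") && !arg.equals("-execute")) {
            printUsage();        // unknown option: print usage once here
        }
    }

    public static void main(String[] args) {
        run("-h");
        System.out.println("usage printed " + usagePrints + " time(s)");
    }
}
```

Without the early return, the help branch and the unknown-option branch would both execute for -h, reproducing the doubled output shown in the report.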





[jira] [Commented] (HBASE-10252) Don't write back to WAL/memstore when Increment amount is zero (mostly for query rather than update intention)

2014-02-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898875#comment-13898875
 ] 

stack commented on HBASE-10252:
---

So, this patch breaks an asynchbase test (see below -- thanks to [~tsuna] for 
help debugging). The test presumes that even though the increment value is 0, 
if the cell does not exist yet, then the cell is created (with a value of 0). 
That is how it worked in 0.96 and previous versions.

[~fenghh] My guess is that you did not intend to remove this behavior? If that 
is the case, I'll make a small patch in a new issue to restore cell creation 
even though the value is zero. Thanks boss.


{code}
21:28:57.922 [main] ERROR org.hbase.async.test.TestIntegration - Test failed: 
incrementCoalescingWithZeroSumAmount
java.lang.AssertionError: List was expected to contain 1 items but was found to 
contain 0: []
at 
org.hbase.async.test.TestIntegration.assertSizeIs(TestIntegration.java:851) 
[build/:na]
at 
org.hbase.async.test.TestIntegration.incrementCoalescingWithZeroSumAmount(TestIntegration.java:595)
 [build/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[na:1.7.0_45]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
~[na:1.7.0_45]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.7.0_45]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 ~[junit-4.11.jar:na]
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 ~[junit-4.11.jar:na]
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 ~[junit-4.11.jar:na]
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 ~[junit-4.11.jar:na]
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
~[junit-4.11.jar:na]
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
~[junit-4.11.jar:na]
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) 
[junit-4.11.jar:na]
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 [junit-4.11.jar:na]
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 [junit-4.11.jar:na]
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) 
[junit-4.11.jar:na]
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) 
[junit-4.11.jar:na]
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) 
[junit-4.11.jar:na]
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) 
[junit-4.11.jar:na]
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) 
[junit-4.11.jar:na]
at org.junit.runners.ParentRunner.run(ParentRunner.java:309) 
[junit-4.11.jar:na]
at org.junit.runner.JUnitCore.run(JUnitCore.java:160) 
[junit-4.11.jar:na]
at org.junit.runner.JUnitCore.run(JUnitCore.java:138) 
[junit-4.11.jar:na]
at org.hbase.async.test.TestIntegration.main(TestIntegration.java:133) 
[build/:na]
{code}

> Don't write back to WAL/memstore when Increment amount is zero (mostly for 
> query rather than update intention)
> --
>
> Key: HBASE-10252
> URL: https://issues.apache.org/jira/browse/HBASE-10252
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10252-trunk-v0.patch, HBASE-10252-trunk-v1.patch
>
>
> When a user calls Increment with amount=0, we don't write the value back to 
> WAL or memstore: adding 0 yields a 'new' value identical to the original one.
> 1. If the user provides a 0 amount as a query rather than an update, this 
> fix is fine; that intention is the most likely case.
> 2. If the user provides a 0 amount as an update, this fix is also fine: 
> there is no need to touch the back-end value if it isn't changed.
> 3. In either case we return the correct value and keep subsequent query 
> results correct: if the 0-amount Increment is the first update, a query 
> retrieving a 0 value is equivalent to one retrieving nothing.





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898866#comment-13898866
 ] 

Hudson commented on HBASE-10505:


FAILURE: Integrated in HBase-0.94 #1284 (See 
[https://builds.apache.org/job/HBase-0.94/1284/])
HBASE-10505 Import.filterKv does not call Filter.filterRowKey. (larsh: rev 
1567522)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java


> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter and, more recently, 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Updated] (HBASE-10509) TestRowProcessorEndpoint fails with missing required field row_processor_result

2014-02-11 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10509:
---

Status: Patch Available  (was: Open)

> TestRowProcessorEndpoint fails with missing required field 
> row_processor_result
> ---
>
> Key: HBASE-10509
> URL: https://issues.apache.org/jira/browse/HBASE-10509
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0
> Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 
> Compressed References 20131114_175264 (JIT enabled, AOT enabled)
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: 10509.patch
>
>
> Seen with IBM JDK 7:
> {noformat}
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
> required fields: row_processor_result
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1301)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1245)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5482)
> {noformat}





[jira] [Updated] (HBASE-10509) TestRowProcessorEndpoint fails with missing required field row_processor_result

2014-02-11 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10509:
---

Attachment: 10509.patch

Either modifying the message definition so the field is optional, or 
initializing the builder whether or not there is an exception, resolves the 
observed error. The attached patch does the latter. I have not dug deeper, but 
it seems a correct change standing alone.
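The fix pattern described here (set the required response field unconditionally, even when the processor throws) can be sketched with simplified types. This is a hedged illustration: ResponseBuilder and execService are stand-ins, not the generated RowProcessorProtos classes or HRegion.execService.

```java
// Hedged sketch of the fix described above: the response's "required" field is
// set on every path, so building the message never fails with
// UninitializedMessageException. Types are simplified stand-ins for the
// protobuf builder, not the generated RowProcessorProtos classes.
public class AlwaysInitializeSketch {
    static class ResponseBuilder {
        private byte[] result;                 // models the required field
        ResponseBuilder setResult(byte[] r) { this.result = r; return this; }
        byte[] build() {
            if (result == null) {              // what protobuf enforces
                throw new IllegalStateException(
                    "missing required field: row_processor_result");
            }
            return result;
        }
    }

    public static byte[] execService(boolean processorFails) {
        ResponseBuilder builder = new ResponseBuilder();
        byte[] result = new byte[0];           // empty default on failure
        try {
            if (processorFails) {
                throw new RuntimeException("processor failed");
            }
            result = "ok".getBytes();
        } catch (RuntimeException e) {
            // sketch swallows it; real code would record the exception
        }
        builder.setResult(result);             // set unconditionally: the fix
        return builder.build();
    }
}
```

With the unconditional setResult, the error path returns an empty result instead of throwing at build time, matching the "initialize the builder whether there is an exception or not" approach.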

> TestRowProcessorEndpoint fails with missing required field 
> row_processor_result
> ---
>
> Key: HBASE-10509
> URL: https://issues.apache.org/jira/browse/HBASE-10509
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0
> Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 
> Compressed References 20131114_175264 (JIT enabled, AOT enabled)
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: 10509.patch
>
>
> Seen with IBM JDK 7:
> {noformat}
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
> required fields: row_processor_result
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1301)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1245)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5482)
> {noformat}





[jira] [Created] (HBASE-10509) TestRowProcessorEndpoint fails with missing required field row_processor_result

2014-02-11 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10509:
--

 Summary: TestRowProcessorEndpoint fails with missing required 
field row_processor_result
 Key: HBASE-10509
 URL: https://issues.apache.org/jira/browse/HBASE-10509
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
 Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed 
References 20131114_175264 (JIT enabled, AOT enabled)
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor


Seen with IBM JDK 7:

{noformat}
Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
required fields: row_processor_result
at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
at 
org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1301)
at 
org.apache.hadoop.hbase.protobuf.generated.RowProcessorProtos$ProcessResponse$Builder.build(RowProcessorProtos.java:1245)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5482)
{noformat}






[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898850#comment-13898850
 ] 

Hudson commented on HBASE-10487:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #139 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/139/])
HBASE-10487 Avoid allocating new KeyValue and according bytes-copying for 
appended kvs which don't have existing values (Honghua) (tedyu: rev 1567514)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated and the corresponding bytes 
> copied regardless of whether an existing kv is present for the appended 
> cells. We can improve this by avoiding the allocation of a new KeyValue and 
> the corresponding bytes-copying for kvs which don't have existing (old) 
> values: reuse the passed-in kv and only update its timestamp to 'now' (its 
> original timestamp is the latest, so it can be updated).





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898851#comment-13898851
 ] 

Hudson commented on HBASE-10505:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #20 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/20/])
HBASE-10505 Import.filterKv does not call Filter.filterRowKey. (larsh: rev 
1567522)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java


> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter and, more recently, 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Commented] (HBASE-10495) upgrade script is printing usage two times with help option.

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898849#comment-13898849
 ] 

Hudson commented on HBASE-10495:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #139 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/139/])
HBASE-10495 upgrade script is printing usage two times with help 
option.(Rajesh) (rajeshbabu: rev 1567495)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java


> upgrade script is printing usage two times with help option.
> 
>
> Key: HBASE-10495
> URL: https://issues.apache.org/jira/browse/HBASE-10495
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Affects Versions: 0.96.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10495.patch
>
>
> While testing the 0.98 RC, I found that usage is printed twice with the help 
> option.
> {code}
> HOST-10-18-91-14:/home/rajeshbabu/98RC3/hbase-0.98.0-hadoop2/bin # ./hbase 
> upgrade -h
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> {code}





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898842#comment-13898842
 ] 

Hudson commented on HBASE-10505:


SUCCESS: Integrated in HBase-0.94-security #409 (See 
[https://builds.apache.org/job/HBase-0.94-security/409/])
HBASE-10505 Import.filterKv does not call Filter.filterRowKey. (larsh: rev 
1567522)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java


> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter and, more recently, 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Commented] (HBASE-10499) In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException

2014-02-11 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898838#comment-13898838
 ] 

ramkrishna.s.vasudevan commented on HBASE-10499:


Not able to reproduce this again.

> In write heavy scenario one of the regions does not get flushed causing 
> RegionTooBusyException
> --
>
> Key: HBASE-10499
> URL: https://issues.apache.org/jira/browse/HBASE-10499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.98.1
>
> Attachments: hbase-root-regionserver-ip-10-93-128-92.zip, t1.dump, 
> t2.dump
>
>
> I got this while testing the 0.98 RC, but I am not sure it is specific to 
> this version; it doesn't seem so to me.
> It is also similar to HBASE-5312 and HBASE-5568.
> Using 10 threads I do writes to 4 RSs using YCSB. The table created has 200 
> regions.  In one run with a 0.98 server and 0.98 client the number of hlogs 
> grew and the system requested flushes for that many regions.
> One by one every region was flushed except one, which remained 
> unflushed.  The ripple effect of this on the client side:
> {code}
> com.yahoo.ycsb.DBException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 54 actions: RegionTooBusyException: 54 times,
> at com.yahoo.ycsb.db.HBaseClient.cleanup(HBaseClient.java:245)
> at com.yahoo.ycsb.DBWrapper.cleanup(DBWrapper.java:73)
> at com.yahoo.ycsb.ClientThread.run(Client.java:307)
> Caused by: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 54 actions: RegionTooBusyException: 54 times,
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:897)
> at 
> org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:961)
> at 
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1225)
> at com.yahoo.ycsb.db.HBaseClient.cleanup(HBaseClient.java:232)
> ... 2 more
> {code}
> On one of the RS
> {code}
> 2014-02-11 08:45:58,714 INFO  [regionserver60020.logRoller] wal.FSHLog: Too 
> many hlogs: logs=38, maxlogs=32; forcing flush of 23 regions(s): 
> 97d8ae2f78910cc5ded5fbb1ddad8492, d396b8a1da05c871edcb68a15608fdf2, 
> 01a68742a1be3a9705d574ad68fec1d7, 1250381046301e7465b6cf398759378e, 
> 127c133f47d0419bd5ab66675aff76d4, 9f01c5d25ddc6675f750968873721253, 
> 29c055b5690839c2fa357cd8e871741e, ca4e33e3eb0d5f8314ff9a870fc43463, 
> acfc6ae756e193b58d956cb71ccf0aa3, 187ea304069bc2a3c825bc10a59c7e84, 
> 0ea411edc32d5c924d04bf126fa52d1e, e2f9331fc7208b1b230a24045f3c869e, 
> d9309ca864055eddf766a330352efc7a, 1a71bdf457288d449050141b5ff00c69, 
> 0ba9089db28e977f86a27f90bbab9717, fdbb3242d3b673bbe4790a47bc30576f, 
> bbadaa1f0e62d8a8650080b824187850, b1a5de30d8603bd5d9022e09c574501b, 
> cc6a9fabe44347ed65e7c325faa72030, 313b17dbff2497f5041b57fe13fa651e, 
> 6b788c498503ddd3e1433a4cd3fb4e39, 3d71274fe4f815882e9626e1cfa050d1, 
> acc43e4b42c1a041078774f4f20a3ff5
> ..
> 2014-02-11 08:47:49,580 INFO  [regionserver60020.logRoller] wal.FSHLog: Too 
> many hlogs: logs=53, maxlogs=32; forcing flush of 2 regions(s): 
> fdbb3242d3b673bbe4790a47bc30576f, 6b788c498503ddd3e1433a4cd3fb4e39
> {code}
> {code}
> 2014-02-11 09:42:44,237 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a 
> delay of 16689
> 2014-02-11 09:42:44,237 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a 
> delay of 15868
> 2014-02-11 09:42:54,238 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a 
> delay of 20847
> 2014-02-11 09:42:54,238 INFO  [regionserver60020.periodicFlusher] 
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting 
> flush for region 
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a 
> delay of 20099
> 2014-02-11 09:43:04,238 INFO  [regionserver60020.peri

[jira] [Commented] (HBASE-10495) upgrade script is printing usage two times with help option.

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898823#comment-13898823
 ] 

Hudson commented on HBASE-10495:


FAILURE: Integrated in HBase-TRUNK #4912 (See 
[https://builds.apache.org/job/HBase-TRUNK/4912/])
HBASE-10495 upgrade script is printing usage two times with help 
option.(Rajesh) (rajeshbabu: rev 1567493)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java


> upgrade script is printing usage two times with help option.
> 
>
> Key: HBASE-10495
> URL: https://issues.apache.org/jira/browse/HBASE-10495
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Affects Versions: 0.96.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10495.patch
>
>
> while testing 0.98 RC found usage is printing two times with help option.
> {code}
> HOST-10-18-91-14:/home/rajeshbabu/98RC3/hbase-0.98.0-hadoop2/bin # ./hbase 
> upgrade -h
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> {code}
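A plausible shape for this double print, and for the fix, is a help branch that prints usage and then falls through to a shared exit path that prints it again. The sketch below is illustrative only; the names are not the actual UpgradeTo96 code:

```java
// Illustrative sketch of the double-print bug: handling -h prints usage,
// then execution falls through to a shared exit path that prints it again.
public class UsageOnce {
    public static int prints = 0;

    public static void printUsage() {
        prints++;
        System.out.println("usage: $bin/hbase upgrade -check [-dir DIR]|-execute");
    }

    // Buggy flow: usage ends up printed twice for -h.
    public static void runBuggy(String arg) {
        if ("-h".equals(arg)) {
            printUsage();       // help requested
        }
        printUsage();           // shared "bad/unknown args" exit path
    }

    // Fixed flow: return right after handling -h so the shared path
    // is only reached for genuinely bad arguments.
    public static void runFixed(String arg) {
        if ("-h".equals(arg)) {
            printUsage();
            return;
        }
        printUsage();
    }
}
```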





[jira] [Updated] (HBASE-10508) Backport HBASE-10365 'HBaseFsck should clean up connection properly when repair is completed' to 0.94 and 0.96

2014-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10508:
---

Attachment: 10508-0.94.txt

> Backport HBASE-10365 'HBaseFsck should clean up connection properly when 
> repair is completed' to 0.94 and 0.96
> --
>
> Key: HBASE-10508
> URL: https://issues.apache.org/jira/browse/HBASE-10508
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10508-0.94.txt, 10508-0.96.txt
>
>
> At the end of the exec() method, connections to the cluster are not properly 
> released.
> Connections should be released upon completion of repair.
> This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
> release candidate is available for download'
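The fix presumably amounts to releasing the connection in a finally block so it is closed however the repair ends. A sketch with a stand-in connection type, not the real HBaseFsck code:

```java
// Sketch: guarantee the cluster connection is released when repair
// completes, even if the repair itself throws. HConnection is a stand-in.
public class FsckCleanupSketch {
    public static class HConnection implements AutoCloseable {
        public boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Returns the connection so callers can observe that it was closed
    // regardless of how the repair ended.
    public static HConnection exec(boolean repairThrows) {
        HConnection conn = new HConnection();
        try {
            if (repairThrows) {
                throw new RuntimeException("repair failed");
            }
            // ... run repairs against the cluster ...
        } catch (RuntimeException e) {
            // hbck would log and surface this; the point is the finally below
        } finally {
            conn.close(); // always release the connection
        }
        return conn;
    }
}
```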





[jira] [Commented] (HBASE-10508) Backport HBASE-10365 'HBaseFsck should clean up connection properly when repair is completed' to 0.94 and 0.96

2014-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898822#comment-13898822
 ] 

Ted Yu commented on HBASE-10508:


Comments from HBASE-10365 :

https://issues.apache.org/jira/browse/HBASE-10365?focusedCommentId=13880128&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13880128

https://issues.apache.org/jira/browse/HBASE-10365?focusedCommentId=13880749&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13880749

> Backport HBASE-10365 'HBaseFsck should clean up connection properly when 
> repair is completed' to 0.94 and 0.96
> --
>
> Key: HBASE-10508
> URL: https://issues.apache.org/jira/browse/HBASE-10508
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10508-0.96.txt
>
>
> At the end of the exec() method, connections to the cluster are not properly 
> released.
> Connections should be released upon completion of repair.
> This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
> release candidate is available for download'





[jira] [Commented] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898821#comment-13898821
 ] 

Hadoop QA commented on HBASE-10506:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12628426/HBASE-10506-trunk.txt
  against trunk revision .
  ATTACHMENT ID: 12628426

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8667//console

This message is automatically generated.

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10506-0.94.txt, HBASE-10506-trunk.txt
>
>
> In the current HBase RPC implementation there is no connection double-check 
> just before the "call" is invoked. Given a GC pause, OS scheduling, or a 
> full call queue (e.g. the server side is slow or hung), combined with a 
> small RPC timeout on the client side, it is possible that by the time the 
> request is taken from the call queue the client connection is already lost. 
> We should add some fail-fast code before the real "call" is invoked; 
> otherwise the server just wastes resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this 
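The proposed fail-fast can be sketched as a connection liveness check before the handler runs the call. `Connection` and `Call` below are stand-ins for the server's RPC types, not the actual HBaseServer code:

```java
// Sketch: skip a queued call whose client connection has already gone away,
// instead of doing the work and then failing on the response write.
public class FailFastSketch {
    public static class Connection { public boolean open = true; }

    public static class Call {
        public final Connection conn;
        public boolean executed = false;
        public Call(Connection c) { conn = c; }
    }

    // Returns true only if the call was actually executed.
    public static boolean process(Call call) {
        if (!call.conn.open) {
            // Client is gone: executing the call would only burn server-side
            // resources, and the response could never be delivered anyway.
            return false;
        }
        call.executed = true; // ... the real RPC method would run here ...
        return true;
    }
}
```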

[jira] [Updated] (HBASE-10508) Backport HBASE-10365 'HBaseFsck should clean up connection properly when repair is completed' to 0.94 and 0.96

2014-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10508:
---

Attachment: 10508-0.96.txt

> Backport HBASE-10365 'HBaseFsck should clean up connection properly when 
> repair is completed' to 0.94 and 0.96
> --
>
> Key: HBASE-10508
> URL: https://issues.apache.org/jira/browse/HBASE-10508
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10508-0.96.txt
>
>
> At the end of the exec() method, connections to the cluster are not properly 
> released.
> Connections should be released upon completion of repair.
> This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
> release candidate is available for download'





[jira] [Created] (HBASE-10508) Backport HBASE-10365 'HBaseFsck should clean up connection properly when repair is completed' to 0.94 and 0.96

2014-02-11 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10508:
--

 Summary: Backport HBASE-10365 'HBaseFsck should clean up 
connection properly when repair is completed' to 0.94 and 0.96
 Key: HBASE-10508
 URL: https://issues.apache.org/jira/browse/HBASE-10508
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.2, 0.94.17


At the end of the exec() method, connections to the cluster are not properly 
released.

Connections should be released upon completion of repair.

This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
release candidate is available for download'





[jira] [Commented] (HBASE-10500) Some tools OOM when BucketCache is enabled

2014-02-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898800#comment-13898800
 ] 

Nick Dimiduk commented on HBASE-10500:
--

IntegrationTestBulkLoad and IntegrationTestImportTsv both pass here. Lacking 
objection, will commit tomorrow.

> Some tools OOM when BucketCache is enabled
> --
>
> Key: HBASE-10500
> URL: https://issues.apache.org/jira/browse/HBASE-10500
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.96.0, 0.99.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: HBASE-10500.00.patch, HBASE-10500.01.patch
>
>
> Running {{hbck --repair}} or {{LoadIncrementalHFiles}} when BucketCache is 
> enabled in offheap mode can cause OOM. This is apparently because 
> {{bin/hbase}} does not include $HBASE_REGIONSERVER_OPTS for these tools. This 
> results in HRegion or HFileReaders initialized with a CacheConfig that 
> doesn't have the necessary Direct Memory.
> Possible solutions include:
>  - disable blockcache in the config used by hbck when running its repairs
>  - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments
> I'm leaning toward the former because it's possible that hbck is run on a 
> host with a different hardware profile than the RS.
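The first option could amount to overriding the block cache size in the configuration hbck uses, e.g. the snippet below (illustrative; the actual patch may take a different route, and a value of 0 for `hfile.block.cache.size` is understood to disable the block cache):

```xml
<!-- Hypothetical override for the config hbck runs with: no block cache,
     so no direct memory is reserved on the hbck host. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0</value>
</property>
```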





[jira] [Updated] (HBASE-10482) ReplicationSyncUp doesn't clean up its ZK, needed for tests

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10482:
--

Fix Version/s: 0.96.2

> ReplicationSyncUp doesn't clean up its ZK, needed for tests
> ---
>
> Key: HBASE-10482
> URL: https://issues.apache.org/jira/browse/HBASE-10482
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.96.1, 0.94.16
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: HBASE-10249.patch
>
>
> TestReplicationSyncUpTool failed again:
> https://builds.apache.org/job/HBase-TRUNK/4895/testReport/junit/org.apache.hadoop.hbase.replication/TestReplicationSyncUpTool/testSyncUpTool/
> It's not super obvious why only one of the two tables is replicated, the test 
> could use some more logging, but I understand it this way:
> The first ReplicationSyncUp gets started and for some reason it cannot 
> replicate the data:
> {noformat}
> 2014-02-06 21:32:19,811 INFO  [Thread-1372] 
> regionserver.ReplicationSourceManager(203): Current list of replicators: 
> [1391722339091.SyncUpTool.replication.org,1234,1, 
> quirinus.apache.org,37045,1391722237951, 
> quirinus.apache.org,33502,1391722238125] other RSs: []
> 2014-02-06 21:32:19,811 INFO  [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(231): Replicating 
> db42e7fc-7f29-4038-9292-d85ea8b9994b -> 783c0ab2-4ff9-4dc0-bb38-86bf31d1d817
> 2014-02-06 21:32:19,892 TRACE [Thread-1372.replicationSource,2] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 1
> 2014-02-06 21:32:19,911 TRACE [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 1
> 2014-02-06 21:32:20,094 TRACE [Thread-1372.replicationSource,2] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 2
> ...
> 2014-02-06 21:32:23,414 TRACE [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 8
> 2014-02-06 21:32:23,673 INFO  [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(169): Moving 
> quirinus.apache.org,37045,1391722237951's hlogs to my queue
> 2014-02-06 21:32:23,768 DEBUG [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(396): Creating 
> quirinus.apache.org%2C37045%2C1391722237951.1391722243779 with data 10803
> 2014-02-06 21:32:23,842 DEBUG [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(396): Creating 
> quirinus.apache.org%2C37045%2C1391722237951.1391722243779 with data 10803
> 2014-02-06 21:32:24,297 TRACE [Thread-1372.replicationSource,2] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 9
> 2014-02-06 21:32:24,314 TRACE [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 9
> {noformat}
> Finally it gives up:
> {noformat}
> 2014-02-06 21:32:30,873 DEBUG [Thread-1372] 
> replication.TestReplicationSyncUpTool(323): SyncUpAfterDelete failed at retry 
> = 0, with rowCount_ht1TargetPeer1 =100 and rowCount_ht2TargetAtPeer1 =200
> {noformat}
> The syncUp tool has an ID you can follow, grep for 
> syncupReplication1391722338885 or just the timestamp, and you can see it 
> doing things after that. The reason is that the tool closes the 
> ReplicationSourceManager but not the ZK connection, so events _still_ come in 
> and NodeFailoverWorker _still_ tries to recover queues but then there's 
> nothing to process them.
> Later in the logs you can see:
> {noformat}
> 2014-02-06 21:32:37,381 INFO  [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(169): Moving 
> quirinus.apache.org,33502,1391722238125's hlogs to my queue
> 2014-02-06 21:32:37,567 INFO  [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(239): Won't transfer the queue, another 
> RS took care of it because of: KeeperErrorCode = NoNode for 
> /1/replication/rs/quirinus.apache.org,33502,1391722238125/lock
> {noformat}
> There shouldn't be any racing, but now someone has already moved 
> "quirinus.apache.org,33502,1391722238125" away.
> FWIW I can't even make the test fail on my machine so I'm not 100% sure 
> closing the ZK connection fixes the issue, but at least it's the right thing 
> to do.
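The suggested fix of closing the ZK connection when the tool shuts down has a natural try-with-resources shape. `ZkConnection` below is a stand-in, not the real recoverable ZooKeeper wrapper:

```java
// Sketch: tie the ZK connection's lifetime to the sync-up run itself, so no
// watcher events arrive after the ReplicationSourceManager is closed.
public class SyncUpCloseSketch {
    public static class ZkConnection implements AutoCloseable {
        public boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static ZkConnection runSyncUp() {
        ZkConnection zk = new ZkConnection();
        try (ZkConnection c = zk) {
            // ... replicate queued hlogs, then shut down the source manager ...
        }
        return zk; // closed by the time we return, even if replication threw
    }
}
```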





[jira] [Updated] (HBASE-10507) Proper filter tests for TestImportExport

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10507:
--

Fix Version/s: (was: 0.94.17)
   0.94.18
   0.99.0
   0.98.1

> Proper filter tests for TestImportExport
> 
>
> Key: HBASE-10507
> URL: https://issues.apache.org/jira/browse/HBASE-10507
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
>
> See parent. TestImportExport.testWithFilter used to pass by accident (until 
> the parent was fixed, and until very recently also in trunk).
> This is as simple as adding some non-matching rows to the tests. Unlike the 
> parent, this should be added to all branches.





[jira] [Updated] (HBASE-9218) HBase shell does not allow to change/assign custom table-column families attributes

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9218:
-

Fix Version/s: (was: 0.94.17)
   0.94.18

> HBase shell does not allow to change/assign custom table-column families 
> attributes
> ---
>
> Key: HBASE-9218
> URL: https://issues.apache.org/jira/browse/HBASE-9218
> Project: HBase
>  Issue Type: Bug
>  Components: shell, Usability
>Affects Versions: 0.94.6.1
>Reporter: Vladimir Rodionov
> Fix For: 0.94.18
>
>
> HBase shell. In 0.94.6.1 an attempt to assign/change a custom table or CF 
> attribute does not throw any exception but has no effect. The same code works 
> fine via the Java API (on HTableDescriptor or HColumnDescriptor).
> This is a short shell session snippet:
> {code}
> hbase(main):009:0> disable 'T'
> 0 row(s) in 18.0730 seconds
> hbase(main):010:0> alter 'T', NAME => 'df', 'FAKE' => '10'
> Updating all regions with the new schema...
> 5/5 regions updated.
> Done.
> 0 row(s) in 2.2900 seconds
> hbase(main):011:0> enable 'T'
> 0 row(s) in 18.7140 seconds
> hbase(main):012:0> describe 'T'
> DESCRIPTION   
>  ENABLED
>  {NAME => 'T', FAMILIES => [{NAME => 'df', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER = true
>  > 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'GZ', 
> MIN_VERSIONS => '0', TTL => '2147483647', K
>  EEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'true', 
> ENCODE_ON_DISK => 'true', BLOCKCACHE => 'tru
>  e'}]}
> {code}
> As you can see, the new attribute 'FAKE' has not been added to column family 
> 'df'.





[jira] [Updated] (HBASE-9218) HBase shell does not allow to change/assign custom table-column families attributes

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9218:
-

Priority: Major  (was: Critical)

> HBase shell does not allow to change/assign custom table-column families 
> attributes
> ---
>
> Key: HBASE-9218
> URL: https://issues.apache.org/jira/browse/HBASE-9218
> Project: HBase
>  Issue Type: Bug
>  Components: shell, Usability
>Affects Versions: 0.94.6.1
>Reporter: Vladimir Rodionov
> Fix For: 0.94.18
>
>
> HBase shell. In 0.94.6.1 an attempt to assign/change a custom table or CF 
> attribute does not throw any exception but has no effect. The same code works 
> fine via the Java API (on HTableDescriptor or HColumnDescriptor).
> This is a short shell session snippet:
> {code}
> hbase(main):009:0> disable 'T'
> 0 row(s) in 18.0730 seconds
> hbase(main):010:0> alter 'T', NAME => 'df', 'FAKE' => '10'
> Updating all regions with the new schema...
> 5/5 regions updated.
> Done.
> 0 row(s) in 2.2900 seconds
> hbase(main):011:0> enable 'T'
> 0 row(s) in 18.7140 seconds
> hbase(main):012:0> describe 'T'
> DESCRIPTION   
>  ENABLED
>  {NAME => 'T', FAMILIES => [{NAME => 'df', DATA_BLOCK_ENCODING => 'NONE', 
> BLOOMFILTER = true
>  > 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'GZ', 
> MIN_VERSIONS => '0', TTL => '2147483647', K
>  EEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'true', 
> ENCODE_ON_DISK => 'true', BLOCKCACHE => 'tru
>  e'}]}
> {code}
> As you can see, the new attribute 'FAKE' has not been added to column family 
> 'df'.





[jira] [Resolved] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-10505.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to 0.94 and 0.96.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but it does not call 
> filterRowKey at all. That throws off some Filters (such as RowFilter, and 
> more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898792#comment-13898792
 ] 

stack commented on HBASE-10505:
---

+1 for 0.96

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but it does not call 
> filterRowKey at all. That throws off some Filters (such as RowFilter, and 
> more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Created] (HBASE-10507) Proper filter tests for TestImportExport

2014-02-11 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-10507:
-

 Summary: Proper filter tests for TestImportExport
 Key: HBASE-10507
 URL: https://issues.apache.org/jira/browse/HBASE-10507
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl


See parent. TestImportExport.testWithFilter used to pass by accident (until 
the parent was fixed, and until very recently also in trunk).
This is as simple as adding some non-matching rows to the tests. Unlike the 
parent, this should be added to all branches.





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898787#comment-13898787
 ] 

Lars Hofhansl commented on HBASE-10505:
---

I'd like to address the issues first.
It's not much work to add a non-matching row/kvs to 
TestImportExport.testWithFilter, but that would need to be in branches.

I'll file a subtask for that.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but it does not call 
> filterRowKey at all. That throws off some Filters (such as RowFilter, and 
> more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898782#comment-13898782
 ] 

Hudson commented on HBASE-10481:


SUCCESS: Integrated in HBase-0.98 #150 (See 
[https://builds.apache.org/job/HBase-0.98/150/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567470)
* /hbase/branches/0.98/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first, and the post-0.96 
> branch second, the existing logic handles it.
> When the arguments are in the reverse order, that case is not handled properly.
> The fix should address this.





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898779#comment-13898779
 ] 

ramkrishna.s.vasudevan commented on HBASE-10505:


[~lhofhansl]
Patch looks good to me. Can we try adding a filter testcase in 
TestImportExport that validates this behaviour? The current test, as you said, 
may be just passing. Even some time back I found that the testcases in 
TestImport were passing without actually getting any KVs. So add a testcase 
for this behaviour; if it takes time, we can address it in a 
separate JIRA.
+1 on patch.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but does not call 
> filterRowKey at all. That throws off some Filters (such as RowFilter, and 
> more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.
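The contract above (filterRowKey seen before filterKeyValue, so row-scoped filters can set their per-row state) can be sketched with simplified stand-in types. MiniFilter, MiniPrefixFilter, and keepCell are hypothetical names for illustration, not the real HBase classes:

```java
// Simplified model of the 0.94-era Filter contract (illustration only):
// row-based filters decide per row in filterRowKey, and filterKeyValue
// relies on that per-row state -- skipping filterRowKey breaks them.
import java.util.Arrays;

interface MiniFilter {
    boolean filterRowKey(byte[] row);   // true => exclude the whole row
    boolean filterKeyValue(byte[] row); // true => exclude this cell
}

class MiniPrefixFilter implements MiniFilter {
    private final byte[] prefix;
    private boolean rowMatches;

    MiniPrefixFilter(byte[] prefix) { this.prefix = prefix; }

    @Override public boolean filterRowKey(byte[] row) {
        rowMatches = row.length >= prefix.length
                && Arrays.equals(Arrays.copyOf(row, prefix.length), prefix);
        return !rowMatches;
    }

    // Depends on state set by filterRowKey -- the heart of this bug.
    @Override public boolean filterKeyValue(byte[] row) { return !rowMatches; }
}

public class FilterContractDemo {
    // Mirrors the fix: consult filterRowKey first, then filterKeyValue.
    static boolean keepCell(MiniFilter f, byte[] row) {
        if (f.filterRowKey(row)) return false;
        return !f.filterKeyValue(row);
    }

    public static void main(String[] args) {
        System.out.println(keepCell(new MiniPrefixFilter("abc".getBytes()), "abcd".getBytes())); // true
        System.out.println(keepCell(new MiniPrefixFilter("abc".getBytes()), "zzz".getBytes()));  // false
    }
}
```

If filterRowKey were never called (as in the buggy Import.filterKv), rowMatches would keep its stale value and the prefix filter would drop or keep the wrong rows.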



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10506:
--

Fix Version/s: 0.94.17
   0.99.0
   0.96.2
   0.98.0

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10506-0.94.txt, HBASE-10506-trunk.txt
>
>
> In the current HBase RPC implementation there is no connection double-check 
> just before the "call" is invoked. Considering GC pauses, OS scheduling, or a 
> call queue that is full enough (e.g. the server side is slow or hung due to 
> some issue), and given a small client-side RPC timeout, the client connection 
> may already be lost by the time the request is taken from the call queue. We 
> had better have some fail-fast code before the real "call" is invoked; 
> otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop already has this check, see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036
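The fail-fast idea can be sketched with a toy call queue. FailFastQueue, Call, connectionOpen, and drain are hypothetical names for illustration, not the actual HBaseServer API:

```java
// Sketch of the fail-fast idea (illustration only): before running a
// dequeued call, re-check that the client connection is still open; if
// the client already went away, drop the call instead of spending
// handler time on a response nobody will read.
import java.util.ArrayDeque;
import java.util.Queue;

public class FailFastQueue {
    interface Call {
        boolean connectionOpen(); // would check the socket channel in a real server
        String execute();
    }

    static int executed = 0;
    static int dropped = 0;

    static void drain(Queue<Call> q) {
        while (!q.isEmpty()) {
            Call c = q.poll();
            if (!c.connectionOpen()) { dropped++; continue; } // fail fast
            c.execute();
            executed++;
        }
    }

    static Call call(boolean open) {
        return new Call() {
            public boolean connectionOpen() { return open; }
            public String execute() { return "ok"; }
        };
    }

    public static void main(String[] args) {
        Queue<Call> q = new ArrayDeque<>();
        q.add(call(true));
        q.add(call(false)); // client disconnected while queued
        q.add(call(true));
        drain(q);
        System.out.println(executed + " executed, " + dropped + " dropped");
    }
}
```

The check only narrows the race window (the client can still vanish mid-execution), which matches the "reduce the probability" framing in the description.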



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898773#comment-13898773
 ] 

Lars Hofhansl commented on HBASE-10506:
---

Looks good to me. +1

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10506-0.94.txt, HBASE-10506-trunk.txt
>
>
> In the current HBase RPC implementation there is no connection double-check 
> just before the "call" is invoked. Considering GC pauses, OS scheduling, or a 
> call queue that is full enough (e.g. the server side is slow or hung due to 
> some issue), and given a small client-side RPC timeout, the client connection 
> may already be lost by the time the request is taken from the call queue. We 
> had better have some fail-fast code before the real "call" is invoked; 
> otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop already has this check, see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898771#comment-13898771
 ] 

Ted Yu commented on HBASE-10506:


+1

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBASE-10506-0.94.txt, HBASE-10506-trunk.txt
>
>
> In the current HBase RPC implementation there is no connection double-check 
> just before the "call" is invoked. Considering GC pauses, OS scheduling, or a 
> call queue that is full enough (e.g. the server side is slow or hung due to 
> some issue), and given a small client-side RPC timeout, the client connection 
> may already be lost by the time the request is taken from the call queue. We 
> had better have some fail-fast code before the real "call" is invoked; 
> otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop already has this check, see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898772#comment-13898772
 ] 

Lars Hofhansl commented on HBASE-10505:
---

[~yuzhih...@gmail.com], [~stack], you good with the patch? Including 0.96? 
(Without it filtering in Import is broken)

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but does not call 
> filterRowKey at all. That throws off some Filters (such as RowFilter, and 
> more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898769#comment-13898769
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #138 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/138/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567470)
* /hbase/branches/0.98/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first, and the post-0.96 
> branch second, the existing logic handles it.
> When the arguments are in the reverse order, that case is not handled properly.
> The fix should address this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-10506:
--

Attachment: HBASE-10506-trunk.txt

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBASE-10506-0.94.txt, HBASE-10506-trunk.txt
>
>
> In the current HBase RPC implementation there is no connection double-check 
> just before the "call" is invoked. Considering GC pauses, OS scheduling, or a 
> call queue that is full enough (e.g. the server side is slow or hung due to 
> some issue), and given a small client-side RPC timeout, the client connection 
> may already be lost by the time the request is taken from the call queue. We 
> had better have some fail-fast code before the real "call" is invoked; 
> otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop already has this check, see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-10506:
--

Status: Patch Available  (was: Open)

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBASE-10506-0.94.txt, HBASE-10506-trunk.txt
>
>
> In the current HBase RPC implementation there is no connection double-check 
> just before the "call" is invoked. Considering GC pauses, OS scheduling, or a 
> call queue that is full enough (e.g. the server side is slow or hung due to 
> some issue), and given a small client-side RPC timeout, the client connection 
> may already be lost by the time the request is taken from the call queue. We 
> had better have some fail-fast code before the real "call" is invoked; 
> otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop already has this check, see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898766#comment-13898766
 ] 

Hudson commented on HBASE-10481:


SUCCESS: Integrated in HBase-TRUNK #4911 (See 
[https://builds.apache.org/job/HBase-TRUNK/4911/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567471)
* /hbase/trunk/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first, and the post-0.96 
> branch second, the existing logic handles it.
> When the arguments are in the reverse order, that case is not handled properly.
> The fix should address this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898764#comment-13898764
 ] 

Feng Honghua commented on HBASE-10487:
--

Got it, thanks Ted.

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated and the corresponding bytes 
> copied regardless of whether an existing kv exists for the appended cells. We 
> can improve this by avoiding the allocation of a new KeyValue and the 
> corresponding bytes-copying for kvs that don't have existing (old) values: 
> reuse the passed-in kv and only update its timestamp to 'now' (its original 
> timestamp is the latest, so it can be updated).
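The proposed reuse can be illustrated with a much-simplified append. AppendSketch, its Cell type, and the store map are hypothetical stand-ins, not HRegion.append:

```java
// Sketch of the reuse idea (illustration only): when no old value exists
// for the appended cell, reuse the incoming cell object and just refresh
// its timestamp; only the merge path (old value present) allocates and
// copies bytes.
import java.util.HashMap;
import java.util.Map;

public class AppendSketch {
    static class Cell {
        byte[] value;
        long ts;
        Cell(byte[] value, long ts) { this.value = value; this.ts = ts; }
    }

    static final Map<String, Cell> store = new HashMap<>();

    static Cell append(String row, Cell incoming, long now) {
        Cell old = store.get(row);
        if (old == null) {
            incoming.ts = now;          // reuse: no allocation, no copy
            store.put(row, incoming);
            return incoming;
        }
        // Existing value: must allocate and concatenate old + incoming bytes.
        byte[] merged = new byte[old.value.length + incoming.value.length];
        System.arraycopy(old.value, 0, merged, 0, old.value.length);
        System.arraycopy(incoming.value, 0, merged, old.value.length,
                incoming.value.length);
        Cell c = new Cell(merged, now);
        store.put(row, c);
        return c;
    }

    public static void main(String[] args) {
        Cell in = new Cell("a".getBytes(), 0);
        Cell out = append("r1", in, 42L);
        System.out.println(out == in);         // true: same object reused
        Cell out2 = append("r1", new Cell("b".getBytes(), 0), 43L);
        System.out.println(out2.value.length); // 2: merged "ab"
    }
}
```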



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10497) Add standard handling for swallowed InterruptedException thrown by Thread.sleep under HBase-Client/HBase-Server folders systematically

2014-02-11 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898762#comment-13898762
 ] 

Feng Honghua commented on HBASE-10497:
--

The helper method Threads.sleep is implemented as below:
{code}
  public static void sleep(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      e.printStackTrace();
      Thread.currentThread().interrupt();
    }
  }
{code}
So it's incorrect for it to be called within a while/for loop (as [~nkeywal] 
pointed out in the comment above), but it actually is called within while/for 
loops several times in HBase code, such as in DeleteTableHandler.java, 
AssignmentManager.java, JVMClusterUtil.java, HRegionServer.java and 
LruBlockCache.java (just from a search under the hbase-server folder). And a 
method in HRegionFileSystem.java that calls Threads.sleep is itself called 
within a do-while loop, hence the same problem...
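A small demo of why the helper misbehaves inside a loop (sleepQuietly mirrors the shape of Threads.sleep quoted above; this is an illustration, not HBase code): because the helper restores the interrupt flag, every subsequent Thread.sleep throws InterruptedException immediately, so a loop that relies on the helper for pacing degenerates into a busy spin instead of waiting:

```java
// Once a thread is interrupted, the helper's catch-and-reinterrupt
// pattern makes every later Thread.sleep return instantly, so the
// for-loop below never actually sleeps.
public class SleepLoopDemo {
    static void sleepQuietly(long millis) { // same shape as Threads.sleep
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // flag stays set
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();     // simulate an interrupt
        long start = System.nanoTime();
        for (int i = 0; i < 5; i++) {
            sleepQuietly(100);                  // each call returns instantly
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(elapsedMs < 500);    // true: the loop never slept
        Thread.interrupted();                   // clear the flag before exiting
    }
}
```

The callers in the listed loops either need to check the interrupt flag themselves or let InterruptedException propagate, rather than looping over the helper.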

> Add standard handling for swallowed InterruptedException thrown by 
> Thread.sleep under HBase-Client/HBase-Server folders systematically
> --
>
> Key: HBASE-10497
> URL: https://issues.apache.org/jira/browse/HBASE-10497
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
>Priority: Minor
> Attachments: HBASE-10497-trunk_v1.patch, HBASE-10497-trunk_v2.patch
>
>
> There are many places where InterruptedException thrown by Thread.sleep are 
> swallowed silently (which are neither declared in the caller method's throws 
> clause nor rethrown immediately) under HBase-Client/HBase-Server folders.
> It'd be better to add standard 'log and call currentThread.interrupt' for 
> such cases.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10495) upgrade script is printing usage two times with help option.

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898753#comment-13898753
 ] 

Hudson commented on HBASE-10495:


FAILURE: Integrated in hbase-0.96 #290 (See 
[https://builds.apache.org/job/hbase-0.96/290/])
HBASE-10495 upgrade script is printing usage two times with help 
option.(Rajesh) (rajeshbabu: rev 1567496)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java


> upgrade script is printing usage two times with help option.
> 
>
> Key: HBASE-10495
> URL: https://issues.apache.org/jira/browse/HBASE-10495
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Affects Versions: 0.96.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10495.patch
>
>
> While testing the 0.98 RC, I found that the usage is printed twice with the help option.
> {code}
> HOST-10-18-91-14:/home/rajeshbabu/98RC3/hbase-0.98.0-hadoop2/bin # ./hbase 
> upgrade -h
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> {code}





[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898755#comment-13898755
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in hbase-0.96 #290 (See 
[https://builds.apache.org/job/hbase-0.96/290/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567469)
* /hbase/branches/0.96/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first and the post-0.96 
> branch second, the existing logic handles it.
> When the branches are given in the reverse order, that case is not handled 
> properly. The fix should address this.





[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898754#comment-13898754
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in hbase-0.96 #290 (See 
[https://builds.apache.org/job/hbase-0.96/290/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(Ted Yu and LarsH) (larsh: rev 1567460)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL
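A minimal, self-contained sketch of why MUST_PASS_ONE masks the stricter filter when one member unconditionally returns INCLUDE (simplified stand-in types, not the real HBase API):

```java
import java.util.List;

// Hedged sketch: simplified stand-ins for HBase's ReturnCode and the
// FilterList.Operator.MUST_PASS_ONE combination rule.
public class MustPassOneSketch {
    enum ReturnCode { INCLUDE, NEXT_COL }

    interface Filter { ReturnCode filterKeyValue(String rowKey); }

    // MUST_PASS_ONE: a cell is included if ANY member filter includes it.
    static ReturnCode mustPassOne(List<Filter> filters, String rowKey) {
        for (Filter f : filters) {
            if (f.filterKeyValue(rowKey) == ReturnCode.INCLUDE) {
                return ReturnCode.INCLUDE;
            }
        }
        return ReturnCode.NEXT_COL;
    }

    public static void main(String[] args) {
        // Analogue of the bug: a filter that inherits a filterKeyValue
        // which always returns INCLUDE, ignoring the row key entirely.
        Filter buggyPrefix = rowKey -> ReturnCode.INCLUDE;
        // A stricter filter that rejects rows not starting with "abc".
        Filter strict = rowKey ->
            rowKey.startsWith("abc") ? ReturnCode.INCLUDE : ReturnCode.NEXT_COL;

        // Even though "zzz" fails the strict filter, MUST_PASS_ONE still
        // includes it, because the buggy filter unconditionally says INCLUDE.
        System.out.println(mustPassOne(List.of(buggyPrefix, strict), "zzz"));
    }
}
```

This is why the fix makes PrefixFilter#filterKeyValue() actually check the row key rather than falling back to the always-INCLUDE base implementation.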





[jira] [Updated] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-10506:
--

Attachment: HBASE-10506-0.94.txt

Attached a patch against 0.94 branch, will make a trunk patch shortly

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBASE-10506-0.94.txt
>
>
> In the current HBase RPC implementation, there is no connection double-check 
> just before the "call" is invoked. Considering a GC pause, other OS 
> scheduling, or a full call queue (e.g. the server side is slow or hung due 
> to some issue), and a client side with a small RPC timeout value, the client 
> connection may already be lost by the time the request is taken from the 
> call queue. We'd better have some fail-fast code before the real "call" is 
> invoked; otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop has this check already; see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036
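A hedged sketch of the fail-fast idea (plain Java with hypothetical names; the real change lives in HBaseServer's handler loop): before executing a call dequeued from the call queue, verify the client connection is still open and drop the call if it is not.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the fail-fast check (not the real HBaseServer code): a
// handler drains the call queue but skips calls whose client connection
// has already gone away, instead of doing the work and then failing on
// the response write with a ClosedChannelException.
public class FailFastSketch {
    static class Connection {
        volatile boolean open = true;
    }

    static class Call {
        final Connection connection;
        final Runnable work;
        Call(Connection c, Runnable w) { connection = c; work = w; }
    }

    /** Returns the number of calls actually executed. */
    static int drain(Queue<Call> callQueue) {
        int executed = 0;
        Call call;
        while ((call = callQueue.poll()) != null) {
            // Fail fast: the client may have timed out and disconnected
            // while this call sat in the queue.
            if (!call.connection.open) {
                continue; // drop it; don't waste server-side resources
            }
            call.work.run();
            executed++;
        }
        return executed;
    }

    public static void main(String[] args) {
        Connection alive = new Connection();
        Connection gone = new Connection();
        gone.open = false; // client disconnected before we got to it

        Queue<Call> queue = new ArrayDeque<>();
        queue.add(new Call(alive, () -> {}));
        queue.add(new Call(gone, () -> {}));
        System.out.println(drain(queue)); // only the live call runs
    }
}
```

The check cannot eliminate the race entirely (the client can still disconnect mid-call), which is why the description says it reduces the hit probability rather than removing it.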





[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898748#comment-13898748
 ] 

Ted Yu commented on HBASE-10487:


QA bot only tests patches against trunk.

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, with the corresponding bytes 
> copied, regardless of whether an existing kv is present for the appended 
> cells. We can improve this by skipping the allocation and byte-copying for 
> kvs that have no existing (old) value, reusing the passed-in kv and only 
> updating its timestamp to 'now' (its original timestamp is latest, so it can 
> be updated)
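A hedged sketch of the optimization (simplified hypothetical types, not HRegion's actual code): reuse the caller's kv when there is no existing value, only bumping its timestamp, instead of allocating and copying a new one.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the append optimization with a hypothetical simplified
// KeyValue; HBase's real KeyValue is an immutable byte-array view.
public class AppendReuseSketch {
    static class KeyValue {
        final String row;
        String value;
        long timestamp;
        KeyValue(String row, String value, long ts) {
            this.row = row; this.value = value; this.timestamp = ts;
        }
    }

    static KeyValue append(Map<String, KeyValue> store, KeyValue incoming, long now) {
        KeyValue existing = store.get(incoming.row);
        KeyValue result;
        if (existing == null) {
            // No old value: reuse the passed-in kv, just update its
            // timestamp to 'now' -- no new allocation, no byte copy.
            incoming.timestamp = now;
            result = incoming;
        } else {
            // Old value present: a new kv holding the concatenated
            // value is unavoidable here.
            result = new KeyValue(incoming.row, existing.value + incoming.value, now);
        }
        store.put(result.row, result);
        return result;
    }

    public static void main(String[] args) {
        Map<String, KeyValue> store = new HashMap<>();
        KeyValue kv = new KeyValue("r1", "a", 0L);
        KeyValue out = append(store, kv, 100L);
        System.out.println(out == kv);      // reused, not reallocated
        System.out.println(out.timestamp);  // updated to 'now'
    }
}
```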





[jira] [Updated] (HBASE-10506) Fail-fast if client connection is lost before the real call be executed in RPC layer

2014-02-11 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-10506:
--

Summary: Fail-fast if client connection is lost before the real call be 
executed in RPC layer  (was: Fail-fast if client connection is lost before the 
real call be execused in RPC layer)

> Fail-fast if client connection is lost before the real call be executed in 
> RPC layer
> 
>
> Key: HBASE-10506
> URL: https://issues.apache.org/jira/browse/HBASE-10506
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.3
>Reporter: Liang Xie
>Assignee: Liang Xie
>
> In the current HBase RPC implementation, there is no connection double-check 
> just before the "call" is invoked. Considering a GC pause, other OS 
> scheduling, or a full call queue (e.g. the server side is slow or hung due 
> to some issue), and a client side with a small RPC timeout value, the client 
> connection may already be lost by the time the request is taken from the 
> call queue. We'd better have some fail-fast code before the real "call" is 
> invoked; otherwise it just wastes server-side resources.
> Here is a stack trace from our production env:
> 2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3eae6c77, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43252: output error
> 2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 151 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> 2014-02-11,18:16:19,797 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
> get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259 after 0 ms, since caller disconnected
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> Responder, call get([B@3f10ffd2, 
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
>  rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
> 10.101.10.181:43259: output error
> 2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 46 on 12600 caught a ClosedChannelException, this means that the 
> server was processing a request but the client went away. The error message 
> was: null
> With this fix, we can at least reduce the probability of hitting this :) 
> Upstream Hadoop has this check already; see: 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036





[jira] [Created] (HBASE-10506) Fail-fast if client connection is lost before the real call be execused in RPC layer

2014-02-11 Thread Liang Xie (JIRA)
Liang Xie created HBASE-10506:
-

 Summary: Fail-fast if client connection is lost before the real 
call be execused in RPC layer
 Key: HBASE-10506
 URL: https://issues.apache.org/jira/browse/HBASE-10506
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Affects Versions: 0.94.3
Reporter: Liang Xie
Assignee: Liang Xie


In the current HBase RPC implementation, there is no connection double-check 
just before the "call" is invoked. Considering a GC pause, other OS 
scheduling, or a full call queue (e.g. the server side is slow or hung due to 
some issue), and a client side with a small RPC timeout value, the client 
connection may already be lost by the time the request is taken from the call 
queue. We'd better have some fail-fast code before the real "call" is 
invoked; otherwise it just wastes server-side resources.
Here is a stack trace from our production env:
2014-02-11,18:16:19,525 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
Responder, call get([B@3eae6c77, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"0741031-m8997060"}),
 rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
10.101.10.181:43252: output error
2014-02-11,18:16:19,526 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
handler 151 on 12600 caught a ClosedChannelException, this means that the 
server was processing a request but the client went away. The error message 
was: null
2014-02-11,18:16:19,797 ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer:
org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call 
get([B@3f10ffd2, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
 rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
10.101.10.181:43259 after 0 ms, since caller disconnected
at 
org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:450)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3633)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3590)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3615)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4414)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4387)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2075)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:460)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1457)
2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
Responder, call get([B@3f10ffd2, 
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"X":["T"]},"maxVersions":1,"row":"4245978-m7281526"}),
 rpc version=1, client version=29, methodsFingerPrint=-241105381 from 
10.101.10.181:43259: output error
2014-02-11,18:16:19,802 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server 
handler 46 on 12600 caught a ClosedChannelException, this means that the server 
was processing a request but the client went away. The error message was: null

With this fix, we can at least reduce the probability of hitting this :) 
Upstream Hadoop has this check already; see: 
https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2034-L2036





[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898744#comment-13898744
 ] 

Feng Honghua commented on HBASE-10487:
--

bq.-1 overall. Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12628416/HBASE-10487-0.98_v1.patch
 against trunk revision .
The above says it ran against trunk, but it's a patch for 0.98.
And I can successfully apply this patch against 0.98 from 
http://svn.apache.org/repos/asf/hbase/branches/0.98/ in my local dev 
environment :-)

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, with the corresponding bytes 
> copied, regardless of whether an existing kv is present for the appended 
> cells. We can improve this by skipping the allocation and byte-copying for 
> kvs that have no existing (old) value, reusing the passed-in kv and only 
> updating its timestamp to 'now' (its original timestamp is latest, so it can 
> be updated)





[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10487:
---

Fix Version/s: 0.98.1
 Hadoop Flags: Reviewed

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, with the corresponding bytes 
> copied, regardless of whether an existing kv is present for the appended 
> cells. We can improve this by skipping the allocation and byte-copying for 
> kvs that have no existing (old) value, reusing the passed-in kv and only 
> updating its timestamp to 'now' (its original timestamp is latest, so it can 
> be updated)





[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10487:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, with the corresponding bytes 
> copied, regardless of whether an existing kv is present for the appended 
> cells. We can improve this by skipping the allocation and byte-copying for 
> kvs that have no existing (old) value, reusing the passed-in kv and only 
> updating its timestamp to 'now' (its original timestamp is latest, so it can 
> be updated)





[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898743#comment-13898743
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in HBase-0.94 #1283 (See 
[https://builds.apache.org/job/HBase-0.94/1283/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567472)
* /hbase/branches/0.94/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first and the post-0.96 
> branch second, the existing logic handles it.
> When the branches are given in the reverse order, that case is not handled 
> properly. The fix should address this.





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898741#comment-13898741
 ] 

Lars Hofhansl commented on HBASE-10505:
---

[~vmariyala], [~jesse_yates], not sure if that has any impact on our stuff.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter, and more recently 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.
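A hedged sketch of honoring that contract (hypothetical simplified types; Import's real filterKv operates on KeyValues and Results): consult filterRowKey on the row before filterKeyValue on each cell.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Filter contract (not HBase's real Import code):
// filterRowKey must be consulted before filterKeyValue for each row,
// so row-based filters see the row key at all.
public class FilterContractSketch {
    interface Filter {
        boolean filterRowKey(String rowKey); // true = skip the whole row
        boolean filterKeyValue(String cell); // true = skip this cell
    }

    static List<String> filterKv(Filter filter, String rowKey, List<String> cells) {
        List<String> kept = new ArrayList<>();
        // Honor the contract: give the filter the row key first, so
        // row-based filters (RowFilter, PrefixFilter, ...) work.
        if (filter.filterRowKey(rowKey)) {
            return kept; // entire row filtered out
        }
        for (String cell : cells) {
            if (!filter.filterKeyValue(cell)) {
                kept.add(cell);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // A row-key-based filter: drops rows not starting with "keep".
        Filter prefix = new Filter() {
            public boolean filterRowKey(String rowKey) { return !rowKey.startsWith("keep"); }
            public boolean filterKeyValue(String cell) { return false; }
        };
        System.out.println(filterKv(prefix, "keep-1", List.of("a", "b")).size());
        System.out.println(filterKv(prefix, "drop-1", List.of("a", "b")).size());
    }
}
```

Skipping the filterRowKey call, as the unpatched Import did, means the `prefix` filter above would keep every cell of every row, since its filterKeyValue never rejects anything.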





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898735#comment-13898735
 ] 

Lars Hofhansl commented on HBASE-10505:
---

0.98 is fine too.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter, and more recently 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Updated] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10505:
--

Attachment: 10505-0.96-v2.txt
10505-0.94-v2.txt

Patches for 0.94 and 0.96 to make it like trunk.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter, and more recently 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898731#comment-13898731
 ] 

Hadoop QA commented on HBASE-10487:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12628416/HBASE-10487-0.98_v1.patch
  against trunk revision .
  ATTACHMENT ID: 12628416

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8666//console

This message is automatically generated.

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, with the corresponding bytes 
> copied, regardless of whether an existing kv is present for the appended 
> cells. We can improve this by skipping the allocation and byte-copying for 
> kvs that have no existing (old) value, reusing the passed-in kv and only 
> updating its timestamp to 'now' (its original timestamp is latest, so it can 
> be updated)





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898729#comment-13898729
 ] 

Lars Hofhansl commented on HBASE-10505:
---

Heh, I hadn't sync'ed trunk in a bit; this was just fixed there recently. I'll 
fix it the same way in 0.94 and 0.96 (so it's all the same).


> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering, but it does not call filterRowKey 
> at all. That throws off some Filters (such as RowFilter, and more recently 
> PrefixFilter and InclusiveStopFilter). See HBASE-10493 and HBASE-10485.





[jira] [Commented] (HBASE-7849) Provide administrative limits around bulkloads of files into a single region

2014-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898725#comment-13898725
 ] 

Hadoop QA commented on HBASE-7849:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12628403/hbase-7849_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12628403

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8665//console

This message is automatically generated.

> Provide administrative limits around bulkloads of files into a single region
> 
>
> Key: HBASE-7849
> URL: https://issues.apache.org/jira/browse/HBASE-7849
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Harsh J
>Assignee: Jimmy Xiang
> Attachments: hbase-7849.patch, hbase-7849_v2.patch
>
>
> Given the current mechanism, it is possible for users to flood a single 
> region with 1k+ store files via the bulkload API and basically cause the 
> region to become a Flying Dutchman - never getting assigned successfully 
> again.
> Ideally, an administrative limit could solve this. If the bulkload RPC call 
> can check if the region already has X store files, then it can reject the 
> request to add another and throw a failure at the client with an appropriate 
> message.
> This may be an intrusive change, but it seems necessary to bridge the gap 
> between devs and ops in managing HBase clusters. This would especially 
> prevent abuse in the form of unaware devs not pre-splitting tables before 
> bulkloading things in. Currently, this leads to ops pain, as the devs think 
> HBase has gone non-functional and begin complaining.
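The proposed guard can be sketched in a few lines. Everything here is hypothetical (the limit name, the in-memory map); the real check would live in the regionserver's bulkload RPC path and throw back to the client:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an administrative bulkload guard: reject a bulkload
// once a region already holds the configured number of store files.
public class BulkloadGuardSketch {
    static final int MAX_STORE_FILES = 3;  // hypothetical admin limit ("X")
    final Map<String, Integer> storeFilesPerRegion = new HashMap<>();

    // Returns true if the file was accepted, false if the limit was hit
    // (real code would throw an exception with an appropriate message).
    boolean tryBulkload(String region) {
        int current = storeFilesPerRegion.getOrDefault(region, 0);
        if (current >= MAX_STORE_FILES) {
            return false;
        }
        storeFilesPerRegion.put(region, current + 1);
        return true;
    }

    public static void main(String[] args) {
        BulkloadGuardSketch guard = new BulkloadGuardSketch();
        for (int i = 0; i < 3; i++) {
            System.out.println(guard.tryBulkload("region-1"));  // true, three times
        }
        System.out.println(guard.tryBulkload("region-1"));      // false: limit reached
    }
}
```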





[jira] [Updated] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10505:
---

Fix Version/s: (was: 0.99.0)
   (was: 0.98.0)

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.2, 0.94.17
>
> Attachments: 10505-0.94.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10487:
-

Attachment: HBASE-10487-0.98_v1.patch

Patch for 0.98 attached, thanks [~yuzhih...@gmail.com]

> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.99.0
>
> Attachments: HBASE-10487-0.98_v1.patch, HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, with the corresponding 
> bytes-copying, regardless of whether an existing value is present for the 
> appended cells. We can improve this by avoiding the allocation and copy for 
> cells which don't have an existing (old) value: reuse the passed-in kv and 
> only update its timestamp to 'now' (its original timestamp is latest, so it 
> can be updated).
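The optimization described above can be modeled as follows. SimpleKv and the Map-backed store are stand-ins for HBase's KeyValue and memstore, not the actual patch:

```java
import java.util.HashMap;
import java.util.Map;

// Model of the append optimization: when the appended cell has no existing
// value, skip allocating a merged KeyValue and reuse the passed-in one,
// updating only its timestamp.
public class AppendReuseSketch {
    static class SimpleKv {
        byte[] value;
        long timestamp;
        SimpleKv(byte[] value, long timestamp) { this.value = value; this.timestamp = timestamp; }
    }

    final Map<String, SimpleKv> store = new HashMap<>();

    SimpleKv append(String row, SimpleKv kv, long now) {
        SimpleKv existing = store.get(row);
        if (existing == null) {
            kv.timestamp = now;  // reuse the passed-in kv: no new allocation or copy
            store.put(row, kv);
            return kv;
        }
        // Existing value: allocate a merged kv (old bytes + new bytes), as before.
        byte[] merged = new byte[existing.value.length + kv.value.length];
        System.arraycopy(existing.value, 0, merged, 0, existing.value.length);
        System.arraycopy(kv.value, 0, merged, existing.value.length, kv.value.length);
        SimpleKv result = new SimpleKv(merged, now);
        store.put(row, result);
        return result;
    }

    public static void main(String[] args) {
        AppendReuseSketch region = new AppendReuseSketch();
        SimpleKv first = new SimpleKv(new byte[]{1}, Long.MAX_VALUE);
        // No existing value: the exact same object comes back, timestamp updated.
        System.out.println(region.append("r", first, 100L) == first);  // true
        SimpleKv second = new SimpleKv(new byte[]{2}, Long.MAX_VALUE);
        SimpleKv merged = region.append("r", second, 200L);
        System.out.println(merged != second);     // true: the merge needed an allocation
        System.out.println(merged.value.length);  // 2
    }
}
```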





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898719#comment-13898719
 ] 

Ted Yu commented on HBASE-10505:


In trunk, we have:
{code}
  if (filter == null || !filter.filterRowKey(key.get(), key.getOffset(), 
key.getLength())) {
for (Cell kv : result.rawCells()) {
{code}
So this problem doesn't exist in trunk.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: 10505-0.94.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898715#comment-13898715
 ] 

Ted Yu commented on HBASE-10505:


Thanks for fixing this.
{code}
+   * @param row on which to apply the filter
+   * @return true if the key should not be written, false otherwise
{code}
Please add javadoc for other parameters.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: 10505-0.94.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898706#comment-13898706
 ] 

Lars Hofhansl commented on HBASE-10493:
---

Filed HBASE-10505. This was completely broken in Import. TestImportExport only 
passed by pure accident.

> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.
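The inconsistency can be illustrated with a self-contained model. The types below are simplified stand-ins for HBase's Filter and ReturnCode: the row decision lives in filterRowKey, and the fix makes filterKeyValue consult the same state instead of blindly including everything:

```java
// Simplified model of a stop-row filter whose filterKeyValue is made
// consistent with the decision recorded by filterRowKey.
public class InclusiveStopSketch {
    enum ReturnCode { INCLUDE, NEXT_ROW }

    static class InclusiveStopFilter {
        final byte[] stopRow;
        boolean done;  // set once we scan past the stop row

        InclusiveStopFilter(byte[] stopRow) { this.stopRow = stopRow; }

        boolean filterRowKey(byte[] row) {
            // Rows strictly after stopRow (lexicographically) are filtered out.
            done = compare(row, stopRow) > 0;
            return done;
        }

        // Fixed behavior: mirrors the row decision rather than the
        // FilterBase default of always returning INCLUDE.
        ReturnCode filterKeyValue() {
            return done ? ReturnCode.NEXT_ROW : ReturnCode.INCLUDE;
        }

        static int compare(byte[] a, byte[] b) {
            for (int i = 0; i < Math.min(a.length, b.length); i++) {
                int d = (a[i] & 0xff) - (b[i] & 0xff);
                if (d != 0) return d;
            }
            return a.length - b.length;
        }
    }

    public static void main(String[] args) {
        InclusiveStopFilter f = new InclusiveStopFilter(new byte[]{'m'});
        f.filterRowKey(new byte[]{'a'});
        System.out.println(f.filterKeyValue());  // INCLUDE: at or before the stop row
        f.filterRowKey(new byte[]{'z'});
        System.out.println(f.filterKeyValue());  // NEXT_ROW: past the stop row
    }
}
```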





[jira] [Assigned] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-10505:
-

Assignee: Lars Hofhansl

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: 10505-0.94.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Updated] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10505:
--

Attachment: 10505-0.94.txt

Here's a 0.94 patch.
Actually TestImportExport.testWithFilter only passes by pure accident, because 
PrefixFilter.filterKeyValue never filtered anything.

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: 10505-0.94.txt
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898701#comment-13898701
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #19 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/19/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(Ted Yu and LarsH) (larsh: rev 1567461)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL.
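The masking effect can be shown with a minimal model of MUST_PASS_ONE. The types below are simplified stand-ins for HBase's FilterList and ReturnCode, not the real API:

```java
import java.util.List;

// Why a FilterBase-style filterKeyValue that always returns INCLUDE breaks
// MUST_PASS_ONE: the OR combination lets it override a sibling's NEXT_COL.
public class MustPassOneSketch {
    enum ReturnCode { INCLUDE, NEXT_COL }
    interface Filter { ReturnCode filterKeyValue(byte[] row); }

    // MUST_PASS_ONE semantics: include the cell if ANY filter includes it.
    static ReturnCode mustPassOne(List<Filter> filters, byte[] row) {
        for (Filter f : filters) {
            if (f.filterKeyValue(row) == ReturnCode.INCLUDE) return ReturnCode.INCLUDE;
        }
        return ReturnCode.NEXT_COL;
    }

    public static void main(String[] args) {
        Filter buggyPrefix = row -> ReturnCode.INCLUDE;  // pre-fix: FilterBase default
        Filter custom = row -> ReturnCode.NEXT_COL;      // the sibling filter's verdict
        Filter fixedPrefix = row ->
            row[0] == 'a' ? ReturnCode.INCLUDE : ReturnCode.NEXT_COL;

        byte[] nonMatching = {'z'};  // row that does not match the 'a' prefix
        // Buggy combination: PrefixFilter's blanket INCLUDE masks the mismatch.
        System.out.println(mustPassOne(List.of(buggyPrefix, custom), nonMatching)); // INCLUDE
        // Fixed: the prefix check is performed, so the mismatch is visible.
        System.out.println(mustPassOne(List.of(fixedPrefix, custom), nonMatching)); // NEXT_COL
    }
}
```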





[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898702#comment-13898702
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #19 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/19/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567472)
* /hbase/branches/0.94/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first, and the post-0.96 
> branch second, the existing logic handles it.
> When the order is reversed, that case is not handled properly.
> The fix should address this.





[jira] [Updated] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10505:
--

Fix Version/s: 0.94.17
   0.99.0
   0.96.2
   0.98.0

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Updated] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10505:
--

Priority: Critical  (was: Major)

> Import.filterKv does not call Filter.filterRowKey
> -
>
> Key: HBASE-10505
> URL: https://issues.apache.org/jira/browse/HBASE-10505
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
>
> The general contract of a Filter is that filterRowKey is called before 
> filterKeyValue.
> Import uses Filters for custom filtering but never calls filterRowKey. That 
> throws off some Filters (such as RowFilter and, more recently, PrefixFilter 
> and InclusiveStopFilter). See HBASE-10493 and 
> HBASE-10485.





[jira] [Created] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-11 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-10505:
-

 Summary: Import.filterKv does not call Filter.filterRowKey
 Key: HBASE-10505
 URL: https://issues.apache.org/jira/browse/HBASE-10505
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl


The general contract of a Filter is that filterRowKey is called before 
filterKeyValue.
Import uses Filters for custom filtering but never calls filterRowKey. That 
throws off some Filters (such as RowFilter and, more recently, PrefixFilter and 
InclusiveStopFilter). See HBASE-10493 and 
HBASE-10485.





[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898696#comment-13898696
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in HBase-0.94-security #408 (See 
[https://builds.apache.org/job/HBase-0.94-security/408/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567472)
* /hbase/branches/0.94/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first, and the post-0.96 
> branch second, the existing logic handles it.
> When the order is reversed, that case is not handled properly.
> The fix should address this.





[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898693#comment-13898693
 ] 

Lars Hofhansl commented on HBASE-10493:
---

So this and HBASE-10485 actually unveil another problem: Import does not call 
filterRowKey, so the filter is not set up correctly before we call 
filterKeyValue. This is a problem!
[~yuzhih...@gmail.com], [~stack]: we can either fix Import in all branches 
or roll these two changes back.
(But note that Import is already broken for other filters - such as RowFilter - 
that also rely on filterRowKey being called first.)

> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.





[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898695#comment-13898695
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in HBase-0.94-security #408 (See 
[https://builds.apache.org/job/HBase-0.94-security/408/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(Ted Yu and LarsH) (larsh: rev 1567461)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL.





[jira] [Updated] (HBASE-10495) upgrade script is printing usage two times with help option.

2014-02-11 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10495:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.96, 0.98, and trunk.
Thanks for the review, Stack.

> upgrade script is printing usage two times with help option.
> 
>
> Key: HBASE-10495
> URL: https://issues.apache.org/jira/browse/HBASE-10495
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Affects Versions: 0.96.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10495.patch
>
>
> While testing the 0.98 RC, I found that usage is printed twice with the help option.
> {code}
> HOST-10-18-91-14:/home/rajeshbabu/98RC3/hbase-0.98.0-hadoop2/bin # ./hbase 
> upgrade -h
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> usage: $bin/hbase upgrade -check [-dir DIR]|-execute
>  -check   Run upgrade check; looks for HFileV1  under ${hbase.rootdir}
>   or provided 'dir' directory.
>  -dirRelative path of dir to check for HFileV1s.
>  -execute Run upgrade; zk and hdfs must be up, hbase down
>  -h,--helpHelp
> Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade
> Example usage:
> Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:
>  $ bin/hbase upgrade -check
> Run the upgrade:
>  $ bin/hbase upgrade -execute
> {code}
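One common cause of this kind of symptom is a help branch that prints usage and then falls through to an unconditional printUsage() call. The sketch below is a hypothetical minimal reproduction of the symptom, not HBase's actual upgrade-script code:

```java
// Hypothetical model of a "usage printed twice" bug and its fix.
public class DoubleUsageSketch {
    static int usagePrints = 0;

    static void printUsage() {
        usagePrints++;
        System.out.println("usage: $bin/hbase upgrade -check [-dir DIR]|-execute");
    }

    // Buggy: the -h branch prints usage but does not return,
    // so control reaches the final printUsage() as well.
    static void runBuggy(String arg) {
        if ("-h".equals(arg)) {
            printUsage();  // prints once...
        }
        printUsage();      // ...then unconditionally prints again
    }

    // Fixed: bail out after handling the help option.
    static void runFixed(String arg) {
        if ("-h".equals(arg)) {
            printUsage();
            return;
        }
        printUsage();
    }

    public static void main(String[] args) {
        runBuggy("-h");
        System.out.println("buggy prints: " + usagePrints);  // 2
        usagePrints = 0;
        runFixed("-h");
        System.out.println("fixed prints: " + usagePrints);  // 1
    }
}
```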





[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898663#comment-13898663
 ] 

Hudson commented on HBASE-10493:


FAILURE: Integrated in HBase-0.94-JDK7 #46 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/46/])
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567426)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.





[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898664#comment-13898664
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in HBase-0.94-JDK7 #46 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/46/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567472)
* /hbase/branches/0.94/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first, and the post-0.96 
> branch second, the existing logic handles it.
> When the order is reversed, that case is not handled properly.
> The fix should address this.





[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898662#comment-13898662
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in HBase-0.94-JDK7 #46 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/46/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(Ted Yu and LarsH) (larsh: rev 1567461)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL.





[jira] [Commented] (HBASE-10361) Enable/AlterTable support for region replicas

2014-02-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898647#comment-13898647
 ] 

Enis Soztutar commented on HBASE-10361:
---

bq. // update meta if needed (TODO: make this work when table is online)
Will this come in this patch? If not, we can create an issue to track it, and 
check and throw an exception if the table is not offline when region 
replication is changed. 

Inside removeReplicaColumnsIfNeeded(), we remove regions from meta if the new 
replication count is lower. We want to encapsulate the meta operations in 
MetaReader / MetaEditor so that outsiders won't even know how the replicas are 
stored. Can we change this to call MetaEditor.deleteRegion() for the region 
replicas that should be removed from meta, and change MetaEditor to recognize 
the replica and act accordingly? 

ModifyTableHandler also does not create new regions when region replication is 
bumped. Is this because EnableTableHandler would create those anyway? In case 
of online schema change, we can address this later I guess. 

> Enable/AlterTable support for region replicas
> -
>
> Key: HBASE-10361
> URL: https://issues.apache.org/jira/browse/HBASE-10361
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10361-1.txt
>
>
> Add support for region replicas in master operations enable table and modify 
> table.





[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898636#comment-13898636
 ] 

Hudson commented on HBASE-10493:


FAILURE: Integrated in hbase-0.96 #289 (See 
[https://builds.apache.org/job/hbase-0.96/289/])
HBASE-10493 Fix TestFilterList (tedyu: rev 1567418)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key, add test (tedyu: rev 1567416)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567414)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.





[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898630#comment-13898630
 ] 

Hudson commented on HBASE-10493:


FAILURE: Integrated in HBase-0.94 #1282 (See 
[https://builds.apache.org/job/HBase-0.94/1282/])
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567426)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898629#comment-13898629
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in HBase-0.94 #1282 (See 
[https://builds.apache.org/job/HBase-0.94/1282/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(Ted Yu and LarsH) (larsh: rev 1567461)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL.
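The MUST_PASS_ONE interaction above can be sketched with a tiny model. This is hypothetical Python, not the actual FilterList implementation; it only captures the "any INCLUDE wins" merge rule that makes the FilterBase default vote mask the other filter's decision.

```python
INCLUDE, NEXT_COL = "INCLUDE", "NEXT_COL"

def must_pass_one(return_codes):
    # MUST_PASS_ONE includes the cell if ANY filter votes INCLUDE
    return INCLUDE if INCLUDE in return_codes else NEXT_COL

# PrefixFilter inheriting FilterBase#filterKeyValue() votes INCLUDE even for
# rows outside its prefix, so the custom filter's NEXT_COL is ignored:
print(must_pass_one([INCLUDE, NEXT_COL]))   # INCLUDE -- the unexpected result

# Once PrefixFilter#filterKeyValue() filters on the row key itself, a
# non-matching prefix votes NEXT_COL and the list can skip the cell:
print(must_pass_one([NEXT_COL, NEXT_COL]))  # NEXT_COL
```

This is why the fix moves the row-key check into PrefixFilter#filterKeyValue() rather than changing FilterList.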



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-7849) Provide administrative limits around bulkloads of files into a single region

2014-02-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7849:
---

Attachment: hbase-7849_v2.patch

Attached v2 that works with hadoop1 too.

> Provide administrative limits around bulkloads of files into a single region
> 
>
> Key: HBASE-7849
> URL: https://issues.apache.org/jira/browse/HBASE-7849
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Harsh J
>Assignee: Jimmy Xiang
> Attachments: hbase-7849.patch, hbase-7849_v2.patch
>
>
> Given the current mechanism, it is possible for users to flood a single 
> region with 1k+ store files via the bulkload API and basically cause the 
> region to become a flying dutchman - never getting assigned successfully 
> again.
> Ideally, an administrative limit could solve this. If the bulkload RPC call 
> can check if the region already has X store files, then it can reject the 
> request to add another and throw a failure at the client with an appropriate 
> message.
> This may be an intrusive change, but it seems necessary to close the gap 
> between devs and ops in managing HBase clusters. It would especially 
> prevent abuse in the form of unaware devs not pre-splitting tables before 
> bulkloading things in. Currently, this leads to ops pain, as the devs think 
> HBase has gone non-functional and begin complaining.
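The proposed administrative limit amounts to a pre-check on the bulkload path. The sketch below is a hypothetical Python model (the function name, error type, and the limit value are all illustrative, not HBase API): reject a load into a region that already holds too many store files, with a message that points the client at pre-splitting.

```python
class TooManyStoreFilesError(Exception):
    pass

def bulkload_hfile(region_store_files, hfile, max_files_per_region=1000):
    # Check the region's store-file count before accepting another file
    if len(region_store_files) >= max_files_per_region:
        raise TooManyStoreFilesError(
            "region already has %d store files (limit %d); "
            "consider pre-splitting the table"
            % (len(region_store_files), max_files_per_region))
    region_store_files.append(hfile)

store = ["hfile-%d" % i for i in range(1000)]
try:
    bulkload_hfile(store, "one-more.hfile")
except TooManyStoreFilesError as e:
    print("rejected:", e)
```

Failing fast at the RPC boundary keeps the region assignable instead of letting it accumulate thousands of store files.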



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898607#comment-13898607
 ] 

Hudson commented on HBASE-10493:


FAILURE: Integrated in hbase-0.96-hadoop2 #200 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/200/])
HBASE-10493 Fix TestFilterList (tedyu: rev 1567418)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key, add test (tedyu: rev 1567416)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567414)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10481) API Compatibility JDiff script does not properly handle arguments in reverse order

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898608#comment-13898608
 ] 

Hudson commented on HBASE-10481:


FAILURE: Integrated in hbase-0.96-hadoop2 #200 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/200/])
HBASE-10481 API Compatibility JDiff script does not properly handle arguments 
in reverse order (Aleksandr Shulman) (stack: rev 1567469)
* /hbase/branches/0.96/dev-support/jdiffHBasePublicAPI.sh


> API Compatibility JDiff script does not properly handle arguments in reverse 
> order
> --
>
> Key: HBASE-10481
> URL: https://issues.apache.org/jira/browse/HBASE-10481
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.98.1, 0.99.0, 0.96.1.1, 0.94.17
>
> Attachments: HBASE-10481-v1.patch
>
>
> [~jmhsieh] found an issue when doing a diff between a pre-0.96 branch and a 
> post-0.96 branch.
> Typically, if the pre-0.96 branch is specified first and the post-0.96 
> branch second, the existing logic handles it.
> When the branches are given in the reverse order, that case is not handled 
> properly.
> The fix should address this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10500) Some tools OOM when BucketCache is enabled

2014-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898605#comment-13898605
 ] 

Hadoop QA commented on HBASE-10500:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12628364/HBASE-10500.01.patch
  against trunk revision .
  ATTACHMENT ID: 12628364

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8664//console

This message is automatically generated.

> Some tools OOM when BucketCache is enabled
> --
>
> Key: HBASE-10500
> URL: https://issues.apache.org/jira/browse/HBASE-10500
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.96.0, 0.99.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: HBASE-10500.00.patch, HBASE-10500.01.patch
>
>
> Running {{hbck --repair}} or {{LoadIncrementalHFiles}} when BucketCache is 
> enabled in offheap mode can cause OOM. This is apparently because 
> {{bin/hbase}} does not include $HBASE_REGIONSERVER_OPTS for these tools. This 
> results in HRegion or HFileReaders initialized with a CacheConfig that 
> doesn't have the necessary Direct Memory.
> Possible solutions include:
>  - disable blockcache in the config used by hbck when running its repairs
>  - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments
> I'm leaning toward the former because it's possible that hbck is run on a 
> host with a different hardware profile than the RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898606#comment-13898606
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in hbase-0.96-hadoop2 #200 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/200/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(Ted Yu and LarsH) (larsh: rev 1567460)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10498) Add new APIs to load balancer interface

2014-02-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898603#comment-13898603
 ] 

Enis Soztutar commented on HBASE-10498:
---

bq. Can you not add a new attribute for the Stochastic LB to consider – 
colocation – and weight it above others rather than add API?
This is kind of the opposite of what we do for not co-locating the region 
replicas. The patch at HBASE-10351 adds "soft" constraints for ensuring that 
the replicas are not co-located. I highly suggest taking a look there. However, 
for secondary indexing, co-locating regions should be a "hard constraint" I 
imagine. 

Still it should be possible to implement hard constraints like co-location 
inside the core LB's, but implement the logic of deciding which regions to 
co-locate as a pluggable layer. 

> Add new APIs to load balancer interface
> ---
>
> Key: HBASE-10498
> URL: https://issues.apache.org/jira/browse/HBASE-10498
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.1, 0.99.0
>
>
> If a custom load balancer is required to maintain region locations and their 
> corresponding servers,
> we can capture this information whenever we run a balancer algorithm before 
> assignment (like random or retain).
> But during master startup we will not call any balancer algorithm if a region 
> is already assigned.
> During a split we also open the child regions first on the RS and then notify 
> the master through zookeeper,
> so information about split regions cannot be captured by the balancer.
> Since balancer has access to master we can get the information from online 
> regions or region plan data structures in AM.
> But in some use cases we cannot rely on this information (mainly to maintain 
> colocation of the regions of two tables). 
> So it's better to add some APIs to the load balancer that notify it when a 
> *region is online or offline*.
> These APIs help a lot in maintaining *region colocation through a custom load 
> balancer*, which is very important for secondary indexing. 
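The proposed callbacks can be sketched as a small observer on the balancer. This is a hypothetical Python model (method and region names are illustrative; the actual API would live on the Java LoadBalancer interface): the master notifies the balancer when a region comes online or goes offline, so a colocation-aware balancer can track region-to-server placement itself, including split daughters that bypass the normal assignment path.

```python
class ColocationAwareBalancer:
    def __init__(self):
        self.placement = {}  # region name -> server name

    def region_online(self, region, server):
        # Called by the master whenever a region is opened on a server
        self.placement[region] = server

    def region_offline(self, region):
        # Called by the master whenever a region is closed/unassigned
        self.placement.pop(region, None)

    def colocated(self, region_a, region_b):
        # e.g. keep an index-table region with its data-table region
        return self.placement.get(region_a) == self.placement.get(region_b)

b = ColocationAwareBalancer()
b.region_online("data,aaa", "rs1")
b.region_online("index,aaa", "rs1")
print(b.colocated("data,aaa", "index,aaa"))  # True
```

With these hooks, the balancer no longer has to reverse-engineer placement from the master's online-region or region-plan structures.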



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898600#comment-13898600
 ] 

Hudson commented on HBASE-10493:


SUCCESS: Integrated in HBase-0.98 #149 (See 
[https://builds.apache.org/job/HBase-0.98/149/])
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567402)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898565#comment-13898565
 ] 

Hudson commented on HBASE-10493:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #137 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/137/])
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567402)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898561#comment-13898561
 ] 

Hudson commented on HBASE-10493:


SUCCESS: Integrated in HBase-TRUNK #4910 (See 
[https://builds.apache.org/job/HBase-TRUNK/4910/])
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567403)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase, which always 
> returns ReturnCode.INCLUDE.
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> the row key.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898557#comment-13898557
 ] 

Hudson commented on HBASE-10485:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #87 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/87/])
HBASE-10485 PrefixFilter#filterKeyValue() should perform filtering on row key 
(tedyu: rev 1566912)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
HBASE-10485 Revert to address more review comments (tedyu: rev 1566869)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterListAdditional.java


> PrefixFilter#filterKeyValue() should perform filtering on row key
> -
>
> Key: HBASE-10485
> URL: https://issues.apache.org/jira/browse/HBASE-10485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10485-0.94-v2.txt, 10485-0.94.txt, 10485-trunk-v2.txt, 
> 10485-trunk.addendum, 10485-v1.txt
>
>
> Niels reported an issue under the thread 'Trouble writing custom filter for 
> use in FilterList' where his custom filter, used in a FilterList along with 
> PrefixFilter, produced unexpected results.
> His test can be found here:
> https://github.com/nielsbasjes/HBase-filter-problem
> This is due to PrefixFilter#filterKeyValue() using 
> FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
> When FilterList.Operator.MUST_PASS_ONE is specified, 
> FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
> key prefix doesn't match, while the other filter's filterKeyValue() returns 
> ReturnCode.NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898560#comment-13898560
 ] 

Hudson commented on HBASE-10487:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #87 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/87/])
HBASE-10487 Avoid allocating new KeyValue and according bytes-copying for 
appended kvs which don't have existing values (Honghua) (tedyu: rev 1566981)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Avoid allocating new KeyValue and according bytes-copying for appended kvs 
> which don't have existing values
> ---
>
> Key: HBASE-10487
> URL: https://issues.apache.org/jira/browse/HBASE-10487
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.99.0
>
> Attachments: HBASE-10487-trunk_v1.patch
>
>
> In HRegion.append, new KeyValues are allocated, and the corresponding bytes 
> are copied, regardless of whether an existing kv is present for the appended 
> cells. We can improve this by avoiding the allocation of a new KeyValue and 
> the corresponding byte-copying for kvs which don't have existing (old) 
> values, reusing the passed-in kv and only updating its timestamp to 'now' 
> (its original timestamp is latest, so it can be updated).
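The optimization above can be modeled in a few lines. This is a hypothetical Python sketch, not the HRegion.append code: only allocate-and-copy when an existing value must be merged; otherwise reuse the passed-in cell and just refresh its timestamp.

```python
import time

def append_cell(existing_value, new_cell):
    if existing_value is None:
        # No old value: reuse the passed-in cell, only bump its timestamp
        new_cell["ts"] = int(time.time() * 1000)
        return new_cell  # same object, no allocation or byte copy
    # Old value present: allocate a merged cell (the unavoidable copy)
    return {"value": existing_value + new_cell["value"],
            "ts": int(time.time() * 1000)}

cell = {"value": b"abc", "ts": 0}
out = append_cell(None, cell)
print(out is cell)  # True -- the common no-old-value case avoids the copy
```

In the real patch the saving is the KeyValue allocation and byte array copy per appended cell with no prior value.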



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10493) InclusiveStopFilter#filterKeyValue() should perform filtering on row key

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898558#comment-13898558
 ] 

Hudson commented on HBASE-10493:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #87 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/87/])
HBASE-10493 InclusiveStopFilter#filterKeyValue() should perform filtering on 
row key (tedyu: rev 1567403)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> InclusiveStopFilter#filterKeyValue() should perform filtering on row key
> 
>
> Key: HBASE-10493
> URL: https://issues.apache.org/jira/browse/HBASE-10493
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: 10493-0.94.txt, 10493-v1.txt, 10493-v2.txt
>
>
> InclusiveStopFilter inherits filterKeyValue() from FilterBase which always 
> returns ReturnCode.INCLUDE
> InclusiveStopFilter#filterKeyValue() should be consistent with filtering on 
> row key.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8751) Enable peer cluster to choose/change the ColumnFamilies/Tables it really want to replicate from a source cluster

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898559#comment-13898559
 ] 

Hudson commented on HBASE-8751:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #87 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/87/])
HBASE-8751 Enable peer cluster to choose/change the ColumnFamilies/Tables it
   really want to replicate from a source cluster (Feng Honghua via JD) 
(jdcryans: rev 1566944)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeers.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java
* /hbase/trunk/hbase-shell/src/main/ruby/hbase/replication_admin.rb
* /hbase/trunk/hbase-shell/src/main/ruby/shell.rb
* /hbase/trunk/hbase-shell/src/main/ruby/shell/commands/add_peer.rb
* /hbase/trunk/hbase-shell/src/main/ruby/shell/commands/list_peers.rb
* /hbase/trunk/hbase-shell/src/main/ruby/shell/commands/set_peer_tableCFs.rb
* /hbase/trunk/hbase-shell/src/main/ruby/shell/commands/show_peer_tableCFs.rb


> Enable peer cluster to choose/change the ColumnFamilies/Tables it really want 
> to replicate from a source cluster
> 
>
> Key: HBASE-8751
> URL: https://issues.apache.org/jira/browse/HBASE-8751
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-8751-0.94-V0.patch, HBASE-8751-0.94-v1.patch, 
> HBASE-8751-trunk_v0.patch, HBASE-8751-trunk_v1.patch, 
> HBASE-8751-trunk_v2.patch, HBASE-8751-trunk_v3.patch
>
>
> Consider scenarios (all cf are with replication-scope=1):
> 1) cluster S has 3 tables, table A has cfA,cfB, table B has cfX,cfY, table C 
> has cf1,cf2.
> 2) cluster X wants to replicate table A : cfA, table B : cfX and table C from 
> cluster S.
> 3) cluster Y wants to replicate table B : cfY, table C : cf2 from cluster S.
> Current replication implementation can't achieve this since it'll push the 
> data of all the replicatable column-families from cluster S to all its peers, 
> X/Y in this scenario.
> This improvement provides a fine-grained replication scheme which enables a 
> peer cluster to choose the column-families/tables it really wants from the 
> source cluster:
> A). Set the table:cf-list for a peer when addPeer:
>   hbase-shell> add_peer '3', "zk:1100:/hbase", "table1; table2:cf1,cf2; 
> table3:cf2"
> B). View the table:cf-list config for a peer using show_peer_tableCFs:
>   hbase-shell> show_peer_tableCFs "1"
> C). Change/set the table:cf-list for a peer using set_peer_tableCFs:
>   hbase-shell> set_peer_tableCFs '2', "table1:cfX; table2:cf1; table3:cf1,cf2"
> In this scheme, replication-scope=1 only means a column-family CAN be 
> replicated to other clusters; only the 'table:cf-list' determines 
> WHICH cf/table will actually be replicated to a specific peer.
> To provide backward compatibility, an empty 'table:cf-list' will replicate 
> all replicatable cf/tables. (This means we don't allow a peer which 
> replicates nothing from a source cluster; we think that's reasonable: if 
> replicating nothing, why bother adding a peer?)
> This improvement addresses the exact problem raised by the first FAQ in 
> "http://hbase.apache.org/replication.html":
>   "GLOBAL means replicate? Any provision to replicate only to cluster X and 
> not to cluster Y? or is that for later?
>   Yes, this is for much later."
> I also noticed somebody mentioned making "replication-scope" an integer 
> rather than a boolean for such fine-grained replication purposes, but I think 
> extending "replication-scope" can't achieve the same replication granularity 
> and flexibility as the per-peer replication configurations above.
> This improvement has been running smoothly in our production clusters 
> (Xiaomi) for several months.
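The "table:cf-list" strings shown in the shell examples above (e.g. "table1; table2:cf1,cf2; table3:cf2") have a simple grammar. The parser below is a hypothetical Python sketch of that format only, not the ReplicationAdmin implementation; an entry with no ':cf' part means all column families of that table.

```python
def parse_table_cfs(spec):
    """Return {table: set(cfs) or None}; None means all CFs of the table."""
    result = {}
    # Entries are ';'-separated; each is "table" or "table:cf1,cf2,..."
    for entry in filter(None, (e.strip() for e in spec.split(";"))):
        table, _, cfs = entry.partition(":")
        result[table.strip()] = (
            {cf.strip() for cf in cfs.split(",")} if cfs else None)
    return result

cfs = parse_table_cfs("table1; table2:cf1,cf2; table3:cf2")
assert cfs["table1"] is None            # whole table replicates
assert cfs["table2"] == {"cf1", "cf2"}  # only these CFs
assert cfs["table3"] == {"cf2"}
```

A source-side ReplicationSource can then consult this map per peer before shipping edits, dropping edits for tables/CFs the peer did not ask for.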



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10413) Tablesplit.getLength returns 0

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898556#comment-13898556
 ] 

Hudson commented on HBASE-10413:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #87 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/87/])
HBASE-10413 addendum makes split length readable (tedyu: rev 1567232)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
HBASE-10413 Tablesplit.getLength returns 0 (Lukas Nalezenec) (tedyu: rev 
1566768)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSplit.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSizeCalculator.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSplit.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSizeCalculator.java


> Tablesplit.getLength returns 0
> --
>
> Key: HBASE-10413
> URL: https://issues.apache.org/jira/browse/HBASE-10413
> Project: HBase
>  Issue Type: Bug
>  Components: Client, mapreduce
>Affects Versions: 0.96.1.1
>Reporter: Lukas Nalezenec
>Assignee: Lukas Nalezenec
> Fix For: 0.98.1, 0.99.0
>
> Attachments: 10413-7.patch, 10413.addendum, HBASE-10413-2.patch, 
> HBASE-10413-3.patch, HBASE-10413-4.patch, HBASE-10413-5.patch, 
> HBASE-10413-6.patch, HBASE-10413.patch
>
>
> InputSplits should be sorted by length, but TableSplit does not contain a 
> real getLength implementation:
>   @Override
>   public long getLength() {
>     // Not clear how to obtain this... seems to be used only for sorting splits
>     return 0;
>   }
> This is causing us problems with scheduling - we have jobs that are supposed 
> to finish in a limited time, but they often get stuck in the last mapper 
> working on a large region.
> Can we implement this method?
> What is the best way?
> We were thinking about estimating the size from the size of the files on HDFS.
> We would like to get a Scanner from the TableSplit, use startRow, stopRow and 
> column families to get the corresponding region, then compute the HDFS size 
> for the given region and column family. 
> Update:
> This ticket was about production issue - I talked with guy who worked on this 
> and he said our production issue was probably not directly caused by 
> getLength() returning 0. 
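The estimation described above can be sketched as a split that looks up a pre-computed map of region sizes instead of returning a constant 0. This is a hypothetical illustration of the idea, not the actual HBASE-10413 patch; the class and method names below are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: estimate a split's length from pre-computed region
// sizes (e.g. gathered from HDFS file sizes) so the framework can sort
// splits by size instead of treating them all as length 0.
public class SplitLengthSketch {

    // Maps a region's start key -> estimated on-disk size in bytes.
    private final Map<String, Long> regionSizes = new HashMap<>();

    public void putRegionSize(String startKey, long bytes) {
        regionSizes.put(startKey, bytes);
    }

    // Replacement for the stubbed getLength(): fall back to 0 only when
    // no estimate is known for the region backing this split.
    public long getLength(String startKey) {
        return regionSizes.getOrDefault(startKey, 0L);
    }

    public static void main(String[] args) {
        SplitLengthSketch calc = new SplitLengthSketch();
        calc.putRegionSize("row-a", 128L * 1024 * 1024); // 128 MB region
        System.out.println(calc.getLength("row-a")); // prints 134217728
        System.out.println(calc.getLength("row-z")); // unknown region, prints 0
    }
}
```

The fallback to 0 matters: getLength() is only used for sorting, so an unknown region degrades to the old behavior rather than failing the job.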



--
This message was sent by Atlassian JIRA (v6.1.5#6160)


[jira] [Commented] (HBASE-9501) Provide throttling for replication

2014-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898555#comment-13898555
 ] 

Hudson commented on HBASE-9501:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #87 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/87/])
HBASE-9501 Provide throttling for replication (Feng Honghua via JD) (jdcryans: rev 1566923)
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationThrottler.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationThrottler.java


> Provide throttling for replication
> --
>
> Key: HBASE-9501
> URL: https://issues.apache.org/jira/browse/HBASE-9501
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9501-trunk_v0.patch, HBASE-9501-trunk_v1.patch, 
> HBASE-9501-trunk_v2.patch, HBASE-9501-trunk_v3.patch, 
> HBASE-9501-trunk_v4.patch
>
>
> When we disable a peer for a period of time and then enable it, the 
> ReplicationSource in the master cluster will push the hlog entries accumulated 
> during the disabled interval to the re-enabled peer cluster at full speed.
> If the bandwidth of the two clusters is shared by different applications, a 
> full-speed push for replication can use all the bandwidth and severely 
> influence other applications.
> There are two configs, replication.source.size.capacity and 
> replication.source.nb.capacity, to tweak the batch size each push delivers, 
> but decreasing them only increases the number of pushes, and all these pushes 
> still proceed continuously without pause - no obvious help for bandwidth 
> throttling.
> From a bandwidth-sharing and push-speed perspective, it's more reasonable to 
> provide a bandwidth upper limit for each peer push channel; within that limit, 
> a peer can choose a big batch size for each push for bandwidth efficiency.
> Any opinion?
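The per-channel upper limit proposed above can be sketched as a cycle-based byte budget: each push cycle gets a quota derived from the bandwidth cap, and a batch that would exceed the quota waits for the next cycle. This is a hypothetical illustration of the idea, not the ReplicationThrottler that was committed; all names below are made up.

```java
// Hypothetical sketch of per-peer bandwidth throttling: each push cycle gets
// a byte budget derived from the configured cap; once a batch would exceed
// the budget, the pusher is told to wait for the next cycle.
public class ThrottleSketch {
    private final long bytesPerCycle; // byte budget for one push cycle
    private long usedInCycle = 0;     // bytes already shipped this cycle

    public ThrottleSketch(long bandwidthBytesPerSec, long cycleMillis) {
        this.bytesPerCycle = bandwidthBytesPerSec * cycleMillis / 1000;
    }

    // True when the next batch still fits within this cycle's budget.
    public boolean canPush(long batchBytes) {
        return usedInCycle + batchBytes <= bytesPerCycle;
    }

    // Record a shipped batch against the current cycle's budget.
    public void recordPush(long batchBytes) {
        usedInCycle += batchBytes;
    }

    // Called at each cycle boundary to refresh the budget.
    public void startNewCycle() {
        usedInCycle = 0;
    }

    public static void main(String[] args) {
        // 1 MB/s cap with 100 ms cycles -> roughly 100 KB per cycle.
        ThrottleSketch t = new ThrottleSketch(1024 * 1024, 100);
        t.recordPush(60 * 1024);
        System.out.println(t.canPush(40 * 1024)); // prints true
        System.out.println(t.canPush(80 * 1024)); // prints false
        t.startNewCycle();
        System.out.println(t.canPush(80 * 1024)); // prints true
    }
}
```

A real throttler also has to let through a single batch larger than the whole budget (otherwise it would stall forever); the sketch omits that edge case for brevity.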





[jira] [Commented] (HBASE-10482) ReplicationSyncUp doesn't clean up its ZK, needed for tests

2014-02-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898553#comment-13898553
 ] 

Lars Hofhansl commented on HBASE-10482:
---

Should we close this now?

> ReplicationSyncUp doesn't clean up its ZK, needed for tests
> ---
>
> Key: HBASE-10482
> URL: https://issues.apache.org/jira/browse/HBASE-10482
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.96.1, 0.94.16
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
> Fix For: 0.98.1, 0.99.0, 0.94.17
>
> Attachments: HBASE-10249.patch
>
>
> TestReplicationSyncUpTool failed again:
> https://builds.apache.org/job/HBase-TRUNK/4895/testReport/junit/org.apache.hadoop.hbase.replication/TestReplicationSyncUpTool/testSyncUpTool/
> It's not super obvious why only one of the two tables is replicated (the test 
> could use some more logging), but I understand it this way:
> The first ReplicationSyncUp gets started and for some reason it cannot 
> replicate the data:
> {noformat}
> 2014-02-06 21:32:19,811 INFO  [Thread-1372] 
> regionserver.ReplicationSourceManager(203): Current list of replicators: 
> [1391722339091.SyncUpTool.replication.org,1234,1, 
> quirinus.apache.org,37045,1391722237951, 
> quirinus.apache.org,33502,1391722238125] other RSs: []
> 2014-02-06 21:32:19,811 INFO  [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(231): Replicating 
> db42e7fc-7f29-4038-9292-d85ea8b9994b -> 783c0ab2-4ff9-4dc0-bb38-86bf31d1d817
> 2014-02-06 21:32:19,892 TRACE [Thread-1372.replicationSource,2] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 1
> 2014-02-06 21:32:19,911 TRACE [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 1
> 2014-02-06 21:32:20,094 TRACE [Thread-1372.replicationSource,2] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 2
> ...
> 2014-02-06 21:32:23,414 TRACE [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 8
> 2014-02-06 21:32:23,673 INFO  [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(169): Moving 
> quirinus.apache.org,37045,1391722237951's hlogs to my queue
> 2014-02-06 21:32:23,768 DEBUG [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(396): Creating 
> quirinus.apache.org%2C37045%2C1391722237951.1391722243779 with data 10803
> 2014-02-06 21:32:23,842 DEBUG [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(396): Creating 
> quirinus.apache.org%2C37045%2C1391722237951.1391722243779 with data 10803
> 2014-02-06 21:32:24,297 TRACE [Thread-1372.replicationSource,2] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 9
> 2014-02-06 21:32:24,314 TRACE [Thread-1372.replicationSource,1] 
> regionserver.ReplicationSource(596): No log to process, sleeping 100 times 9
> {noformat}
> Finally it gives up:
> {noformat}
> 2014-02-06 21:32:30,873 DEBUG [Thread-1372] 
> replication.TestReplicationSyncUpTool(323): SyncUpAfterDelete failed at retry 
> = 0, with rowCount_ht1TargetPeer1 =100 and rowCount_ht2TargetAtPeer1 =200
> {noformat}
> The syncUp tool has an ID you can follow (grep for 
> syncupReplication1391722338885 or just the timestamp), and you can see it 
> still doing things after that. The reason is that the tool closes the 
> ReplicationSourceManager but not the ZK connection, so events _still_ come in 
> and NodeFailoverWorker _still_ tries to recover queues, but then there is 
> nothing left to process them.
> Later in the logs you can see:
> {noformat}
> 2014-02-06 21:32:37,381 INFO  [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(169): Moving 
> quirinus.apache.org,33502,1391722238125's hlogs to my queue
> 2014-02-06 21:32:37,567 INFO  [ReplicationExecutor-0] 
> replication.ReplicationQueuesZKImpl(239): Won't transfer the queue, another 
> RS took care of it because of: KeeperErrorCode = NoNode for 
> /1/replication/rs/quirinus.apache.org,33502,1391722238125/lock
> {noformat}
> There shouldn't be any racing, but by now someone has already moved 
> "quirinus.apache.org,33502,1391722238125" away.
> FWIW I can't even make the test fail on my machine so I'm not 100% sure 
> closing the ZK connection fixes the issue, but at least it's the right thing 
> to do.
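The cleanup ordering the report calls for (release the replication machinery AND the ZK connection, so no watcher events arrive after the sources are gone) can be sketched as below. This is a hypothetical stand-in for the real classes, not the actual fix; all names are made up.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the shutdown ordering the report calls for: the
// tool must close BOTH the source manager and the ZK connection, otherwise
// watcher events keep arriving with nothing left to process them.
public class SyncUpCleanupSketch {
    private final List<String> shutdownLog = new ArrayList<>();
    private boolean zkOpen = true;

    void closeSources() { shutdownLog.add("sources"); }
    void closeZk() { zkOpen = false; shutdownLog.add("zk"); }

    // The buggy path closed only the source manager; the fix closes the ZK
    // connection as well, in a finally block so it runs even on failure.
    void shutdown() {
        try {
            closeSources();
        } finally {
            closeZk();
        }
    }

    // While the ZK connection stays open, watcher events can still fire.
    boolean eventsStillPossible() { return zkOpen; }

    List<String> getShutdownLog() { return shutdownLog; }

    public static void main(String[] args) {
        SyncUpCleanupSketch tool = new SyncUpCleanupSketch();
        tool.shutdown();
        System.out.println(tool.eventsStillPossible()); // prints false
        System.out.println(tool.getShutdownLog());      // prints [sources, zk]
    }
}
```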





[jira] [Updated] (HBASE-9830) Backport HBASE-9605 to 0.94

2014-02-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9830:
-

   Resolution: Won't Fix
Fix Version/s: (was: 0.94.17)
   Status: Resolved  (was: Patch Available)

There's too much confusion here. I am removing this from 0.94.
Let me know if this is important; we can bring it back.

> Backport HBASE-9605 to 0.94
> ---
>
> Key: HBASE-9830
> URL: https://issues.apache.org/jira/browse/HBASE-9830
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Attachments: HBASE-9830-0.94-v1.patch
>
>
> Backport HBASE-9605 which is about "Allow AggregationClient to skip 
> specifying column family for row count aggregate"




