[jira] [Commented] (HBASE-10877) HBase non-retriable exception list should be expanded

2016-01-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121920#comment-15121920
 ] 

Nick Dimiduk commented on HBASE-10877:
--

Could well be different. I'm not thoroughly versed in Spark, so I don't know 
how much help I can be. Please bring your question to the user mailing list and 
provide the full stack trace. We'll get you sorted out.

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: 

[jira] [Updated] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15173:
---
Attachment: (was: HBASE-15173.v3.patch)

> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is a follow-up to HBASE-15132.
> The master currently sends the mergeRegions RPC to the region server as user 'hbase'.
> This issue is to execute the mergeRegions RPC call as the request user.
> See the tail of HBASE-15132 for related discussion.
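A minimal sketch of the general proxy-user pattern this change points at: run the
RPC inside Hadoop's UserGroupInformation.doAs so it carries the requesting user's
identity instead of the 'hbase' service user. This is illustrative only, not the
attached patch; the MergeCall type and the method names below are hypothetical.
{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class MergeAsRequestUserSketch {
  // Hypothetical stand-in for issuing the mergeRegions RPC to the region server.
  interface MergeCall { void run() throws Exception; }

  // Run the merge call under the identity of the request user rather than the
  // 'hbase' login user, by proxying through UserGroupInformation.doAs.
  static void mergeAsRequestUser(String requestUser, MergeCall call) throws Exception {
    UserGroupInformation proxy =
        UserGroupInformation.createProxyUser(requestUser, UserGroupInformation.getLoginUser());
    proxy.doAs((PrivilegedExceptionAction<Void>) () -> {
      call.run(); // the mergeRegions RPC would be issued here
      return null;
    });
  }
}
{code}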



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121674#comment-15121674
 ] 

Hudson commented on HBASE-15146:


FAILURE: Integrated in HBase-Trunk_matrix #664 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/664/])
HBASE-15146 Don't block on Reader threads queueing to a scheduler queue 
(eclark: rev 138b754671d51d3f494adc250ab0cb9e085c858a)
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RWQueueRpcExecutor.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/exceptions/TestClientExceptionsUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BalancedQueueRpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcScheduler.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/FifoRpcScheduler.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java


> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new rpc scheduler can have lots of 
> different queues. Those queues have different capacity limits. Currently the 
> dispatch method can block trying to add to the blocking queue in any of the 
> schedulers.
> This causes readers to block, TCP ACKs are delayed, and everything slows down.
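For context, a minimal sketch of the non-blocking dispatch idea (not the actual
RpcExecutor code; the class and names here are illustrative): offer() returns
immediately when the queue is full, so the reader thread can fail the call back to
the client, which is the role CallQueueTooBigException plays in the file list above.
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class NonBlockingDispatchSketch {
  private final BlockingQueue<Runnable> callQueue = new LinkedBlockingQueue<>(100);

  // Dispatch without blocking the reader/epoll thread: offer() returns false
  // immediately when the queue is full, instead of stalling like put() would.
  public boolean dispatch(Runnable call) {
    boolean queued = callQueue.offer(call);
    if (!queued) {
      // Roughly where HBase would send a CallQueueTooBigException back to the
      // client so it can back off, rather than delaying TCP ACKs for everyone.
      System.err.println("Call queue full; rejecting call");
    }
    return queued;
  }
}
{code}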



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15173:
---
Attachment: HBASE-15173.v3.patch

> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is a follow-up to HBASE-15132.
> The master currently sends the mergeRegions RPC to the region server as user 'hbase'.
> This issue is to execute the mergeRegions RPC call as the request user.
> See the tail of HBASE-15132 for related discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15173:
---
Attachment: (was: HBASE-15173.v3.patch)

> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is a follow-up to HBASE-15132.
> The master currently sends the mergeRegions RPC to the region server as user 'hbase'.
> This issue is to execute the mergeRegions RPC call as the request user.
> See the tail of HBASE-15132 for related discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13590) TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey

2016-01-28 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13590:
-
Attachment: HBASE-13590.branch-1.1.patch

> TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey
> ---
>
> Key: HBASE-13590
> URL: https://issues.apache.org/jira/browse/HBASE-13590
> Project: HBase
>  Issue Type: Test
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4
>
> Attachments: HBASE-13590.branch-1.1.patch, 
> HBASE-13590.branch-1.1.patch, HBASE-13590.branch-1.1.patch, 
> HBASE-13590.branch-1.patch, HBASE-13590.branch-1.v2.patch, 
> testEnableTableHandler_branch-1.1.log.zip, 
> testEnableTableHandler_branch-1.log.zip
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 429, 431, 439.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13590) TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey

2016-01-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121846#comment-15121846
 ] 

Nick Dimiduk commented on HBASE-13590:
--

bq. which seems like an env issue

Indeed. Ran on H4, not H2 (the known flakey host). Looking at its [build 
history|https://builds.apache.org/computer/H4/builds] I don't see many passing 
hbase unit test runs. Any thoughts [~busbey] [~stack]? Can we trust this host?

{{TestEnableTableHandler}} passed even in the above failed run. Let me post 
once more. If it passes again for both JDKs, I say commit.

> TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey
> ---
>
> Key: HBASE-13590
> URL: https://issues.apache.org/jira/browse/HBASE-13590
> Project: HBase
>  Issue Type: Test
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4
>
> Attachments: HBASE-13590.branch-1.1.patch, 
> HBASE-13590.branch-1.1.patch, HBASE-13590.branch-1.patch, 
> HBASE-13590.branch-1.v2.patch, testEnableTableHandler_branch-1.1.log.zip, 
> testEnableTableHandler_branch-1.log.zip
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 429, 431, 439.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13590) TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey

2016-01-28 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121862#comment-15121862
 ] 

Sean Busbey commented on HBASE-13590:
-

It says in the JIRA comment from Yetus that there was a timed-out test:

{quote}
| JDK v1.8.0_66 Timed out junit tests|  
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
{quote}

Earlier, this would have been reported by the zombie detector.

> TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey
> ---
>
> Key: HBASE-13590
> URL: https://issues.apache.org/jira/browse/HBASE-13590
> Project: HBase
>  Issue Type: Test
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4
>
> Attachments: HBASE-13590.branch-1.1.patch, 
> HBASE-13590.branch-1.1.patch, HBASE-13590.branch-1.1.patch, 
> HBASE-13590.branch-1.patch, HBASE-13590.branch-1.v2.patch, 
> testEnableTableHandler_branch-1.1.log.zip, 
> testEnableTableHandler_branch-1.log.zip
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 429, 431, 439.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15183) Correct discrepancies in CHANGES.txt on branch-1.1

2016-01-28 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15183:
-
Attachment: commits-1.1.3.txt

Attaching my findings from the 1.1.3 audit so I don't accidentally misplace them.

> Correct discrepancies in CHANGES.txt on branch-1.1
> --
>
> Key: HBASE-15183
> URL: https://issues.apache.org/jira/browse/HBASE-15183
> Project: HBase
>  Issue Type: Test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.4
>
> Attachments: commits-1.1.3.txt
>
>
> While drafting the announcement email for 1.1.3 I noticed some discrepancies 
> between fixVersions in JIRA and what was in the git commit history. I did an 
> audit and issue cleanup in JIRA for 1.1.3, so at least that release tag was 
> corrected. This task is to go back and do the same for 1.1.1 and 1.1.2, and 
> to both fix JIRA versions and update the CHANGES.txt file. Given our release 
> style, 1.1.0 is a less obvious task, so leaving that version number be for 
> now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15183) Correct discrepancies in CHANGES.txt on branch-1.1

2016-01-28 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15183:
-
Issue Type: Task  (was: Test)

> Correct discrepancies in CHANGES.txt on branch-1.1
> --
>
> Key: HBASE-15183
> URL: https://issues.apache.org/jira/browse/HBASE-15183
> Project: HBase
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.4
>
> Attachments: commits-1.1.3.txt
>
>
> While drafting the announcement email for 1.1.3 I noticed some discrepancies 
> between fixVersions in JIRA and what was in the git commit history. I did an 
> audit and issue cleanup in JIRA for 1.1.3, so at least that release tag was 
> corrected. This task is to go back and do the same for 1.1.1 and 1.1.2, and 
> to both fix JIRA versions and update the CHANGES.txt file. Given our release 
> style, 1.1.0 is a less obvious task, so leaving that version number be for 
> now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13437) ThriftServer leaks ZooKeeper connections

2016-01-28 Thread Khaled Hammouda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121659#comment-15121659
 ] 

Khaled Hammouda commented on HBASE-13437:
-

For reference, this bug impacts the REST server as well, since it also uses the 
`ConnectionCache` class.

> ThriftServer leaks ZooKeeper connections
> 
>
> Key: HBASE-13437
> URL: https://issues.apache.org/jira/browse/HBASE-13437
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.8
>Reporter: Winger Pun
>Assignee: Albert Strasheim
> Fix For: 2.0.0, 1.1.0, 0.98.13, 1.0.2
>
> Attachments: HBASE-13437_1.patch, HBASE-13437_1.patch, 
> hbase-13437-fix.patch
>
>
> The HBase ThriftServer caches ZooKeeper connections in memory using 
> org.apache.hadoop.hbase.util.ConnectionCache. This class has a cleanup chore 
> that closes connections that have been idle too long (default is 10 min). 
> But the timedOut method, which tests whether a connection has exceeded 
> maxIdleTime, always returns false, so the ZooKeeper connections are never 
> released. If we send a request to the ThriftServer every maxIdleTime, the 
> ThriftServer will soon hold thousands of ZooKeeper connections.
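A minimal sketch of the idle-timeout check the description says is broken. The
class and field names are hypothetical, not the actual ConnectionCache code; the
point is only that timedOut must compare idle time against maxIdleTime instead of
always returning false.
{code}
public class IdleTimeoutSketch {
  // Hypothetical per-connection record; the real ConnectionCache keeps more state.
  static class CachedConnection {
    volatile long lastAccessTime = System.currentTimeMillis();
  }

  // A connection has timed out once it has been idle longer than maxIdleTime.
  // A timedOut() that always returns false, as described above, means idle
  // ZooKeeper connections are never released by the cleanup chore.
  static boolean timedOut(CachedConnection conn, long maxIdleTimeMs) {
    long idleMs = System.currentTimeMillis() - conn.lastAccessTime;
    return idleMs > maxIdleTimeMs;
  }
}
{code}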



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121912#comment-15121912
 ] 

Ted Yu commented on HBASE-15173:


From the Java 8 test output, there were dozens of errors in the following form:
{code}
java.io.IOException: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: Error while running command to get file permissions 
: ExitCodeException exitCode=127: /bin/ls: error while loading shared 
libraries: libselinux.so.1: failed to map segment from shared object: 
Permission denied
{code}

> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is a follow-up to HBASE-15132.
> The master currently sends the mergeRegions RPC to the region server as user 'hbase'.
> This issue is to execute the mergeRegions RPC call as the request user.
> See the tail of HBASE-15132 for related discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15135) Add metrics for storefile age

2016-01-28 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121684#comment-15121684
 ] 

Sean Busbey commented on HBASE-15135:
-

It's this section:

{code}
## from github
curl -L https://api.github.com/repos/apache/yetus/tarball/HEAD > 
yetus.tar.gz
tar xvpf yetus.tar.gz
{code}

It must have been the GitHub outage yesterday. It looks like curl won't fail on 
HTTP 4xx or 5xx unless we tell it to via {{--fail}}. I'll update the job to do 
this.

> Add metrics for storefile age
> -
>
> Key: HBASE-15135
> URL: https://issues.apache.org/jira/browse/HBASE-15135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Attachments: HBASE-15135-v2.patch, HBASE-15135-v3.patch, 
> HBASE-15135-v4.patch, HBASE-15135.patch
>
>
> In order to make sure that compactions are fully up to date it would be nice 
> to have metrics on:
> * Max storefile age
> * Min storefile age
> * Average storefile age
> * Number of reference files
> If we could have those metrics per region and per regionserver it would be 
> awesome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123028#comment-15123028
 ] 

Hadoop QA commented on HBASE-9393:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
1s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 51s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 42s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 33s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 23s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 13s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 4s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 57s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 48s 
{color} | {color:red} Patch causes 24 errors with Hadoop v2.6.3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 

[jira] [Commented] (HBASE-14198) Eclipse project generation is broken in master

2016-01-28 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123031#comment-15123031
 ] 

Lars Hofhansl commented on HBASE-14198:
---

Just ran into this. What do you guys do these days? Just import as a Maven 
project? That has worked for me.

> Eclipse project generation is broken in master
> --
>
> Key: HBASE-14198
> URL: https://issues.apache.org/jira/browse/HBASE-14198
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>
> After running 
> mvn eclipse:eclipse I tried to import projects into Eclipse (Luna) and got 
> multiple build errors, similar to:
> {code}
> Cannot nest output folder 'hbase-thrift/target/test-classes/META-INF' inside 
> output folder 'hbase-thrift/target/test-classes' hbase-thrift
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15177) Reduce garbage created under high load

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123080#comment-15123080
 ] 

ramkrishna.s.vasudevan commented on HBASE-15177:


Just a doubt 
{code}
buf.position(offset); // CodedInputStream may have consumed more than it should 
have
{code}
Will this be needed? Because the CIS would have operated on the buf array only, 
right?

> Reduce garbage created under high load
> --
>
> Key: HBASE-15177
> URL: https://issues.apache.org/jira/browse/HBASE-15177
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0
>
> Attachments: Screen Shot 2016-01-26 at 10.03.48 PM.png, Screen Shot 
> 2016-01-26 at 10.03.56 PM.png, Screen Shot 2016-01-26 at 10.06.16 PM.png, 
> Screen Shot 2016-01-26 at 10.15.15 PM.png, hbase-15177_v0.patch
>
>
> I have been doing some profiling of the garbage being created. The idea was 
> to follow up on HBASE-14490 and experiment with offheap IPC byte buffers and 
> byte buffer re-use. However, without changing the IPC byte buffers for now, 
> there are a couple of (easy) improvements that I've identified from 
> profiling: 
> 1. RPCServer.Connection.processRequest() should work with ByteBuffer instead 
> of byte[] and should not re-create CodedInputStream multiple times. 
> 2. RSRpcServices.getRegion() allocates two byte arrays for region, while only 
> 1 is needed.
> 3. AnnotationReadingPriorityFunction is very expensive in allocations. Mainly 
> it allocates the regionName byte[] to get the table name. We already set the 
> priority for most of the operations (multi, get, increment, etc) but we are 
> only reading the priority in case of multi. We should use the priority from 
> the client side. 
> Lets do the simple improvements in this patch, we can get to IPC buffer 
> re-use in HBASE-14490. 
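As a rough illustration of item 1 above, a parser can be pointed directly at the
request buffer's backing array instead of copying into a fresh byte[] and
re-creating CodedInputStream. This is a sketch under the assumption of a heap
ByteBuffer, not the actual patch.
{code}
import java.nio.ByteBuffer;
import com.google.protobuf.CodedInputStream;

public class ProcessRequestSketch {
  // Build one CodedInputStream over the relevant slice of the request buffer.
  // Assumes a heap buffer (hasArray() == true); offheap buffers need another path.
  static CodedInputStream streamOver(ByteBuffer buf, int offset, int length) {
    return CodedInputStream.newInstance(buf.array(), buf.arrayOffset() + offset, length);
  }
}
{code}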



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14969) Add throughput controller for flush

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123148#comment-15123148
 ] 

Hudson commented on HBASE-14969:


SUCCESS: Integrated in HBase-Trunk_matrix #667 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/667/])
HBASE-14969 Add throughput controller for flush (zhangduo: rev 
b3b1ce99c63d79401ddda9c114850dea61af0afb)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreFlusher.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreEngine.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/FlushThroughputControllerFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputControllerFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/NoLimitCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/throttle/TestFlushWithThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestCompactionWithThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreEngine.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFlusher.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHMobStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionContext.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/throttle/TestCompactionWithThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/PressureAwareCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/ThroughputControlUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/ThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/CompactionThroughputControllerFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HMobStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
* 

[jira] [Commented] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123149#comment-15123149
 ] 

Hudson commented on HBASE-14841:


SUCCESS: Integrated in HBase-Trunk_matrix #667 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/667/])
HBASE-14841 Allow Dictionary to work with BytebufferedCells (Ram) (ramkrishna: 
rev 0de221a19d799ad515f8f4556cacd05e6b4e74f8)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/Dictionary.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java


> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch, HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15167) Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1

2016-01-28 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen reassigned HBASE-15167:
-

Assignee: Heng Chen

> Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
> 
>
> Key: HBASE-15167
> URL: https://issues.apache.org/jira/browse/HBASE-15167
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.1.3
>Reporter: Nick Dimiduk
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 1.1.4
>
> Attachments: blocked.log
>
>
> This was left as a zombie after one of my test runs this weekend. 
> {noformat}
> "WALProcedureStoreSyncThread" daemon prio=10 tid=0x7f3ccc209000 
> nid=0x3960 in Object.wait() [0x7f3c6b6b5000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1397)
>   - locked <0x0007f2813390> (a org.apache.hadoop.ipc.Client$Call)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy23.create(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy23.create(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:264)
>   at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>   at com.sun.proxy.$Proxy24.create(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>   at com.sun.proxy.$Proxy24.create(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1612)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1488)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1413)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:387)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:383)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:383)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:327)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:766)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:733)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.tryRollWriter(WALProcedureStore.java:668)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.periodicRoll(WALProcedureStore.java:711)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:531)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:66)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:180)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123157#comment-15123157
 ] 

Heng Chen commented on HBASE-15128:
---

IMO we can make a tradeoff: first we go on with this issue and patch, and after it 
is committed we can at least disable region splits and merges. Then we create an 
issue as a subtask of HBASE-13936 to refactor all switches based on dynamic 
configuration; if nobody does it, I can take it. wdyt? [~mbertozzi]

> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK  should also disable the Catalog Janitor just in case. 
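A hypothetical sketch of the master-side switch semantics described above; the
names do not reflect the actual patch, and a real switch would also be persisted
and exposed to admins and to HBCK.
{code}
import java.util.concurrent.atomic.AtomicBoolean;

public class SplitMergeSwitchSketch {
  private final AtomicBoolean splitEnabled = new AtomicBoolean(true);
  private final AtomicBoolean mergeEnabled = new AtomicBoolean(true);

  // Called by an admin, or by HBCK before it starts a repair run.
  public void setSplitAndMergeEnabled(boolean enabled) {
    splitEnabled.set(enabled);
    mergeEnabled.set(enabled);
  }

  // The master consults the switch before honoring a region server's split request.
  public void onSplitRequest(Runnable doSplit) {
    if (!splitEnabled.get()) {
      throw new IllegalStateException("Region splits are currently disabled");
    }
    doSplit.run();
  }

  // Same check for merges.
  public void onMergeRequest(Runnable doMerge) {
    if (!mergeEnabled.get()) {
      throw new IllegalStateException("Region merges are currently disabled");
    }
    doMerge.run();
  }
}
{code}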



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14969) Add throughput controller for flush

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123068#comment-15123068
 ] 

Hudson commented on HBASE-14969:


FAILURE: Integrated in HBase-1.3 #522 (See 
[https://builds.apache.org/job/HBase-1.3/522/])
HBASE-14969 Add throughput controller for flush (zhangduo: rev 
0d21fa92791ae7d704f48311539facba7061770b)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputControllerFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/CompactionThroughputControllerFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/PressureAwareCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/ThroughputControlUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/NoLimitCompactionThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreEngine.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/FlushThroughputControllerFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFlusher.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/NoLimitThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/throttle/TestCompactionWithThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestCompactionWithThroughputController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionContext.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreEngine.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/ThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/throttle/TestFlushWithThroughputController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java


> Add throughput controller for flush
> ---
>
>  

[jira] [Commented] (HBASE-15135) Add metrics for storefile age

2016-01-28 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121656#comment-15121656
 ] 

Sean Busbey commented on HBASE-15135:
-

{quote}
Sean Busbey hmm.. could it be that the patch gets stuck somehow and can't be 
run, even if renamed? I try to restart the patch manually but it fails 
immediately.
[EnvInject] - Variables injected successfully.
[PreCommit-HBASE-Build] $ /bin/bash -e /tmp/hudson920775254521547136.sh
[WARN] patch process already existed 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/patchprocess'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 -::- -::- -::- 0
100 168 0 168 0 0 450 0 -::- -::- -::- 451
tar: This does not look like a tar archive
{quote}

If you're manually running the precommit job, then you should be able to rerun 
it without a rename (the duplicate submission check is done by the admin job, 
not the individual precommit jobs).

The "not a tar archive" error looks like a problem downloading some tool, 
probably a redirect not getting followed. Let me see if it's something simple I 
can fix now.

> Add metrics for storefile age
> -
>
> Key: HBASE-15135
> URL: https://issues.apache.org/jira/browse/HBASE-15135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Attachments: HBASE-15135-v2.patch, HBASE-15135-v3.patch, 
> HBASE-15135-v4.patch, HBASE-15135.patch
>
>
> In order to make sure that compactions are fully up to date it would be nice 
> to have metrics on:
> * Max storefile age
> * Min storefile age
> * Average storefile age
> * Number of reference files
> If we could have those metrics per region and per regionserver it would be 
> awesome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15183) Correct discrepancies in CHANGES.txt on branch-1.1

2016-01-28 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-15183:


 Summary: Correct discrepancies in CHANGES.txt on branch-1.1
 Key: HBASE-15183
 URL: https://issues.apache.org/jira/browse/HBASE-15183
 Project: HBase
  Issue Type: Test
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 1.1.4


While drafting the announcement email for 1.1.3 I noticed some discrepancies 
between fixVersions in JIRA and what was in the git commit history. I did an 
audit and issue cleanup in JIRA for 1.1.3, so at least that release tag was 
corrected. This task is to go back and do the same for 1.1.1 and 1.1.2, and to 
both fix JIRA versions and update the CHANGES.txt file. Given our release 
style, 1.1.0 is a less obvious task, so leaving that version number be for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13259) mmap() based BucketCache IOEngine

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122009#comment-15122009
 ] 

Hadoop QA commented on HBASE-13259:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s 
{color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 23s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s 
{color} | {color:red} Patch generated 2 new checkstyle issues in hbase-common 
(total was 7, now 9). {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 25s 
{color} | {color:red} Patch generated 4 new checkstyle issues in hbase-server 
(total was 36, now 40). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 57s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 24s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 149m 35s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 27s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 29s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | 

[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122082#comment-15122082
 ] 

Ted Yu commented on HBASE-15181:


Is this in production ?
If so, can you share performance numbers ?

75public static final String MAX_AGE = CONFIG_PREFIX + 
"tiered.max.storefile.age";
76public static final String TIME_UNIT = CONFIG_PREFIX + 
"tiered.time.unit";
77public static final String TIER_BASE = CONFIG_PREFIX + 
"tiered.tier.base";
78public static final String MIN_THRESHOLD = CONFIG_PREFIX + 
"tiered.min.threshold";

Please add javadoc for the parameters above.
Normally such constants end with '_KEY'

TieredCompactionPolicy.java needs Apache license. Please add annotation for 
audience and class javadoc.

Putting the next patch on review board would facilitate reviewing.

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> A perfect fit for use cases that:
> 1. have mostly date-based data writes and scans, with a focus on the most recent 
> data, and
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so the data still gets to the right 
> store file for time-range scans, and re-compaction with existing store files in 
> the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15135) Add metrics for storefile age

2016-01-28 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122109#comment-15122109
 ] 

Sean Busbey commented on HBASE-15135:
-

IMHO no; we should not be relying on a pre-release version of another project. 
Most patches still don't cross module boundaries, and the next Yetus release 
should be soon.

> Add metrics for storefile age
> -
>
> Key: HBASE-15135
> URL: https://issues.apache.org/jira/browse/HBASE-15135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Attachments: HBASE-15135-v2.patch, HBASE-15135-v3.patch, 
> HBASE-15135-v4.patch, HBASE-15135.patch
>
>
> In order to make sure that compactions are fully up to date it would be nice 
> to have metrics on:
> * Max storefile age
> * Min storefile age
> * Average storefile age
> * Number of reference files
> If we could have those metrics per region and per regionserver it would be 
> awesome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122082#comment-15122082
 ] 

Ted Yu edited comment on HBASE-15181 at 1/28/16 7:31 PM:
-

Is this in production?
If so, can you share the performance numbers?
{code}
75public static final String MAX_AGE = CONFIG_PREFIX + 
"tiered.max.storefile.age";
76public static final String TIME_UNIT = CONFIG_PREFIX + 
"tiered.time.unit";
77public static final String TIER_BASE = CONFIG_PREFIX + 
"tiered.tier.base";
78public static final String MIN_THRESHOLD = CONFIG_PREFIX + 
"tiered.min.threshold";
{code}
Please add javadoc for the parameters above.
Normally such constants end with '_KEY'.

TieredCompactionPolicy.java needs the Apache license header. Please add the 
audience annotation and class javadoc.

Putting the next patch on Review Board would facilitate reviewing.


was (Author: yuzhih...@gmail.com):
Is this in production ?
If so, can you share performance numbers ?

75public static final String MAX_AGE = CONFIG_PREFIX + 
"tiered.max.storefile.age";
76public static final String TIME_UNIT = CONFIG_PREFIX + 
"tiered.time.unit";
77public static final String TIER_BASE = CONFIG_PREFIX + 
"tiered.tier.base";
78public static final String MIN_THRESHOLD = CONFIG_PREFIX + 
"tiered.min.threshold";

Please add javadoc for the parameters above.
Normally such constants end with '_KEY'

TieredCompactionPolicy.java needs Apache license. Please add annotation for 
audience and class javadoc.

Putting the next patch on review board would facilitate reviewing.

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> A perfect fit for use cases that:
> 1. mostly write and scan date-based data, with a focus on the most recent 
> data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so data still gets to the right 
> store file for time-range scans; re-compaction with existing store files in 
> the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Clara Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong updated HBASE-15181:

Description: 
This is a simple implementation of date-based tiered compaction similar to 
Cassandra's for the following benefits:
1. Improve date-range-based scan by structuring store files in date-based 
tiered layout.
2. Reduce compaction overhead.
3. Improve TTL efficiency.

A perfect fit for use cases that:
1. mostly write and scan date-based data, with a focus on the most recent 
data. 
2. never or rarely delete data.

Out-of-order writes are handled gracefully, so data still gets to the right 
store file for time-range scans; re-compaction with existing store files in the 
same time window is handled by ExploringCompactionPolicy.

Time range overlapping among store files is tolerated and the performance 
impact is minimized.

Configuration can be set in hbase-site or overridden at the per-table or 
per-column-family level via the hbase shell.

Design spec is at 
https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing


  was:
This is a simple implementation of date-based tiered compaction similar to 
Cassandra's for the following benefits:
1. Improve date-range-based scan by structuring store files in date-based 
tiered layout.
2. Reduce compaction overhead.
3. Improve TTL efficiency.

Perfect fit for the use cases that:
1. has mostly date-based date write and scan and a focus on the most recent 
data. 
2. never or rarely deletes data.

Out-of-order writes are handled gracefully so the data will still get to the 
right store file for time-range-scan and re-compacton with existing store file 
in the same time window is handled by ExploringCompactionPolicy.

Time range overlapping among store files is tolerated and the performance 
impact is minimized.




> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> A perfect fit for use cases that:
> 1. mostly write and scan date-based data, with a focus on the most recent 
> data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so data still gets to the right 
> store file for time-range scans; re-compaction with existing store files in 
> the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing
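
As a concrete illustration of the per-table override mentioned above, here is a minimal Java sketch using the admin API instead of the shell. The full property names are assumptions built from the CONFIG_PREFIX constants quoted in the review comments, so check the committed patch for the real keys:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TieredCompactionOverrideExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("events");
      HTableDescriptor desc = admin.getTableDescriptor(table);
      // Property names below are assumptions modeled on the quoted constants.
      desc.setConfiguration("hbase.hstore.compaction.tiered.max.storefile.age",
          String.valueOf(30L * 24 * 60 * 60 * 1000)); // keep ~30 days of tiered data
      desc.setConfiguration("hbase.hstore.compaction.tiered.tier.base", "4");
      admin.modifyTable(table, desc); // applies the override at the table level
    }
  }
}
{code}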



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122113#comment-15122113
 ] 

Hudson commented on HBASE-15146:


SUCCESS: Integrated in HBase-1.2 #522 (See 
[https://builds.apache.org/job/HBase-1.2/522/])
HBASE-15146 Don't block on Reader threads queueing to a scheduler queue 
(eclark: rev 51998b9eb5c97265c93a83047d897eb17c7a58ca)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcScheduler.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BalancedQueueRpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RWQueueRpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/exceptions/TestClientExceptionsUtil.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/FifoRpcScheduler.java


> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new rpc scheduler can have lots of 
> different queues. Those queues have different capacity limits. Currently the 
> dispatch method can block trying to add to the blocking queue in any of the 
> schedulers.
> This causes readers to block, TCP ACKs are delayed, and everything slows down.
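
To make the fix direction concrete, here is a minimal sketch of the non-blocking dispatch idea (assumed names, not the committed code): instead of a blocking put() that stalls the reader thread whenever a scheduler queue is full, offer() fails fast and the caller surfaces a call-queue-too-big error to the client.
{code}
import java.io.IOException;
import java.util.concurrent.BlockingQueue;

final class NonBlockingDispatchSketch {
  /** Hypothetical stand-in for org.apache.hadoop.hbase.CallQueueTooBigException. */
  static class CallQueueTooBigException extends IOException {
    CallQueueTooBigException(String msg) {
      super(msg);
    }
  }

  /** Enqueue without ever blocking the reader/epoll thread. */
  static <T> void dispatch(BlockingQueue<T> callQueue, T call) throws CallQueueTooBigException {
    if (!callQueue.offer(call)) { // offer() returns immediately when the queue is full
      throw new CallQueueTooBigException("Call queue is full; rejecting instead of blocking the reader");
    }
  }
}
{code}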



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122140#comment-15122140
 ] 

Hudson commented on HBASE-15146:


SUCCESS: Integrated in HBase-1.3-IT #466 (See 
[https://builds.apache.org/job/HBase-1.3-IT/466/])
HBASE-15146 Don't block on Reader threads queueing to a scheduler queue 
(eclark: rev 421fe24e9bb925e6199cc02118a5314458caeb38)
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RWQueueRpcExecutor.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BalancedQueueRpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcScheduler.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/exceptions/TestClientExceptionsUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/FifoRpcScheduler.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java


> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new rpc scheduler can have lots of 
> different queues. Those queues have different capacity limits. Currently the 
> dispatch method can block trying to add to the blocking queue in any of the 
> schedulers.
> This causes readers to block, TCP ACKs are delayed, and everything slows down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15142) Procedure v2 - Basic WebUI listing the procedures

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122137#comment-15122137
 ] 

Hudson commented on HBASE-15142:


SUCCESS: Integrated in HBase-1.3-IT #466 (See 
[https://builds.apache.org/job/HBase-1.3-IT/466/])
HBASE-15142 Procedure v2 - Basic WebUI listing the procedures (matteo.bertozzi: 
rev 2f571b1457acc3a4b9cbc0cf14f191f8657c20f5)
* hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* hbase-server/src/main/resources/hbase-webapps/master/procedures.jsp
* hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp


> Procedure v2 - Basic WebUI listing the procedures
> -
>
> Key: HBASE-15142
> URL: https://issues.apache.org/jira/browse/HBASE-15142
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15142-v0.patch, proc-webui.png
>
>
> Basic table showing the list of procedures 
> pending/in-execution/recently-completed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15135) Add metrics for storefile age

2016-01-28 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122168#comment-15122168
 ] 

Mikhail Antonov commented on HBASE-15135:
-

Yeah, agreed.

> Add metrics for storefile age
> -
>
> Key: HBASE-15135
> URL: https://issues.apache.org/jira/browse/HBASE-15135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Attachments: HBASE-15135-v2.patch, HBASE-15135-v3.patch, 
> HBASE-15135-v4.patch, HBASE-15135.patch
>
>
> In order to make sure that compactions are fully up to date it would be nice 
> to have metrics on:
> * Max storefile age
> * Min storefile age
> * Average storefile age
> * Number of reference files
> If we could have those metrics per region and per regionserver it would be 
> awesome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122174#comment-15122174
 ] 

Ted Yu commented on HBASE-15181:


I went over the linked doc.
bq. For other tiers, we apply the exploring compaction using a small file count.
Can you give a bit more detail on this small file count? Does it appear in the 
tables at the end of the doc?

How do you handle bulk-loaded hfiles (in terms of maintaining window 
boundaries)?

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> A perfect fit for use cases that:
> 1. mostly write and scan date-based data, with a focus on the most recent 
> data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so data still gets to the right 
> store file for time-range scans; re-compaction with existing store files in 
> the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122185#comment-15122185
 ] 

Hadoop QA commented on HBASE-15173:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
0s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 48s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 30s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 101m 19s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 252m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122112#comment-15122112
 ] 

churro morales commented on HBASE-15181:


[~te...@apache.org] This is in production and has been tested.

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> A perfect fit for use cases that:
> 1. mostly write and scan date-based data, with a focus on the most recent 
> data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so data still gets to the right 
> store file for time-range scans; re-compaction with existing store files in 
> the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15135) Add metrics for storefile age

2016-01-28 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122101#comment-15122101
 ] 

Mikhail Antonov commented on HBASE-15135:
-

I see. Thanks for looking into it!

I suspected this "patch process already existed" message indicated a prohibited 
resubmission, but I was able to get it through after 2 or 3 attempts, so it 
looks like a sporadic failure. Given the issue with incorrect (or 
non-deterministic) module reordering, does it make sense to turn this flag on 
by default for HBase pre-commit jobs?

> Add metrics for storefile age
> -
>
> Key: HBASE-15135
> URL: https://issues.apache.org/jira/browse/HBASE-15135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Attachments: HBASE-15135-v2.patch, HBASE-15135-v3.patch, 
> HBASE-15135-v4.patch, HBASE-15135.patch
>
>
> In order to make sure that compactions are fully up to date it would be nice 
> to have metrics on:
> * Max storefile age
> * Min storefile age
> * Average storefile age
> * Number of reference files
> If we could have those metrics per region and per regionserver it would be 
> awesome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10877) HBase non-retriable exception list should be expanded

2016-01-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122130#comment-15122130
 ] 

Sergey Shelukhin commented on HBASE-10877:
--

We should add a Stack Overflow question with the answer and link to it from the 
description in this JIRA :)

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 

[jira] [Commented] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122139#comment-15122139
 ] 

Hudson commented on HBASE-15173:


SUCCESS: Integrated in HBase-1.3-IT #466 (See 
[https://builds.apache.org/job/HBase-1.3-IT/466/])
HBASE-15173 Execute mergeRegions RPC call as the request user (tedyu: rev 
486f7612be6d0bdfb2721890ca9982dbcd3f80c2)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DispatchMergingRegionHandler.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java


> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is a follow-up to HBASE-15132.
> Master currently sends the mergeRegions RPC to the region server as user 'hbase'.
> This issue is to execute the mergeRegions RPC call as the request user.
> See the tail of HBASE-15132 for the related discussion.
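
For readers unfamiliar with the pattern, a minimal sketch of running an action under the request user's credentials; the class and method names here are assumptions, not the patch's actual code, which wires this through the master's merge handler:
{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.hbase.security.User;

final class RunAsRequestUserSketch {
  /** Run the given action (e.g. sending the mergeRegions RPC) as the request user. */
  static void runMergeAs(User requestUser, Runnable sendMergeRpc) throws Exception {
    requestUser.runAs((PrivilegedExceptionAction<Void>) () -> {
      sendMergeRpc.run(); // executed with the caller's credentials, not the 'hbase' service user
      return null;
    });
  }
}
{code}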



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15171) Avoid counting duplicate kv and generating lots of small hfiles in PutSortReducer

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122138#comment-15122138
 ] 

Hudson commented on HBASE-15171:


SUCCESS: Integrated in HBase-1.3-IT #466 (See 
[https://builds.apache.org/job/HBase-1.3-IT/466/])
HBASE-15171 Addendum removes extra loop (Yu Li) (tedyu: rev 
dfa94841374f78422d4e44a5623cc8b601966b1d)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java


> Avoid counting duplicate kv and generating lots of small hfiles in 
> PutSortReducer
> -
>
> Key: HBASE-15171
> URL: https://issues.apache.org/jira/browse/HBASE-15171
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.1.2, 0.98.17
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15171.addendum.patch, HBASE-15171.patch, 
> HBASE-15171.patch, HBASE-15171.patch
>
>
> Once, one of our online users wrote a huge number of duplicated KVs during 
> bulkload, and we found it generated lots of small hfiles and slowed down the 
> whole process.
> After debugging, we found that in PutSortReducer#reduce, although it already 
> tries to handle the pathological case by setting a threshold for single-row 
> size and using a TreeMap to avoid writing out duplicated KVs, it forgets to 
> exclude duplicated KVs from the accumulated size, as shown in the code 
> segment below:
> {code}
> while (iter.hasNext() && curSize < threshold) {
>   Put p = iter.next();
>   for (List<Cell> cells : p.getFamilyCellMap().values()) {
>     for (Cell cell : cells) {
>       KeyValue kv = KeyValueUtil.ensureKeyValue(cell);
>       map.add(kv);
>       curSize += kv.heapSize();
>     }
>   }
> }
> {code}
> We should move the {{curSize += kv.heapSize();}} line out of the outer for 
> loop
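
One way to express that intent, counting a KV's heap size only when it is not a duplicate, is sketched below. This mirrors the quoted fragment and assumes {{map}} is a de-duplicating set (e.g. a TreeSet) whose add() returns false for duplicates; it is not necessarily the committed addendum.
{code}
while (iter.hasNext() && curSize < threshold) {
  Put p = iter.next();
  for (List<Cell> cells : p.getFamilyCellMap().values()) {
    for (Cell cell : cells) {
      KeyValue kv = KeyValueUtil.ensureKeyValue(cell);
      if (map.add(kv)) {          // false for duplicates, so they are skipped
        curSize += kv.heapSize(); // only non-duplicate KVs count toward the flush threshold
      }
    }
  }
}
{code}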



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122141#comment-15122141
 ] 

Hudson commented on HBASE-15019:


SUCCESS: Integrated in HBase-1.3-IT #466 (See 
[https://builds.apache.org/job/HBase-1.3-IT/466/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
67c2fc7cd62f5d53da633f08d5a3c93600ac86f0)
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because HDFS is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not able 
> to close
> is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side 
> that performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 

[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122388#comment-15122388
 ] 

Hudson commented on HBASE-15019:


SUCCESS: Integrated in HBase-1.2 #523 (See 
[https://builds.apache.org/job/HBase-1.2/523/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
778c9730b3403f4b330578b44cce3f56d19cf25e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because HDFS is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not able 
> to close
> is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side 
> that performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 

[jira] [Updated] (HBASE-15033) Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1

2016-01-28 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-15033:
--
Fix Version/s: (was: 1.0.3)
   1.0.4

> Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1
> ---
>
> Key: HBASE-15033
> URL: https://issues.apache.org/jira/browse/HBASE-15033
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: stack
>Assignee: stack
> Fix For: 1.1.4, 1.0.4
>
> Attachments: 15033.patch
>
>
> Backport the current test-patch.sh and zombie detector to branch-1.0+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122446#comment-15122446
 ] 

Matteo Bertozzi commented on HBASE-15128:
-

Why a setSwitch() API instead of something like updateConfigurationProperty() 
or something more generic?

To me, introducing a new generic RPC that basically does only on/off seems to 
take us in a direction where 3/4 months from now we have to deal with how to 
keep compatibility.
I prefer having the flag in the normalizer, just because we keep down the 
number of dynamic properties that we have now. As soon as you add a generic 
setSwitch() API, that number will go up exponentially.

We already talked about having dynamic configuration changes, which to me seems 
like the generic solution that also solves this problem. So why not try to go 
in that direction?

> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK  should also disable the Catalog Janitor just in case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Clara Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong updated HBASE-15181:

Attachment: HBASE-15181-v2.patch

Fixed problems reported by Hadoop QA.

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch, HBASE-15181-v2.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> A perfect fit for use cases that:
> 1. mostly write and scan date-based data, with a focus on the most recent 
> data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so data still gets to the right 
> store file for time-range scans; re-compaction with existing store files in 
> the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder

2016-01-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122488#comment-15122488
 ] 

Enis Soztutar commented on HBASE-15180:
---

This is a good idea: having all cells in the same RPC share the same byte[].

Is {{CellReadable}} really necessary? Isn't it the same thing as 
Codec.Decoder? I mean, from a layering perspective, I thought that we would 
instead change the Codec to be aware of byte[] directly, and return a 
CellScanner that can return KV's from the same buffer. I was thinking of doing 
a Codec at the RPC layer to do something like FAST_DIFF. Can that still be done 
with this patch? 

Should we default to MSLAB for good? I don't think anybody runs with MSLAB off. 

RPCServer reaching this is not right: 
{code}
+this.mslabEnabled = conf.getBoolean(HConstants.USEMSLAB_KEY, 
HConstants.USEMSLAB_DEFAULT);
{code}

Can the byte[4]'s be statically allocated?  

> Reduce garbage created while reading Cells from Codec Decoder
> -
>
> Key: HBASE-15180
> URL: https://issues.apache.org/jira/browse/HBASE-15180
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15180.patch
>
>
> In KeyValueDecoder#parseCell (the default Codec decoder) we use 
> KeyValueUtil#iscreate to read cells from the InputStream. Here we first create 
> a byte[] of length 4 and read the cell length, then create an array of the 
> cell's length, read the cell bytes into it, and create a KV.
> Actually, on the server we read the requests into a byte[], and a CellScanner 
> is created on top of a ByteArrayInputStream on top of this. By default, in the 
> write path we have MSLAB usage ON. So while adding Cells to the memstore, we 
> copy the Cell bytes to MSLAB memory chunks (default 2 MB size) and recreate 
> Cells over those bytes. So there is no issue if we create Cells over the RPC 
> read byte[] directly here in the Decoder. There is no need for 2 byte[] 
> creations and a copy for every Cell in the request.
> My plan is to make a Cell-aware ByteArrayInputStream which can read Cells 
> directly from it.
> The same Codec path is used on the client side also. There it is better to 
> avoid this direct Cell creation and continue with the copy-to-smaller-byte[]s 
> path. I plan to introduce something like a CodecContext associated with every 
> Codec instance which can indicate the server/client context.
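
For context, a minimal sketch (assumed class and variable names) of the per-cell allocation pattern described above, i.e. a fresh 4-byte length buffer plus a private copy of the cell bytes for every cell, which is the garbage this issue wants to avoid by backing Cells with the shared RPC byte[]:
{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IOUtils;

final class LengthPrefixedCellReadSketch {
  static Cell readCell(InputStream in) throws IOException {
    byte[] lenBuf = new byte[4];           // allocation #1: the length prefix
    IOUtils.readFully(in, lenBuf, 0, 4);
    int cellLength = Bytes.toInt(lenBuf);
    byte[] cellBuf = new byte[cellLength]; // allocation #2: a private copy of the cell bytes
    IOUtils.readFully(in, cellBuf, 0, cellLength);
    return new KeyValue(cellBuf, 0, cellLength);
  }
}
{code}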



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15146) Don't block on Reader threads queueing to a scheduler queue

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121959#comment-15121959
 ] 

Hudson commented on HBASE-15146:


SUCCESS: Integrated in HBase-1.3 #519 (See 
[https://builds.apache.org/job/HBase-1.3/519/])
HBASE-15146 Don't block on Reader threads queueing to a scheduler queue 
(eclark: rev 421fe24e9bb925e6199cc02118a5314458caeb38)
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RWQueueRpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/FifoRpcScheduler.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BalancedQueueRpcExecutor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcScheduler.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/exceptions/TestClientExceptionsUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


> Don't block on Reader threads queueing to a scheduler queue
> ---
>
> Key: HBASE-15146
> URL: https://issues.apache.org/jira/browse/HBASE-15146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15146-v7.patch, HBASE-15146-v8.patch, 
> HBASE-15146-v8.patch, HBASE-15146.0.patch, HBASE-15146.1.patch, 
> HBASE-15146.2.patch, HBASE-15146.3.patch, HBASE-15146.4.patch, 
> HBASE-15146.5.patch, HBASE-15146.6.patch
>
>
> Blocking on the epoll thread is awful. The new rpc scheduler can have lots of 
> different queues. Those queues have different capacity limits. Currently the 
> dispatch method can block trying to add to the blocking queue in any of the 
> schedulers.
> This causes readers to block, TCP ACKs are delayed, and everything slows down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15171) Avoid counting duplicate kv and generating lots of small hfiles in PutSortReducer

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121958#comment-15121958
 ] 

Hudson commented on HBASE-15171:


SUCCESS: Integrated in HBase-1.3 #519 (See 
[https://builds.apache.org/job/HBase-1.3/519/])
HBASE-15171 Addendum removes extra loop (Yu Li) (tedyu: rev 
dfa94841374f78422d4e44a5623cc8b601966b1d)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java


> Avoid counting duplicate kv and generating lots of small hfiles in 
> PutSortReducer
> -
>
> Key: HBASE-15171
> URL: https://issues.apache.org/jira/browse/HBASE-15171
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.1.2, 0.98.17
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15171.addendum.patch, HBASE-15171.patch, 
> HBASE-15171.patch, HBASE-15171.patch
>
>
> Once, one of our online users wrote a huge number of duplicated KVs during 
> bulkload, and we found it generated lots of small hfiles and slowed down the 
> whole process.
> After debugging, we found that in PutSortReducer#reduce, although it already 
> tries to handle the pathological case by setting a threshold for single-row 
> size and using a TreeMap to avoid writing out duplicated KVs, it forgets to 
> exclude duplicated KVs from the accumulated size, as shown in the code 
> segment below:
> {code}
> while (iter.hasNext() && curSize < threshold) {
>   Put p = iter.next();
>   for (List<Cell> cells : p.getFamilyCellMap().values()) {
>     for (Cell cell : cells) {
>       KeyValue kv = KeyValueUtil.ensureKeyValue(cell);
>       map.add(kv);
>       curSize += kv.heapSize();
>     }
>   }
> }
> {code}
> We should move the {{curSize += kv.heapSize();}} line out of the outer for 
> loop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"

2016-01-28 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-14810:

Attachment: HBASE-14810.patch

Incorporated the discussion in the JIRA with the discussion on the mailing list 
and came up with this first draft.

> Update Hadoop support description to explain "not tested" vs "not supported"
> 
>
> Key: HBASE-14810
> URL: https://issues.apache.org/jira/browse/HBASE-14810
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Misty Stanley-Jones
>Priority: Critical
> Attachments: HBASE-14810.patch
>
>
> from [~ndimiduk] in thread about hadoop 2.6.1+:
> {quote}
> While we're in there, we should also clarify the meaning of "Not Supported"
> vs "Not Tested". It seems we don't say what we mean by these distinctions.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15033) Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1

2016-01-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122426#comment-15122426
 ] 

stack commented on HBASE-15033:
---

Sorry for the mess here, lads. It seems I committed with the wrong commit 
message. OK if I revert and recommit with the proper message?



> Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1
> ---
>
> Key: HBASE-15033
> URL: https://issues.apache.org/jira/browse/HBASE-15033
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.4
>
> Attachments: 15033.patch
>
>
> Backport current test-patch.sh and zombie detector to branch-1.0+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122427#comment-15122427
 ] 

Enis Soztutar commented on HBASE-15128:
---

bq. I'm not able to find the CatalogJanitor storing in Zk (can you point me to 
the code, I have only seen an update in-memory), but I see the balancer and the 
normalizer tracker storing in zk.
I may be wrong about the catalog janitor. I thought it also saves state in 
zk. 
bq. I'll be more ok if the patch is using the normalizer_switch to toggle and 
adjust the the flags, relaying on the existing setNormalizerRunning() rpc.
I think what we want instead is to introduce this setSwitch() with different 
switch types, and move all switch types, including balancer and normalizer, to 
the new API. We can commit this patch and do a follow-up on moving the other 
switches to the new API (at least that's what I thought was the plan). 
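
A minimal sketch of what such a generic switch API could look like (illustrative only; the names and shape are not the committed HBase Admin/Master API):
{code}
import java.util.EnumMap;
import java.util.Map;

public class MasterSwitchSketch {
  enum SwitchType { BALANCER, NORMALIZER, SPLIT, MERGE }

  private final Map<SwitchType, Boolean> switches = new EnumMap<>(SwitchType.class);

  public MasterSwitchSketch() {
    for (SwitchType t : SwitchType.values()) {
      switches.put(t, Boolean.TRUE);            // everything enabled by default
    }
  }

  // Single entry point for all switch types; returns the previous state.
  public boolean setSwitch(SwitchType type, boolean enabled) {
    return switches.put(type, enabled);
  }

  public boolean isEnabled(SwitchType type) {
    return switches.get(type);
  }

  public static void main(String[] args) {
    MasterSwitchSketch master = new MasterSwitchSketch();
    // e.g. an HBCK-like tool disables splits and merges while it runs:
    master.setSwitch(SwitchType.SPLIT, false);
    master.setSwitch(SwitchType.MERGE, false);
    System.out.println(master.isEnabled(SwitchType.SPLIT));  // false
  }
}
{code}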


> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK should also disable the Catalog Janitor just in case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122440#comment-15122440
 ] 

Hudson commented on HBASE-15019:


FAILURE: Integrated in HBase-1.3 #520 (See 
[https://builds.apache.org/job/HBase-1.3/520/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
67c2fc7cd62f5d53da633f08d5a3c93600ac86f0)
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> RS is normally working and writing on the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because hdfs is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log that we rolled but were not 
> able to close is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side 
> that performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 

[jira] [Commented] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122439#comment-15122439
 ] 

Hudson commented on HBASE-15173:


FAILURE: Integrated in HBase-1.3 #520 (See 
[https://builds.apache.org/job/HBase-1.3/520/])
HBASE-15173 Execute mergeRegions RPC call as the request user (tedyu: rev 
486f7612be6d0bdfb2721890ca9982dbcd3f80c2)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DispatchMergingRegionHandler.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java


> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is follow up to HBASE-15132
> Master currently sends mergeRegions RPC to region server under user 'hbase'.
> This issue is to execute mergeRegions RPC call as the request user
> See tail of HBASE-15132 for related discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13590) TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122375#comment-15122375
 ] 

Hadoop QA commented on HBASE-13590:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 47s 
{color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 57s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 6m 
0s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 59s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 140m 33s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 140m 35s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 306m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hbase.regionserver.TestFailedAppendAndSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784970/HBASE-13590.branch-1.1.patch
 |
| JIRA Issue | HBASE-13590 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a1352f9fc5c1 

[jira] [Updated] (HBASE-15033) Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1

2016-01-28 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-15033:
--
Fix Version/s: (was: 1.0.4)
   1.0.3

> Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1
> ---
>
> Key: HBASE-15033
> URL: https://issues.apache.org/jira/browse/HBASE-15033
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.4
>
> Attachments: 15033.patch
>
>
> Backport current test-patch.sh and zombie detector to branch-1.0+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15033) Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1

2016-01-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122412#comment-15122412
 ] 

Enis Soztutar commented on HBASE-15033:
---

[~ndimiduk] for branch-1.0, I've found: 
{code}
commit f02a9fa6a02b0ea98c4d2a183c70016b678e34bd
Author: stack 
Date:   Tue Dec 22 19:54:21 2015 -0800

HBASE-15021 hadoopqa doing false positives
{code}

in branch-1.1, these two might be it: 
{code}
b57e52f HBASE-15021 hadoopqa doing false positives
77cc4cf HBASE-15021 hadoopqa doing false positives
{code}



> Backport test-patch.sh and zombie-detector.sh from master to branch-1.0/1.1
> ---
>
> Key: HBASE-15033
> URL: https://issues.apache.org/jira/browse/HBASE-15033
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.4
>
> Attachments: 15033.patch
>
>
> Backport current test-patch.sh and zombie detector to branch-1.0+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15158) Change order in which we do write pipeline operations; do all under row locks!

2016-01-28 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15158:
--
Attachment: 15158v2.patch

Took me a while. All tests that failed now pass locally.

TestHRegionReplayEvents was 'fixed' by pushing through more edits; the localfs 
buffer was not flushing out all edits for the test (this item may come back to 
bite me... I can't figure out why this patch brings this on, and it is this 
single test only... we'll see). Other items were fixed by a careful comparison 
of the patch and the old code... I had not restored the replay code 100%.

I can break this patch up. Let me do that.

> Change order in which we do write pipeline operations; do all under row locks!
> --
>
> Key: HBASE-15158
> URL: https://issues.apache.org/jira/browse/HBASE-15158
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15158.patch, 15158v2.patch
>
>
> Change how we do our write pipeline. I want to do all write pipeline ops 
> under row lock so I can lean on this fact when fixing the performance regression in 
> check-and-set type operations like increment, append, and checkAnd* (see 
> sibling issue HBASE-15082).
> To be specific, we write like this now:
> {code}
> # take rowlock
> # start mvcc
> # append to WAL
> # add to memstore
> # let go of rowlock
> # sync WAL
> # in case of error: rollback memstore
> {code}
> Instead, write like this:
> {code}
> # take rowlock
> # start mvcc
> # append to WAL
> # sync WAL
> # add to memstore
> # let go of rowlock
> ... no need to do rollback.
> {code}
> The old ordering was put in place because it got better performance in a time 
> when WAL was different and before row locks were read/write (HBASE-12751).
> Testing in branch-1 shows that reordering and skipping mvcc waits gets us 
> back to the performance we had before we unified mvcc and sequenceid 
> (HBASE-8763). Tests in HBASE-15046 show that at the macro level, using our 
> usual perf tools, reordering the pipeline seems to cause no slowdown (see 
> HBASE-15046). A rough comparison of increments with the reordered write 
> pipeline seems to have us getting back a bunch of our performance (see tail of 
> https://issues.apache.org/jira/browse/HBASE-15082?focusedCommentId=15111703=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15111703
>  and subsequent comment).
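
For illustration only, the reordered pipeline can be read as the following sketch (hypothetical names, not the actual HRegion code): because the WAL sync happens before the memstore update and both happen under the row lock, a sync failure leaves nothing to roll back:
{code}
import java.util.concurrent.locks.ReentrantLock;

public class WritePipelineSketch {
  private final ReentrantLock rowLock = new ReentrantLock();
  private long mvcc = 0;

  void write(String row, String value) {
    rowLock.lock();                    // take rowlock
    try {
      long seqId = ++mvcc;             // start mvcc
      appendToWal(row, value, seqId);  // append to WAL
      syncWal();                       // sync WAL while still holding the lock;
      addToMemstore(row, value);       // if sync throws, memstore was never
    } finally {                        // touched, so no rollback is needed
      rowLock.unlock();                // let go of rowlock
    }
  }

  void appendToWal(String row, String value, long seqId) { /* stub */ }
  void syncWal() { /* stub */ }
  void addToMemstore(String row, String value) { /* stub */ }

  public static void main(String[] args) {
    new WritePipelineSketch().write("row1", "v1");
    System.out.println("wrote under row lock with WAL synced before memstore");
  }
}
{code}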



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122363#comment-15122363
 ] 

Hadoop QA commented on HBASE-14810:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 54s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 1s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
48s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 229m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Timed out junit tests | 
org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
| JDK v1.7.0_91 Timed out junit tests | 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint
 |
|   | org.apache.hadoop.hbase.TestZooKeeper |
|   | org.apache.hadoop.hbase.TestAcidGuarantees |
|   | org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite |
|   | 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784984/HBASE-14810.patch |
| JIRA Issue | HBASE-14810 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 8e38ac40aa1a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1ee0768 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/341/artifact/patchprocess/patch-unit-root-jdk1.8.0_66.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/341/artifact/patchprocess/patch-unit-root-jdk1.7.0_91.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/341/artifact/patchprocess/patch-unit-root-jdk1.8.0_66.txt
 
https://builds.apache.org/job/PreCommit-HBASE-Build/341/artifact/patchprocess/patch-unit-root-jdk1.7.0_91.txt
 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/341/testReport/ |
| modules | C: . U: . |
| Max memory used | 428MB |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/341/console |


This message was automatically generated.



> Update Hadoop support description to explain 

[jira] [Commented] (HBASE-15142) Procedure v2 - Basic WebUI listing the procedures

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122438#comment-15122438
 ] 

Hudson commented on HBASE-15142:


FAILURE: Integrated in HBase-1.3 #520 (See 
[https://builds.apache.org/job/HBase-1.3/520/])
HBASE-15142 Procedure v2 - Basic WebUI listing the procedures (matteo.bertozzi: 
rev 2f571b1457acc3a4b9cbc0cf14f191f8657c20f5)
* hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
* hbase-server/src/main/resources/hbase-webapps/master/procedures.jsp


> Procedure v2 - Basic WebUI listing the procedures
> -
>
> Key: HBASE-15142
> URL: https://issues.apache.org/jira/browse/HBASE-15142
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15142-v0.patch, proc-webui.png
>
>
> Basic table showing the list of procedures 
> pending/in-execution/recently-completed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15171) Avoid counting duplicate kv and generating lots of small hfiles in PutSortReducer

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122196#comment-15122196
 ] 

Hudson commented on HBASE-15171:


FAILURE: Integrated in HBase-Trunk_matrix #665 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/665/])
HBASE-15171 Addendum removes extra loop (Yu Li) (tedyu: rev 
37ed0f6d0815389e0b368bc98b3a01dd02f193ac)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java


> Avoid counting duplicate kv and generating lots of small hfiles in 
> PutSortReducer
> -
>
> Key: HBASE-15171
> URL: https://issues.apache.org/jira/browse/HBASE-15171
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.1.2, 0.98.17
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15171.addendum.patch, HBASE-15171.patch, 
> HBASE-15171.patch, HBASE-15171.patch
>
>
> One of our online users was writing a huge number of duplicated kvs during 
> bulkload, and we found it generated lots of small hfiles and slowed down the 
> whole process.
> After debugging, we found that in PutSortReducer#reduce, although it already 
> tries to handle the pathological case by setting a threshold for single-row 
> size and using a TreeMap to avoid writing out duplicated kvs, it forgets to 
> exclude duplicated kvs from the accumulated size, as shown in the code 
> segment below:
> {code}
> while (iter.hasNext() && curSize < threshold) {
>   Put p = iter.next();
>   for (List<Cell> cells: p.getFamilyCellMap().values()) {
> for (Cell cell: cells) {
>   KeyValue kv = KeyValueUtil.ensureKeyValue(cell);
>   map.add(kv);
>   curSize += kv.heapSize();
> }
>   }
> }
> {code}
> We should move the {{curSize += kv.heapSize();}} line out of the outer for 
> loop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122306#comment-15122306
 ] 

Matteo Bertozzi commented on HBASE-15128:
-

If you hold a write lock, everything is blocked: no drop table, no disable, no 
modify, and so on.

> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK should also disable the Catalog Janitor just in case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122203#comment-15122203
 ] 

Hudson commented on HBASE-15019:


SUCCESS: Integrated in HBase-1.2-IT #413 (See 
[https://builds.apache.org/job/HBase-1.2-IT/413/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
778c9730b3403f4b330578b44cce3f56d19cf25e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> RS is normally working and writing on the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because hdfs is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log that we rolled but were not 
> able to close is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side 
> that performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 

[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122305#comment-15122305
 ] 

Jonathan Hsieh commented on HBASE-15128:


Is there a reason why the write aspect of table locks could not be used to 
block splits and merges from happening?



> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK should also disable the Catalog Janitor just in case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Clara Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122208#comment-15122208
 ] 

Clara Xiong commented on HBASE-15181:
-

The small number is the default: 2, as in the table at the end.
The doc also explains that bulk loaded hfiles will be handled within the window, 
as they are today, by ExploringCompactionPolicy.

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> Perfect fit for use cases that:
> 1. have mostly date-based data writes and scans, with a focus on the most 
> recent data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so the data will still get to the 
> right store file for time-range scans, and re-compaction with an existing store 
> file in the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing
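
To make the windowing idea concrete, here is an illustrative sketch only (hypothetical names; the real policy's window sizing, tiering, and incoming-window rules are in the linked design spec): store files are grouped by the time window of their newest cell, and compaction candidates are picked within a window:
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DateTieredWindowSketch {
  static final class StoreFile {
    final String name;
    final long maxTimestampMs;
    StoreFile(String name, long maxTimestampMs) {
      this.name = name;
      this.maxTimestampMs = maxTimestampMs;
    }
    @Override public String toString() { return name; }
  }

  // Bucket files by the window index of their newest cell.
  static Map<Long, List<StoreFile>> bucketByWindow(List<StoreFile> files, long windowMs) {
    Map<Long, List<StoreFile>> windows = new HashMap<>();
    for (StoreFile f : files) {
      long window = f.maxTimestampMs / windowMs;
      windows.computeIfAbsent(window, w -> new ArrayList<>()).add(f);
    }
    return windows;
  }

  public static void main(String[] args) {
    long hour = 3_600_000L;
    List<StoreFile> files = Arrays.asList(
        new StoreFile("f1", 1 * hour + 5),
        new StoreFile("f2", 1 * hour + 10),   // same window as f1: compact together
        new StoreFile("f3", 5 * hour));       // older window: left alone
    System.out.println(bucketByWindow(files, hour));
  }
}
{code}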



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10877) HBase non-retriable exception list should be expanded

2016-01-28 Thread Ajinkya Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122250#comment-15122250
 ] 

Ajinkya Kale commented on HBASE-10877:
--

I do have a stackoverflow question :) But these workarounds don't seem to work 
for me.
http://stackoverflow.com/questions/34909506/accessing-hbase-tables-through-spark

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:26 UTC 2014, 
> 

[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122526#comment-15122526
 ] 

Enis Soztutar commented on HBASE-15128:
---

bq. why a setSwitch() api instead of something like 
updateConfigurationProperty() or something more generic?
Hmm, we actually do not have configuration options to enable / disable the 
balancer. It is usually always on, of course, and only turned off in maintenance 
mode. So these switches are really not configuration related. 
bq. I prefer having the flag in normalizer, just because we keep down the 
number of dynamic property that we have now. as soon as you add a generic 
setSwitch() api that number will go up exponentially.
The new split or merge switch is not directly related to the normalizer. It 
allows disabling all splits or merges, rather than changing normalizer behavior 
to do splits or merges. 

> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK should also disable the Catalog Janitor just in case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15184) SparkSQL Scan operation doesn't work on kerberos cluster

2016-01-28 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-15184:

Attachment: HBaseSparkModule.zip

Solution that worked on CDH 5.5 on a client's Kerberos cluster; it also includes 
a Spark package to override a protected class.

> SparkSQL Scan operation doesn't work on kerberos cluster
> 
>
> Key: HBASE-15184
> URL: https://issues.apache.org/jira/browse/HBASE-15184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Malaska
> Attachments: HBaseSparkModule.zip
>
>
> I was using the HBase Spark Module at a client with Kerberos and I ran into 
> an issue with the Scan.  
> I made a fix for the client but we need to put it back into HBase.  I will 
> attach my solution, but it has a major problem: I had to override a 
> protected class in Spark. I will need help to discover a better approach.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-01-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122587#comment-15122587
 ] 

Ted Yu commented on HBASE-15181:


{code}
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
{code}
Please take a look at the license header in other source files - the format is 
different from the above.
{code}
+public class TieredCompactionPolicy extends RatioBasedCompactionPolicy {
{code}
Please add @InterfaceAudience.Private to the above class.
{code}
+LOG.debug("Compaction buckets are: " + buckets);
{code}
The output of the above would not be useful since buckets is a List of ArrayLists.
{code}
+// For any other bucket, at least 2 store files is enough.
{code}
What's the rationale behind the above decision?
{code}
+return bucket;
+  } else if (!isIncomingWindow && compactionPolicyPerWindow != null) {
{code}
nit: else can be omitted above.

In newestBucket(), maxThreshold is not used.

Please put next patch on review board.



> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0
>
> Attachments: HBASE-15181-v1.patch, HBASE-15181-v2.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> Perfect fit for use cases that:
> 1. have mostly date-based data writes and scans, with a focus on the most 
> recent data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully, so the data will still get to the 
> right store file for time-range scans, and re-compaction with an existing store 
> file in the same time window is handled by ExploringCompactionPolicy.
> Time range overlapping among store files is tolerated and the performance 
> impact is minimized.
> Configuration can be set in hbase-site or overridden at the per-table or 
> per-column-family level via hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122702#comment-15122702
 ] 

Jonathan Hsieh commented on HBASE-15128:


Are we talking just about hbck vs the master here, or the bigger context of 
having switches and dynamic modes?

If we are just focused on hbck, I actually think it is good that drop table, 
disable table, alter table, and so on are blocked while hbck is doing its thing.

> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undo's a concurrently splitting region causing number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to 
> balancer switch being disabled by HBCK). 
> HBCK  should also disable the Catalog Janitor just in case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15173) Execute mergeRegions RPC call as the request user

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122558#comment-15122558
 ] 

Hudson commented on HBASE-15173:


SUCCESS: Integrated in HBase-Trunk_matrix #666 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/666/])
HBASE-15173 Execute mergeRegions RPC call as the request user (tedyu: rev 
1ee07688c8e75bf8507c1613feec9c56e950ab4c)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DispatchMergingRegionHandler.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java


> Execute mergeRegions RPC call as the request user
> -
>
> Key: HBASE-15173
> URL: https://issues.apache.org/jira/browse/HBASE-15173
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15173.v1.patch, HBASE-15173.v2.patch, 
> HBASE-15173.v2.patch, HBASE-15173.v3.patch, HBASE-15173.v3.patch, 
> HBASE-15173.v3.patch
>
>
> This is follow up to HBASE-15132
> Master currently sends mergeRegions RPC to region server under user 'hbase'.
> This issue is to execute mergeRegions RPC call as the request user
> See tail of HBASE-15132 for related discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15142) Procedure v2 - Basic WebUI listing the procedures

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122557#comment-15122557
 ] 

Hudson commented on HBASE-15142:


SUCCESS: Integrated in HBase-Trunk_matrix #666 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/666/])
HBASE-15142 Procedure v2 - Basic WebUI listing the procedures (matteo.bertozzi: 
rev 14dd959aa2145be3fddee6c4dc001508393784e7)
* hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
* hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* hbase-server/src/main/resources/hbase-webapps/master/procedures.jsp
* hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon


> Procedure v2 - Basic WebUI listing the procedures
> -
>
> Key: HBASE-15142
> URL: https://issues.apache.org/jira/browse/HBASE-15142
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15142-v0.patch, proc-webui.png
>
>
> Basic table showing the list of procedures 
> pending/in-execution/recently-completed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122559#comment-15122559
 ] 

Hudson commented on HBASE-15019:


SUCCESS: Integrated in HBase-Trunk_matrix #666 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/666/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
8a217da8fd3990f9880270eb1e50d8f87d1e92fb)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> RS is normally working and writing on the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because hdfs is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log that we rolled but were not 
> able to close is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side that 
> performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 
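The description above notes that nothing on the RS side performs the WAL lease 
recovery after the failed close. As a rough illustration of that missing step 
(a sketch only, not the committed fix), the snippet below asks HDFS to recover 
the lease on the old WAL before replication tries to read it:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WalLeaseRecoverySketch {
  // Sketch only: ask the NameNode to recover the lease on a WAL that could not
  // be closed, then wait until HDFS reports the file as closed.
  public static void recoverLease(Configuration conf, Path walPath) throws Exception {
    FileSystem fs = walPath.getFileSystem(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      return; // nothing to do on non-HDFS filesystems
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    boolean recovered = dfs.recoverLease(walPath);
    while (!recovered) {
      Thread.sleep(1000); // back off between checks
      recovered = dfs.isFileClosed(walPath);
    }
  }
}
{code}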

[jira] [Updated] (HBASE-15103) hadoopcheck test should provide diff file showing what's new

2016-01-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15103:
---
Issue Type: Test  (was: Bug)

> hadoopcheck test should provide diff file showing what's new
> 
>
> Key: HBASE-15103
> URL: https://issues.apache.org/jira/browse/HBASE-15103
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> Currently a developer has to go to the 'build artifacts' folder to read output 
> from the hadoopcheck test, e.g.
> https://builds.apache.org/job/PreCommit-HBASE-Build/98/artifact/patchprocess/patch-javac-2.6.1.txt
> The hadoopcheck test should provide a diff file showing what exactly is new.
> Thanks to Sean for the offline discussion.





[jira] [Updated] (HBASE-15056) Split fails with KeeperException$NoNodeException when namespace quota is enabled

2016-01-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15056:
---
Labels: quota  (was: )

> Split fails with KeeperException$NoNodeException when namespace quota is 
> enabled
> 
>
> Key: HBASE-15056
> URL: https://issues.apache.org/jira/browse/HBASE-15056
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Ted Yu
>  Labels: quota
> Attachments: 15056-branch-1-v1.txt, 
> split-fails-when-exceeding-quota-with-znode-loss.test
>
>
> When trying to port HBASE-15044 to branch-1, I found that a region split fails 
> with KeeperException$NoNodeException when namespace quota is enabled and the 
> split would exceed the allocated quota.
> Here is the related test output:
> {code}
> 2015-12-30 09:50:16,764 WARN  [RS:0;10.22.24.71:65256-splits-1451497816754] 
> zookeeper.ZKAssign(885): regionserver:65256-0x151f402c21c0001, 
> quorum=localhost:57662, baseZNode=/hbase Attempt to transition the 
> unassigned node for 17fc99c04a8027b653e9d5ef5d578461 from 
> RS_ZK_REQUEST_REGION_SPLIT to RS_ZK_REQUEST_REGION_SPLIT failed, the node 
> existed and   was in the expected state but then when setting data it no 
> longer existed
> 2015-12-30 09:50:16,866 DEBUG [RS:0;10.22.24.71:65256-splits-1451497816754] 
> zookeeper.ZKUtil(718): regionserver:65256-0x151f402c21c0001, 
> quorum=localhost:57662, baseZNode=/hbase Unable to get data of znode 
> /hbase/region-in-transition/17fc99c04a8027b653e9d5ef5d578461 because node 
> does not exist (not necessarily an error)
> 2015-12-30 09:50:16,866 INFO  [RS:0;10.22.24.71:65256-splits-1451497816754] 
> regionserver.SplitRequest(97): Running rollback/cleanup of failed split of 
> np2:   
> testRegionNormalizationSplitOnCluster,z,1451497806295.17fc99c04a8027b653e9d5ef5d578461.;
>  Failed getting SPLITTING znode on 
> np2:testRegionNormalizationSplitOnCluster,z,   
> 1451497806295.17fc99c04a8027b653e9d5ef5d578461.
> java.io.IOException: Failed getting SPLITTING znode on 
> np2:testRegionNormalizationSplitOnCluster,z,1451497806295.17fc99c04a8027b653e9d5ef5d578461.
>   at 
> org.apache.hadoop.hbase.coordination.ZKSplitTransactionCoordination.waitForSplitTransaction(ZKSplitTransactionCoordination.java:200)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.stepsBeforePONR(SplitTransactionImpl.java:381)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.createDaughters(SplitTransactionImpl.java:277)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.execute(SplitTransactionImpl.java:560)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitRequest.doSplitting(SplitRequest.java:82)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:154)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Data is null, splitting node 
> 17fc99c04a8027b653e9d5ef5d578461 no longer exists
>   at 
> org.apache.hadoop.hbase.coordination.ZKSplitTransactionCoordination.waitForSplitTransaction(ZKSplitTransactionCoordination.java:166)
>   ... 8 more
> 2015-12-30 09:50:16,869 DEBUG [RS:0;10.22.24.71:65256-splits-1451497816754] 
> zookeeper.ZKUtil(718): regionserver:65256-0x151f402c21c0001, 
> quorum=localhost:57662, baseZNode=/hbase Unable to get data of znode 
> /hbase/region-in-transition/17fc99c04a8027b653e9d5ef5d578461 because node 
> does not exist (not necessarily an error)
> 2015-12-30 09:50:16,869 INFO  [RS:0;10.22.24.71:65256-splits-1451497816754] 
> coordination.ZKSplitTransactionCoordination(268): Failed cleanup zk node of 
> np2:  
> testRegionNormalizationSplitOnCluster,z,1451497806295.17fc99c04a8027b653e9d5ef5d578461.
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>   at org.apache.hadoop.hbase.zookeeper.ZKAssign.deleteNode(ZKAssign.java:452)
>   at org.apache.hadoop.hbase.zookeeper.ZKAssign.deleteNode(ZKAssign.java:381)
>   at 
> org.apache.hadoop.hbase.coordination.ZKSplitTransactionCoordination.clean(ZKSplitTransactionCoordination.java:261)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.rollback(SplitTransactionImpl.java:948)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.rollback(SplitTransactionImpl.java:900)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitRequest.doSplitting(SplitRequest.java:99)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:154)
>   at 
> 

[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK

2016-01-28 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122569#comment-15122569
 ] 

Matteo Bertozzi commented on HBASE-15128:
-

{quote}Hmm, we actually do not have configuration options to enable / disable 
balancer. They are usually always on of course. Only turned off in maintenance 
mode. So they are really not configuration related.{quote}
They are not configuration because we don't have dynamic configuration. I'm 
pretty sure that if we had dynamic conf, we would have ended up with a 
conf.set("xyz.enable", ...).

Assuming we add this setSwitch() now and later add dynamic conf, how does one 
decide when to use one vs. the other, since they are able to do the same thing?

{quote}The new split or merge switch is not directly related to normalizer. It 
allows to disable all splits or merges, rather than changing normalizer 
behavior to do splits or merges.{quote}
This is just me not knowing what the normalizer is. I was assuming the 
normalizer was taking care of all the split/merge related stuff now. Sorry, but 
I haven't looked at that code yet and I was just making an assumption.
The switch we already had for the normalizer seemed a good place to keep down 
the number of RPC calls that will end up "redundant" if we add dynamic 
configuration support.
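A minimal sketch of how a client could toggle such a master-side switch around 
an hbck run. The Admin#setSplitOrMergeEnabled method and the MasterSwitchType 
enum follow what is being discussed in this issue and are assumptions here, not 
necessarily the final committed API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.MasterSwitchType;

public class HbckSplitMergeGuard {
  // Sketch only: disable splits/merges while an hbck-style check runs, then
  // re-enable them. Method and enum names are assumed from this discussion.
  public static void runWithSplitsAndMergesDisabled(Runnable hbckRun) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Turn the switches off so hbck sees a stable region partition map.
      admin.setSplitOrMergeEnabled(false, true,
          MasterSwitchType.SPLIT, MasterSwitchType.MERGE);
      try {
        hbckRun.run();
      } finally {
        // Restore normal operation once the check is done.
        admin.setSplitOrMergeEnabled(true, true,
            MasterSwitchType.SPLIT, MasterSwitchType.MERGE);
      }
    }
  }
}
{code}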

> Disable region splits and merges in HBCK
> 
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch
>
>
> In large clusters where region splits are frequent, and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled, like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to the 
> balancer switch being disabled by HBCK). 
> HBCK should also disable the Catalog Janitor, just in case. 





[jira] [Commented] (HBASE-14781) Turn per cf flushing on for ITBLL by default

2016-01-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122584#comment-15122584
 ] 

Nick Dimiduk commented on HBASE-14781:
--

Can folks think of any reason not to bring this back to 1.1? The rest of the 
patches made it back, as far as I can tell.

> Turn per cf flushing on for ITBLL by default
> 
>
> Key: HBASE-14781
> URL: https://issues.apache.org/jira/browse/HBASE-14781
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14781.patch
>
>






[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122552#comment-15122552
 ] 

Hudson commented on HBASE-15019:


FAILURE: Integrated in HBase-1.1-JDK8 #1735 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1735/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
5041485aa5c1ecfaa4697b8d0b8a78d027ceaa8a)
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because HDFS is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not able to 
> close is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side that 
> performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 

[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122551#comment-15122551
 ] 

Hudson commented on HBASE-15019:


FAILURE: Integrated in HBase-1.1-JDK7 #1648 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1648/])
HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 
5041485aa5c1ecfaa4697b8d0b8a78d027ceaa8a)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java


> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18
>
> Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, 
> HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, 
> HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because HDFS is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not able to 
> close is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side that 
> performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> 

[jira] [Created] (HBASE-15184) SparkSQL Scan operation doesn't work on kerberos cluster

2016-01-28 Thread Ted Malaska (JIRA)
Ted Malaska created HBASE-15184:
---

 Summary: SparkSQL Scan operation doesn't work on kerberos cluster
 Key: HBASE-15184
 URL: https://issues.apache.org/jira/browse/HBASE-15184
 Project: HBase
  Issue Type: Bug
Reporter: Ted Malaska


I was using the HBase Spark module at a client with Kerberos and I ran into an 
issue with the Scan.

I made a fix for the client, but we need to put it back into HBase. I will 
attach my solution, but it has a major problem: I had to override a protected 
class in Spark. I will need help to discover a better approach.
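For context, the usual pattern for making HBase scans work from Spark executors 
on a Kerberized cluster is to obtain an HBase delegation token on the driver 
and ship it with the job credentials. A minimal sketch of that idea follows; 
the TokenUtil.obtainAndCacheToken call is assumed from the 1.x security API and 
is not necessarily what the attached fix does:

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.security.UserGroupInformation;

public class SparkHBaseTokenSketch {
  // Sketch only: on the Spark driver, log in from a keytab and cache an HBase
  // delegation token in the current user's credentials so that executors can
  // authenticate to the region servers.
  public static void obtainHBaseToken(String principal, String keytab) throws Exception {
    UserGroupInformation.loginUserFromKeytab(principal, keytab);
    try (Connection conn =
        ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      User user = User.getCurrent();
      // Assumed API: obtains an authentication token and caches it in the
      // user's credentials for later shipping with the job.
      TokenUtil.obtainAndCacheToken(conn, user);
    }
  }
}
{code}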





[jira] [Updated] (HBASE-14877) maven archetype: client application

2016-01-28 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14877:
--
Status: Open  (was: Patch Available)

> maven archetype: client application
> ---
>
> Key: HBASE-14877
> URL: https://issues.apache.org/jira/browse/HBASE-14877
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: archetype, beginner, maven
> Attachments: HBASE-14877-v2.patch, HBASE-14877-v3.patch, 
> HBASE-14877-v4.patch, HBASE-14877.patch
>
>






[jira] [Updated] (HBASE-14877) maven archetype: client application

2016-01-28 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14877:
--
Status: Patch Available  (was: Open)

Submitting v5 of the patch, which differs from previous versions of the patch 
as follows:

(1) README.md (markdown format) replaces README.txt
(2) A new brief section has been added at the end of developers.adoc, pointing 
contributors to the README.md file (for info on the structure of 
hbase-archetypes and how to add new archetypes). The new section is currently 
numbered 148.9 in book.html when the site is generated.

> maven archetype: client application
> ---
>
> Key: HBASE-14877
> URL: https://issues.apache.org/jira/browse/HBASE-14877
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: archetype, beginner, maven
> Attachments: HBASE-14877-v2.patch, HBASE-14877-v3.patch, 
> HBASE-14877-v4.patch, HBASE-14877-v5.patch, HBASE-14877.patch
>
>






[jira] [Commented] (HBASE-10877) HBase non-retriable exception list should be expanded

2016-01-28 Thread Ajinkya Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121049#comment-15121049
 ] 

Ajinkya Kale commented on HBASE-10877:
--

Also, I get only "java.lang.IllegalAccessError: 
com/google/protobuf/HBaseZeroCopyByteString" and do not get the superclass 
part shown in the description of this ticket.
I don't know if they are different.

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon 

[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2016-01-28 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121069#comment-15121069
 ] 

Ashish Singhi commented on HBASE-9393:
--

Thanks for the comments.
Sorry for the delay in response, I was on holidays.
bq. Adding the below as finally in a method named pickReaderVersion seems a bit 
odd... is pickReaderVersion only place we read in the file trailer? That seems 
odd (not your issue Ashish Singhi). You'd think we'd want to keep the trailer 
around in the reader.
[~anoop.hbase] has already replied to this. Thanks.

bq. Is it odd adding this unbufferStream to hbase types when there is the 
Interface CanUnbuffer up in hdfs? Should we have a local hbase equivalent... 
and put it on HFileBlock, HFileReader... Then the relation is more clear? 
Perhaps overkill?
From the HBase side we do not have any control over the socket, so I don't think 
we can do anything here apart from calling the unbuffer API on the stream 
that implements the CanUnbuffer interface. I also think this is not needed.

bq. May be we should at least rename this method pickReaderVersion ?
Changed it to openReader as per the suggestion.

The last QA run for v5 was clean. Updated the patch addressing the method 
rename comment.
Thanks all again.
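For anyone following along, the core of the approach discussed here is to ask 
the underlying HDFS input stream to release its socket once a read finishes. A 
minimal sketch, assuming the stream implements Hadoop's CanUnbuffer interface 
(available in Hadoop 2.7+):

{code}
import java.io.InputStream;
import org.apache.hadoop.fs.CanUnbuffer;

public final class UnbufferSketch {
  private UnbufferSketch() {}

  // Sketch only: release the socket/buffers held by an HDFS input stream after
  // reading, so idle connections do not pile up in CLOSE_WAIT.
  public static void unbufferIfPossible(InputStream stream) {
    if (stream instanceof CanUnbuffer) {
      ((CanUnbuffer) stream).unbuffer();
    }
  }
}
{code}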

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> 
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2, 0.98.0
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>Reporter: Avi Zrachya
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-9393.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v6.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K CLOSE_WAIT sockets, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one 
> host to another on the same port.
> The example below is with a low CLOSE_WAIT count because we had to restart 
> HBase to solve the problem; later it will increase to 60-100K sockets 
> in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219  0 12:26 pts/000:00:00 grep 21592
> hbase21592 1 17 Aug29 ?03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...





[jira] [Commented] (HBASE-13590) TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121095#comment-15121095
 ] 

Hadoop QA commented on HBASE-13590:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
36s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 45s 
{color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 4m 
21s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 59s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 6s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 202m 6s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Timed out junit tests | 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784835/HBASE-13590.branch-1.1.patch
 |
| JIRA Issue | HBASE-13590 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux c029bf0d1faa 

[jira] [Commented] (HBASE-15171) Avoid counting duplicate kv and generating lots of small hfiles in PutSortReducer

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120988#comment-15120988
 ] 

Hadoop QA commented on HBASE-15171:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 18s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 38s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 228m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleWAL |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784815/HBASE-15171.addendum.patch
 |
| JIRA Issue | HBASE-15171 |
| Optional 

[jira] [Updated] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14841:
---
Attachment: HBASE-14841_8.patch

Updated the patch to avoid the findbugs warning. Thanks to Anoop for helping out 
with this.

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.





[jira] [Commented] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121075#comment-15121075
 ] 

Anoop Sam John commented on HBASE-14841:


Compressor, WALCellCodec - TODOs can be removed.
{code}
compressTags(out, in.array(), offset, length);
ByteBufferUtils.skip(in, length);
{code}
Do we need this skip?

ByteArrayBackedNode#hashCode - No need for typecasting now.
equals -> We can check for the Node type, typecast to that, and use getContents 
(as in ByteBufferBackedNode).

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.





[jira] [Updated] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14841:
---
Status: Patch Available  (was: Open)

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.





[jira] [Updated] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14841:
---
Status: Open  (was: Patch Available)

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.





[jira] [Commented] (HBASE-10877) HBase non-retriable exception list should be expanded

2016-01-28 Thread Ajinkya Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121033#comment-15121033
 ] 

Ajinkya Kale commented on HBASE-10877:
--

[~ndimiduk] unfortunately that didn't work. There are no clear steps for Spark 
in the book, unless I am missing something.
I tried adding the hbase-protocol jar to both driver.classpath and 
executor.classpath without any luck. Any suggestions?

> HBase non-retriable exception list should be expanded
> -
>
> Key: HBASE-10877
> URL: https://issues.apache.org/jira/browse/HBASE-10877
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> Example where retries do not make sense:
> {noformat}
> 2014-03-31 20:54:27,765 WARN [InputInitializer [Map 1] #0] 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch hbase:meta table: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=35, exceptions:
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:17 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:18 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:20 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:24 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:34 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:45:55 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:46:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:47:45 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:05 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:25 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:48:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:26 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:49:46 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Mon Mar 31 20:50:06 UTC 2014, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@343d511e, 
> 

[jira] [Commented] (HBASE-15093) Replication can report incorrect size of log queue for the global source when multiwal is enabled

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121051#comment-15121051
 ] 

Hadoop QA commented on HBASE-15093:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
0s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s 
{color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hbase-hadoop-compat in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 2s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hbase-hadoop-compat in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s {color} 
| {color:red} hbase-server in the patch failed with JDK 

[jira] [Updated] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14841:
---
Attachment: HBASE-14841_8.patch

Rebased the patch with the comments addressed. Removed the unused compressTags 
API, so now only one compressTags exists, and the skip() is not needed, as you 
said, because the BB now comes from a Cell. I think the previous compressTags 
needed it for some reason; not sure.

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch, HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14841:
---
Status: Patch Available  (was: Open)

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch, HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14918) In-Memory MemStore Flush and Compaction

2016-01-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121121#comment-15121121
 ] 

Anoop Sam John commented on HBASE-14918:


As per the present trunk code we can move it out; I have already done so. But I am 
not sure whether we need it inside the new MemStore impl (with internal flush to 
pipeline and flush as CellBlock), so I did not raise a Jira.

The reason I suggest moving it out is that copying the Cell data into an MSLAB area 
is not a MemStore implementation detail: whatever the MemStore impl (current or 
new), we need it. I have also done a patch to avoid the garbage we create in the 
write path (see HBASE-15180) when MSLAB is on. That is why I thought to make it an 
upper-layer concern rather than part of the MemStore impl.
I still need to see how my patch can satisfy the needs of the new memstore impls.
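A minimal sketch of the idea being discussed, assuming illustrative interfaces 
rather than the real HBase classes: the MSLAB copy is performed above the MemStore 
implementation, so any impl (default or compacting) receives cells whose data 
already lives in MSLAB-managed chunks.

{code:java}
public class MslabAtStoreLevelSketch {

  interface Cell { byte[] bytes(); }

  // Could be the default memstore or the new compacting memstore.
  interface MemStoreImpl {
    void add(Cell cell);
  }

  // Stand-in for MSLAB: copies a cell's data into a chunk and returns the chunk-backed cell.
  interface MemStoreLab {
    Cell copyCellInto(Cell cell);
  }

  static class Store {
    private final MemStoreImpl memstore;
    private final MemStoreLab mslab;   // may be null when MSLAB is disabled

    Store(MemStoreImpl memstore, MemStoreLab mslab) {
      this.memstore = memstore;
      this.mslab = mslab;
    }

    /** Write path: copy into MSLAB (if enabled) before the memstore ever sees the cell. */
    void add(Cell cell) {
      Cell toAdd = (mslab == null) ? cell : mslab.copyCellInto(cell);
      memstore.add(toAdd);
    }
  }
}
{code}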

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 0.98.18
>
> Attachments: CellBlocksSegmentDesign.pdf
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest to apply in-memory 
> memstore flush. That is to flush the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up smaller space in RAM. The less space the buffer consumes the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest to structure the solution as 4 subtasks (respectively, patches). 
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing 
> segment (StoreSegment) as first-class citizen, and decoupling memstore 
> scanner from the memstore implementation;
> (2) Adding StoreServices facility at the region level to allow memstores 
> update region counters and access region level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with non-optimized 
> immutable segment representation, and 
> (4) Memory optimization including compressed format representation and off 
> heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 
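As a rough illustration of the in-memory flush described above, here is a sketch 
under heavy simplifications (a segment is modeled as a plain sorted map of row to 
value, and sizes are approximated); the actual CompactingMemstore design in the 
attached documents is considerably richer.

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class InMemoryFlushSketch {

  /** An immutable in-memory segment (simplified to a sorted map of row -> value). */
  static class Segment {
    final ConcurrentSkipListMap<String, byte[]> cells;
    Segment(ConcurrentSkipListMap<String, byte[]> cells) { this.cells = cells; }
  }

  private ConcurrentSkipListMap<String, byte[]> active = new ConcurrentSkipListMap<>();
  private final Deque<Segment> pipeline = new ArrayDeque<>(); // flushed in memory, not to disk
  private final long inMemoryFlushThresholdBytes;
  private long activeSizeBytes = 0;

  public InMemoryFlushSketch(long thresholdBytes) {
    this.inMemoryFlushThresholdBytes = thresholdBytes;
  }

  public void add(String row, byte[] value) {
    active.put(row, value);
    activeSizeBytes += row.length() + value.length;
    if (activeSizeBytes >= inMemoryFlushThresholdBytes) {
      inMemoryFlush();
    }
  }

  /** Flush the active segment into the in-memory pipeline instead of writing a file. */
  private void inMemoryFlush() {
    pipeline.addFirst(new Segment(active));
    active = new ConcurrentSkipListMap<>();
    activeSizeBytes = 0;
    compactPipeline();
  }

  /** Merge pipeline segments, keeping only the newest value per row (redundancy elimination). */
  private void compactPipeline() {
    ConcurrentSkipListMap<String, byte[]> merged = new ConcurrentSkipListMap<>();
    for (Segment s : pipeline) {                       // newest segment first
      for (Map.Entry<String, byte[]> e : s.cells.entrySet()) {
        merged.putIfAbsent(e.getKey(), e.getValue());  // first value seen wins (the newest)
      }
    }
    pipeline.clear();
    pipeline.addFirst(new Segment(merged));
  }
}
{code}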



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14877) maven archetype: client application

2016-01-28 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14877:
--
Attachment: HBASE-14877-v5.patch

> maven archetype: client application
> ---
>
> Key: HBASE-14877
> URL: https://issues.apache.org/jira/browse/HBASE-14877
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: archetype, beginner, maven
> Attachments: HBASE-14877-v2.patch, HBASE-14877-v3.patch, 
> HBASE-14877-v4.patch, HBASE-14877-v5.patch, HBASE-14877.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2016-01-28 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-9393:
-
Attachment: HBASE-9393.v6.patch

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> 
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2, 0.98.0
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>Reporter: Avi Zrachya
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-9393.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v6.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K sockets in CLOSE_WAIT, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one host 
> to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart HBase 
> to solve the problem; over time it will increase to 60-100K sockets in 
> CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219  0 12:26 pts/000:00:00 grep 21592
> hbase21592 1 17 Aug29 ?03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...
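As background on the symptom (and not the fix in the attached patch): a socket sits 
in CLOSE_WAIT when the remote end has closed the connection but the local process 
has not yet called close() on its side. A generic, hedged Java illustration of 
closing a connection deterministically, so that a dead peer cannot leave sockets 
lingering in CLOSE_WAIT:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class CloseWaitSketch {
  // Both the socket and its stream are closed when the block exits, even if the
  // peer (e.g. a datanode) has already gone away.
  static byte[] readAll(String host, int port) throws IOException {
    try (Socket socket = new Socket(host, port);
         InputStream in = socket.getInputStream()) {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      byte[] chunk = new byte[8192];
      int n;
      while ((n = in.read(chunk)) != -1) {
        buf.write(chunk, 0, n);
      }
      return buf.toByteArray();
    }
  }
}
{code}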



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14841:
---
Status: Open  (was: Patch Available)

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14877) maven archetype: client application

2016-01-28 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14877:
--
Release Note: 
This patch introduces a new infrastructure for creation and maintenance of 
Maven archetypes in the context of the hbase project, and it also introduces 
the first archetype, which end-users may utilize to generate a simple 
hbase-client dependent project.

NOTE that this patch should introduce two new WARNINGs ("Using platform 
encoding ... to copy filtered resources") into the hbase install process. These 
warnings are hard-wired into the maven-archetype-plugin:create-from-project 
goal. See hbase/hbase-archetypes/README.md, footnote [6] for details.

After applying the patch, see hbase/hbase-archetypes/README.md for details 
regarding the new archetype infrastructure introduced by this patch. (The 
README text is also conveniently positioned at the top of the patch itself.) 

Here is the opening paragraph of the README.md file: 
= 
The hbase-archetypes subproject of hbase provides an infrastructure for 
creation and maintenance of Maven archetypes pertinent to HBase. Upon 
deployment to the archetype catalog of the central Maven repository, these 
archetypes may be used by end-user developers to autogenerate completely 
configured Maven projects (including fully-functioning sample code) through 
invocation of the archetype:generate goal of the maven-archetype-plugin. 
 
The README.md file also contains several paragraphs under the heading, "Notes 
for contributors and committers to the HBase project", which explains the 
layout of 'hbase-archetypes', and how archetypes are created and installed into 
the local Maven repository, ready for deployment to the central Maven 
repository. It also outlines how new archetypes may be developed and added to 
the collection in the future.

  was:
This patch introduces a new infrastructure for creation and maintenance of 
Maven archetypes in the context of the hbase project, and it also introduces 
the first archetype, which end-users may utilize to generate a simple 
hbase-client dependent project.

NOTE that this patch should introduce two new WARNINGs ("Using platform 
encoding ... to copy filtered resources") into the hbase install process. These 
warnings are hard-wired into the maven-archetype-plugin:create-from-project 
goal. See hbase/hbase-archetypes/README.txt, footnote [7] for details.

After applying the patch, see hbase/hbase-archetypes/README.txt for details 
regarding the new archetype infrastructure introduced by this patch. (The 
README text is also conveniently positioned at the top of the patch itself.) 

Here is the opening paragraph of the README.txt file: 
= 
The hbase-archetypes subproject of hbase provides an infrastructure for 
creation and maintenance of Maven archetypes pertinent to HBase. Upon 
deployment to the archetype catalog of the central Maven repository, these 
archetypes may be used by end-user developers to autogenerate completely 
configured Maven projects (including fully-functioning sample code) through 
invocation of the archetype:generate goal of the maven-archetype-plugin. 
 
The README.txt file also contains several paragraphs under the heading, "Notes 
for contributors to the HBase project", which explains the layout of 
'hbase-archetypes', and how archetypes are created and installed into the local 
Maven repository, ready for deployment to the central Maven repository. It also 
outlines how new archetypes may be developed and added to the collection in the 
future.


> maven archetype: client application
> ---
>
> Key: HBASE-14877
> URL: https://issues.apache.org/jira/browse/HBASE-14877
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: archetype, beginner, maven
> Attachments: HBASE-14877-v2.patch, HBASE-14877-v3.patch, 
> HBASE-14877-v4.patch, HBASE-14877-v5.patch, HBASE-14877.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14918) In-Memory MemStore Flush and Compaction

2016-01-28 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121094#comment-15121094
 ] 

Eshcar Hillel commented on HBASE-14918:
---

Thanks [~anoop.hbase].
I don't see how you can move MSLAB to the HStore level.
In the first patch, MSLAB is used in the segment to allocate the byte range (in 
maybeCloneWithAllocator()), and it also does bookkeeping of the scanners that 
access the MSLAB (with inc/decScannersCount()) so it can manage the deallocation 
of buffers once no scanners can access them.
This is also the case in master, but there the methods are in the scope of 
DefaultMemStore and the MemStoreScanner.
How would you suggest moving it to HStore? Why do you think it is better there 
than inside the segment?
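For concreteness, a small sketch of the scanner bookkeeping described here, with 
illustrative names rather than the patch's actual Segment/MSLAB classes: the MSLAB 
chunks backing a segment are released only once the segment is retired and no 
scanner still references it.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

public class SegmentBookkeepingSketch {

  interface MemStoreLab {
    void close(); // release the chunks backing this segment's cells (assumed idempotent here)
  }

  static class Segment {
    private final MemStoreLab mslab;
    private final AtomicInteger openScanners = new AtomicInteger(0);
    private volatile boolean closed = false;

    Segment(MemStoreLab mslab) { this.mslab = mslab; }

    /** A scanner starts reading this segment. */
    void incScannersCount() { openScanners.incrementAndGet(); }

    /** A scanner finishes; free the MSLAB only when nothing can still read the segment. */
    void decScannersCount() {
      if (openScanners.decrementAndGet() == 0 && closed) {
        mslab.close();
      }
    }

    /** The segment is retired (e.g. flushed to disk); defer MSLAB release to the last scanner. */
    void close() {
      closed = true;
      if (openScanners.get() == 0) {
        mslab.close();
      }
    }
    // Note: a real implementation must make the close/dec race atomic; this sketch
    // leans on the assumption that MemStoreLab.close() is idempotent.
  }
}
{code}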

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 0.98.18
>
> Attachments: CellBlocksSegmentDesign.pdf
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest to apply in-memory 
> memstore flush. That is to flush the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up smaller space in RAM. The less space the buffer consumes the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest to structure the solution as 4 subtasks (respectively, patches). 
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing 
> segment (StoreSegment) as first-class citizen, and decoupling memstore 
> scanner from the memstore implementation;
> (2) Adding StoreServices facility at the region level to allow memstores 
> update region counters and access region level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with non-optimized 
> immutable segment representation, and 
> (4) Memory optimization including compressed format representation and off 
> heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121106#comment-15121106
 ] 

Anoop Sam John commented on HBASE-14841:


+1

> Allow Dictionary to work with BytebufferedCells
> ---
>
> Key: HBASE-14841
> URL: https://issues.apache.org/jira/browse/HBASE-14841
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-14841.patch, HBASE-14841_1.patch, 
> HBASE-14841_2.patch, HBASE-14841_3.patch, HBASE-14841_4.patch, 
> HBASE-14841_5.patch, HBASE-14841_6.patch, HBASE-14841_7.patch, 
> HBASE-14841_8.patch, HBASE-14841_8.patch
>
>
> This is part of HBASE-14832 where we need to ensure that while BBCells are 
> getting compacted the TagCompression part should be working with BBCells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14841) Allow Dictionary to work with BytebufferedCells

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121127#comment-15121127
 ] 

Hadoop QA commented on HBASE-14841:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s 
{color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 39s {color} 
| {color:red} hbase-common-jdk1.8.0_72 with JDK v1.8.0_72 generated 4 new 
issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 28s {color} 
| {color:red} hbase-common-jdk1.7.0_91 with JDK v1.7.0_91 generated 4 new 
issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 11s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-common 
(total was 160, now 161). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 39s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 47s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 50s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.7.0_91. 
