[jira] [Commented] (HBASE-14374) Backport parent issue to 1.1 and 1.0

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734284#comment-14734284
 ] 

Hadoop QA commented on HBASE-14374:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754573/14317.branch-1.v2.txt
  against branch-1 branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754573

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15458//console

This message is automatically generated.

> Backport parent issue to 1.1 and 1.0
> 
>
> Key: HBASE-14374
> URL: https://issues.apache.org/jira/browse/HBASE-14374
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.3
>
> Attachments: 14317-branch-1.1.txt, 14317.branch-1.v2.txt
>
>
> Backport parent issue to branch-1.1 and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-09-08 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-6617:
-
Attachment: HBASE-6617.branch-1.patch

Upload the patch for branch-1

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 6617-v11.patch, HBASE-6617.branch-1.patch, 
> HBASE-6617.patch, HBASE-6617_v10.patch, HBASE-6617_v11.patch, 
> HBASE-6617_v12.patch, HBASE-6617_v2.patch, HBASE-6617_v3.patch, 
> HBASE-6617_v4.patch, HBASE-6617_v7.patch, HBASE-6617_v9.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notification 
> about a new HLog and remembers it in latestPath.
> When the region server has multiple WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager.
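One way to sketch that tracking (a hypothetical illustration using java.nio paths and invented names, not the actual patch) is to keep the latest WAL path per WAL group instead of a single latestPath field:

```java
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: with multiwal enabled, remember the newest WAL path
// per WAL group rather than overwriting one shared latestPath.
final class LatestPathsSketch {
    private final Map<String, Path> latestPaths = new ConcurrentHashMap<>();

    // Called on log roll: record the newest path for this WAL's group.
    void logRolled(String walGroup, Path newPath) {
        latestPaths.put(walGroup, newPath);
    }

    Path latestFor(String walGroup) {
        return latestPaths.get(walGroup);
    }
}
```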



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14331) a single callQueue related improvements

2015-09-08 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734507#comment-14734507
 ] 

Hiroshi Ikeda commented on HBASE-14331:
---

Sorry, it has just struck me that my implementation of the {{remove(Object)}} 
method is inconsistent with the other methods.
The method cannot throw {{UnsupportedOperationException}} under the contract of 
{{BlockingQueue}}. It would be better to define a new interface with just 
{{take}} and {{put}}, but {{BalancedQueueRpcExecutor}} and 
{{RWQueueRpcExecutor}} are published to coprocessors and Phoenix with 
unbelievably messy constructors, so that is quite difficult.

For now, I'll add the classes as package-private.

> a single callQueue related improvements
> ---
>
> Key: HBASE-14331
> URL: https://issues.apache.org/jira/browse/HBASE-14331
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, Performance
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: BlockingQueuesPerformanceTestApp-output.pdf, 
> BlockingQueuesPerformanceTestApp-output.txt, 
> BlockingQueuesPerformanceTestApp.java, CallQueuePerformanceTestApp.java, 
> SemaphoreBasedBlockingQueue.java, SemaphoreBasedLinkedBlockingQueue.java, 
> SemaphoreBasedPriorityBlockingQueue.java
>
>
> {{LinkedBlockingQueue}} well separates locks between the {{take}} method and 
> the {{put}} method, but not between takers, and not between putters. These 
> methods are implemented to take locks at the almost beginning of their logic. 
> HBASE-11355 introduces multiple call-queues to reduce such possible 
> congestion, but I doubt that it is required to stick to {{BlockingQueue}}.
> There are other shortcomings to using {{BlockingQueue}}. When using 
> multiple queues, since {{BlockingQueue}} blocks threads, it is necessary to 
> prepare enough threads for each queue. It is possible that one queue is 
> starving for threads while threads sit idle on another queue. 
> Even if you can tune parameters to avoid such situations, the tuning is not 
> trivial.
> I suggest using a single {{ConcurrentLinkedQueue}} with {{Semaphore}}.
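As a rough illustration of that suggestion (a hypothetical sketch with invented names, not the attached SemaphoreBasedBlockingQueue.java), a take/put-only queue can pair a lock-free {{ConcurrentLinkedQueue}} with a {{Semaphore}} that counts available elements:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Minimal sketch: a queue offering only put/take, backed by a lock-free
// ConcurrentLinkedQueue. The Semaphore counts available elements, so take()
// blocks without a global lock shared by all takers and putters.
final class SemaphoreQueueSketch<E> {
    private final Queue<E> queue = new ConcurrentLinkedQueue<>();
    private final Semaphore available = new Semaphore(0);

    void put(E e) {
        queue.add(e);          // lock-free enqueue
        available.release();   // signal one waiting taker
    }

    E take() throws InterruptedException {
        available.acquire();   // block until an element is available
        return queue.poll();   // non-null: a permit implies an element
    }
}
```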



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14378) Get TestAccessController* passing again on branch-1

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734356#comment-14734356
 ] 

Hadoop QA commented on HBASE-14378:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754562/14378.branch-1.txt
  against branch-1 branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754562

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestStochasticBalancerJmxMetrics

 {color:red}-1 core zombie tests{color}.  There are 3 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15457//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15457//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15457//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15457//console

This message is automatically generated.

> Get TestAccessController* passing again on branch-1
> ---
>
> Key: HBASE-14378
> URL: https://issues.apache.org/jira/browse/HBASE-14378
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14378.branch-1.txt
>
>
> TestAccessController* are failing reliably on branch-1. They go zombie. I 
> learned that setting the junit test timeout facility on the class doesn't 
> make the zombies time out, nor does setting a timeout on each test turn 
> zombies into test failures; the test goes zombie on the way out, in the tear 
> down of the cluster.
> Digging, we are out of handlers... all are occupied.
> Commit 3dacee6 (HBASE-14290 "Spin up less threads in tests") cut the default 
> thread count from 10 to 3. Putting the value back up on these tests seems to 
> make them pass reliably when I run locally. For good measure, I'll add in 
> the timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734696#comment-14734696
 ] 

Sean Busbey commented on HBASE-14376:
-

Unfortunately, you have to go to the build artifacts:

https://builds.apache.org/job/PreCommit-HBASE-Build/15455/artifact/patchprocess/

and then grab the "trunkJavacWarnings.txt" and "patchJavacWarnings.txt" and 
then diff them.

> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14349) pre-commit zombie finder is overly broad

2015-09-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734758#comment-14734758
 ] 

Sean Busbey commented on HBASE-14349:
-

current zombie detector:

{code}
  ZOMBIE_TESTS_COUNT=`jps | grep surefirebooter | wc -l`
  if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
    #It seems sometimes the tests are not dying immediately. Let's give them 30s
    echo "Suspicious java process found - waiting 30s to see if there are just slow to stop"
    sleep 30
    ZOMBIE_TESTS_COUNT=`jps | grep surefirebooter | wc -l`
    if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
      echo "There are $ZOMBIE_TESTS_COUNT zombie tests, they should have been killed by surefire but survived"
      echo " BEGIN zombies jstack extract"
      ZB_STACK=`jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack | grep ".test" | grep "\.java"`
      jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack
      echo " END  zombies jstack extract"
      JIRA_COMMENT="$JIRA_COMMENT

 {color:red}-1 core zombie tests{color}.  There are ${ZOMBIE_TESTS_COUNT} zombie test(s): ${ZB_STACK}"
      BAD=1
      jps | grep surefirebooter | cut -d ' ' -f 1 | xargs kill -9
    else
      echo "We're ok: there is no zombie test, but some tests took some time to stop"
    fi
  else
    echo "We're ok: there is no zombie test"
  fi
{code}

the jps entries look like

{code}
Every 2.0s: jps | grep surefirebooter                        Tue Sep  8 07:47:48 2015

36463 surefirebooter7254413731964488287.jar
{code}

so there's nothing that screams "hbase only." Anyone have any ideas? I guess we 
could track pids after we do the jstack dump and limit to those that look like 
hbase tests?

> pre-commit zombie finder is overly broad
> 
>
> Key: HBASE-14349
> URL: https://issues.apache.org/jira/browse/HBASE-14349
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
>Priority: Critical
>
> Zombie detector is flagging processes from builds that aren't ours.
> ex from HBASE-14337:
> {code}
> -1 core zombie tests. There are 4 zombie test(s): at 
> org.apache.reef.io.network.DeprecatedNetworkConnectionServiceTest.testMultithreadedSharedConnMessagingNetworkConnServiceRate(DeprecatedNetworkConnectionServiceTest.java:343)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14372) test-patch doesn't properly account for flakey tests

2015-09-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734700#comment-14734700
 ] 

Sean Busbey commented on HBASE-14372:
-

yeah, I think that'd be reasonable. I mostly just noted this here so I would 
remember to check post-Yetus. :)

Also looks like we get the same empty test failures when the retry doesn't 
succeed:


{code}

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 


{code}

{code}
Results :

Failed tests: 
org.apache.hadoop.hbase.client.TestFastFail.testFastFail(org.apache.hadoop.hbase.client.TestFastFail)
  Run 1: TestFastFail.testFastFail:274 There should be atleast one 
PreemptiveFastFail exception, otherwise, the test makes little 
sense.numPreemptiveFastFailExceptions: 0
  Run 2: TestFastFail.testFastFail:274 There should be atleast one 
PreemptiveFastFail exception, otherwise, the test makes little 
sense.numPreemptiveFastFailExceptions: 0




Tests run: 2719, Failures: 1, Errors: 0, Skipped: 19
{code}

test saved: https://builds.apache.org/job/PreCommit-HBASE-Build/15455/

> test-patch doesn't properly account for flakey tests
> 
>
> Key: HBASE-14372
> URL: https://issues.apache.org/jira/browse/HBASE-14372
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
>
> we use the surefire plugin to rerun failing tests to check for flakeys. if a 
> flakey test fails once and then succeeds we get a vote that looks like this:
> {code}
>  {color:red}-1 core tests{color}.  The patch failed these unit tests:
>  
> {code}
> (note the lack of any listed tests)
> looking back at the actual run we can see:
> {code}
> Flaked tests: 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testSplitDaughtersNotInMeta(org.apache.hadoop.hbase.util.TestHBaseFsck)
>   Run 1: TestHBaseFsck.testSplitDaughtersNotInMeta:1771 null
>   Run 2: PASS
> Tests run: 2642, Failures: 0, Errors: 0, Skipped: 17, Flakes: 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14380) Correct data also getting skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhupendra Kumar Jain reassigned HBASE-14380:


Assignee: Bhupendra Kumar Jain  (was: Bhupendra)

> Correct data also getting skipped along with bad data in importTsv bulk load 
> thru TsvImporterTextMapper
> ---
>
> Key: HBASE-14380
> URL: https://issues.apache.org/jira/browse/HBASE-14380
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra Kumar Jain
>
> Consider the input data below:
> ROWKEY, TIMESTAMP, Col_Value
> r1,1,v1   >> Correct line
> r1 >> Bad line
> r1,3,v3   >> Correct line
> r1,4,v4   >> Correct line
> When data is bulk loaded using importTsv with TsvImporterTextMapper as the 
> mapper, all the lines are ignored even though skipBadLines is set to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14380) Correct data also getting skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)
Bhupendra Kumar Jain created HBASE-14380:


 Summary: Correct data also getting skipped along with bad data in 
importTsv bulk load thru TsvImporterTextMapper
 Key: HBASE-14380
 URL: https://issues.apache.org/jira/browse/HBASE-14380
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Bhupendra Kumar Jain
Assignee: Bhupendra


Consider the input data below:
ROWKEY, TIMESTAMP, Col_Value
r1,1,v1 >> Correct line
r1   >> Bad line
r1,3,v3 >> Correct line
r1,4,v4 >> Correct line

When data is bulk loaded using importTsv with TsvImporterTextMapper as the 
mapper, all the lines are ignored even though skipBadLines is set to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14380) Correct data also getting skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhupendra Kumar Jain updated HBASE-14380:
-
Attachment: 0001-HBASE-14380.patch

Simple patch. Please review.

> Correct data also getting skipped along with bad data in importTsv bulk load 
> thru TsvImporterTextMapper
> ---
>
> Key: HBASE-14380
> URL: https://issues.apache.org/jira/browse/HBASE-14380
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra Kumar Jain
> Attachments: 0001-HBASE-14380.patch
>
>
> Consider the input data below:
> ROWKEY, TIMESTAMP, Col_Value
> r1,1,v1   >> Correct line
> r1 >> Bad line
> r1,3,v3   >> Correct line
> r1,4,v4   >> Correct line
> When data is bulk loaded using importTsv with TsvImporterTextMapper as the 
> mapper, all the lines are ignored even though skipBadLines is set to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734673#comment-14734673
 ] 

Hadoop QA commented on HBASE-6617:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12754599/HBASE-6617.branch-1.patch
  against branch-1 branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754599

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3820 checkstyle errors (more than the master's current 3815 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithACL.testLabelsTableOpsWithDifferentUsers(TestVisibilityLabelsWithACL.java:232)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15459//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15459//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15459//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15459//console

This message is automatically generated.

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 6617-v11.patch, HBASE-6617.branch-1.patch, 
> HBASE-6617.patch, HBASE-6617_v10.patch, HBASE-6617_v11.patch, 
> HBASE-6617_v12.patch, HBASE-6617_v2.patch, HBASE-6617_v3.patch, 
> HBASE-6617_v4.patch, HBASE-6617_v7.patch, HBASE-6617_v9.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notification 
> about a new HLog and remembers it in latestPath.
> When the region server has multiple WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14372) test-patch doesn't properly account for flakey tests

2015-09-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734700#comment-14734700
 ] 

Sean Busbey edited comment on HBASE-14372 at 9/8/15 12:00 PM:
--

yeah, I think that'd be reasonable. I mostly just noted this here so I would 
remember to check post-Yetus. :)

Also looks like we get the same empty test failures when the retry doesn't 
succeed:


{code}

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 


{code}

{code}
Results :

Failed tests: 
org.apache.hadoop.hbase.client.TestFastFail.testFastFail(org.apache.hadoop.hbase.client.TestFastFail)
  Run 1: TestFastFail.testFastFail:274 There should be atleast one 
PreemptiveFastFail exception, otherwise, the test makes little 
sense.numPreemptiveFastFailExceptions: 0
  Run 2: TestFastFail.testFastFail:274 There should be atleast one 
PreemptiveFastFail exception, otherwise, the test makes little 
sense.numPreemptiveFastFailExceptions: 0




Tests run: 2719, Failures: 1, Errors: 0, Skipped: 19
{code}

test saved: https://builds.apache.org/job/PreCommit-HBASE-Build/15455/


was (Author: busbey):
yeah, I think that'd be reasonable. I mostly just noted this here so I would 
remember to check post-Yetus. :)

Also looks like we get the same empty test failures when the retry doesn't 
succeed:


{code}

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 


{Code}

{code}
Results :

Failed tests: 
org.apache.hadoop.hbase.client.TestFastFail.testFastFail(org.apache.hadoop.hbase.client.TestFastFail)
  Run 1: TestFastFail.testFastFail:274 There should be atleast one 
PreemptiveFastFail exception, otherwise, the test makes little 
sense.numPreemptiveFastFailExceptions: 0
  Run 2: TestFastFail.testFastFail:274 There should be atleast one 
PreemptiveFastFail exception, otherwise, the test makes little 
sense.numPreemptiveFastFailExceptions: 0




Tests run: 2719, Failures: 1, Errors: 0, Skipped: 19
{code}

test saved: https://builds.apache.org/job/PreCommit-HBASE-Build/15455/

> test-patch doesn't properly account for flakey tests
> 
>
> Key: HBASE-14372
> URL: https://issues.apache.org/jira/browse/HBASE-14372
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
>
> we use the surefire plugin to rerun failing tests to check for flakeys. if a 
> flakey test fails once and then succeeds we get a vote that looks like this:
> {code}
>  {color:red}-1 core tests{color}.  The patch failed these unit tests:
>  
> {code}
> (note the lack of any listed tests)
> looking back at the actual run we can see:
> {code}
> Flaked tests: 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testSplitDaughtersNotInMeta(org.apache.hadoop.hbase.util.TestHBaseFsck)
>   Run 1: TestHBaseFsck.testSplitDaughtersNotInMeta:1771 null
>   Run 2: PASS
> Tests run: 2642, Failures: 0, Errors: 0, Skipped: 17, Flakes: 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14380) Correct data also getting skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734734#comment-14734734
 ] 

Bhupendra Kumar Jain commented on HBASE-14380:
--

TextSortReducer receives the request grouped by rowkey, with all the text 
lines passed as an Iterable of values:
{code}
protected void reduce(ImmutableBytesWritable rowKey, java.lang.Iterable<Text> lines,
    Reducer<ImmutableBytesWritable, Text, ImmutableBytesWritable, KeyValue>.Context context)
    throws java.io.IOException, InterruptedException
{code}
Inside the method, each line is parsed; when a bad line is hit, the method 
returns immediately instead of continuing with the next line, so all 
subsequent data is ignored.

{code}
catch (ImportTsv.TsvParser.BadTsvLineException badLine) {
  if (skipBadLines) {
System.err.println("Bad line." + badLine.getMessage());
incrementBadLineCount(1);
return;
  }
{code}
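The behavior difference can be illustrated with a small hypothetical loop (invented names, not the actual TextSortReducer code): counting the bad line and moving on keeps the remaining lines, whereas returning drops everything after it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: skip only the offending line ("continue") rather than
// abandoning the rest of the iterable ("return"), so good lines still load.
final class SkipBadLinesSketch {
    static int badLineCount = 0;

    static List<String> parseLines(Iterable<String> lines) {
        List<String> parsed = new ArrayList<>();
        for (String line : lines) {
            try {
                // Treat anything without rowkey, timestamp and value as bad.
                if (line.split(",").length < 3) {
                    throw new IllegalArgumentException("Bad line: " + line);
                }
                parsed.add(line);
            } catch (IllegalArgumentException badLine) {
                badLineCount++;   // count it, as incrementBadLineCount(1) does
                continue;         // move on to the next line instead of returning
            }
        }
        return parsed;
    }
}
```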

> Correct data also getting skipped along with bad data in importTsv bulk load 
> thru TsvImporterTextMapper
> ---
>
> Key: HBASE-14380
> URL: https://issues.apache.org/jira/browse/HBASE-14380
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra
>
> Consider the input data below:
> ROWKEY, TIMESTAMP, Col_Value
> r1,1,v1   >> Correct line
> r1 >> Bad line
> r1,3,v3   >> Correct line
> r1,4,v4   >> Correct line
> When data is bulk loaded using importTsv with TsvImporterTextMapper as the 
> mapper, all the lines are ignored even though skipBadLines is set to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14340) Add second bulk load option to Spark Bulk Load to send puts as the value

2015-09-08 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734676#comment-14734676
 ] 

Ted Malaska commented on HBASE-14340:
-

Starting this now

> Add second bulk load option to Spark Bulk Load to send puts as the value
> 
>
> Key: HBASE-14340
> URL: https://issues.apache.org/jira/browse/HBASE-14340
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
>
> The initial bulk load option for Spark bulk load sends values over one by one 
> through the shuffle.  This is similar to how the original MR bulk load 
> worked.
> However, the MR bulk loader has more than one bulk load option.  There is a 
> second option that allows all the Column Families, Qualifiers, and Values 
> of a row to be combined on the map side.
> This only works if the row is not super wide.
> But if the row is not super wide, this method of sending values through the 
> shuffle will reduce the data and the work the shuffle has to deal with.
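The idea can be sketched roughly (hypothetical types and names, not the hbase-spark API): instead of shuffling one (rowKey, cell) pair per value, combine a row's cells into a single record on the map side so the shuffle moves fewer, larger records.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical map-side combine: each input is {rowKey, family, qualifier,
// value}; grouping by row key means the shuffle carries one record per row
// instead of one per cell. Only safe while a row's cells fit in memory
// (i.e. the row is not super wide).
final class RowCombineSketch {
    static Map<String, List<String[]>> combineByRow(List<String[]> cells) {
        Map<String, List<String[]>> rows = new LinkedHashMap<>();
        for (String[] cell : cells) {
            rows.computeIfAbsent(cell[0], k -> new ArrayList<>()).add(cell);
        }
        return rows;
    }
}
```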



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14378) Get TestAccessController* passing again on branch-1

2015-09-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734709#comment-14734709
 ] 

Sean Busbey commented on HBASE-14378:
-

the zombies from above:

 * hbase.security.access.TestWithDisabledAuthorization
 * hbase.security.access.TestScanEarlyTermination
 * hbase.security.access.TestCellACLs


Maybe the same thing, since they're also in the security access package and all 
look stuck in tearDownAfterClass?

Are the timeouts needed? I thought we were timing out based on test 
categorization now? Or do these need more time than the default?

> Get TestAccessController* passing again on branch-1
> ---
>
> Key: HBASE-14378
> URL: https://issues.apache.org/jira/browse/HBASE-14378
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14378.branch-1.txt
>
>
> TestAccessController* are failing reliably on branch-1. They go zombie. I 
> learned that setting the junit test timeout facility on the class doesn't 
> make the zombies time out, nor does setting a timeout on each test turn 
> zombies into test failures; the test goes zombie on the way out, in the tear 
> down of the cluster.
> Digging, we are out of handlers... all are occupied.
> Commit 3dacee6 (HBASE-14290 "Spin up less threads in tests") cut the default 
> thread count from 10 to 3. Putting the value back up on these tests seems to 
> make them pass reliably when I run locally. For good measure, I'll add in 
> the timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2015-09-08 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-6617:
-
Attachment: HBASE-6617.branch-1.v2.patch

v2 patch for branch-1, fixing the checkstyle issues; asking HadoopQA to rerun the UTs

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 6617-v11.patch, HBASE-6617.branch-1.patch, 
> HBASE-6617.branch-1.v2.patch, HBASE-6617.patch, HBASE-6617_v10.patch, 
> HBASE-6617_v11.patch, HBASE-6617_v12.patch, HBASE-6617_v2.patch, 
> HBASE-6617_v3.patch, HBASE-6617_v4.patch, HBASE-6617_v7.patch, 
> HBASE-6617_v9.patch
>
>
> Currently ReplicationSourceManager uses logRolled() to receive notification 
> about a new HLog and remembers it in latestPath.
> When the region server has multiple WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735509#comment-14735509
 ] 

Enis Soztutar commented on HBASE-14370:
---

bq. See if patch v3 is better.
Thanks Ted. 

The executor thread is not shut down, and will cause a thread leak. 
I was following the AtomicReference nodes, but could not work out the full 
semantics. Did you introduce it to pass the list of nodes to the thread? Can 
we simplify by just passing the list of znodes directly to the thread? There 
may still be a race condition between nodeChildrenChanged (which now happens in 
the thread) and nodeDeleted, which still executes in the zk event thread, no?
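A minimal sketch of that simplification (hypothetical names, not the actual ZKPermissionWatcher code): hand the list of znodes straight to a single-threaded executor so the refresh leaves the zk event thread, and shut the executor down when the watcher closes to avoid the leak.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the znode list is passed directly to the task, so no
// shared AtomicReference is needed, and close() shuts the executor down.
final class RefreshSketch {
    private final ExecutorService refresher = Executors.newSingleThreadExecutor();
    final AtomicInteger refreshed = new AtomicInteger();  // for illustration only

    // Called from the zk event thread; returns immediately.
    void nodeChildrenChanged(List<String> znodes) {
        refresher.submit(() -> refreshNodes(znodes));
    }

    private void refreshNodes(List<String> znodes) {
        // Re-read permissions for each znode here; runs off the event thread.
        refreshed.addAndGet(znodes.size());
    }

    // Shut the executor down when the watcher stops, avoiding a thread leak.
    void close() throws InterruptedException {
        refresher.shutdown();
        refresher.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```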

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where the main zk thread was seen doing 
> the following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to the missing fix from HBASE-12635, 
> leading to slowness in table creation because the zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails

2015-09-08 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-14280:
--
Release Note: 
The patch takes effect with Hadoop version 2.6 or greater, which introduced 
"internal.nameservices".
There is no change in behavior for versions older than 2.6.

  was:It is supported with Hadoop 2.6 with a launch of "internal.nameservices".

  Status: Patch Available  (was: Open)

> Bulk Upload from HA cluster to remote HA hbase cluster fails
> 
>
> Key: HBASE-14280
> URL: https://issues.apache.org/jira/browse/HBASE-14280
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, regionserver
>Affects Versions: 0.98.4
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch
>
>
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
>   ... 4 more
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548)
>   ... 11 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails

2015-09-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735543#comment-14735543
 ] 

Ted Yu commented on HBASE-14280:


{code}
74}catch(NoSuchMethodError e){
{code}
Please leave a space between the right curly and catch, between catch and 
'(', and between ')' and the left curly.

'else' should follow the right curly (on same line).
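
A self-contained illustration of the requested formatting (the method and exception here are stand-ins, not the patch's actual code):

```java
public class BraceStyle {
    static String classify(int n) {
        String label;
        // Requested: "} catch (SomeException e) {" -- a space after the
        // right curly, after 'catch', and between ')' and the left curly,
        // instead of the flagged "}catch(NoSuchMethodError e){".
        try {
            label = Integer.toString(100 / n);
        } catch (ArithmeticException e) {
            label = "undefined";
        }
        // Requested: 'else' follows the right curly on the same line.
        if (n >= 0) {
            return "non-negative:" + label;
        } else {
            return "negative:" + label;
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(0));   // division throws, so "undefined"
        System.out.println(classify(-5));  // 100 / -5 = -20
    }
}
```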

> Bulk Upload from HA cluster to remote HA hbase cluster fails
> 
>
> Key: HBASE-14280
> URL: https://issues.apache.org/jira/browse/HBASE-14280
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, regionserver
>Affects Versions: 0.98.4
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch
>
>
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
>   ... 4 more
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548)
>   ... 11 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12911) Client-side metrics

2015-09-08 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735427#comment-14735427
 ] 

Nick Dimiduk commented on HBASE-12911:
--

Failure of TestInterfaceAudienceAnnotations is legitimate, though larger than 
my patch. Will fix in HBASE-14382.

> Client-side metrics
> ---
>
> Key: HBASE-12911
> URL: https://issues.apache.org/jira/browse/HBASE-12911
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, Operability, Performance
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, am.jpg, client metrics 
> RS-Master.jpg, client metrics client.jpg, conn_agg.jpg, connection 
> attributes.jpg, ltt.jpg, standalone.jpg
>
>
> There's very little visibility into the hbase client. Folks who care to add 
> some kind of metrics collection end up wrapping Table method invocations with 
> {{System.currentTimeMillis()}}. For a crude example of this, have a look at 
> what I did in {{PerformanceEvaluation}} for exposing requests latencies up to 
> {{IntegrationTestRegionReplicaPerf}}. The client is quite complex, there's a 
> lot going on under the hood that is impossible to see right now without a 
> profiler. Being a crucial part of the performance of this distributed system, 
> we should have deeper visibility into the client's function.
> I'm not sure that wiring into the hadoop metrics system is the right choice 
> because the client is often embedded as a library in a user's application. We 
> should have integration with our metrics tools so that, i.e., a client 
> embedded in a coprocessor can report metrics through the usual RS channels, 
> or a client used in a MR job can do the same.
> I would propose an interface-based system with pluggable implementations. Out 
> of the box we'd include a hadoop-metrics implementation and one other, 
> possibly [dropwizard/metrics|https://github.com/dropwizard/metrics].
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14382) Missing interface audience annotations

2015-09-08 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-14382:


 Summary: Missing interface audience annotations
 Key: HBASE-14382
 URL: https://issues.apache.org/jira/browse/HBASE-14382
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Nick Dimiduk
Priority: Blocker
 Fix For: 2.0.0, 1.3.0


Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
annotations. Indeed, from test log, my patch is not the only one missing 
annotations.

{noformat}
2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
/Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
/Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
/Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
/Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
/Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
/Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 12:05:31,158 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO NOT 
have @InterfaceAudience annotation:
2015-09-08 12:05:31,158 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
2015-09-08 12:05:31,158 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.client.MetricsConnectionWrapper
2015-09-08 12:05:31,160 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
2015-09-08 12:05:31,160 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.client.MetricsConnectionHostSource
2015-09-08 12:05:31,160 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): class 
org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
2015-09-08 12:05:31,160 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
2015-09-08 12:05:31,160 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
2015-09-08 12:05:31,160 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): class 
org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.client.MetricsConnectionSource
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
2015-09-08 12:05:31,161 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): class 
org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
2015-09-08 12:05:31,162 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): class 
org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
2015-09-08 12:05:31,162 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.regionserver.MetricsRegionSource
2015-09-08 12:05:31,162 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.master.MetricsMasterWrapper
2015-09-08 12:05:31,162 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.metrics.BaseSource
2015-09-08 12:05:31,162 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): interface 
org.apache.hadoop.hbase.rest.MetricsRESTSource
2015-09-08 12:05:31,162 INFO  [main] 

[jira] [Commented] (HBASE-14331) a single callQueue related improvements

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735504#comment-14735504
 ] 

Hadoop QA commented on HBASE-14331:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754687/HBASE-14331.patch
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754687

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1837 checkstyle errors (more than the master's current 1834 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15468//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15468//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15468//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15468//console

This message is automatically generated.

> a single callQueue related improvements
> ---
>
> Key: HBASE-14331
> URL: https://issues.apache.org/jira/browse/HBASE-14331
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, Performance
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: BlockingQueuesPerformanceTestApp-output.pdf, 
> BlockingQueuesPerformanceTestApp-output.txt, 
> BlockingQueuesPerformanceTestApp.java, CallQueuePerformanceTestApp.java, 
> HBASE-14331.patch, SemaphoreBasedBlockingQueue.java, 
> SemaphoreBasedLinkedBlockingQueue.java, 
> SemaphoreBasedPriorityBlockingQueue.java
>
>
> {{LinkedBlockingQueue}} well separates locks between the {{take}} method and 
> the {{put}} method, but not between takers, and not between putters. These 
> methods are implemented to take locks almost at the beginning of their logic. 
> HBASE-11355 introduces multiple call-queues to reduce such possible 
> congestion, but I doubt that it is required to stick to {{BlockingQueue}}.
> There are the other shortcomings of using {{BlockingQueue}}. When using 
> multiple queues, since {{BlockingQueue}} blocks threads it is required to 
> prepare enough threads for each queue. It is possible that there is a queue 
> starving for threads while there is another queue where threads are idle. 
> Even if you can tune parameters to avoid such situations, the tuning is not 
> so trivial.
> I suggest using a single {{ConcurrentLinkedQueue}} with {{Semaphore}}.
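
A compressed sketch of the suggested combination (the attached SemaphoreBased*Queue.java files are the real proposal; the class below is only illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Sketch: a lock-free ConcurrentLinkedQueue holds the elements, and a
// Semaphore counts them, so take() can block without the single big lock
// that LinkedBlockingQueue makes all takers contend on.
public class SemaphoreQueue<E> {
    private final Queue<E> queue = new ConcurrentLinkedQueue<>();
    private final Semaphore available = new Semaphore(0);

    public void put(E e) {
        queue.add(e);          // lock-free enqueue
        available.release();   // wake one waiting taker
    }

    public E take() throws InterruptedException {
        available.acquire();   // block until an element is available
        return queue.poll();   // non-null: a permit implies an element
    }

    public static void main(String[] args) throws InterruptedException {
        SemaphoreQueue<String> q = new SemaphoreQueue<>();
        q.put("call-1");
        q.put("call-2");
        System.out.println(q.take());
        System.out.println(q.take());
    }
}
```

With one shared structure like this, idle handler threads can serve whatever work arrives instead of being statically bound to one of several queues.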



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735503#comment-14735503
 ] 

Hadoop QA commented on HBASE-12751:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754714/12751.rebased.v26.txt
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754714

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 90 new 
or modified tests.

{color:red}-1 Anti-pattern{color}.  The patch appears to 
have anti-pattern where BYTES_COMPARATOR was omitted:
 -getRegionInfo(), -1, new TreeMap());.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15471//testReport/
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15471//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15471//console

This message is automatically generated.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, 
> 12751v22.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 
> HBASE-12751-v1.patch, HBASE-12751-v10.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v11.patch, HBASE-12751-v12.patch, HBASE-12751-v13.patch, 
> HBASE-12751-v14.patch, HBASE-12751-v15.patch, HBASE-12751-v16.patch, 
> HBASE-12751-v17.patch, HBASE-12751-v18.patch, HBASE-12751-v19 (1).patch, 
> HBASE-12751-v19.patch, HBASE-12751-v2.patch, HBASE-12751-v20.patch, 
> HBASE-12751-v20.patch, HBASE-12751-v21.patch, HBASE-12751-v3.patch, 
> HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
> HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
> HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores 
> then this is very limiting.
> If there are puts to the same column then mvcc number should ensure a 
> consistent ordering. So locking is not needed.
> However locking for check and put or increment is still needed.
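
A hedged sketch of the reader-writer idea (not HBase's actual RowLock; names and the single-cell model are illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: plain puts share the read lock so puts to different columns or
// stores of the same row proceed in parallel; read-modify-write operations
// (increment, checkAndPut) take the exclusive write lock.
public class RowLockSketch {
    private final ReentrantReadWriteLock rowLock = new ReentrantReadWriteLock();
    private volatile long cellValue = 0;

    void put(long v) {
        rowLock.readLock().lock();   // shared: many puts run together
        try {
            cellValue = v;           // mvcc would order concurrent writers
        } finally {
            rowLock.readLock().unlock();
        }
    }

    long increment(long delta) {
        rowLock.writeLock().lock();  // exclusive: read-modify-write
        try {
            cellValue += delta;
            return cellValue;
        } finally {
            rowLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RowLockSketch row = new RowLockSketch();
        row.put(5);
        System.out.println("after increment: " + row.increment(3));
    }
}
```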



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14382) Missing interface audience annotations

2015-09-08 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735436#comment-14735436
 ] 

Elliott Clark commented on HBASE-14382:
---

Nothing in the compat modules used to need the interface audience annotations. 
They were excluded from that test.

> Missing interface audience annotations
> --
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Nick Dimiduk
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionSource
> 2015-09-08 

[jira] [Updated] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14370:
---
Attachment: (was: 14370-v3.txt)

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14370:
---
Attachment: 14370-v3.txt

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14349) pre-commit zombie finder is overly broad

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735513#comment-14735513
 ] 

Enis Soztutar commented on HBASE-14349:
---

We can add a {{-Dhbase.tests}} or something like that to {{}} for 
surefire and grep for that. We already have  
{{-Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true}}, so maybe we can just add those to the grep 
(although it will be fragile). 

> pre-commit zombie finder is overly broad
> 
>
> Key: HBASE-14349
> URL: https://issues.apache.org/jira/browse/HBASE-14349
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
>Priority: Critical
>
> Zombie detector is flagging processes from builds that aren't ours.
> ex from HBASE-14337:
> {code}
> -1 core zombie tests. There are 4 zombie test(s): at 
> org.apache.reef.io.network.DeprecatedNetworkConnectionServiceTest.testMultithreadedSharedConnMessagingNetworkConnServiceRate(DeprecatedNetworkConnectionServiceTest.java:343)
> {code}





[jira] [Updated] (HBASE-14378) Get TestAccessController* passing again on branch-1

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14378:
--
Attachment: 14378.branch-1.v2.txt

Seems unrelated. Retry.

> Get TestAccessController* passing again on branch-1
> ---
>
> Key: HBASE-14378
> URL: https://issues.apache.org/jira/browse/HBASE-14378
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14378.branch-1.txt, 14378.branch-1.v2.txt, 
> 14378.branch-1.v2.txt
>
>
> TestAccessController* are failing reliably on branch-1. They go zombie. I 
> learned that setting the junit test timeout facility on the class doesn't 
> make the zombie timeout nor does setting a timeout on each test turn zombies 
> to test failures; the test goes zombie on the way out in the tear down of the 
> cluster.
> Digging, we are out of handlers... all are occupied.
> 3dacee6 HBASE-14290 Spin up less threads in tests cut the default thread 
> count to 3 from 10. Putting the value back on these tests seems to make them 
> pass reliably when I run locally. For good measure, I'll add in the timeouts.





[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735527#comment-14735527
 ] 

Ted Yu commented on HBASE-14370:


w.r.t. the thread leak, have you seen the following code?
{code}
+  public void close() {
+executor.shutdown();
{code}
w.r.t. AtomicReference, the goal is for the refresher thread to be interruptible.

w.r.t. the race condition between nodeChildrenChanged and nodeDeleted: if a table 
(namespace) is deleted, the client would get TableNotFoundException 
(NamespaceNotFoundException) on future access - before the ACL is even checked.
Do you think tighter coordination is needed between the zk thread and the 
refresher thread?
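The pattern under discussion might be sketched as below. This is a hypothetical illustration in plain java.util.concurrent, not the actual ZKPermissionWatcher code: a single-thread pool runs refreshes off the zk event thread, the AtomicReference to the in-flight Future lets a newer notification interrupt a stale refresh, and close() shuts the pool down so no thread is leaked. The class and method names are made up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch only - not the real ZKPermissionWatcher.
public class RefreshSketch {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private final AtomicReference<Future<?>> inFlight = new AtomicReference<>();

  public void onNodeChildrenChanged(Runnable refresh) {
    // Submit the new refresh, then interrupt whichever one it supersedes.
    Future<?> previous = inFlight.getAndSet(executor.submit(refresh));
    if (previous != null) {
      previous.cancel(true); // makes the in-flight refresh interruptible
    }
  }

  public void close() {
    executor.shutdown(); // mirrors the close() quoted above; avoids a thread leak
  }

  public static void main(String[] args) throws Exception {
    RefreshSketch sketch = new RefreshSketch();
    sketch.onNodeChildrenChanged(() -> System.out.println("refresh ran"));
    sketch.close();
    sketch.executor.awaitTermination(5, TimeUnit.SECONDS);
  }
}
```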

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.





[jira] [Resolved] (HBASE-14382) Missing interface audience annotations

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-14382.
--
Resolution: Invalid

Resolving as invalid. This is a legitimate failure introduced by HBASE-12911.

> Missing interface audience annotations
> --
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Nick Dimiduk
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> 

[jira] [Commented] (HBASE-12911) Client-side metrics

2015-09-08 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735540#comment-14735540
 ] 

Nick Dimiduk commented on HBASE-12911:
--

Failure of TestInterfaceAudienceAnnotations is actually introduced by my patch. 
It's easy enough to reproduce by applying the hbase-client/pom.xml changes to 
master. This makes sense -- I'm adding hbase-hadoop-compat modules to the 
dependency list. Are we okay with this?

> Client-side metrics
> ---
>
> Key: HBASE-12911
> URL: https://issues.apache.org/jira/browse/HBASE-12911
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, Operability, Performance
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, am.jpg, client metrics 
> RS-Master.jpg, client metrics client.jpg, conn_agg.jpg, connection 
> attributes.jpg, ltt.jpg, standalone.jpg
>
>
> There's very little visibility into the hbase client. Folks who care to add 
> some kind of metrics collection end up wrapping Table method invocations with 
> {{System.currentTimeMillis()}}. For a crude example of this, have a look at 
> what I did in {{PerformanceEvaluation}} for exposing requests latencies up to 
> {{IntegrationTestRegionReplicaPerf}}. The client is quite complex, there's a 
> lot going on under the hood that is impossible to see right now without a 
> profiler. Being a crucial part of the performance of this distributed system, 
> we should have deeper visibility into the client's function.
> I'm not sure that wiring into the hadoop metrics system is the right choice 
> because the client is often embedded as a library in a user's application. We 
> should have integration with our metrics tools so that, i.e., a client 
> embedded in a coprocessor can report metrics through the usual RS channels, 
> or a client used in a MR job can do the same.
> I would propose an interface-based system with pluggable implementations. Out 
> of the box we'd include a hadoop-metrics implementation and one other, 
> possibly [dropwizard/metrics|https://github.com/dropwizard/metrics].
> Thoughts?





[jira] [Updated] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14370:
---
Attachment: 14370-v3.txt

See if patch v3 is better.

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.





[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12751:
--
Attachment: 12751.rebased.v26.txt

Fix up warnings and the NPE that was causing test failures.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, 
> 12751v22.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 
> HBASE-12751-v1.patch, HBASE-12751-v10.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v11.patch, HBASE-12751-v12.patch, HBASE-12751-v13.patch, 
> HBASE-12751-v14.patch, HBASE-12751-v15.patch, HBASE-12751-v16.patch, 
> HBASE-12751-v17.patch, HBASE-12751-v18.patch, HBASE-12751-v19 (1).patch, 
> HBASE-12751-v19.patch, HBASE-12751-v2.patch, HBASE-12751-v20.patch, 
> HBASE-12751-v20.patch, HBASE-12751-v21.patch, HBASE-12751-v3.patch, 
> HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
> HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
> HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores 
> then this is very limiting.
> If there are puts to the same column then mvcc number should ensure a 
> consistent ordering. So locking is not needed.
> However locking for check and put or increment is still needed.





[jira] [Updated] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails

2015-09-08 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-14280:
--
Attachment: HBASE-14280_v2.patch

New patch compatible with all versions of hadoop.
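For readers skimming the stack trace below: the failure reduces to Hadoop's FileSystem.checkPath rejecting a path whose scheme/authority differ from the filesystem instance it was handed. Here is a toy, dependency-free reproduction of that mismatch; the check is simplified from the real FileSystem implementation, and only the nameservice names are taken from the report.

```java
import java.net.URI;

// Toy reproduction of the "Wrong FS" condition - a simplified stand-in
// for Hadoop's FileSystem.checkPath, not the real implementation.
public class WrongFsDemo {
  static boolean sameFs(URI fsUri, URI pathUri) {
    // checkPath (roughly) requires matching scheme and authority.
    return fsUri.getScheme().equals(pathUri.getScheme())
        && fsUri.getAuthority().equals(pathUri.getAuthority());
  }

  public static void main(String[] args) {
    URI hbaseFs = URI.create("hdfs://ha-hbase-nameservice1");
    URI srcPath = URI.create("hdfs://ha-aggregation-nameservice1/hbase_upload/f1");
    // false: the bulk-load path lives on the other HA nameservice,
    // so the destination cluster's FileSystem refuses it.
    System.out.println(sameFs(hbaseFs, srcPath));
  }
}
```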

> Bulk Upload from HA cluster to remote HA hbase cluster fails
> 
>
> Key: HBASE-14280
> URL: https://issues.apache.org/jira/browse/HBASE-14280
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, regionserver
>Affects Versions: 0.98.4
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch
>
>
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
>   ... 4 more
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548)
>   ... 11 more





[jira] [Commented] (HBASE-14349) pre-commit zombie finder is overly broad

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735517#comment-14735517
 ] 

Enis Soztutar commented on HBASE-14349:
---

We should also change {{jps}} to {{ps -aux}}.  

> pre-commit zombie finder is overly broad
> 
>
> Key: HBASE-14349
> URL: https://issues.apache.org/jira/browse/HBASE-14349
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
>Priority: Critical
>
> Zombie detector is flagging processes from builds that aren't ours.
> ex from HBASE-14337:
> {code}
> -1 core zombie tests. There are 4 zombie test(s): at 
> org.apache.reef.io.network.DeprecatedNetworkConnectionServiceTest.testMultithreadedSharedConnMessagingNetworkConnServiceRate(DeprecatedNetworkConnectionServiceTest.java:343)
> {code}





[jira] [Commented] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735532#comment-14735532
 ] 

Hadoop QA commented on HBASE-14280:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754723/HBASE-14280_v2.patch
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754723

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15473//testReport/
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15473//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15473//console

This message is automatically generated.

> Bulk Upload from HA cluster to remote HA hbase cluster fails
> 
>
> Key: HBASE-14280
> URL: https://issues.apache.org/jira/browse/HBASE-14280
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, regionserver
>Affects Versions: 0.98.4
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch
>
>
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0,
>  expected: hdfs://ha-hbase-nameservice1
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750)
>   at 
> 

[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735423#comment-14735423
 ] 

Enis Soztutar commented on HBASE-14370:
---

Forking a thread is expensive, so we do not use that pattern but instead use 
Executors with fixed thread pools. Plus, as mentioned above, we have to make 
sure that refresh requests from different event notifications execute in the 
same order that the zk notifications arrive in. A single-threaded executor 
should be able to do that.
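The ordering guarantee can be illustrated without any HBase code: a newSingleThreadExecutor drains its queue FIFO with one worker, so tasks submitted in zk-notification order also run in that order. The task ids below are made up for the demo, with each task standing in for a refreshNodes() call.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderDemo {
  public static void main(String[] args) throws Exception {
    // Single worker + FIFO queue: sequential execution in submission order.
    ExecutorService pool = Executors.newSingleThreadExecutor();
    List<Integer> observed = new CopyOnWriteArrayList<>();
    for (int i = 0; i < 5; i++) {
      final int id = i;
      pool.submit(() -> observed.add(id)); // stands in for refreshNodes()
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println(observed); // prints [0, 1, 2, 3, 4]
  }
}
```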

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.





[jira] [Commented] (HBASE-12911) Client-side metrics

2015-09-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735612#comment-14735612
 ] 

Samarth Jain commented on HBASE-12911:
--

Thanks for pointing to the Phoenix jira, [~ndimiduk]!

Summary of what we discussed via email:
As part of its metrics collection, Phoenix provides some metrics around the 
scans/puts it executes - for example, the number of bytes read/sent over to hbase. 
If we could surface these additional metrics that reflect the work done by 
client-initiated scans and puts, it would be a really nice addition to the 
metrics that Phoenix reports at the statement and global client levels. 
The ScanMetrics class looks like a perfect fit for this, but it doesn't look like 
ScanMetrics is exposed via a client API currently.

Maybe this work fits HBASE-14381 better?

> Client-side metrics
> ---
>
> Key: HBASE-12911
> URL: https://issues.apache.org/jira/browse/HBASE-12911
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, Operability, Performance
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, am.jpg, client metrics 
> RS-Master.jpg, client metrics client.jpg, conn_agg.jpg, connection 
> attributes.jpg, ltt.jpg, standalone.jpg
>
>
> There's very little visibility into the hbase client. Folks who care to add 
> some kind of metrics collection end up wrapping Table method invocations with 
> {{System.currentTimeMillis()}}. For a crude example of this, have a look at 
> what I did in {{PerformanceEvaluation}} for exposing requests latencies up to 
> {{IntegrationTestRegionReplicaPerf}}. The client is quite complex, there's a 
> lot going on under the hood that is impossible to see right now without a 
> profiler. Being a crucial part of the performance of this distributed system, 
> we should have deeper visibility into the client's function.
> I'm not sure that wiring into the hadoop metrics system is the right choice 
> because the client is often embedded as a library in a user's application. We 
> should have integration with our metrics tools so that, i.e., a client 
> embedded in a coprocessor can report metrics through the usual RS channels, 
> or a client used in a MR job can do the same.
> I would propose an interface-based system with pluggable implementations. Out 
> of the box we'd include a hadoop-metrics implementation and one other, 
> possibly [dropwizard/metrics|https://github.com/dropwizard/metrics].
> Thoughts?





[jira] [Updated] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14382:
-
Component/s: (was: Client)
 test

> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> 

[jira] [Commented] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735594#comment-14735594
 ] 

Nick Dimiduk commented on HBASE-14376:
--

Why not just add to existing hbase-examples module?

> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.





[jira] [Commented] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735651#comment-14735651
 ] 

Nick Dimiduk commented on HBASE-14376:
--

True, but it's just the examples module. I don't think this is an issue for 
that module.

> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.





[jira] [Commented] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735701#comment-14735701
 ] 

Hadoop QA commented on HBASE-14376:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754741/warnings.diff
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754741

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15474//console

This message is automatically generated.

> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch, warnings.diff
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.





[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735703#comment-14735703
 ] 

Enis Soztutar commented on HBASE-14370:
---

bq. w.r.t. thread leak, have you seen the following code ?
Ok, missed that. 
bq. Do you think tighter coordination is needed between the zk thread and the 
refresher thread ?
In theory, there may be a race where one thread is refreshing the table auths 
while the other is deleting that permission, since they will now be executing 
in different threads. Maybe we can make every operation 
(nodeCreated, nodeDeleted, nodeDataChanged, nodeChildrenChanged) execute from 
the executor.
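The single-executor idea can be illustrated with a small sketch (a hypothetical watcher class, not the actual ZKPermissionWatcher API): funneling every node event through one single-threaded executor serializes refreshes and deletes, so they can no longer interleave.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical watcher: every ZK callback is handed to a
// single-threaded executor, so a nodeDeleted can never interleave
// with a refresh still running on another thread.
public class SerializedWatcher {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  final List<String> applied = new ArrayList<>(); // mutated only by the executor thread

  void nodeCreated(String path) { executor.execute(() -> applied.add("refresh:" + path)); }
  void nodeDeleted(String path) { executor.execute(() -> applied.add("delete:" + path)); }

  void close() throws InterruptedException {
    executor.shutdown();
    executor.awaitTermination(5, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    SerializedWatcher w = new SerializedWatcher();
    w.nodeCreated("/acl/t1");
    w.nodeDeleted("/acl/t1");
    w.close();
    // Events are applied strictly in submission order on one thread.
    System.out.println(w.applied);
  }
}
```

A single-threaded executor guarantees tasks run sequentially in submission order, which is exactly the property the zk event thread provided before the refresh was moved off it.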

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.





[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735719#comment-14735719
 ] 

Ted Yu commented on HBASE-14370:


bq. one thread is refreshing the table auths, while the other is deleting that 
permission

If the table auth is put back by the refresher, it would be overwritten the next 
time a table with the same name is created.
Before the table is created again, TableNotFoundException serves as the guard.

What do you think ?

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.





[jira] [Updated] (HBASE-14382) Missing interface audience annotations

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14382:
-
Priority: Minor  (was: Blocker)

> Missing interface audience annotations
> --
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.

[jira] [Commented] (HBASE-12911) Client-side metrics

2015-09-08 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735619#comment-14735619
 ] 

Andrew Purtell commented on HBASE-12911:


You might get some validation from what the Urban Airship folks put together a 
couple of years ago for 0.94. I mentioned it above. Their stuff tracks op 
latencies by op, region, and server using a wrapper around HTable. I'm working 
on a revamped version for us for 0.98. We'd be most interested in identifying 
slow servers. 

> Client-side metrics
> ---
>
> Key: HBASE-12911
> URL: https://issues.apache.org/jira/browse/HBASE-12911
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, Operability, Performance
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, 
> 0001-HBASE-12911-Client-side-metrics.patch, am.jpg, client metrics 
> RS-Master.jpg, client metrics client.jpg, conn_agg.jpg, connection 
> attributes.jpg, ltt.jpg, standalone.jpg
>
>
> There's very little visibility into the hbase client. Folks who care to add 
> some kind of metrics collection end up wrapping Table method invocations with 
> {{System.currentTimeMillis()}}. For a crude example of this, have a look at 
> what I did in {{PerformanceEvaluation}} for exposing requests latencies up to 
> {{IntegrationTestRegionReplicaPerf}}. The client is quite complex, there's a 
> lot going on under the hood that is impossible to see right now without a 
> profiler. Being a crucial part of the performance of this distributed system, 
> we should have deeper visibility into the client's function.
> I'm not sure that wiring into the hadoop metrics system is the right choice 
> because the client is often embedded as a library in a user's application. We 
> should have integration with our metrics tools so that, i.e., a client 
> embedded in a coprocessor can report metrics through the usual RS channels, 
> or a client used in a MR job can do the same.
> I would propose an interface-based system with pluggable implementations. Out 
> of the box we'd include a hadoop-metrics implementation and one other, 
> possibly [dropwizard/metrics|https://github.com/dropwizard/metrics].
> Thoughts?
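The interface-based, pluggable design proposed above might look roughly like this (illustrative names only, not the API that eventually shipped): the client reports each RPC to whatever ClientMetrics implementation was plugged in, whether that forwards to Hadoop metrics, Dropwizard, or nothing.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative pluggable client-metrics interface: callers report RPC
// timings; implementations decide where the numbers go (Hadoop
// metrics, Dropwizard, a no-op when metrics are disabled, ...).
interface ClientMetrics {
  void updateRpc(String method, long durationMs);
}

// Simple in-memory implementation, e.g. for tests or embedded use.
class InMemoryClientMetrics implements ClientMetrics {
  final Map<String, LongAdder> counts = new ConcurrentHashMap<>();
  final Map<String, LongAdder> totalMs = new ConcurrentHashMap<>();

  public void updateRpc(String method, long durationMs) {
    counts.computeIfAbsent(method, k -> new LongAdder()).increment();
    totalMs.computeIfAbsent(method, k -> new LongAdder()).add(durationMs);
  }
}

public class ClientMetricsDemo {
  public static void main(String[] args) {
    InMemoryClientMetrics m = new InMemoryClientMetrics();
    m.updateRpc("Get", 3);
    m.updateRpc("Get", 5);
    m.updateRpc("Put", 2);
    System.out.println(m.counts.get("Get").sum());  // 2
    System.out.println(m.totalMs.get("Get").sum()); // 8
  }
}
```

Keeping the interface this narrow is what would let a coprocessor-embedded client route metrics through the usual RS channels while a MapReduce-embedded client routes them elsewhere.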





[jira] [Commented] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735646#comment-14735646
 ] 

Sean Busbey commented on HBASE-14376:
-

We could, but that would introduce a dependency on the Spark / Scala
stuff outside of the Spark-specific modules. We've been avoiding that.

-- 
Sean



> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.





[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12751:
--
Attachment: 12751.rebased.v26.txt

I removed H6 from the precommit set because it failed again with "can't find mvn".

Retry.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, 
> 12751.rebased.v26.txt, 12751v22.txt, 12751v23.txt, 12751v23.txt, 
> 12751v23.txt, 12751v23.txt, HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
> HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, 
> HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v18.patch, 
> HBASE-12751-v19 (1).patch, HBASE-12751-v19.patch, HBASE-12751-v2.patch, 
> HBASE-12751-v20.patch, HBASE-12751-v20.patch, HBASE-12751-v21.patch, 
> HBASE-12751-v3.patch, HBASE-12751-v4.patch, HBASE-12751-v5.patch, 
> HBASE-12751-v6.patch, HBASE-12751-v7.patch, HBASE-12751-v8.patch, 
> HBASE-12751-v9.patch, HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores 
> then this is very limiting.
> If there are puts to the same column then mvcc number should ensure a 
> consistent ordering. So locking is not needed.
> However locking for check and put or increment is still needed.
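The split described above maps naturally onto a read-write lock. Here is a minimal sketch (not HBase's actual RowLock code): plain puts take the shared side, while read-modify-write operations such as increment take the exclusive side.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: plain puts share the row lock (MVCC handles their
// ordering), while read-modify-write operations like increment must
// hold it exclusively so no put can slip in between read and write.
public class RowLockDemo {
  private final ReentrantReadWriteLock rowLock = new ReentrantReadWriteLock();
  private long cellValue = 0;

  void put(long v) {
    rowLock.readLock().lock();   // shared: many puts may hold it at once
    try {
      cellValue = v;
    } finally {
      rowLock.readLock().unlock();
    }
  }

  long increment(long delta) {
    rowLock.writeLock().lock();  // exclusive: read-modify-write
    try {
      cellValue += delta;
      return cellValue;
    } finally {
      rowLock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    RowLockDemo row = new RowLockDemo();
    row.put(10);
    System.out.println(row.increment(5)); // 15
  }
}
```

The win is that puts to different columns or stores of the same row no longer serialize on each other; only increment and checkAndPut pay the exclusive-lock cost.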





[jira] [Updated] (HBASE-14382) Missing interface audience annotations

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14382:
-
Attachment: HBASE-14382.00.patch

Not quite; there's a bug there. I need the following patch on top of HBASE-12911 
to get the test passing.

> Missing interface audience annotations
> --
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.

[jira] [Reopened] (HBASE-14382) Missing interface audience annotations

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reopened HBASE-14382:
--
  Assignee: Nick Dimiduk

> Missing interface audience annotations
> --
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.

[jira] [Commented] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735660#comment-14735660
 ] 

Gabor Liptak commented on HBASE-14376:
--

[~busbey] I'm not seeing the new warnings. Could you offer some pointers on 
this? (I uploaded the diff ...)


> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch, warnings.diff
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.





[jira] [Updated] (HBASE-14376) move hbase spark integration examples into their own module

2015-09-08 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-14376:
-
Attachment: warnings.diff

> move hbase spark integration examples into their own module
> ---
>
> Key: HBASE-14376
> URL: https://issues.apache.org/jira/browse/HBASE-14376
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Gabor Liptak
>  Labels: beginner
> Attachments: HBASE-14376.1.patch, warnings.diff
>
>
> take the examples that are currently in the hbase-spark module and move them 
> into a hbase-spark-examples module.





[jira] [Commented] (HBASE-14378) Get TestAccessController* passing again on branch-1

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735715#comment-14735715
 ] 

Hadoop QA commented on HBASE-14378:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754706/14378.branch-1.v2.txt
  against branch-1 branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754706

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 24 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat2.testExcludeMinorCompaction(TestHFileOutputFormat2.java:1096)
at 
org.apache.hadoop.hbase.mapreduce.TestImportExport.testExportScannerBatching(TestImportExport.java:255)
at 
org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.testMultithreadedTableMapper(TestMultithreadedTableMapper.java:133)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15469//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15469//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15469//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15469//console

This message is automatically generated.

> Get TestAccessController* passing again on branch-1
> ---
>
> Key: HBASE-14378
> URL: https://issues.apache.org/jira/browse/HBASE-14378
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14378.branch-1.txt, 14378.branch-1.v2.txt, 
> 14378.branch-1.v2.txt
>
>
> TestAccessController* are failing reliably on branch-1. They go zombie. I 
> learned that setting the junit test timeout facility on the class doesn't 
> make the zombie time out, nor does setting a timeout on each test turn 
> zombies into test failures; the test goes zombie on the way out, in the tear 
> down of the cluster.
> Digging, we are out of handlers... all are occupied.
> 3dacee6 HBASE-14290 "Spin up less threads in tests" cut the default thread 
> count to 3 from 10. Putting the value back up on these tests seems to make 
> them pass reliably when I run locally. For good measure, I'll add in the 
> timeouts.





[jira] [Updated] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14382:
-
Summary: TestInterfaceAudienceAnnotations should hadoop-compt module 
resources  (was: Missing interface audience annotations)

> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
> 2015-09-08 12:05:31,162 INFO  [main] 
> 

[jira] [Updated] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14382:
-
Status: Patch Available  (was: Reopened)

> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> 

[jira] [Commented] (HBASE-14365) Error scanning 'labels' table in logs with exception while running bulkload, even visibility feature is disabled

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734839#comment-14734839
 ] 

Bhupendra Kumar Jain commented on HBASE-14365:
--

Test case failures are not related to the patch.

> Error scanning 'labels' table in logs with exception while running bulkload, 
> even visibility feature is disabled
> 
>
> Key: HBASE-14365
> URL: https://issues.apache.org/jira/browse/HBASE-14365
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Bhupendra Kumar Jain
>Priority: Minor
> Attachments: 0001-HBASE-14365.patch
>
>
> If the visibility feature is disabled, the exception below appears in the logs 
> during an importtsv run. In that case there is no need to log it as an error; 
> it is a bit misleading for the user.
> {code}
> ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.DefaultVisibilityExpressionResolver: Error 
> scanning 'labels' table
> org.apache.hadoop.hbase.TableNotFoundException: hbase:labels
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:858)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:756)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:298)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:151)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:1)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:184)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:311)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:286)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:162)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.&lt;init&gt;(ClientScanner.java:155)
> at 
> org.apache.hadoop.hbase.client.ClientSimpleScanner.&lt;init&gt;(ClientSimpleScanner.java:42)
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:381)
> {code}
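One possible fix direction, sketched with placeholder names (none of these are actual HBase APIs): decide up front whether the labels scan should run at all, so a disabled visibility feature never produces an ERROR-level log for the expectedly missing table.

```java
public class LabelsScanErrorSketch {
    // Sketch only: when the visibility feature is disabled, a missing
    // hbase:labels table is expected, so the resolver should skip the scan
    // (or log at DEBUG) instead of emitting a misleading ERROR. With the
    // feature enabled, a missing table remains a real error.
    public static String labelsScanAction(boolean visibilityEnabled, boolean labelsTableExists) {
        if (!visibilityEnabled) {
            return "skip";                                // feature off: nothing to resolve
        }
        return labelsTableExists ? "scan" : "error";      // feature on: scan, or report a real problem
    }
}
```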





[jira] [Updated] (HBASE-14366) NPE in case visibility expression is not present in labels table during importtsv run

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhupendra Kumar Jain updated HBASE-14366:
-
Attachment: 0001-HBASE-14366_1.patch

Also handled the invalid visibility label case as a bad line. Please review.

> NPE in case visibility expression is not present in labels table during 
> importtsv run
> -
>
> Key: HBASE-14366
> URL: https://issues.apache.org/jira/browse/HBASE-14366
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Bhupendra Kumar Jain
>Priority: Minor
> Attachments: 0001-HBASE-14366.patch, 0001-HBASE-14366_1.patch
>
>
> The exception below is shown in the logs if a visibility expression is not 
> present in the labels table during an importtsv run. An appropriate exception 
> or message should be logged so the user can take further action.
> {code}
> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.mapreduce.DefaultVisibilityExpressionResolver$1.getLabelOrdinal(DefaultVisibilityExpressionResolver.java:127)
> at 
> org.apache.hadoop.hbase.security.visibility.VisibilityUtils.getLabelOrdinals(VisibilityUtils.java:358)
> at 
> org.apache.hadoop.hbase.security.visibility.VisibilityUtils.createVisibilityExpTags(VisibilityUtils.java:323)
> at 
> org.apache.hadoop.hbase.mapreduce.DefaultVisibilityExpressionResolver.createVisibilityExpTags(DefaultVisibilityExpressionResolver.java:137)
> at 
> org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.populatePut(TsvImporterMapper.java:205)
> at 
> org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:165)
> at 
> org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:1)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> {code}
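The NPE comes from dereferencing a missing ordinal for an unknown label. A defensive lookup that lets the caller treat the record as a bad line could look roughly like this (a sketch with a plain map standing in for the labels table; not the actual DefaultVisibilityExpressionResolver code):

```java
import java.util.Map;

public class LabelOrdinalSketch {
    // Returns the ordinal for a visibility label, or -1 for an unknown label,
    // so the caller can report/skip the record as a bad line instead of
    // unboxing a null Integer and throwing NullPointerException.
    public static int getLabelOrdinal(Map<String, Integer> labels, String label) {
        Integer ordinal = labels.get(label);
        return ordinal != null ? ordinal : -1;
    }
}
```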





[jira] [Updated] (HBASE-14380) Correct data also getting skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper

2015-09-08 Thread Bhupendra Kumar Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhupendra Kumar Jain updated HBASE-14380:
-
Status: Patch Available  (was: Open)

> Correct data also getting skipped along with bad data in importTsv bulk load 
> thru TsvImporterTextMapper
> ---
>
> Key: HBASE-14380
> URL: https://issues.apache.org/jira/browse/HBASE-14380
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra Kumar Jain
> Attachments: 0001-HBASE-14380.patch
>
>
> Consider the input data below:
> ROWKEY, TIMESTAMP, Col_Value
> r1,1,v1   >> Correct line
> r1 >> Bad line
> r1,3,v3   >> Correct line
> r1,4,v4   >> Correct line
> When data is bulk loaded using importTsv with TsvImporterTextMapper as the 
> mapper, all the lines are getting ignored even though skipBadLines is set to true. 
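The intended semantics, skipping only the malformed line while keeping the rest, can be sketched as a plain parsing loop. This is hypothetical illustration code, not the actual TsvImporterTextMapper implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class SkipBadLinesSketch {
    // Keep well-formed ROWKEY,TIMESTAMP,VALUE lines. With skipBadLines=true,
    // a bad line is skipped on its own; the surrounding correct lines still load.
    // With skipBadLines=false, a bad line aborts the whole import.
    public static List<String> filter(List<String> lines, boolean skipBadLines) {
        List<String> good = new ArrayList<>();
        for (String line : lines) {
            if (line.split(",").length == 3) {
                good.add(line);
            } else if (!skipBadLines) {
                throw new IllegalArgumentException("Bad line: " + line);
            } // else: count and skip only this line
        }
        return good;
    }
}
```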





[jira] [Updated] (HBASE-13984) Add option to allow caller to know the heartbeat and scanner position when scanner timeout

2015-09-08 Thread He Liangliang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Liangliang updated HBASE-13984:
--
Description: 
HBASE-13090 introduced scanner heartbeats. However, there are still some 
limitations (see HBASE-13215). In some applications, for example, an operation 
accesses HBase to scan table data, and there is a strict limit that the call 
must return within a fixed interval. At the same time, the call is stateless, 
so it must return the next position from which to continue the scan. This is a 
typical use case for online applications.

Based on this requirement, some improvements are proposed:
1. Allow the client to set a flag for whether the heartbeat (a result 
containing the scanner position) is passed to the caller (via ResultScanner 
next)
2. Allow the client to pass a timeout to the server, which can override the 
server-side default value
3. When requested by the client, the server peeks at the next cell and returns 
it to the client in the heartbeat message

  was:
HBASE-13090 introduced scanner heartbeats. However, there are still some 
limitations (see HBASE-13215). In some applications, for example, an operation 
accesses HBase to scan table data, and there is a strict limit that the call 
must return within a fixed interval. At the same time, the call is stateless, 
so it must return the next position from which to continue the scan. This is a 
typical use case for online applications.

Based on this requirement, some improvements are proposed:
1. Allow the client to set a flag for whether the heartbeat (a fake row) is 
passed to the caller (via ResultScanner next)
2. Allow the client to pass a timeout to the server, which can override the 
server-side default value
3. When requested by the client, the server peeks at the next cell and returns 
it to the client in the heartbeat message


> Add option to allow caller to know the heartbeat and scanner position when 
> scanner timeout
> --
>
> Key: HBASE-13984
> URL: https://issues.apache.org/jira/browse/HBASE-13984
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-13984-V1.diff
>
>
> HBASE-13090 introduced scanner heartbeats. However, there are still some 
> limitations (see HBASE-13215). In some applications, for example, an operation 
> accesses HBase to scan table data, and there is a strict limit that the call 
> must return within a fixed interval. At the same time, the call is stateless, 
> so it must return the next position from which to continue the scan. This is a 
> typical use case for online applications.
> Based on this requirement, some improvements are proposed:
> 1. Allow the client to set a flag for whether the heartbeat (a result 
> containing the scanner position) is passed to the caller (via ResultScanner 
> next)
> 2. Allow the client to pass a timeout to the server, which can override the 
> server-side default value
> 3. When requested by the client, the server peeks at the next cell and returns 
> it to the client in the heartbeat message





[jira] [Updated] (HBASE-13984) Add option to allow caller to know the heartbeat and scanner position when scanner timeout

2015-09-08 Thread He Liangliang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Liangliang updated HBASE-13984:
--
Attachment: HBASE-13984-V2.diff

Added an allowHeartbeatResults option in Scan and an isHeartbeat flag in Result.

> Add option to allow caller to know the heartbeat and scanner position when 
> scanner timeout
> --
>
> Key: HBASE-13984
> URL: https://issues.apache.org/jira/browse/HBASE-13984
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-13984-V1.diff, HBASE-13984-V2.diff
>
>
> HBASE-13090 introduced scanner heartbeats. However, there are still some 
> limitations (see HBASE-13215). In some applications, for example, an operation 
> accesses HBase to scan table data, and there is a strict limit that the call 
> must return within a fixed interval. At the same time, the call is stateless, 
> so it must return the next position from which to continue the scan. This is a 
> typical use case for online applications.
> Based on this requirement, some improvements are proposed:
> 1. Allow the client to set a flag for whether the heartbeat (a result 
> containing the scanner position) is passed to the caller (via ResultScanner 
> next)
> 2. Allow the client to pass a timeout to the server, which can override the 
> server-side default value
> 3. When requested by the client, the server peeks at the next cell and returns 
> it to the client in the heartbeat message





[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12751:
--
Attachment: 12751.rebased.v27.txt

A new version. The Increment and Append methods had diverged. Align them more. 
Make them the same in the way they do mvcc.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, 
> 12751.rebased.v26.txt, 12751.rebased.v27.txt, 12751v22.txt, 12751v23.txt, 
> 12751v23.txt, 12751v23.txt, 12751v23.txt, HBASE-12751-v1.patch, 
> HBASE-12751-v10.patch, HBASE-12751-v10.patch, HBASE-12751-v11.patch, 
> HBASE-12751-v12.patch, HBASE-12751-v13.patch, HBASE-12751-v14.patch, 
> HBASE-12751-v15.patch, HBASE-12751-v16.patch, HBASE-12751-v17.patch, 
> HBASE-12751-v18.patch, HBASE-12751-v19 (1).patch, HBASE-12751-v19.patch, 
> HBASE-12751-v2.patch, HBASE-12751-v20.patch, HBASE-12751-v20.patch, 
> HBASE-12751-v21.patch, HBASE-12751-v3.patch, HBASE-12751-v4.patch, 
> HBASE-12751-v5.patch, HBASE-12751-v6.patch, HBASE-12751-v7.patch, 
> HBASE-12751-v8.patch, HBASE-12751-v9.patch, HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read-modify-write operation (increment or 
> check-and-put). However, it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores, 
> this is very limiting.
> If there are puts to the same column, the mvcc number should ensure a 
> consistent ordering, so locking is not needed.
> However, locking for check-and-put or increment is still needed.
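The split described here, shared access for independent puts and exclusive access for read-modify-write operations, maps naturally onto a JDK read-write lock. A minimal sketch under that assumption (plain Java, not the actual HBase row-lock implementation):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: plain puts share the read lock (they can proceed in parallel,
// with ordering left to mvcc), while read-modify-write operations such as
// increment or check-and-put take the exclusive write lock.
public class RowLockSketch {
    private final ReadWriteLock rowLock = new ReentrantReadWriteLock();

    public void put(Runnable mutation) {
        rowLock.readLock().lock();
        try { mutation.run(); } finally { rowLock.readLock().unlock(); }
    }

    public void increment(Runnable readModifyWrite) {
        rowLock.writeLock().lock();
        try { readModifyWrite.run(); } finally { rowLock.writeLock().unlock(); }
    }
}
```

Multiple threads calling put() hold the read lock concurrently; an increment() waits for them and then runs alone, which is exactly the parallelism gain the issue is after.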





[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735842#comment-14735842
 ] 

Hadoop QA commented on HBASE-12751:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754745/12751.rebased.v26.txt
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754745

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 90 new 
or modified tests.

{color:red}-1 Anti-pattern{color}.  The patch appears to 
have anti-pattern where BYTES_COMPARATOR was omitted:
 -getRegionInfo(), -1, new TreeMap());.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestFullLogReconstruction
  org.apache.hadoop.hbase.coprocessor.TestWALObserver

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15475//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15475//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15475//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15475//console

This message is automatically generated.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, 
> 12751.rebased.v26.txt, 12751.rebased.v27.txt, 12751v22.txt, 12751v23.txt, 
> 12751v23.txt, 12751v23.txt, 12751v23.txt, HBASE-12751-v1.patch, 
> HBASE-12751-v10.patch, HBASE-12751-v10.patch, HBASE-12751-v11.patch, 
> HBASE-12751-v12.patch, HBASE-12751-v13.patch, HBASE-12751-v14.patch, 
> HBASE-12751-v15.patch, HBASE-12751-v16.patch, HBASE-12751-v17.patch, 
> HBASE-12751-v18.patch, HBASE-12751-v19 (1).patch, HBASE-12751-v19.patch, 
> HBASE-12751-v2.patch, HBASE-12751-v20.patch, HBASE-12751-v20.patch, 
> HBASE-12751-v21.patch, HBASE-12751-v3.patch, HBASE-12751-v4.patch, 
> HBASE-12751-v5.patch, HBASE-12751-v6.patch, HBASE-12751-v7.patch, 
> HBASE-12751-v8.patch, HBASE-12751-v9.patch, HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read-modify-write operation (increment or 
> check-and-put). However, it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores, 
> this is very limiting.
> If there are puts to the same column, the mvcc number should ensure a 
> consistent ordering, so locking is not needed.
> However, locking for check-and-put or increment is still needed.





[jira] [Commented] (HBASE-14272) Enforce major compaction on stores with TTL or time.to.purge.deletes enabled

2015-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735863#comment-14735863
 ] 

Lars Hofhansl commented on HBASE-14272:
---

OK... Assigned to me.

Let's simply start with always doing the time-based compaction when TTL is set. 
Period. Simple.

Then we do the following (sub-jira or here):
# track a flag during compactions that is set for any HFile that has any delete 
markers in it.
# track the maximum number of versions we have seen for any Column. Note: we 
need to make sure that we can do that cheaply (it needs a compare of row-key, 
family, CQ).

With this in place we should be able to decide whether to compact by the 
following:
{code}
...
} else if (TTL set) {
  if (MIN_VERSIONS > 0) {
if (store.maxNumVersions() > MIN_VERSIONS) {
   compact();
}
  } else {
compact()
  }
} else if (time.to.purge.deletes > 0) {
   if (store.hasDeletes() && !KEEP_DELETED_CELLS) {
 compact()
   }
}
{code}

I'm sure this logic can be expressed more nicely, but you get the point.
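For what it's worth, the decision tree above can be flattened into one runnable predicate. This is only a sketch: the store accessors are collapsed into plain parameters, and none of the names are real HBase APIs.

```java
public class CompactionPolicySketch {
    // Mirrors the pseudocode above: with TTL set, compact unconditionally
    // unless MIN_VERSIONS is in play, in which case compact only once some
    // column exceeds it; with time.to.purge.deletes set, compact only when
    // delete markers exist and KEEP_DELETED_CELLS is off.
    public static boolean shouldCompact(boolean ttlSet, int minVersions, int maxNumVersions,
                                        long timeToPurgeDeletes, boolean hasDeletes,
                                        boolean keepDeletedCells) {
        if (ttlSet) {
            return minVersions <= 0 || maxNumVersions > minVersions;
        } else if (timeToPurgeDeletes > 0) {
            return hasDeletes && !keepDeletedCells;
        }
        return false;
    }
}
```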


> Enforce major compaction on stores with TTL or time.to.purge.deletes enabled
> 
>
> Key: HBASE-14272
> URL: https://issues.apache.org/jira/browse/HBASE-14272
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Lars Hofhansl
> Fix For: 2.0.0
>
> Attachments: HBASE-14272-v2.patch, HBASE-14272.patch
>
>
> Currently, if a store has one (major compacted) file, the only cases when major 
> compaction will be triggered for this file again are when locality is below the 
> threshold defined by *hbase.hstore.min.locality.to.skip.major.compact* or when 
> TTL has expired some cells. If the file has locality greater than this 
> threshold, it will never be major compacted until the store's TTL kicks in. For 
> a CF with KEEP_DELETED_CELLS on, compaction must always be enabled (even for a 
> single file), regardless of locality, when deleted cells have expired 
> (*hbase.hstore.time.to.purge.deletes*).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14272) Enforce major compaction on stores with TTL or time.to.purge.deletes enabled

2015-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735863#comment-14735863
 ] 

Lars Hofhansl edited comment on HBASE-14272 at 9/8/15 11:56 PM:


OK... Assigned to me.

Let's simply start with always doing the time based compaction when TTL is set. 
Period. Simple.

Then we do the following (sub jira or here):
# track a flag during compactions that is set for any HFile that has any delete 
markers in it.
# track the maximum number of versions we have seen for any column. Note: we 
need to make sure that we can do that cheaply (it needs a compare of row key, 
family, and CQ).

With this in place we should be able to decide whether we need to compact by 
the following:
{code}
...
} else if (TTL set) {
  if (MIN_VERSIONS > 0) {
    if (store.maxNumVersions() > MIN_VERSIONS) {
      compact();
    }
  } else {
    compact();
  }
} else if (time.to.purge.deletes > 0) {
  if (store.hasDeletes() && !KEEP_DELETED_CELLS) {
    compact();
  }
}
{code}

I'm sure this logic can be expressed more nicely, but you get the point.



was (Author: lhofhansl):
OK... Assigned to me.

Let's simply start with always doing the time based compaction when TTL is set. 
Period. Simple.

Then we do the following (sub jira or here):
# track a flag during compactions that is set for any HFile that has any delete 
markers in it.
# track the maximum number of version we have seen for any Column. Note: need 
to make sure that we can do that cheaply (need a compare of row-key, family, 
CQ).

With this in place we should be able to decide whether a compaction by the 
following:
{code}
...
} else if (TTL set) {
  if (MIN_VERSIONS > 0) {
    if (store.maxNumVersions() > MIN_VERSIONS) {
      compact();
    }
  } else {
    compact();
  }
} else if (time.to.purge.deletes > 0) {
  if (store.hasDeletes() && !KEEP_DELETED_CELLS) {
    compact();
  }
}
{code}

I'm sure this logic can be expressed more nicely, but you get the point.


> Enforce major compaction on stores with TTL or time.to.purge.deletes enabled
> 
>
> Key: HBASE-14272
> URL: https://issues.apache.org/jira/browse/HBASE-14272
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Lars Hofhansl
> Fix For: 2.0.0
>
> Attachments: HBASE-14272-v2.patch, HBASE-14272.patch
>
>
> Currently, if store has one (major compacted) file, the only case when major 
> compaction will be triggered for this file again - when locality is below 
> threshold, defined by *hbase.hstore.min.locality.to.skip.major.compact* or 
> TTL expired some cells. If file has locality greater than this threshold it 
> will never be major compacted until Store's TTL kicks in. For CF with 
> KEEP_DELETED_CELLS on, compaction must be enabled always (even for single 
> file), regardless of locality, when deleted cells are expired 
> (*hbase.hstore.time.to.purge.deletes*)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14382:
-
Attachment: HBASE-14382.01.patch

Applied the filter change to both tests. Now my test debug output excludes the 
hadoop-compat modules entirely. This is intended, yes?

{noformat}
2015-09-08 15:38:10,962 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:10,965 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:10,965 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:10,966 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:13,074 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(294): These are the classes that DO NOT 
have @InterfaceStability annotation:
{noformat}

{noformat}
2015-09-08 15:38:13,079 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:13,079 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:13,079 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:13,080 DEBUG [main] hbase.ClassFinder(162): Looking in 
/Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
 isJar=false
2015-09-08 15:38:13,154 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(254): These are the classes that DO NOT 
have @InterfaceAudience annotation:
{noformat}


> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch, HBASE-14382.01.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> 

[jira] [Updated] (HBASE-14272) Enforce major compaction on stores with TTL or time.to.purge.deletes enabled

2015-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14272:
--
Summary: Enforce major compaction on stores with TTL or 
time.to.purge.deletes enabled  (was: Enforce major compaction on stores with 
TTL and )

> Enforce major compaction on stores with TTL or time.to.purge.deletes enabled
> 
>
> Key: HBASE-14272
> URL: https://issues.apache.org/jira/browse/HBASE-14272
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Lars Hofhansl
> Fix For: 2.0.0
>
> Attachments: HBASE-14272-v2.patch, HBASE-14272.patch
>
>
> Currently, if store has one (major compacted) file, the only case when major 
> compaction will be triggered for this file again - when locality is below 
> threshold, defined by *hbase.hstore.min.locality.to.skip.major.compact* or 
> TTL expired some cells. If file has locality greater than this threshold it 
> will never be major compacted until Store's TTL kicks in. For CF with 
> KEEP_DELETED_CELLS on, compaction must be enabled always (even for single 
> file), regardless of locality, when deleted cells are expired 
> (*hbase.hstore.time.to.purge.deletes*)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14272) Enforce major compaction on stores with KEEP_DELETED_CELLS=true

2015-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-14272:
-

Assignee: Lars Hofhansl  (was: Vladimir Rodionov)

> Enforce major compaction on stores with KEEP_DELETED_CELLS=true
> ---
>
> Key: HBASE-14272
> URL: https://issues.apache.org/jira/browse/HBASE-14272
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Lars Hofhansl
> Fix For: 2.0.0
>
> Attachments: HBASE-14272-v2.patch, HBASE-14272.patch
>
>
> Currently, if store has one (major compacted) file, the only case when major 
> compaction will be triggered for this file again - when locality is below 
> threshold, defined by *hbase.hstore.min.locality.to.skip.major.compact* or 
> TTL expired some cells. If file has locality greater than this threshold it 
> will never be major compacted until Store's TTL kicks in. For CF with 
> KEEP_DELETED_CELLS on, compaction must be enabled always (even for single 
> file), regardless of locality, when deleted cells are expired 
> (*hbase.hstore.time.to.purge.deletes*)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14272) Enforce major compaction on stores with TTL and

2015-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14272:
--
Summary: Enforce major compaction on stores with TTL and   (was: Enforce 
major compaction on stores with KEEP_DELETED_CELLS=true)

> Enforce major compaction on stores with TTL and 
> 
>
> Key: HBASE-14272
> URL: https://issues.apache.org/jira/browse/HBASE-14272
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Lars Hofhansl
> Fix For: 2.0.0
>
> Attachments: HBASE-14272-v2.patch, HBASE-14272.patch
>
>
> Currently, if store has one (major compacted) file, the only case when major 
> compaction will be triggered for this file again - when locality is below 
> threshold, defined by *hbase.hstore.min.locality.to.skip.major.compact* or 
> TTL expired some cells. If file has locality greater than this threshold it 
> will never be major compacted until Store's TTL kicks in. For CF with 
> KEEP_DELETED_CELLS on, compaction must be enabled always (even for single 
> file), regardless of locality, when deleted cells are expired 
> (*hbase.hstore.time.to.purge.deletes*)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735778#comment-14735778
 ] 

Ted Yu commented on HBASE-14370:


Looking at patch v1, the thread is created once when ZKPermissionWatcher is 
created - not for every refresh call.

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.
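The general pattern here (a sketch only, not the actual ZKPermissionWatcher patch; names are illustrative) is to hand the long-running refresh work to a dedicated single-threaded executor, created once, so the ZooKeeper event thread returns immediately instead of blocking on the reload:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of offloading refresh work from the ZK event thread.
class AsyncRefresher {
    // Created once with the watcher, reused for every notification.
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    final AtomicInteger refreshes = new AtomicInteger();

    // Called from the ZK event thread; must not block it.
    void nodeChildrenChanged() {
        pool.submit(this::refreshNodes);
    }

    private void refreshNodes() {
        refreshes.incrementAndGet();  // stands in for the expensive ACL reload
    }

    void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```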



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14374) Backport parent issue to 1.1 and 1.0

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14374:
--
Attachment: (was: 14317.branch-1.v2.txt)

> Backport parent issue to 1.1 and 1.0
> 
>
> Key: HBASE-14374
> URL: https://issues.apache.org/jira/browse/HBASE-14374
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.3
>
> Attachments: 14317-branch-1.1.txt, 14317.branch-1.1.v2.txt
>
>
> Backport parent issue to branch-1.1. and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14374) Backport parent issue to 1.1 and 1.0

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14374:
--
Attachment: 14317.branch-1.1.v2.txt

I named the file wrong. Should have been against 1.1

> Backport parent issue to 1.1 and 1.0
> 
>
> Key: HBASE-14374
> URL: https://issues.apache.org/jira/browse/HBASE-14374
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.3
>
> Attachments: 14317-branch-1.1.txt, 14317.branch-1.1.v2.txt
>
>
> Backport parent issue to branch-1.1. and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) enable bulkload to support replication

2015-09-08 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734995#comment-14734995
 ] 

Matteo Bertozzi commented on HBASE-13153:
-

I gave the doc a quick read, but I don't see any mention of the WAL events 
for bulkload/flushes/compactions (HBASE-11567, HBASE-11511, HBASE-11569, 
HBASE-11571, HBASE-11567).

After the work on read-replicas we have events for files being added and 
moved, so my initial thought was to see the bulkload/compaction handling done 
the same way as for read-replicas. What is the reason for going with a 
different approach? Aren't the WAL entries we already have enough?

> enable bulkload to support replication
> --
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBase Bulk Load Replication.pdf
>
>
> Currently we plan to use the HBase replication feature to handle a disaster 
> tolerance scenario, but we hit an issue: we use bulkload very frequently, 
> and because bulkload bypasses the write path it generates no WAL, so the 
> data is not replicated to the backup cluster. Bulkloading twice, on both the 
> active and the backup cluster, is inappropriate. So I suggest modifying the 
> bulkload feature to enable loading to both the active and the backup cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14374) Backport parent issue to 1.1 and 1.0

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734991#comment-14734991
 ] 

Hadoop QA commented on HBASE-14374:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12754657/14317.branch-1.1.v2.txt
  against branch-1.1 branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754657

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient.testRestoreSnapshotOfCloned(TestRestoreFlushSnapshotFromClient.java:196)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15464//testReport/
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15464//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15464//console

This message is automatically generated.

> Backport parent issue to 1.1 and 1.0
> 
>
> Key: HBASE-14374
> URL: https://issues.apache.org/jira/browse/HBASE-14374
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.3
>
> Attachments: 14317-branch-1.1.txt, 14317.branch-1.1.v2.txt
>
>
> Backport parent issue to branch-1.1. and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14380) Correct data also getting skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734992#comment-14734992
 ] 

Hadoop QA commented on HBASE-14380:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12754625/0001-HBASE-14380.patch
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754625

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15463//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15463//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15463//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15463//console

This message is automatically generated.

> Correct data also getting skipped along with bad data in importTsv bulk load 
> thru TsvImporterTextMapper
> ---
>
> Key: HBASE-14380
> URL: https://issues.apache.org/jira/browse/HBASE-14380
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra Kumar Jain
> Attachments: 0001-HBASE-14380.patch
>
>
> Consider the input data below:
> ROWKEY, TIMESTAMP, Col_Value
> r1,1,v1   >> Correct line
> r1 >> Bad line
> r1,3,v3   >> Correct line
> r1,4,v4   >> Correct line
> When data is bulk loaded using importTsv with TsvImporterTextMapper as the 
> mapper, all the lines get ignored even though skipBadLines is set to true.
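The expected per-line behavior can be sketched as follows (an illustrative parser only, not the ImportTsv code; the three-field layout follows the example above): each line is parsed independently, and when skipBadLines is on only the lines that fail parsing are dropped.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of per-line bad-record handling for ROWKEY,TIMESTAMP,VALUE input.
class SkipBadLines {
    static List<String[]> parse(List<String> lines, boolean skipBadLines) {
        List<String[]> out = new ArrayList<>();
        for (String line : lines) {
            String[] fields = line.split(",");
            if (fields.length != 3) {
                if (skipBadLines) {
                    continue;  // drop only this bad line, keep the rest
                }
                throw new IllegalArgumentException("Bad line: " + line);
            }
            out.add(fields);
        }
        return out;
    }
}
```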



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14378) Get TestAccessController* passing again on branch-1

2015-09-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734964#comment-14734964
 ] 

stack commented on HBASE-14378:
---

bq.  I thought we were timing out based on test categorization now? Or do these 
need more time than the default?

The timeout-by-category mechanism requires instrumenting each test class 
(HBASE-14356); you have to add a junit Rule -- see the parent of the 
HBASE-1435 patch for an example.

At head of this issue I remark that timeouts do not apply to teardown phase -- 
bummer.

I'd started in on adding timeouts so just ran it out to the end.

Thanks for taking a look at these zombies Sean. Let me try adding in same 
restore of priority handlers to see if it helps on these zombies.

> Get TestAccessController* passing again on branch-1
> ---
>
> Key: HBASE-14378
> URL: https://issues.apache.org/jira/browse/HBASE-14378
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14378.branch-1.txt
>
>
> TestAccessController* are failing reliably on branch-1. They go zombie. I 
> learned that setting the junit test timeout facility on the class doesn't 
> make the zombie timeout nor does setting a timeout on each test turn zombies 
> to test failures; the test goes zombie on the way out in the tear down of the 
> cluster.
> Digging, we are out of handlers... all are occupied.
> Commit 3dacee6 (HBASE-14290 "Spin up less threads in tests") cut the default 
> thread count from 10 to 3. Putting the value back for these tests seems to 
> make them pass reliably when I run locally. For good measure, I'll add in 
> the timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14378) Get TestAccessController* passing again on branch-1

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14378:
--
Attachment: 14378.branch-1.v2.txt

Up handlers to default for the above noted tests too.

> Get TestAccessController* passing again on branch-1
> ---
>
> Key: HBASE-14378
> URL: https://issues.apache.org/jira/browse/HBASE-14378
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14378.branch-1.txt, 14378.branch-1.v2.txt
>
>
> TestAccessController* are failing reliably on branch-1. They go zombie. I 
> learned that setting the junit test timeout facility on the class doesn't 
> make the zombie timeout nor does setting a timeout on each test turn zombies 
> to test failures; the test goes zombie on the way out in the tear down of the 
> cluster.
> Digging, we are out of handlers... all are occupied.
> Commit 3dacee6 (HBASE-14290 "Spin up less threads in tests") cut the default 
> thread count from 10 to 3. Putting the value back for these tests seems to 
> make them pass reliably when I run locally. For good measure, I'll add in 
> the timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14374) Backport parent issue to 1.1 and 1.0

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14374:
--
Attachment: 14317.branch-1.1.v2.txt

A confused build run: 
"/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/test-framework/dev-support/test-patch.sh:
 line 861: mvn: command not found"

Retry

> Backport parent issue to 1.1 and 1.0
> 
>
> Key: HBASE-14374
> URL: https://issues.apache.org/jira/browse/HBASE-14374
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.3
>
> Attachments: 14317-branch-1.1.txt, 14317.branch-1.1.v2.txt, 
> 14317.branch-1.1.v2.txt
>
>
> Backport parent issue to branch-1.1. and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14374) Backport parent issue to 1.1 and 1.0

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735056#comment-14735056
 ] 

Hadoop QA commented on HBASE-14374:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12754663/14317.branch-1.1.v2.txt
  against branch-1.1 branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754663

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15465//testReport/
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15465//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15465//console

This message is automatically generated.

> Backport parent issue to 1.1 and 1.0
> 
>
> Key: HBASE-14374
> URL: https://issues.apache.org/jira/browse/HBASE-14374
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 1.0.3, 1.1.3
>
> Attachments: 14317-branch-1.1.txt, 14317.branch-1.1.v2.txt, 
> 14317.branch-1.1.v2.txt
>
>
> Backport parent issue to branch-1.1. and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14383) Compaction improvements

2015-09-08 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-14383:
-

 Summary: Compaction improvements
 Key: HBASE-14383
 URL: https://issues.apache.org/jira/browse/HBASE-14383
 Project: HBase
  Issue Type: Improvement
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
 Fix For: 2.0.0


Compactions are still a major issue in many production environments. The 
general recommendation is to disable region splitting and major compactions to 
reduce unpredictable IO/CPU spikes, especially during peak times, and to run 
them manually during off-peak times. This still does not resolve the issues 
completely.

h3. Flush storms

* WAL-rolling events across the cluster can be highly correlated, hence 
flushing memstores and triggering minor compactions, which can be promoted to 
major ones.
* The same is true for memstore flushes driven by the periodic memstore 
flusher. These events are highly correlated in time if there is a balanced 
write load on the regions of a table.

Both of the above may produce *flush storms*, which are as bad as *compaction 
storms*.

What can be done here? We can spread these events over time by randomizing 
(adding jitter to) several config options:
# hbase.regionserver.optionalcacheflushinterval
# hbase.regionserver.flush.per.changes
# hbase.regionserver.maxlogs   
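The jitter idea above can be sketched as follows (illustrative only; this is not the actual config handling, and the parameter names are assumptions):

```java
import java.util.Random;

// Spread a configured interval by +/- (jitterFraction * base) so that
// periodic flushes on different servers do not fire in lockstep.
class FlushJitter {
    static long jittered(long baseMillis, double jitterFraction, Random rnd) {
        // Uniform offset in [-jitterFraction*base, +jitterFraction*base].
        double offset = (rnd.nextDouble() * 2.0 - 1.0) * jitterFraction * baseMillis;
        return baseMillis + (long) offset;
    }
}
```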

h3. ExploringCompactionPolicy max compaction size
One more optimization can be added to ExploringCompactionPolicy. To limit the 
size of a compaction there is a config parameter one could use, 
hbase.hstore.compaction.max.size. It would be nice to have two separate 
limits: one for peak and one for off-peak hours.

h3. ExploringCompactionPolicy selection evaluation algorithm

It just seems too simple: the selection with more files always wins, and the 
selection with the smaller total size wins if the number of files is the same.
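The rule described above can be written as a comparator over candidate selections (a sketch only, not the actual ExploringCompactionPolicy code; a selection is modeled here as a list of file sizes): more files wins, smaller total size breaks ties.

```java
import java.util.Comparator;
import java.util.List;

// Sketch: a candidate selection is a list of store-file sizes.
class SelectionRule {
    static final Comparator<List<Long>> BETTER =
        Comparator.<List<Long>>comparingInt(List::size)  // more files wins
            // then the smaller total size wins (negated so bigger == better)
            .thenComparingLong(s -> -s.stream().mapToLong(Long::longValue).sum());

    static List<Long> pick(List<Long> a, List<Long> b) {
        return BETTER.compare(a, b) >= 0 ? a : b;
    }
}
```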



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14384) Trying to run canary locally with -regionserver option causes exception

2015-09-08 Thread Sanjeev Srivatsa (JIRA)
Sanjeev Srivatsa created HBASE-14384:


 Summary: Trying to run canary locally with -regionserver option 
causes exception
 Key: HBASE-14384
 URL: https://issues.apache.org/jira/browse/HBASE-14384
 Project: HBase
  Issue Type: Bug
  Components: canary
Reporter: Sanjeev Srivatsa


Tried to run the canary locally (on branch master) with the command: 
bin/hbase org.apache.hadoop.hbase.tool.Canary -regionserver
The following exception was thrown:
Exception in thread "main" java.lang.ClassCastException: 
org.apache.hadoop.hbase.tool.Canary$StdOutSink cannot be cast to 
org.apache.hadoop.hbase.tool.Canary$ExtendedSink
at org.apache.hadoop.hbase.tool.Canary.newMonitor(Canary.java:640)
at org.apache.hadoop.hbase.tool.Canary.run(Canary.java:551)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.tool.Canary.main(Canary.java:1127)
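
For context, the generic defensive pattern that avoids this kind of 
ClassCastException is to check the sink type before casting; Sink and 
ExtendedSink below are simplified stand-ins for the Canary classes, not the 
actual code:

```java
// Sketch of type-checking a sink before casting, instead of casting
// unconditionally as the failing code path does.
public class SinkCastDemo {
    interface Sink { void publish(String msg); }
    interface ExtendedSink extends Sink { void publishError(String msg); }

    static class StdOutSink implements Sink {
        public void publish(String msg) { System.out.println(msg); }
    }

    // Return an ExtendedSink only when the instance really is one; otherwise
    // fail with a clear message rather than a bare ClassCastException.
    static ExtendedSink requireExtended(Sink sink) {
        if (sink instanceof ExtendedSink) {
            return (ExtendedSink) sink;
        }
        throw new IllegalArgumentException(
            "-regionserver mode needs an ExtendedSink, got " + sink.getClass().getName());
    }

    public static void main(String[] args) {
        try {
            requireExtended(new StdOutSink());
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```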





[jira] [Created] (HBASE-14385) Close the sockets that is missing in connection closure.

2015-09-08 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-14385:
---

 Summary: Close the sockets that is missing in connection closure.
 Key: HBASE-14385
 URL: https://issues.apache.org/jira/browse/HBASE-14385
 Project: HBase
  Issue Type: Bug
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor








[jira] [Created] (HBASE-14386) Reset MutableHistogram's min/max/sum after snapshot

2015-09-08 Thread binlijin (JIRA)
binlijin created HBASE-14386:


 Summary: Reset MutableHistogram's min/max/sum after snapshot
 Key: HBASE-14386
 URL: https://issues.apache.org/jira/browse/HBASE-14386
 Project: HBase
  Issue Type: Bug
Reporter: binlijin


The current MutableHistogram does not reset min/max/sum after a snapshot, so we 
are affected by historical data. For example, when I monitor QueueCallTime_mean, 
I see that one host's QueueCallTime_mean metric is high; but when I trace that 
host's regionserver log, I see that QueueCallTime has come back down, yet the 
metric is still high.
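
The proposed behavior can be sketched as a histogram that clears its state on 
each snapshot, so a reported mean reflects only the interval since the last 
snapshot; this is an illustrative sketch, not the actual MutableHistogram:

```java
// Hypothetical interval histogram: min/max/sum/count are reset whenever a
// snapshot is taken, so one historical burst cannot keep the mean high forever.
public class IntervalHistogram {
    private long count, sum;
    private long min = Long.MAX_VALUE, max = Long.MIN_VALUE;

    synchronized void add(long value) {
        count++;
        sum += value;
        if (value < min) min = value;
        if (value > max) max = value;
    }

    // Report the mean over the interval since the last snapshot, then reset.
    synchronized double snapshotMean() {
        double mean = count == 0 ? 0.0 : (double) sum / count;
        count = 0;
        sum = 0;
        min = Long.MAX_VALUE;
        max = Long.MIN_VALUE;
        return mean;
    }

    public static void main(String[] args) {
        IntervalHistogram h = new IntervalHistogram();
        h.add(100); h.add(300);
        System.out.println(h.snapshotMean()); // mean of this interval only: 200.0
        h.add(10);
        System.out.println(h.snapshotMean()); // earlier burst no longer skews it: 10.0
    }
}
```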





[jira] [Commented] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735950#comment-14735950
 ] 

Enis Soztutar commented on HBASE-14382:
---

LGTM. 

> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch, HBASE-14382.01.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionHostSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceSourceImpl
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSource
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.master.balancer.MetricsStochasticBalancerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.wal.MetricsEditsReplaySource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactory
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSinkSourceImpl
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapper
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
> 2015-09-08 12:05:31,161 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationGlobalSourceSource
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): class 
> org.apache.hadoop.hbase.replication.regionserver.MetricsReplicationSourceFactoryImpl
> 2015-09-08 12:05:31,162 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> 

[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735954#comment-14735954
 ] 

Enis Soztutar commented on HBASE-14370:
---

bq. Looking at patch v1, the thread is created once when ZKPermissionWatcher is 
created - not for every refresh call.
Yes, it seems that the thread never exits. This is kind of like a busy wait, 
no? The thread checks every 2 ms whether it should refresh, without any wait / 
signal. 
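
The wait/notify alternative being suggested could look like this; class and 
method names are illustrative, not the actual ZKPermissionWatcher code:

```java
// Sketch: the refresh thread blocks on a monitor until a ZK event signals it,
// instead of polling a flag every few milliseconds.
public class RefreshSignal {
    private final Object lock = new Object();
    private boolean refreshPending = false;

    // Called from the ZK event thread when /acl children change.
    void requestRefresh() {
        synchronized (lock) {
            refreshPending = true;
            lock.notifyAll();
        }
    }

    // Called in the dedicated refresh thread's loop; blocks until signaled.
    void awaitRefresh() throws InterruptedException {
        synchronized (lock) {
            while (!refreshPending) {
                lock.wait();          // no CPU burned while idle
            }
            refreshPending = false;   // consume the signal
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RefreshSignal signal = new RefreshSignal();
        Thread refresher = new Thread(() -> {
            try {
                signal.awaitRefresh();
                System.out.println("refresh triggered");
            } catch (InterruptedException ignored) { }
        });
        refresher.start();
        signal.requestRefresh(); // wakes the refresher; no polling loop needed
        refresher.join();
    }
}
```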

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.





[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735990#comment-14735990
 ] 

Hadoop QA commented on HBASE-12751:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754767/12751.rebased.v27.txt
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754767

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 90 new 
or modified tests.

{color:red}-1 Anti-pattern{color}.  The patch appears to 
have anti-pattern where BYTES_COMPARATOR was omitted:
 -getRegionInfo(), -1, new TreeMap());.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  txid = this.wal.append(this.htableDescriptor, 
getRegionInfo(), walKey, walEdits, true);
+if (i < (edits.size() - 1) && 
!CellUtil.matchingQualifier(cell, edits.get(i + 1))) idx++;

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.coprocessor.TestWALObserver
  org.apache.hadoop.hbase.regionserver.TestPerColumnFamilyFlush
  org.apache.hadoop.hbase.regionserver.TestHMobStore
  org.apache.hadoop.hbase.regionserver.wal.TestWALReplay
  org.apache.hadoop.hbase.regionserver.TestTags
  org.apache.hadoop.hbase.regionserver.TestStore
  
org.apache.hadoop.hbase.regionserver.wal.TestWALReplayCompressed
  org.apache.hadoop.hbase.regionserver.TestFSErrorsExposed
  org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover
  org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay
  org.apache.hadoop.hbase.regionserver.TestHRegionOnCluster
  org.apache.hadoop.hbase.regionserver.TestFailedAppendAndSync
  org.apache.hadoop.hbase.TestFullLogReconstruction
  org.apache.hadoop.hbase.regionserver.TestRecoveredEdits

 {color:red}-1 core zombie tests{color}.  There are 12 zombie test(s):  
at 
org.apache.hadoop.hbase.client.TestScannerTimeout.test3686b(TestScannerTimeout.java:212)
at 
org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testScanWithCompactionInternals(TestBlockEvictionFromClient.java:854)
at 
org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testScanWithCompaction(TestBlockEvictionFromClient.java:799)
at 
org.apache.hadoop.hbase.client.TestMultiParallel.testBatchWithPut(TestMultiParallel.java:342)
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testSmallScanWithReplicas(TestReplicasClient.java:606)
at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3756)
at 
org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:141)
at 
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient.testCloneLinksAfterDelete(TestMobCloneSnapshotFromClient.java:215)
at 
org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:228)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15478//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15478//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15478//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15478//console

This message is automatically generated.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
>   

[jira] [Updated] (HBASE-14386) Reset MutableHistogram's min/max/sum after snapshot

2015-09-08 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-14386:
-
Attachment: HBASE-14386.patch

> Reset MutableHistogram's min/max/sum after snapshot
> ---
>
> Key: HBASE-14386
> URL: https://issues.apache.org/jira/browse/HBASE-14386
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: HBASE-14386.patch
>
>
> The current MutableHistogram does not reset min/max/sum after a snapshot, so 
> we are affected by historical data. For example, when I monitor 
> QueueCallTime_mean, I see that one host's QueueCallTime_mean metric is high; 
> but when I trace that host's regionserver log, I see that QueueCallTime has 
> come back down, yet the metric is still high.





[jira] [Updated] (HBASE-14383) Compaction improvements

2015-09-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14383:
--
Description: 
Still major issue in many production environments. The general recommendation - 
disabling region splitting and major compactions to reduce unpredictable IO/CPU 
spikes, especially during peak times and running them manually during off peak 
times. Still do not resolve the issues completely.

h3. Flush storms

* rolling WAL events across cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table.
*  the same is true for memstore flushing due to periodic memstore flusher 
operation. 

Both above may produce *flush storms* which are as bad as *compaction storms*. 

What can be done here. We can spread these events over time by randomizing 
(with jitter) several  config options:
# hbase.regionserver.optionalcacheflushinterval
# hbase.regionserver.flush.per.changes
# hbase.regionserver.maxlogs   

h3. ExploringCompactionPolicy max compaction size
One more optimization can be added to ExploringCompactionPolicy. To limit size 
of a compaction there is a config parameter one could use 
hbase.hstore.compaction.max.size. It would be nice to have two separate limits: 
for peak and off peak hours.

h3. ExploringCompactionPolicy selection evaluation algorithm

Just seems too simple: selection with more files always wins, selection of 
smaller size wins if number of files is the same.

  was:
Still major issue in many production environments. The general recommendation - 
disabling region splitting and major compactions to reduce unpredictable IO/CPU 
spikes, especially during peak times and running them manually during off peak 
times. Still do not resolve the issues completely.

h3. Flush storms

* rolling WAL events across cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones.
*  the same is true for memstore flushing due to periodic memstore flusher 
operation. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table.

Both above may produce *flush storms* which are as bad as *compaction storms*. 

What can be done here. We can spread these events over time by randomizing 
(with jitter) several  config options:
# hbase.regionserver.optionalcacheflushinterval
# hbase.regionserver.flush.per.changes
# hbase.regionserver.maxlogs   

h3. ExploringCompactionPolicy max compaction size
One more optimization can be added to ExploringCompactionPolicy. To limit size 
of a compaction there is a config parameter one could use 
hbase.hstore.compaction.max.size. It would be nice to have two separate limits: 
for peak and off peak hours.

h3. ExploringCompactionPolicy selection evaluation algorithm

Just seems too simple: selection with more files always wins, selection of 
smaller size wins if number of files is the same.


> Compaction improvements
> ---
>
> Key: HBASE-14383
> URL: https://issues.apache.org/jira/browse/HBASE-14383
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> Still major issue in many production environments. The general recommendation 
> - disabling region splitting and major compactions to reduce unpredictable 
> IO/CPU spikes, especially during peak times and running them manually during 
> off peak times. Still do not resolve the issues completely.
> h3. Flush storms
> * rolling WAL events across cluster can be highly correlated, hence flushing 
> memstores, hence triggering minor compactions, that can be promoted to major 
> ones. These events are highly correlated in time if there is a balanced 
> write-load on the regions in a table.
> *  the same is true for memstore flushing due to periodic memstore flusher 
> operation. 
> Both above may produce *flush storms* which are as bad as *compaction 
> storms*. 
> What can be done here. We can spread these events over time by randomizing 
> (with jitter) several  config options:
> # hbase.regionserver.optionalcacheflushinterval
> # hbase.regionserver.flush.per.changes
> # hbase.regionserver.maxlogs   
> h3. ExploringCompactionPolicy max compaction size
> One more optimization can be added to ExploringCompactionPolicy. To limit 
> size of a compaction there is a config parameter one could use 
> hbase.hstore.compaction.max.size. It would be nice to have two separate 
> limits: for peak and off peak hours.
> h3. ExploringCompactionPolicy selection evaluation algorithm
> Just seems too simple: selection with more files always wins, selection of 
> smaller 

[jira] [Assigned] (HBASE-14145) Allow the Canary in regionserver mode to try all regions on the server, not just one

2015-09-08 Thread Sanjeev Srivatsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjeev Srivatsa reassigned HBASE-14145:


Assignee: Sanjeev Srivatsa

> Allow the Canary in regionserver mode to try all regions on the server, not 
> just one
> 
>
> Key: HBASE-14145
> URL: https://issues.apache.org/jira/browse/HBASE-14145
> Project: HBase
>  Issue Type: Bug
>  Components: canary, util
>Affects Versions: 2.0.0, 1.1.0.1
>Reporter: Elliott Clark
>Assignee: Sanjeev Srivatsa
>  Labels: beginner
> Fix For: 2.0.0, 1.3.0
>
>
> We want a pretty in-depth canary that will try every region on a cluster. 
> When doing that for the whole cluster one machine is too slow, so we wanted 
> to split it up and have each regionserver run a canary. That works however 
> the canary does less work as it just tries one random region.
> Lets add a flag that will allow the canary to try all regions on a 
> regionserver.





[jira] [Updated] (HBASE-14383) Compaction improvements

2015-09-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14383:
--
Description: 
Still major issue in many production environments. The general recommendation - 
disabling region splitting and major compactions to reduce unpredictable IO/CPU 
spikes, especially during peak times and running them manually during off peak 
times. Still do not resolve the issues completely.

h3. Flush storms

* rolling WAL events across cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table.
*  the same is true for memstore flushing due to periodic memstore flusher 
operation. 

Both above may produce *flush storms* which are as bad as *compaction storms*. 

What can be done here. We can spread these events over time by randomizing 
(with jitter) several  config options:
# hbase.regionserver.optionalcacheflushinterval
# hbase.regionserver.flush.per.changes
# hbase.regionserver.maxlogs   

h3. ExploringCompactionPolicy max compaction size
One more optimization can be added to ExploringCompactionPolicy. To limit size 
of a compaction there is a config parameter one could use 
hbase.hstore.compaction.max.size. It would be nice to have two separate limits: 
for peak and off peak hours.

h3. ExploringCompactionPolicy selection evaluation algorithm

Too simple? Selection with more files always wins, selection of smaller size 
wins if number of files is the same. 

  was:
Still major issue in many production environments. The general recommendation - 
disabling region splitting and major compactions to reduce unpredictable IO/CPU 
spikes, especially during peak times and running them manually during off peak 
times. Still do not resolve the issues completely.

h3. Flush storms

* rolling WAL events across cluster can be highly correlated, hence flushing 
memstores, hence triggering minor compactions, that can be promoted to major 
ones. These events are highly correlated in time if there is a balanced 
write-load on the regions in a table.
*  the same is true for memstore flushing due to periodic memstore flusher 
operation. 

Both above may produce *flush storms* which are as bad as *compaction storms*. 

What can be done here. We can spread these events over time by randomizing 
(with jitter) several  config options:
# hbase.regionserver.optionalcacheflushinterval
# hbase.regionserver.flush.per.changes
# hbase.regionserver.maxlogs   

h3. ExploringCompactionPolicy max compaction size
One more optimization can be added to ExploringCompactionPolicy. To limit size 
of a compaction there is a config parameter one could use 
hbase.hstore.compaction.max.size. It would be nice to have two separate limits: 
for peak and off peak hours.

h3. ExploringCompactionPolicy selection evaluation algorithm

Just seems too simple: selection with more files always wins, selection of 
smaller size wins if number of files is the same.


> Compaction improvements
> ---
>
> Key: HBASE-14383
> URL: https://issues.apache.org/jira/browse/HBASE-14383
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> Still major issue in many production environments. The general recommendation 
> - disabling region splitting and major compactions to reduce unpredictable 
> IO/CPU spikes, especially during peak times and running them manually during 
> off peak times. Still do not resolve the issues completely.
> h3. Flush storms
> * rolling WAL events across cluster can be highly correlated, hence flushing 
> memstores, hence triggering minor compactions, that can be promoted to major 
> ones. These events are highly correlated in time if there is a balanced 
> write-load on the regions in a table.
> *  the same is true for memstore flushing due to periodic memstore flusher 
> operation. 
> Both above may produce *flush storms* which are as bad as *compaction 
> storms*. 
> What can be done here. We can spread these events over time by randomizing 
> (with jitter) several  config options:
> # hbase.regionserver.optionalcacheflushinterval
> # hbase.regionserver.flush.per.changes
> # hbase.regionserver.maxlogs   
> h3. ExploringCompactionPolicy max compaction size
> One more optimization can be added to ExploringCompactionPolicy. To limit 
> size of a compaction there is a config parameter one could use 
> hbase.hstore.compaction.max.size. It would be nice to have two separate 
> limits: for peak and off peak hours.
> h3. ExploringCompactionPolicy selection evaluation algorithm
> Too simple? Selection with more files always wins, selection of smaller size 
> wins if number of 

[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-09-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12751:
--
Attachment: 12751.rebased.v29.txt

Added comments and javadoc.

Sequenceid is better here but still tough to track. Comments should help. We 
need to remove the deprecated HLogKey, Writable read/write Object... etc... and 
would then have a fighting chance at cleaning this up some.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, 
> 12751.rebased.v26.txt, 12751.rebased.v27.txt, 12751.rebased.v29.txt, 
> 12751v22.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 
> HBASE-12751-v1.patch, HBASE-12751-v10.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v11.patch, HBASE-12751-v12.patch, HBASE-12751-v13.patch, 
> HBASE-12751-v14.patch, HBASE-12751-v15.patch, HBASE-12751-v16.patch, 
> HBASE-12751-v17.patch, HBASE-12751-v18.patch, HBASE-12751-v19 (1).patch, 
> HBASE-12751-v19.patch, HBASE-12751-v2.patch, HBASE-12751-v20.patch, 
> HBASE-12751-v20.patch, HBASE-12751-v21.patch, HBASE-12751-v3.patch, 
> HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
> HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
> HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores 
> then this is very limiting.
> If there are puts to the same column then mvcc number should ensure a 
> consistent ordering. So locking is not needed.
> However locking for check and put or increment is still needed.





[jira] [Commented] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735917#comment-14735917
 ] 

Hadoop QA commented on HBASE-14382:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754751/HBASE-14382.00.patch
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754751

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestAdmin1.testCreateTableNumberOfRegions(TestAdmin1.java:681)
at 
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient.testCloneLinksAfterDelete(TestCloneSnapshotFromClient.java:235)
at 
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient.testCloneSnapshot(TestMobCloneSnapshotFromClient.java:175)
at 
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient.testCloneSnapshotCrossNamespace(TestMobCloneSnapshotFromClient.java:189)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15476//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15476//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15476//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15476//console

This message is automatically generated.

> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch, HBASE-14382.01.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): 
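The check described above scans compiled classes via reflection and flags any that lack an audience annotation. The sketch below is a minimal, self-contained illustration of that idea; `Audience` is a local stand-in for HBase's real `org.apache.hadoop.hbase.classification.InterfaceAudience`, defined here only so the example compiles on its own.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Self-contained sketch of the annotation check performed by
// TestInterfaceAudienceAnnotations. "Audience" is a local stand-in
// for HBase's InterfaceAudience annotation, not the real one.
public class AudienceCheckSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Audience { String value(); }

    @Audience("Private")
    interface AnnotatedWrapper {}       // passes the check

    interface UnannotatedWrapper {}     // would be reported by the test

    public static void main(String[] args) {
        for (Class<?> c : new Class<?>[] {
                AnnotatedWrapper.class, UnannotatedWrapper.class }) {
            // Mirror of the test's core question: is the annotation present?
            boolean ok = c.isAnnotationPresent(Audience.class);
            System.out.println(c.getSimpleName() + " annotated=" + ok);
        }
    }
}
```

Running this prints one line per class, separating annotated interfaces from the ones the test would list as missing.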

[jira] [Updated] (HBASE-14340) Add second bulk load option to Spark Bulk Load to send puts as the value

2015-09-08 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-14340:

Attachment: HBASE-14340.1.patch

Initial patch

> Add second bulk load option to Spark Bulk Load to send puts as the value
> 
>
> Key: HBASE-14340
> URL: https://issues.apache.org/jira/browse/HBASE-14340
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Attachments: HBASE-14340.1.patch
>
>
> The initial bulk load option for Spark bulk load sends values one by one 
> through the shuffle. This is similar to how the original MR bulk load 
> worked.
> However, the MR bulk loader has more than one bulk load option. There is a 
> second option that allows all the Column Families, Qualifiers, and Values 
> of a row to be combined on the map side.
> This only works if the row is not super wide.
> But when the row is not super wide, combining values before the shuffle 
> reduces the data and work the shuffle has to deal with.
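The map-side combining described above can be sketched as grouping all cells by row key before the shuffle, so each shuffle record carries a whole row instead of a single cell. This is an illustrative sketch with assumed names (`Cell`, `combineByRow`), not the actual HBASE-14340 patch.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch (assumed names, not the HBASE-14340 patch): combine all cells
// of a row on the map side so the shuffle moves one record per row
// instead of one record per cell.
public class RowCombineSketch {
    // One (family, qualifier, value) cell keyed by its row.
    static class Cell {
        final String row, family, qualifier, value;
        Cell(String row, String family, String qualifier, String value) {
            this.row = row; this.family = family;
            this.qualifier = qualifier; this.value = value;
        }
    }

    // Group cells by row key; each shuffle record now carries a whole row.
    static Map<String, List<Cell>> combineByRow(List<Cell> cells) {
        Map<String, List<Cell>> byRow = new TreeMap<>();
        for (Cell c : cells) {
            byRow.computeIfAbsent(c.row, k -> new ArrayList<>()).add(c);
        }
        return byRow;
    }

    public static void main(String[] args) {
        List<Cell> cells = Arrays.asList(
            new Cell("r1", "cf", "a", "1"),
            new Cell("r1", "cf", "b", "2"),
            new Cell("r2", "cf", "a", "3"));
        Map<String, List<Cell>> byRow = combineByRow(cells);
        // Three cells collapse into two shuffle records (one per row).
        System.out.println("rows=" + byRow.size()
            + " r1Cells=" + byRow.get("r1").size());
    }
}
```

The trade-off the description notes applies here too: if one row is extremely wide, the combined record itself becomes large, which is why the option only helps for reasonably narrow rows.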



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14384) Trying to run canary locally with -regionserver option causes exception

2015-09-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14384:
--
Fix Version/s: 1.2.0
   2.0.0

> Trying to run canary locally with -regionserver option causes exception
> ---
>
> Key: HBASE-14384
> URL: https://issues.apache.org/jira/browse/HBASE-14384
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Sanjeev Srivatsa
>Assignee: Sanjeev Srivatsa
> Fix For: 2.0.0, 1.2.0
>
>
> Tried to run canary locally (on branch master) with command: 
> bin/hbase org.apache.hadoop.hbase.tool.Canary -regionserver
> Exception was thrown:
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hbase.tool.Canary$StdOutSink cannot be cast to 
> org.apache.hadoop.hbase.tool.Canary$ExtendedSink
>   at org.apache.hadoop.hbase.tool.Canary.newMonitor(Canary.java:640)
>   at org.apache.hadoop.hbase.tool.Canary.run(Canary.java:551)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.hbase.tool.Canary.main(Canary.java:1127)
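The stack trace above is a plain unchecked-cast failure: the default `StdOutSink` does not implement `ExtendedSink`, so casting it in `newMonitor` throws. A hedged sketch of the failure mode and one possible guard follows; the interfaces here mirror Canary's inner classes but are redefined locally so the example is self-contained, and the wrapping fallback is illustrative, not the committed fix.

```java
// Sketch of the HBASE-14384 failure mode (local stand-ins for Canary's
// inner classes): an unchecked cast from Sink to ExtendedSink throws
// ClassCastException when the default StdOutSink was constructed.
public class SinkCastSketch {
    interface Sink { void publishError(String msg); }
    interface ExtendedSink extends Sink { void publishServerError(String msg); }

    static class StdOutSink implements Sink {
        public void publishError(String msg) {
            System.out.println("error: " + msg);
        }
    }

    // Guarded selection: check the type instead of casting blindly,
    // and wrap a plain sink so -regionserver mode still has a sink.
    static ExtendedSink asExtended(Sink sink) {
        if (sink instanceof ExtendedSink) {
            return (ExtendedSink) sink;
        }
        return new ExtendedSink() {
            public void publishError(String msg) { sink.publishError(msg); }
            public void publishServerError(String msg) { sink.publishError(msg); }
        };
    }

    public static void main(String[] args) {
        // With the raw cast this line would throw ClassCastException.
        ExtendedSink s = asExtended(new StdOutSink());
        s.publishServerError("rs down");
    }
}
```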



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14384) Trying to run canary locally with -regionserver option causes exception

2015-09-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14384:
--
Assignee: Sanjeev Srivatsa

> Trying to run canary locally with -regionserver option causes exception
> ---
>
> Key: HBASE-14384
> URL: https://issues.apache.org/jira/browse/HBASE-14384
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Sanjeev Srivatsa
>Assignee: Sanjeev Srivatsa
> Fix For: 2.0.0, 1.2.0
>
>
> Tried to run canary locally (on branch master) with command: 
> bin/hbase org.apache.hadoop.hbase.tool.Canary -regionserver
> Exception was thrown:
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hbase.tool.Canary$StdOutSink cannot be cast to 
> org.apache.hadoop.hbase.tool.Canary$ExtendedSink
>   at org.apache.hadoop.hbase.tool.Canary.newMonitor(Canary.java:640)
>   at org.apache.hadoop.hbase.tool.Canary.run(Canary.java:551)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.hbase.tool.Canary.main(Canary.java:1127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14384) Trying to run canary locally with -regionserver option causes exception

2015-09-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14384:
--
Affects Version/s: 1.2.0
   2.0.0

> Trying to run canary locally with -regionserver option causes exception
> ---
>
> Key: HBASE-14384
> URL: https://issues.apache.org/jira/browse/HBASE-14384
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Sanjeev Srivatsa
>Assignee: Sanjeev Srivatsa
> Fix For: 2.0.0, 1.2.0
>
>
> Tried to run canary locally (on branch master) with command: 
> bin/hbase org.apache.hadoop.hbase.tool.Canary -regionserver
> Exception was thrown:
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hbase.tool.Canary$StdOutSink cannot be cast to 
> org.apache.hadoop.hbase.tool.Canary$ExtendedSink
>   at org.apache.hadoop.hbase.tool.Canary.newMonitor(Canary.java:640)
>   at org.apache.hadoop.hbase.tool.Canary.run(Canary.java:551)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.hbase.tool.Canary.main(Canary.java:1127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14382) TestInterfaceAudienceAnnotations should hadoop-compt module resources

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735963#comment-14735963
 ] 

Hadoop QA commented on HBASE-14382:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12754754/HBASE-14382.01.patch
  against master branch at commit e95358a7fc3f554dcbb351c8b7295cafc01e8c23.
  ATTACHMENT ID: 12754754

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn post-site goal 
to fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestImportExport
  org.apache.hadoop.hbase.util.TestProcessBasedCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15477//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15477//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15477//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15477//console

This message is automatically generated.

> TestInterfaceAudienceAnnotations should hadoop-compt module resources
> -
>
> Key: HBASE-14382
> URL: https://issues.apache.org/jira/browse/HBASE-14382
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14382.00.patch, HBASE-14382.01.patch
>
>
> Over on HBASE-12911, buildbot tells me I'm missing some interface audience 
> annotations. Indeed, from test log, my patch is not the only one missing 
> annotations.
> {noformat}
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-client/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-annotations/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,071 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-common/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,072 DEBUG [main] hbase.ClassFinder(147): Looking in 
> /Users/ndimiduk/repos/hbase/hbase-protocol/target/classes/org/apache/hadoop/hbase;
>  isJar=false
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(252): These are the classes that DO 
> NOT have @InterfaceAudience annotation:
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsRegionClientWrapper
> 2015-09-08 12:05:31,158 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionWrapper
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> org.apache.hadoop.hbase.client.MetricsConnectionSourceFactory
> 2015-09-08 12:05:31,160 INFO  [main] 
> hbase.TestInterfaceAudienceAnnotations(254): interface 
> 

[jira] [Updated] (HBASE-14385) Close the sockets that is missing in connection closure.

2015-09-08 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14385:

Description: As per heading.

> Close the sockets that is missing in connection closure.
> 
>
> Key: HBASE-14385
> URL: https://issues.apache.org/jira/browse/HBASE-14385
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-14385.patch
>
>
> As per heading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14385) Close the sockets that is missing in connection closure.

2015-09-08 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14385:

Status: Patch Available  (was: Open)

> Close the sockets that is missing in connection closure.
> 
>
> Key: HBASE-14385
> URL: https://issues.apache.org/jira/browse/HBASE-14385
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-14385.patch
>
>
> As per heading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14385) Close the sockets that is missing in connection closure.

2015-09-08 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-14385:

Attachment: HBASE-14385.patch

> Close the sockets that is missing in connection closure.
> 
>
> Key: HBASE-14385
> URL: https://issues.apache.org/jira/browse/HBASE-14385
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-14385.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14370) Use separate thread for calling ZKPermissionWatcher#refreshNodes()

2015-09-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735977#comment-14735977
 ] 

Ted Yu commented on HBASE-14370:


Clarification: based on patch v1, the only concern is busy waiting.
If that's the case, I can continue refining patch v1.

> Use separate thread for calling ZKPermissionWatcher#refreshNodes()
> --
>
> Key: HBASE-14370
> URL: https://issues.apache.org/jira/browse/HBASE-14370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 14370-v1.txt, 14370-v3.txt
>
>
> I came off a support case (0.98.0) where main zk thread was seen doing the 
> following:
> {code}
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshAuthManager(ZKPermissionWatcher.java:152)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.refreshNodes(ZKPermissionWatcher.java:135)
>   at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:121)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:348)
>   at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> {code}
> There were 62000 nodes under /acl due to lack of fix from HBASE-12635, 
> leading to slowness in table creation because zk notification for region 
> offline was blocked by the above.
> The attached patch separates refreshNodes() call into its own thread.
> Thanks to Enis and Devaraj for offline discussion.
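The approach described above can be sketched as follows: the ZooKeeper event thread only enqueues work, while a dedicated single-thread executor performs the slow `refreshNodes()` scan, so notifications for other watchers are not blocked behind it. This is an assumed sketch of the idea, not the attached patch; class and method names are stand-ins.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch (assumed, not the HBASE-14370 patch): offload the expensive
// /acl scan from the ZK event thread to a dedicated worker thread.
public class RefreshOffloadSketch {
    private final ExecutorService refreshPool =
        Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "zk-permission-refresh");
            t.setDaemon(true);  // don't keep the JVM alive for this worker
            return t;
        });

    // Called from the ZK event thread; returns immediately so other
    // watcher notifications (e.g. region offline) are not delayed.
    Future<?> nodeChildrenChanged() {
        return refreshPool.submit(this::refreshNodes);
    }

    // Stand-in for scanning the /acl children and rebuilding the cache.
    private void refreshNodes() {
        System.out.println("refreshed on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws Exception {
        RefreshOffloadSketch watcher = new RefreshOffloadSketch();
        watcher.nodeChildrenChanged().get();  // wait only for the demo
        watcher.refreshPool.shutdown();
    }
}
```

A single-thread executor also serializes successive refreshes, which avoids the busy-waiting concern mentioned in the comment above while keeping at most one scan in flight.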



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

