[jira] [Updated] (HBASE-14806) Missing sources.jar for several modules when building HBase

2015-11-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-14806:
--
Attachment: HBASE-14806.patch

Seems like only hbase-common and hbase-external-blockcache are missing 
sources.jar.

I modified the pom.xml, but I'm not sure whether the output sources.jar 
contains the correct LICENSE files.

[~busbey] Could you please help verify the sources.jar generated with this 
patch?

Thanks.

> Missing sources.jar for several modules when building HBase
> ---
>
> Key: HBASE-14806
> URL: https://issues.apache.org/jira/browse/HBASE-14806
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
> Attachments: HBASE-14806.patch
>
>
> Introduced by HBASE-14085. The problem is, for example, in 
> hbase-common/pom.xml, we have
> {code:title=pom.xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-source-plugin</artifactId>
>   <configuration>
>     <excludeResources>true</excludeResources>
>     <includes>
>       <include>src/main/java</include>
>       <include>${project.build.outputDirectory}/META-INF</include>
>     </includes>
>   </configuration>
> </plugin>
> {code}
> But in fact, the path inside the {{<include>}} tag is relative to the source 
> directories, not the project directory. So the maven-source-plugin always ends 
> with
> {noformat}
> No sources in project. Archive not created.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14806) Missing sources.jar for several modules when building HBase

2015-11-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-14806:
--
 Assignee: Duo Zhang
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

> Missing sources.jar for several modules when building HBase
> ---
>
> Key: HBASE-14806
> URL: https://issues.apache.org/jira/browse/HBASE-14806
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-14806.patch
>
>
> Introduced by HBASE-14085. The problem is, for example, in 
> hbase-common/pom.xml, we have
> {code:title=pom.xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-source-plugin</artifactId>
>   <configuration>
>     <excludeResources>true</excludeResources>
>     <includes>
>       <include>src/main/java</include>
>       <include>${project.build.outputDirectory}/META-INF</include>
>     </includes>
>   </configuration>
> </plugin>
> {code}
> But in fact, the path inside the {{<include>}} tag is relative to the source 
> directories, not the project directory. So the maven-source-plugin always ends 
> with
> {noformat}
> No sources in project. Archive not created.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14575) Reduce scope of compactions holding region lock

2015-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003903#comment-15003903
 ] 

ramkrishna.s.vasudevan commented on HBASE-14575:


Then it is good. So I don't think you need the region lock at all. You can 
test once to see if it works fine. 

> Reduce scope of compactions holding region lock
> ---
>
> Key: HBASE-14575
> URL: https://issues.apache.org/jira/browse/HBASE-14575
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 14575-v1.patch, 14575-v2.patch, 14575-v3.patch, 
> 14575-v4.patch, 14575-v5.patch, 14575.v00.patch
>
>
> Per [~devaraj]'s idea on parent issue, let's see if we can reduce the scope 
> of critical section under which compactions hold the region read lock.
> Here is summary from parent issue:
> Another idea is we can reduce the scope of when the read lock is held during 
> compaction. In theory the compactor only needs a region read lock while 
> deciding what files to compact and at the time of committing the compaction. 
> We're protected from the case of region close events because compactions are 
> checking (every 10k bytes written) if the store has been closed in order to 
> abort in such a case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14782) FuzzyRowFilter skips valid rows

2015-11-13 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003873#comment-15003873
 ] 

Heng Chen commented on HBASE-14782:
---

I found something more.
Both StoreScanner.seekAsDirection and StoreScanner.seekToNextRow call 
StoreScanner.reseek(Cell kv) internally.
The difference is the Cell parameter passed in.

In StoreScanner.seekToNextRow, the Cell passed to reseek is generated by 
CellUtil.createLastOnRow.
But in StoreScanner.seekAsDirection, it is generated by matcher.getNextKeyHint, 
which calls FuzzyRowFilter.getNextCellHint internally.

CellUtil.createLastOnRow(Cell kv) creates a cell in the same row as kv, but 
with Long.MIN_VALUE as the timestamp.
FuzzyRowFilter.getNextCellHint(Cell kv) creates a cell in the next row with 
Long.MAX_VALUE as the timestamp.


The relevant logic is below (in {{KeyValueHeap.generalizedSeek}}):


{code:title=KeyValueHeap.java} 

if (current == null) {
  return false;
}
heap.add(current);
current = null;

KeyValueScanner scanner;
while ((scanner = heap.poll()) != null) {
  Cell topKey = scanner.peek();
  if (comparator.getComparator().compare(seekKey, topKey) <= 0) {
heap.add(scanner);
current = pollRealKV();
return current != null;
  }

  boolean seekResult;
  if (isLazy && heap.size() > 0) {
seekResult = scanner.requestSeek(seekKey, forward, useBloom);
  } else {
seekResult = NonLazyKeyValueScanner.doRealSeek(
scanner, seekKey, forward);
  }

  if (!seekResult) {
this.scannersForDelayedClose.add(scanner);
  } else {
heap.add(scanner);
  }
}

// Heap is returning empty, scanner is done
return false;
{code}

{code}
For example, suppose we just put "\\x9C\\x00\\x044\\x00\\x00\\x00\\x00" 
and "\\x9C\\x00\\x03\\xE9e\\xBB{X\\x1Fwts\\x1F\\x15vRX" into the table.

With the original logic, we go down the StoreScanner.seekAsDirection path, 
so the seekKey in KeyValueHeap.generalizedSeek will be 
'\\x9C\\x00\\x044\\x00\\x00\\x00\\x00' with Long.MAX_VALUE as the timestamp.

In the first round of the while loop, topKey is 
"\\x9C\\x00\\x03\\xE9e\\xBB{X\\x1Fwts\\x1F\\x15vRX", 
so "if (comparator.getComparator().compare(seekKey, topKey) <= 0)" will be 
false, and we can't find seekKey in NonLazyKeyValueScanner.doRealSeek.

At last, KeyValueHeap.heap will be empty and KeyValueHeap.current will be null.

{code}
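
To make the timestamp point concrete, here is a small self-contained sketch 
(it only relies on the long-standing KeyValue ordering, nothing from the 
patch): within one row and column, timestamps compare in descending order, so 
a seek key carrying Long.MAX_VALUE sorts before every real cell of that row, 
while one carrying Long.MIN_VALUE sorts after them.

{code}
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class SeekKeyOrdering {
  public static void main(String[] args) {
    byte[] row = Bytes.toBytes("row-1");
    byte[] cf  = Bytes.toBytes("d");
    byte[] q   = Bytes.toBytes("a");

    KeyValue newest = new KeyValue(row, cf, q, Long.MAX_VALUE, KeyValue.Type.Put);
    KeyValue cell   = new KeyValue(row, cf, q, 1447000000000L, KeyValue.Type.Put);
    // 1L stands in for Long.MIN_VALUE: the KeyValue constructor rejects
    // negative timestamps, and the ordering argument is the same.
    KeyValue oldest = new KeyValue(row, cf, q, 1L, KeyValue.Type.Put);

    // Timestamps compare in descending order within the same row/column.
    System.out.println(KeyValue.COMPARATOR.compare(newest, cell)); // < 0: first
    System.out.println(KeyValue.COMPARATOR.compare(oldest, cell)); // > 0: last
  }
}
{code}
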
> FuzzyRowFilter skips valid rows
> ---
>
> Key: HBASE-14782
> URL: https://issues.apache.org/jira/browse/HBASE-14782
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Heng Chen
> Attachments: HBASE-14782.patch
>
>
> The issue may affect not only master branch, but previous releases as well.
> This is from one of our customers:
> {quote}
> We are experiencing a problem with the FuzzyRowFilter for HBase scan. We 
> think that it is a bug. 
> Fuzzy filter should pick a row if it matches filter criteria irrespective of 
> other rows present in table but filter is dropping a row depending on some 
> other row present in table. 
> Details/Step to reproduce/Sample outputs below: 
> Missing row key: \x9C\x00\x044\x00\x00\x00\x00 
> Causing row key: \x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX 
> Prerequisites 
> 1. Create a test table. HBase shell command -- create 'fuzzytest','d' 
> 2. Insert some test data. HBase shell commands: 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x01\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x01\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x01",'d:a','junk' 
> • put 'fuzzytest',"\x9B\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> • put 'fuzzytest',"\x9D\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> Now when you run the code, you will find \x9C\x00\x044\x00\x00\x00\x00 in 
> output because it matches filter criteria. (Refer how to run code below) 
> Insert the row key causing bug: 
> HBase shell command: put 
> 'fuzzytest',"\x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX",'d:a','junk' 
> Now when you run the code, you will not find \x9C\x00\x044\x00\x00\x00\x00 in 
> output even though it still matches filter criteria. 
> {quote}
> Verified the issue on master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14575) Reduce scope of compactions holding region lock

2015-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003711#comment-15003711
 ] 

ramkrishna.s.vasudevan commented on HBASE-14575:


Am fine with this patch. Just thought of one suggestion - close and compaction 
already coordinate through the writeState (close does not proceed if any 
compaction is in progress), so similarly, if we add a 'closing' state to the 
writeState, then just before starting a compaction we can check for that state 
and stop the compaction from even happening. The checks on writeState are 
synchronized anyway. So it will work, and we can also avoid taking the region 
lock in this whole compaction flow. 
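
A rough sketch of that suggestion (hypothetical names; the real writeState in 
HRegion has more fields, and this is not a patch):

{code}
// Gate compactions on a 'closing' flag kept in the synchronized writeState.
class WriteState {
  boolean closing = false;     // set by close() before it starts
  boolean compacting = false;  // close() does not proceed while this is true
}

class CompactionGate {
  private final WriteState writeState = new WriteState();

  boolean tryStartCompaction() {
    synchronized (writeState) {      // writeState checks are synchronized anyway
      if (writeState.closing) {
        return false;                // region is closing: skip the compaction
      }
      writeState.compacting = true;  // close() will wait until we finish
      return true;
    }
  }

  void finishCompaction() {
    synchronized (writeState) {
      writeState.compacting = false;
      writeState.notifyAll();        // wake a close() waiting on compactions
    }
  }
}
{code}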


> Reduce scope of compactions holding region lock
> ---
>
> Key: HBASE-14575
> URL: https://issues.apache.org/jira/browse/HBASE-14575
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 14575-v1.patch, 14575-v2.patch, 14575-v3.patch, 
> 14575-v4.patch, 14575-v5.patch, 14575.v00.patch
>
>
> Per [~devaraj]'s idea on parent issue, let's see if we can reduce the scope 
> of critical section under which compactions hold the region read lock.
> Here is summary from parent issue:
> Another idea is we can reduce the scope of when the read lock is held during 
> compaction. In theory the compactor only needs a region read lock while 
> deciding what files to compact and at the time of committing the compaction. 
> We're protected from the case of region close events because compactions are 
> checking (every 10k bytes written) if the store has been closed in order to 
> abort in such a case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13153) Bulk Loaded HFile Replication

2015-11-13 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003699#comment-15003699
 ] 

Ashish Singhi commented on HBASE-13153:
---

In our internal testing today, [~sreenivasulureddy] found one more issue. The 
list of StoreDescriptor's in BulkLoadDescriptor is unmodifiable, and as a result 
the patch will fail to work with WALEntryFilters as expected. I will handle 
this in the patch and try to add a UT for it; if there are any review comments 
to address by that time, I will happily address them as well.
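
For reference, a minimal sketch of the kind of change implied (class and method 
names assumed from the comment, not taken from the patch): protobuf 
repeated-field getters return unmodifiable lists, so a WALEntryFilter has to 
copy before dropping entries.

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.hbase.protobuf.generated.WALProtos.StoreDescriptor;

class BulkLoadFilterSketch {
  // Filters a getStoresList()-style unmodifiable view by copying it first.
  static List<StoreDescriptor> filterStores(List<StoreDescriptor> view) {
    List<StoreDescriptor> stores = new ArrayList<StoreDescriptor>(view);
    for (Iterator<StoreDescriptor> it = stores.iterator(); it.hasNext();) {
      if (!shouldReplicate(it.next())) {
        it.remove();  // safe: we operate on our own copy, not the view
      }
    }
    return stores;
  }

  static boolean shouldReplicate(StoreDescriptor sd) {
    return true;  // stand-in for the filter's real predicate
  }
}
{code}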

> Bulk Loaded HFile Replication
> -
>
> Key: HBASE-13153
> URL: https://issues.apache.org/jira/browse/HBASE-13153
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: sunhaitao
>Assignee: Ashish Singhi
> Fix For: 2.0.0
>
> Attachments: HBASE-13153-v1.patch, HBASE-13153-v10.patch, 
> HBASE-13153-v11.patch, HBASE-13153-v12.patch, HBASE-13153-v13.patch, 
> HBASE-13153-v2.patch, HBASE-13153-v3.patch, HBASE-13153-v4.patch, 
> HBASE-13153-v5.patch, HBASE-13153-v6.patch, HBASE-13153-v7.patch, 
> HBASE-13153-v8.patch, HBASE-13153-v9.patch, HBASE-13153.patch, HBase Bulk 
> Load Replication-v1-1.pdf, HBase Bulk Load Replication-v2.pdf, HBase Bulk 
> Load Replication-v3.pdf, HBase Bulk Load Replication.pdf, HDFS_HA_Solution.PNG
>
>
> Currently we plan to use the HBase Replication feature to deal with a disaster 
> tolerance scenario. But we encounter an issue: we use bulkload very frequently, 
> and because bulkload bypasses the write path it will not generate WAL, so the 
> data will not be replicated to the backup cluster. It's inappropriate to 
> bulkload twice, both on the active cluster and the backup cluster. So I advise 
> making some modifications to the bulkload feature to enable bulkload to both 
> the active cluster and the backup cluster



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14575) Reduce scope of compactions holding region lock

2015-11-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003717#comment-15003717
 ] 

Devaraj Das commented on HBASE-14575:
-

I am fine with that [~ram_krish]

> Reduce scope of compactions holding region lock
> ---
>
> Key: HBASE-14575
> URL: https://issues.apache.org/jira/browse/HBASE-14575
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 14575-v1.patch, 14575-v2.patch, 14575-v3.patch, 
> 14575-v4.patch, 14575-v5.patch, 14575.v00.patch
>
>
> Per [~devaraj]'s idea on parent issue, let's see if we can reduce the scope 
> of critical section under which compactions hold the region read lock.
> Here is summary from parent issue:
> Another idea is we can reduce the scope of when the read lock is held during 
> compaction. In theory the compactor only needs a region read lock while 
> deciding what files to compact and at the time of committing the compaction. 
> We're protected from the case of region close events because compactions are 
> checking (every 10k bytes written) if the store has been closed in order to 
> abort in such a case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14798) NPE reporting server load causes regionserver abort; causes TestAcidGuarantee to fail

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003684#comment-15003684
 ] 

Hadoop QA commented on HBASE-14798:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772125/14798.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772125

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.procedure2.store.wal.TestWALProcedureStore

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16508//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16508//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16508//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16508//console

This message is automatically generated.

> NPE reporting server load causes regionserver abort; causes TestAcidGuarantee 
> to fail
> -
>
> Key: HBASE-14798
> URL: https://issues.apache.org/jira/browse/HBASE-14798
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Attachments: 14798.patch
>
>
> Below crashed out a RS. Caused TestAcidGuarantees to fail because then there 
> were no RS to assign to... 
> {code}
> 2015-11-11 11:36:23,092 ERROR 
> [B.defaultRpcServer.handler=4,queue=0,port=58655] 
> master.MasterRpcServices(388): Region server 
> asf907.gq1.ygridcore.net,55184,1447241756717 reported a fatal error:
> ABORTING region server asf907.gq1.ygridcore.net,55184,1447241756717: 
> Unhandled: null
> Cause:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getOldestHfileTs(HRegion.java:1643)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionLoad(HRegionServer.java:1503)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1210)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:969)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Here is the failure: 
> 
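
Purely illustrative (a self-contained model of the fix shape, not the attached 
patch; the stand-in classes are hypothetical): tolerate a store file whose 
reader has already been closed while the server builds its load report, instead 
of dereferencing the null reader as in the trace above.

{code}
import java.util.Arrays;
import java.util.List;

public class OldestHfileTsSketch {
  // Minimal stand-ins for StoreFile and its reader (not HBase's classes).
  static class Reader { final long createTime; Reader(long t) { createTime = t; } }
  static class StoreFile {
    private volatile Reader reader;
    StoreFile(Reader r) { reader = r; }
    Reader getReader() { return reader; }  // null once the file is closed
    void close() { reader = null; }        // e.g. compacted away mid-report
  }

  static long getOldestHfileTs(List<StoreFile> files) {
    long oldest = Long.MAX_VALUE;
    for (StoreFile sf : files) {
      Reader r = sf.getReader();  // snapshot the reference once
      if (r == null) {
        continue;                 // skip instead of throwing an NPE
      }
      oldest = Math.min(oldest, r.createTime);
    }
    return oldest;
  }

  public static void main(String[] args) {
    StoreFile closing = new StoreFile(new Reader(100L));
    closing.close();  // its reader vanished between listing and reporting
    System.out.println(getOldestHfileTs(
        Arrays.asList(new StoreFile(new Reader(200L)), closing)));  // 200
  }
}
{code}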

[jira] [Updated] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-14802:
--
Attachment: HBASE-14802-1.patch

Modified the patch to not leak procIDs. This should work as long as failed 
crash recovery procedures are not discarded and requeued (i.e. they should 
eventually complete).

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14802-1.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented, and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, this can 
> cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets, but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications on the balancer: the balancer ceases to run after 
> such a failover.
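
A stripped-down sketch of the accounting the description walks through 
(hypothetical class, only to show the arithmetic; the real bookkeeping lives 
elsewhere in the master):

{code}
class DeadServersSketch {
  private int numProcessing = 0;  // a fresh master starts at 0 after failover

  void serverAddedToDeadList() { numProcessing++; }
  void crashRecoveryFinished() { numProcessing--; }

  boolean areDeadServersInProgress() { return numProcessing != 0; }

  public static void main(String[] args) {
    DeadServersSketch newMaster = new DeadServersSketch();
    // The replayed ServerCrashProcedure finishes on the NEW master, but the
    // matching serverAddedToDeadList() happened on the OLD master:
    newMaster.crashRecoveryFinished();
    // numProcessing is now -1, so the master thinks recovery never ends
    // and the balancer stays off:
    System.out.println(newMaster.areDeadServersInProgress()); // true
  }
}
{code}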



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14575) Reduce scope of compactions holding region lock

2015-11-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003706#comment-15003706
 ] 

Devaraj Das commented on HBASE-14575:
-

[~jinghe] yeah from my read of things, seems you are right. So [~ram_krish] & 
[~jinghe] how should we proceed? The current patch from [~yuzhih...@gmail.com] 
seems to be locking a tad more than needed, but is that a big deal? In the 
current codebase the rename happens without checking the closing flag. Should 
we just retain that, [~ram_krish] (as opposed to what you propose: checking 
closing with the read lock held...)?

> Reduce scope of compactions holding region lock
> ---
>
> Key: HBASE-14575
> URL: https://issues.apache.org/jira/browse/HBASE-14575
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 14575-v1.patch, 14575-v2.patch, 14575-v3.patch, 
> 14575-v4.patch, 14575-v5.patch, 14575.v00.patch
>
>
> Per [~devaraj]'s idea on parent issue, let's see if we can reduce the scope 
> of critical section under which compactions hold the region read lock.
> Here is summary from parent issue:
> Another idea is we can reduce the scope of when the read lock is held during 
> compaction. In theory the compactor only needs a region read lock while 
> deciding what files to compact and at the time of committing the compaction. 
> We're protected from the case of region close events because compactions are 
> checking (every 10k bytes written) if the store has been closed in order to 
> abort in such a case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003775#comment-15003775
 ] 

Hadoop QA commented on HBASE-14807:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772127/14807.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772127

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16509//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16509//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16509//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16509//console

This message is automatically generated.

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Attachments: 14807.patch
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> 

[jira] [Updated] (HBASE-14216) Consolidate MR and Spark BulkLoad shared functions and string consts

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14216:
---
Fix Version/s: 2.0.0

This isn't scheduled, and isn't urgent by any means, so I'll schedule for 2.0 
and we can get to it when we get to it as far as I am concerned.

> Consolidate MR and Spark BulkLoad shared functions and string consts
> 
>
> Key: HBASE-14216
> URL: https://issues.apache.org/jira/browse/HBASE-14216
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Fix For: 2.0.0
>
>
> This is a follow up jira is HBASE-14150.  Andrew P had noticed code that 
> could be consolidate between MR and Spark bulk load code.
> Before I get started I need advice to know where the consolidated code should 
> live.  Once I have the location I can start coding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14160) backport hbase-spark module to branch-1

2015-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004319#comment-15004319
 ] 

Andrew Purtell commented on HBASE-14160:


[~busbey], in your esteemed estimation, what are the must-haves remaining? 


> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
> Fix For: 1.3.0
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14782) FuzzyRowFilter skips valid rows

2015-11-13 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14782:
--
Status: Patch Available  (was: Open)

> FuzzyRowFilter skips valid rows
> ---
>
> Key: HBASE-14782
> URL: https://issues.apache.org/jira/browse/HBASE-14782
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Heng Chen
> Attachments: HBASE-14782.patch
>
>
> The issue may affect not only master branch, but previous releases as well.
> This is from one of our customers:
> {quote}
> We are experiencing a problem with the FuzzyRowFilter for HBase scan. We 
> think that it is a bug. 
> Fuzzy filter should pick a row if it matches filter criteria irrespective of 
> other rows present in table but filter is dropping a row depending on some 
> other row present in table. 
> Details/Step to reproduce/Sample outputs below: 
> Missing row key: \x9C\x00\x044\x00\x00\x00\x00 
> Causing row key: \x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX 
> Prerequisites 
> 1. Create a test table. HBase shell command -- create 'fuzzytest','d' 
> 2. Insert some test data. HBase shell commands: 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x01\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x01\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x01",'d:a','junk' 
> • put 'fuzzytest',"\x9B\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> • put 'fuzzytest',"\x9D\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> Now when you run the code, you will find \x9C\x00\x044\x00\x00\x00\x00 in 
> output because it matches filter criteria. (Refer how to run code below) 
> Insert the row key causing bug: 
> HBase shell command: put 
> 'fuzzytest',"\x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX",'d:a','junk' 
> Now when you run the code, you will not find \x9C\x00\x044\x00\x00\x00\x00 in 
> output even though it still matches filter criteria. 
> {quote}
> Verified the issue on master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004167#comment-15004167
 ] 

Hadoop QA commented on HBASE-14804:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12772197/HBASE-14804.v1-trunk.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772197

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16513//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16513//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16513//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16513//console

This message is automatically generated.

> HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
> 
>
> Key: HBASE-14804
> URL: https://issues.apache.org/jira/browse/HBASE-14804
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-14804.v0-trunk.patch, HBASE-14804.v1-trunk.patch
>
>
> I am trying to create a new table and set NORMALIZATION_ENABLED to true, 
> but it seems like the argument NORMALIZATION_ENABLED is being ignored. Also, 
> the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command 
> on that table:
> {code}
> hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 
> 'true'}
> An argument ignored (unknown or overridden): NORMALIZATION_ENABLED
> 0 row(s) in 4.2670 seconds
> => Hbase::Table - test-table-4
> hbase(main):021:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0430 seconds
> {code}
> However, on doing an alter command on that table we can set the 
> NORMALIZATION_ENABLED attribute for that table
> {code}
> hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'}
> Unknown argument ignored: NORMALIZATION_ENABLED
> Updating all regions with the new 

[jira] [Updated] (HBASE-14340) Add second bulk load option to Spark Bulk Load to send puts as the value

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14340:
---
Fix Version/s: 2.0.0

lgtm, except for a double write (cut-paste?) bug. See RB

> Add second bulk load option to Spark Bulk Load to send puts as the value
> 
>
> Key: HBASE-14340
> URL: https://issues.apache.org/jira/browse/HBASE-14340
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14340.1.patch
>
>
> The initial bulk load option for Spark bulk load sends values over one by one 
> through the shuffle.  This is similar to how the original MR bulk load worked.
> However, the MR bulk loader has more than one bulk load option.  There is a 
> second option that allows all the Column Families, Qualifiers, and Values 
> of a row to be combined on the map side.
> This only works if the row is not super wide.
> But if the row is not super wide, this method of sending values through the 
> shuffle will reduce the data and the work the shuffle has to deal with.
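
As a toy illustration of that second option (plain Java, not the hbase-spark 
API): combine every cell of a row into one record on the map side, so the 
shuffle moves one record per row instead of one per cell.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapSideCombineSketch {
  public static void main(String[] args) {
    // Each cell is {rowKey, "cf:qualifier=value"}; three cells, two rows.
    String[][] cells = { {"row1", "d:a=1"}, {"row1", "d:b=2"}, {"row2", "d:a=3"} };

    Map<String, List<String>> byRow = new HashMap<String, List<String>>();
    for (String[] cell : cells) {
      List<String> rowCells = byRow.get(cell[0]);
      if (rowCells == null) {
        rowCells = new ArrayList<String>();
        byRow.put(cell[0], rowCells);
      }
      rowCells.add(cell[1]);  // all of a row's cells now travel together
    }
    System.out.println(byRow);  // e.g. {row1=[d:a=1, d:b=2], row2=[d:a=3]}
  }
}
{code}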



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Cosmin Lehene (JIRA)
Cosmin Lehene created HBASE-14808:
-

 Summary: HBase Book takes 15 seconds to render
 Key: HBASE-14808
 URL: https://issues.apache.org/jira/browse/HBASE-14808
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.98.16, 1.1.2, 2.0.0
Reporter: Cosmin Lehene


I'm not sure if it's because of some bad layout or just because it's a single 
page and too large; however, when I load the page to get to something, I need 
to wait 15 seconds until I can do anything. 

As the book should be one of the main resources for all users, the user 
experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14771) RpcServer.getRemoteAddress always returns null.

2015-11-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14771:
---
Status: Patch Available  (was: Open)

> RpcServer.getRemoteAddress always returns null.
> ---
>
> Key: HBASE-14771
> URL: https://issues.apache.org/jira/browse/HBASE-14771
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 1.2.0
>Reporter: Abhishek Kumar
>Assignee: Abhishek Kumar
>Priority: Minor
> Attachments: HBASE-14771-V1.patch, HBASE-14771-V2.patch, 
> HBASE-14771.patch
>
>
> RpcServer.getRemoteAddress always returns null, because the Call object is 
> getting initialized with null. This seems to be happening because 
> RpcServer.getRemoteIp() is used in the Call constructor before the RpcServer 
> thread-local 'CurCall' is set in the CallRunner.run method:
> {noformat}
> // --- RpcServer.java ---
> protected void processRequest(byte[] buf) throws IOException, 
> InterruptedException {
>   ...
>   // The Call object is initialized here with the address
>   // obtained from RpcServer.getRemoteIp()
>   Call call = new Call(id, this.service, md, header, param, cellScanner, this, 
>     responder, totalRequestSize, traceInfo, RpcServer.getRemoteIp());
>   scheduler.dispatch(new CallRunner(RpcServer.this, call));
> }
> 
> // getRemoteIp gets the address from the thread-local 'CurCall', which is
> // set in CallRunner.run; calling it before that, as in the above case,
> // will return null
> // --- CallRunner.java ---
> public void run() {
>   ...
>   Pair<Message, CellScanner> resultPair = null;
>   RpcServer.CurCall.set(call);
>   ...
> }
> 
> // Using 'this.addr' in place of the getRemoteIp() call in RpcServer.java 
> // seems to fix this issue:
> Call call = new Call(id, this.service, md, header, param, cellScanner, this, 
>   responder, totalRequestSize, traceInfo, this.addr);
> {noformat}
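
The ordering problem is easy to reproduce with a bare ThreadLocal (a 
self-contained sketch, nothing HBase-specific):

{code}
public class CurCallOrdering {
  static final ThreadLocal<String> CUR_CALL = new ThreadLocal<String>();

  static String getRemoteIp() {
    return CUR_CALL.get();  // null unless the current thread already set it
  }

  public static void main(String[] args) {
    // The reader constructs the Call BEFORE CallRunner.run sets CurCall:
    String addr = getRemoteIp();  // -> null, the bug reported here
    CUR_CALL.set("10.0.0.1");     // CallRunner.run only sets it later
    System.out.println(addr);     // prints: null
  }
}
{code}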



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14340) Add second bulk load option to Spark Bulk Load to send puts as the value

2015-11-13 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004362#comment-15004362
 ] 

Ted Malaska commented on HBASE-14340:
-

Thank you Andrew for the review I will get to this jira in the next couple of 
days.

> Add second bulk load option to Spark Bulk Load to send puts as the value
> 
>
> Key: HBASE-14340
> URL: https://issues.apache.org/jira/browse/HBASE-14340
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Reporter: Ted Malaska
>Assignee: Ted Malaska
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14340.1.patch
>
>
> The initial bulk load option for Spark bulk load sends values over one by one 
> through the shuffle.  This is similar to how the original MR bulk load worked.
> However, the MR bulk loader has more than one bulk load option.  There is a 
> second option that allows all the Column Families, Qualifiers, and Values 
> of a row to be combined on the map side.
> This only works if the row is not super wide.
> But if the row is not super wide, this method of sending values through the 
> shuffle will reduce the data and the work the shuffle has to deal with.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14771) RpcServer.getRemoteAddress always returns null.

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004401#comment-15004401
 ] 

Appy commented on HBASE-14771:
--

+1

> RpcServer.getRemoteAddress always returns null.
> ---
>
> Key: HBASE-14771
> URL: https://issues.apache.org/jira/browse/HBASE-14771
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 1.2.0
>Reporter: Abhishek Kumar
>Assignee: Abhishek Kumar
>Priority: Minor
> Attachments: HBASE-14771-V1.patch, HBASE-14771-V2.patch, 
> HBASE-14771.patch
>
>
> RpcServer.getRemoteAddress always returns null, because the Call object is 
> getting initialized with null. This seems to be happening because 
> RpcServer.getRemoteIp() is used in the Call constructor before the RpcServer 
> thread-local 'CurCall' is set in the CallRunner.run method:
> {noformat}
> // --- RpcServer.java ---
> protected void processRequest(byte[] buf) throws IOException, 
> InterruptedException {
>   ...
>   // The Call object is initialized here with the address
>   // obtained from RpcServer.getRemoteIp()
>   Call call = new Call(id, this.service, md, header, param, cellScanner, this, 
>     responder, totalRequestSize, traceInfo, RpcServer.getRemoteIp());
>   scheduler.dispatch(new CallRunner(RpcServer.this, call));
> }
> 
> // getRemoteIp gets the address from the thread-local 'CurCall', which is
> // set in CallRunner.run; calling it before that, as in the above case,
> // will return null
> // --- CallRunner.java ---
> public void run() {
>   ...
>   Pair<Message, CellScanner> resultPair = null;
>   RpcServer.CurCall.set(call);
>   ...
> }
> 
> // Using 'this.addr' in place of the getRemoteIp() call in RpcServer.java 
> // seems to fix this issue:
> Call call = new Call(id, this.service, md, header, param, cellScanner, this, 
>   responder, totalRequestSize, traceInfo, this.addr);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004496#comment-15004496
 ] 

Appy commented on HBASE-14808:
--

It takes at least 15 sec for me before I can search for any keyword.
!screenshot1.png|thumbnail!
The document is just 400k, but takes 22 seconds to load! It feels like we are 
generating it for every request, instead of generating it only on change. Or 
maybe simple caching would be enough.
The images didn't take long to load because I wanted to get the cached times 
(all are 304 Not Modified).
[~misty] any ideas? 

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of some bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14172) Upgrade existing thrift binding using thrift 0.9.2 compiler.

2015-11-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004565#comment-15004565
 ] 

Enis Soztutar commented on HBASE-14172:
---

[~eclark] any ideas? 

> Upgrade existing thrift binding using thrift 0.9.2 compiler.
> 
>
> Key: HBASE-14172
> URL: https://issues.apache.org/jira/browse/HBASE-14172
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-14172-branch-1.patch, HBASE-14172.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute

2015-11-13 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004490#comment-15004490
 ] 

Jean-Marc Spaggiari commented on HBASE-14804:
-

Good to know for TestShell! Thanks for the info!

> HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
> 
>
> Key: HBASE-14804
> URL: https://issues.apache.org/jira/browse/HBASE-14804
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-14804.v0-trunk.patch, HBASE-14804.v1-trunk.patch
>
>
> I am trying to create a new table and set NORMALIZATION_ENABLED to true, 
> but it seems like the argument NORMALIZATION_ENABLED is being ignored. Also, 
> the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command 
> on that table:
> {code}
> hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 
> 'true'}
> An argument ignored (unknown or overridden): NORMALIZATION_ENABLED
> 0 row(s) in 4.2670 seconds
> => Hbase::Table - test-table-4
> hbase(main):021:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0430 seconds
> {code}
> However, on doing an alter command on that table we can set the 
> NORMALIZATION_ENABLED attribute for that table
> {code}
> hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'}
> Unknown argument ignored: NORMALIZATION_ENABLED
> Updating all regions with the new schema...
> 1/1 regions updated.
> Done.
> 0 row(s) in 2.3640 seconds
> hbase(main):023:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'}  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0190 seconds
> {code}
> I think it would be better to have a single step process to enable 
> normalization while creating the table itself, rather than a two step process 
> to alter the table later on to enable normalization
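
For comparison, the Java API already makes this a single step (a sketch, 
assuming the HTableDescriptor#setNormalizationEnabled setter present in these 
versions):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

class CreateNormalizedTable {
  // 'admin' is obtained elsewhere, e.g. connection.getAdmin().
  static void create(Admin admin) throws IOException {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test-table-4"));
    htd.addFamily(new HColumnDescriptor("cf"));
    htd.setNormalizationEnabled(true);  // what the shell argument should map to
    admin.createTable(htd);
  }
}
{code}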



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004369#comment-15004369
 ] 

Esteban Gutierrez commented on HBASE-14808:
---

I see it taking about 7 seconds for me: 2 seconds to download book.html (yeah, 
it's about 1.76MB now), and it looks like it takes some time to re-size the 
styles once the transfer of book.html completes.

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png
>
>
> I'm not sure if it's because of some bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004443#comment-15004443
 ] 

Appy commented on HBASE-14804:
--

+1
Since you manually tested it, TestShell seems redundant, given the simplicity 
of the patch.
But to answer your question, this is what I do when I make any non-trivial 
change in the shell files:
Copy TestShell.java, mvn test -Dtest=TestShell, delete TestShell.java. :D

> HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
> 
>
> Key: HBASE-14804
> URL: https://issues.apache.org/jira/browse/HBASE-14804
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-14804.v0-trunk.patch, HBASE-14804.v1-trunk.patch
>
>
> I am trying to create a new table and set NORMALIZATION_ENABLED to true, 
> but it seems like the argument NORMALIZATION_ENABLED is being ignored. Also, 
> the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command 
> on that table:
> {code}
> hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 
> 'true'}
> An argument ignored (unknown or overridden): NORMALIZATION_ENABLED
> 0 row(s) in 4.2670 seconds
> => Hbase::Table - test-table-4
> hbase(main):021:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0430 seconds
> {code}
> However, on doing an alter command on that table we can set the 
> NORMALIZATION_ENABLED attribute for that table
> {code}
> hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'}
> Unknown argument ignored: NORMALIZATION_ENABLED
> Updating all regions with the new schema...
> 1/1 regions updated.
> Done.
> 0 row(s) in 2.3640 seconds
> hbase(main):023:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'}  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0190 seconds
> {code}
> I think it would be better to have a single step process to enable 
> normalization while creating the table itself, rather than a two step process 
> to alter the table later on to enable normalization



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14808:
-
Attachment: Screen Shot 2015-11-13 at 10.41.28 AM.png

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: Screen Shot 2015-11-13 at 10.41.28 AM.png, 
> screenshot-1.png
>
>
> I'm not sure if it's because of some bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12621) Explain in the book when KEEP_DELETED_CELLS is useful

2015-11-13 Thread Cosmin Lehene (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cosmin Lehene updated HBASE-12621:
--
Attachment: (was: screenshot-1.png)

> Explain in the book when KEEP_DELETED_CELLS is useful
> -
>
> Key: HBASE-12621
> URL: https://issues.apache.org/jira/browse/HBASE-12621
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Lars Hofhansl
> Fix For: 2.0.0
>
>
> KEEP_DELETED_CELLS seems to be very confusing.
> The books need further clarification when this setting is useful and what the 
> implications are.
> (and maybe we should discuss if there's a simpler way to achieve what I 
> intended to achieve with this when I implemented it first)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12621) Explain in the book when KEEP_DELETED_CELLS is useful

2015-11-13 Thread Cosmin Lehene (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cosmin Lehene updated HBASE-12621:
--
Attachment: screenshot-1.png

> Explain in the book when KEEP_DELETED_CELLS is useful
> -
>
> Key: HBASE-12621
> URL: https://issues.apache.org/jira/browse/HBASE-12621
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Lars Hofhansl
> Fix For: 2.0.0
>
> Attachments: screenshot-1.png
>
>
> KEEP_DELETED_CELLS seems to be very confusing.
> The books need further clarification when this setting is useful and what the 
> implications are.
> (and maybe we should discuss if there's a simpler way to achieve what I 
> intended to achieve with this when I implemented it first)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Cosmin Lehene (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cosmin Lehene updated HBASE-14808:
--
Attachment: screenshot-1.png

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png
>
>
> I'm not sure if it's because of some bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14808:
-
Attachment: screenshot1.png

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of some bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14808:
-
Attachment: (was: Screen Shot 2015-11-13 at 10.41.28 AM.png)

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004450#comment-15004450
 ] 

Appy commented on HBASE-14804:
--

As for the refactoring, I have an in-flight change that refactors other 
duplicate code in admin.rb. I'll move that logic into the function you created.
Thanks for the patch!

> HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
> 
>
> Key: HBASE-14804
> URL: https://issues.apache.org/jira/browse/HBASE-14804
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-14804.v0-trunk.patch, HBASE-14804.v1-trunk.patch
>
>
> I am trying to create a new table and set NORMALIZATION_ENABLED to true, but 
> it seems the NORMALIZATION_ENABLED argument is being ignored. Also, the 
> NORMALIZATION_ENABLED attribute is not displayed when running a desc command 
> on that table:
> {code}
> hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 
> 'true'}
> An argument ignored (unknown or overridden): NORMALIZATION_ENABLED
> 0 row(s) in 4.2670 seconds
> => Hbase::Table - test-table-4
> hbase(main):021:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC
> KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0430 seconds
> {code}
> However, on doing an alter command on that table we can set the 
> NORMALIZATION_ENABLED attribute for that table
> {code}
> hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'}
> Unknown argument ignored: NORMALIZATION_ENABLED
> Updating all regions with the new schema...
> 1/1 regions updated.
> Done.
> 0 row(s) in 2.3640 seconds
> hbase(main):023:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'}  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC
> KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0190 seconds
> {code}
> I think it would be better to have a single-step process to enable 
> normalization while creating the table itself, rather than a two-step process 
> of altering the table later on to enable normalization.
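
For reference, a sketch of the single-step path through the Java API (illustration only; assumes a release where HTableDescriptor#setNormalizationEnabled is available):

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test-table-4"));
  htd.addFamily(new HColumnDescriptor("cf"));
  htd.setNormalizationEnabled(true); // what the shell argument should map to
  admin.createTable(htd);
}
{code}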



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Steven Hancz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Hancz updated HBASE-14809:
-
Description: 
Hi, 

We are looking to roll out HBase and are in the process of designing the 
security model. 
We want to implement global DBAs and namespace-specific administrators: 
for example, the global DBA would create a namespace and grant a user/group 
admin privileges within that ns, so that the ns admin can in turn create 
objects and grant permissions within that ns only. 

We have run into some issues at the ns admin level. It appears that an ns admin 
can NOT grant to a group unless it also has the global admin privilege. But 
once it has the global admin privilege, it can grant in any NS, not just the 
one where it has admin privileges. 

Based on the HBase documentation at 
http://hbase.apache.org/book.html#appendix_acl_matrix 

Table 13. ACL Matrix 
Interface          Operation               Permissions 
AccessController   grant(global level)     global(A) 
                   grant(namespace level)  global(A)|NS(A) 

Grant at a namespace level should be possible for someone with global A OR (|) 
NS A permission. 
As you will see in our test, it does not work if the NS A permission is granted 
but the global A permission is not. 

Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 

hbase(main):011:0> scan 'hbase:acl' 
ROW COLUMN+CELL 
@ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 

However: 
Here you can see that a user who is a member of the group 
hbaseappltest_ns1admin cannot grant the RWX privilege to a group, as it is 
missing the global A privilege. 

$hbase shell 
15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available 
HBase Shell; enter 'help' for list of supported commands. 
Type "exit" to leave the HBase Shell 
Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 

hbase(main):001:0> whoami 
ns1ad...@wlab.net (auth:KERBEROS) 
groups: hbaseappltest_ns1admin 

hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 

ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions for user 'ns1admin' (global, action=ADMIN) 

The way I read the documentation, an NS admin should be able to grant, as it 
has the ns-level A privilege, not only object-level permission.

CDH is version 5.4.7 and HBase is version 1.0. 

Regards, 
Steven

  was:
Hi, 

We are looking to roll out HBase and are in the process of designing the 
security model. 
We want to implement global DBAs and namespace-specific administrators: 
for example, the global DBA would create a namespace and grant a user/group 
admin privileges within that ns, so that the ns admin can in turn create 
objects and grant permissions within that ns only. 

We have run into some issues at the ns admin level. It appears that an ns admin 
can NOT grant to a group unless it also has the global admin privilege. But 
once it has the global admin privilege, it can grant in any NS, not just the 
one where it has admin privileges. 

Based on the HBase documentation at 
http://hbase.apache.org/book.html#appendix_acl_matrix 

Table 13. ACL Matrix 
Interface          Operation               Permissions 
AccessController   grant(global level)     global(A) 
                   grant(namespace level)  global(A)|NS(A) 

Grant at a namespace level should be possible for someone with global A OR (|) 
NS A permission. 
As you will see in our test, it does not work if the NS A permission is granted 
but the global A permission is not. 

Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 

hbase(main):011:0> scan 'hbase:acl' 
ROW COLUMN+CELL 
@finance column=l:sh82993, timestamp=105519510, value=RWXCA 
@gcbcppdn column=l:hdfs, timestamp=1446141119602, value=RWCXA 
@hbase column=l:hdfs, timestamp=1446141485136, value=RWCAX 
@ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 

However: 
Here you can see that a user who is a member of the group 
hbaseappltest_ns1admin cannot grant the RWX privilege to a group, as it is 
missing the global A privilege. 

$hbase shell 
15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available 
HBase Shell; enter 'help' for list of supported commands. 
Type "exit" to leave the HBase Shell 
Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 

hbase(main):001:0> whoami 
ns1ad...@wlab.net (auth:KERBEROS) 
groups: hbaseappltest_ns1admin 

hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 

ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions for user 'ns1admin' (global, action=ADMIN) 

The way I read the documentation, an NS admin should be able to grant, as it 
has the ns-level A privilege, not only object-level permission.

CDH is version 5.4.7 and HBase is version 1.0. 

Regards, 
Steven


> Namespace 

[jira] [Created] (HBASE-14811) HBaseInterClusterReplicationEndpoint retry logic is broken

2015-11-13 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-14811:
-

 Summary: HBaseInterClusterReplicationEndpoint retry logic is broken
 Key: HBASE-14811
 URL: https://issues.apache.org/jira/browse/HBASE-14811
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 2.0.0, 1.2.0, 1.2.1
Reporter: Ashu Pachauri
Assignee: Ashu Pachauri


In HBaseInterClusterReplicationEndpoint, we do something like this:
{code}
entryLists.remove(f.get());
{code}

where f.get() returns an ordinal number representing the index of the element 
in entryLists that just succeeded replicating. We remove these entries because 
we want to retry with the remaining elements in the list in case of a failure. 
Since entryLists is an ArrayList, the subsequent elements are shifted left 
when we remove an element. This breaks the intended functionality. The fix is 
to reverse-sort the ordinals and then perform the deletion in one go.
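
A small self-contained sketch (stand-in strings for the entry batches, not the actual patch) of the failure mode and of the reverse-sorted removal described above:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

List<String> entryLists = new ArrayList<>(Arrays.asList("batch0", "batch1", "batch2"));
// Ordinals of the batches that replicated successfully, as returned by f.get().
List<Integer> succeeded = new ArrayList<>(Arrays.asList(0, 2));

// BROKEN: remove(0) shifts "batch2" down to index 1, so a later remove(2)
// deletes the wrong batch or throws IndexOutOfBoundsException.

// FIX: remove the highest index first so earlier indices stay valid.
Collections.sort(succeeded, Collections.reverseOrder());
for (int ordinal : succeeded) {
  entryLists.remove(ordinal); // int overload: removes by index
}
// entryLists now holds only the failed batch: [batch1]
{code}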



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14811) HBaseInterClusterReplicationEndpoint retry logic is broken

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14811:
---
Affects Version/s: 1.0.2
   0.98.16

> HBaseInterClusterReplicationEndpoint retry logic is broken
> --
>
> Key: HBASE-14811
> URL: https://issues.apache.org/jira/browse/HBASE-14811
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.0.2, 1.2.0, 1.2.1, 0.98.16
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
>
> In HBaseInterClusterReplicationEndpoint, we do something like this:
> {code}
> entryLists.remove(f.get());
> {code}
> where f.get() returns an ordinal number representing the index of the element 
> in entryLists that just succeeded replicating. We remove these entries because 
> we want to retry with the remaining elements in the list in case of a failure. 
> Since entryLists is an ArrayList, the subsequent elements are shifted left 
> when we remove an element. This breaks the intended functionality. The fix is 
> to reverse-sort the ordinals and then perform the deletion in one go.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004817#comment-15004817
 ] 

Matteo Bertozzi commented on HBASE-14802:
-

v2 looks OK to me.

Instead of the code below, you can use 
ProcedureTestingUtility.submitAndWait(pExecutor, proc):
{code}
+pExecutor.submitProcedure(proc);
+while(!pExecutor.isFinished(proc.getProcId())) {
+  Thread.sleep(500);
+}
{code}
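
For clarity, the suggested replacement would be a single call (assuming the utility's usual signature, which blocks until the procedure completes and returns its id):

{code}
long procId = ProcedureTestingUtility.submitAndWait(pExecutor, proc);
{code}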

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, 
> inconsistencies can arise.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004815#comment-15004815
 ] 

Appy commented on HBASE-14808:
--

[~clehene] Oh I see. Seems like the screenshot I attached earlier, where 
downloading took too long, was a one-off case.
BTW, I like the layout of the Databricks site you posted above.

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004838#comment-15004838
 ] 

Jerry He commented on HBASE-14809:
--

Sounds like a bug indeed. Would you like to provide a patch?

> Namespace permission granted to group 
> --
>
> Key: HBASE-14809
> URL: https://issues.apache.org/jira/browse/HBASE-14809
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
>
> Hi, 
> We are looking to roll out HBase and are in the process of designing the 
> security model. 
> We want to implement global DBAs and namespace-specific administrators: 
> for example, the global DBA would create a namespace and grant a user/group 
> admin privileges within that ns, so that the ns admin can in turn create 
> objects and grant permissions within that ns only. 
> We have run into some issues at the ns admin level. It appears that an ns 
> admin can NOT grant to a group unless it also has the global admin privilege. 
> But once it has the global admin privilege, it can grant in any NS, not just 
> the one where it has admin privileges. 
> Based on the HBase documentation at 
> http://hbase.apache.org/book.html#appendix_acl_matrix 
> Table 13. ACL Matrix 
> Interface          Operation               Permissions 
> AccessController   grant(global level)     global(A) 
>                    grant(namespace level)  global(A)|NS(A) 
> Grant at a namespace level should be possible for someone with global A OR 
> (|) NS A permission. 
> As you will see in our test, it does not work if the NS A permission is 
> granted but the global A permission is not. 
> Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 
> hbase(main):011:0> scan 'hbase:acl' 
> ROW COLUMN+CELL 
> @ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 
> However: 
> Here you can see that a user who is a member of the group 
> hbaseappltest_ns1admin cannot grant the RWX privilege to a group, as it is 
> missing the global A privilege. 
> $hbase shell 
> 15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available 
> HBase Shell; enter 'help' for list of supported commands. 
> Type "exit" to leave the HBase Shell 
> Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 
> hbase(main):001:0> whoami 
> ns1ad...@wlab.net (auth:KERBEROS) 
> groups: hbaseappltest_ns1admin 
> hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 
> ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'ns1admin' (global, action=ADMIN) 
> The way I read the documentation, an NS admin should be able to grant, as it 
> has the ns-level A privilege, not only object-level permission.
> CDH is version 5.4.7 and HBase is version 1.0. 
> Regards, 
> Steven



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14782) FuzzyRowFilter skips valid rows

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004629#comment-15004629
 ] 

Hadoop QA commented on HBASE-14782:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12771922/HBASE-14782.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12771922

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16514//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16514//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16514//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16514//console

This message is automatically generated.

> FuzzyRowFilter skips valid rows
> ---
>
> Key: HBASE-14782
> URL: https://issues.apache.org/jira/browse/HBASE-14782
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Heng Chen
> Attachments: HBASE-14782.patch
>
>
> The issue may affect not only master branch, but previous releases as well.
> This is from one of our customers:
> {quote}
> We are experiencing a problem with the FuzzyRowFilter for HBase scans. We 
> think that it is a bug. 
> The fuzzy filter should pick a row if it matches the filter criteria, 
> irrespective of other rows present in the table, but the filter is dropping a 
> row depending on some other row present in the table. 
> Details/Steps to reproduce/Sample outputs below: 
> Missing row key: \x9C\x00\x044\x00\x00\x00\x00 
> Causing row key: \x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX 
> Prerequisites 
> 1. Create a test table. HBase shell command -- create 'fuzzytest','d' 
> 2. Insert some test data. HBase shell commands: 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x01\x00\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x00\x01\x00",'d:a','junk' 
> • put 'fuzzytest',"\x9C\x00\x044\x00\x01\x00\x01",'d:a','junk' 
> • put 'fuzzytest',"\x9B\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> • put 'fuzzytest',"\x9D\x00\x044e\xBB\xB2\xBB",'d:a','junk' 
> Now when you run the code, you will find \x9C\x00\x044\x00\x00\x00\x00 in the 
> output because it matches the filter criteria. (Refer to how to run the code 
> below.) 
> Insert the row key causing bug: 
> HBase shell command: put 
> 'fuzzytest',"\x9C\x00\x03\xE9e\xBB{X\x1Fwts\x1F\x15vRX",'d:a','junk' 
> Now when you run the code, you will not find \x9C\x00\x044\x00\x00\x00\x00 in 
> the output even though it still matches the filter criteria. 
> {quote}
> Verified the issue on master.
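
The customer's scan code is not included above; a plausible reconstruction, assuming the first four key bytes are fixed and the trailing four fuzzy (in the mask, 0 = byte must match, 1 = byte may vary):

{code}
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Pair;

byte[] fuzzyKey = {(byte) 0x9C, 0x00, 0x04, '4', 0x00, 0x00, 0x00, 0x00};
byte[] mask     = {0, 0, 0, 0, 1, 1, 1, 1}; // fixed prefix, fuzzy suffix
Scan scan = new Scan();
scan.setFilter(new FuzzyRowFilter(
    Arrays.asList(new Pair<byte[], byte[]>(fuzzyKey, mask))));
// With this filter, \x9C\x00\x044\x00\x00\x00\x00 should always come back,
// regardless of what other rows exist in the table.
{code}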



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Steven Hancz (JIRA)
Steven Hancz created HBASE-14809:


 Summary: Namespace permission granted to group 
 Key: HBASE-14809
 URL: https://issues.apache.org/jira/browse/HBASE-14809
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.2
Reporter: Steven Hancz


Hi, 

We are looking to roll out HBase and are in the process of designing the 
security model. 
We want to implement global DBAs and namespace-specific administrators: 
for example, the global DBA would create a namespace and grant a user/group 
admin privileges within that ns, so that the ns admin can in turn create 
objects and grant permissions within that ns only. 

We have run into some issues at the ns admin level. It appears that an ns admin 
can NOT grant to a group unless it also has the global admin privilege. But 
once it has the global admin privilege, it can grant in any NS, not just the 
one where it has admin privileges. 

Based on the HBase documentation at 
http://hbase.apache.org/book.html#appendix_acl_matrix 

Table 13. ACL Matrix 
Interface          Operation               Permissions 
AccessController   grant(global level)     global(A) 
                   grant(namespace level)  global(A)|NS(A) 

Grant at a namespace level should be possible for someone with global A OR (|) 
NS A permission. 
As you will see in our test, it does not work if the NS A permission is granted 
but the global A permission is not. 

Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 

hbase(main):011:0> scan 'hbase:acl' 
ROW COLUMN+CELL 
@finance column=l:sh82993, timestamp=105519510, value=RWXCA 
@gcbcppdn column=l:hdfs, timestamp=1446141119602, value=RWCXA 
@hbase column=l:hdfs, timestamp=1446141485136, value=RWCAX 
@ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 

However: 
Here you can see that a user who is a member of the group 
hbaseappltest_ns1admin cannot grant the RWX privilege to a group, as it is 
missing the global A privilege. 

$hbase shell 
15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available 
HBase Shell; enter 'help' for list of supported commands. 
Type "exit" to leave the HBase Shell 
Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 

hbase(main):001:0> whoami 
ns1ad...@wlab.net (auth:KERBEROS) 
groups: hbaseappltest_ns1admin 

hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 

ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions for user 'ns1admin' (global, action=ADMIN) 

The way I read the documentation, an NS admin should be able to grant, as it 
has the ns-level A privilege, not only object-level permission.

CDH is version 5.4.7 and HBase is version 1.0. 

Regards, 
Steven
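
Per the ACL matrix above, the namespace-level grant check should behave like the sketch below (hypothetical helper names, not the actual AccessController code); the behavior observed in the transcript instead requires global ADMIN unconditionally:

{code}
// global(A)|NS(A): either permission should be sufficient to grant
// within the namespace.
boolean canGrantOnNamespace(User user, String namespace) {
  return hasGlobalPermission(user, Permission.Action.ADMIN)                // global(A)
      || hasNamespacePermission(user, namespace, Permission.Action.ADMIN); // NS(A)
}
{code}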



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14811) HBaseInterClusterReplicationEndpoint retry logic is broken

2015-11-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14811:
--
Priority: Critical  (was: Major)

> HBaseInterClusterReplicationEndpoint retry logic is broken
> --
>
> Key: HBASE-14811
> URL: https://issues.apache.org/jira/browse/HBASE-14811
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
>
> In HBaseInterClusterReplicationEndpoint, we do something like this:
> {code}
> entryLists.remove(f.get());
> {code}
> where f.get() returns an ordinal number representing the index of the element 
> in entryLists that just succeeded replicating. We remove these entries because 
> we want to retry with the remaining elements in the list in case of a failure. 
> Since entryLists is an ArrayList, the subsequent elements are shifted left 
> when we remove an element. This breaks the intended functionality. The fix is 
> to reverse-sort the ordinals and then perform the deletion in one go.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004732#comment-15004732
 ] 

Esteban Gutierrez commented on HBASE-14808:
---

+1 [~ndimiduk] we should split the book.

FYI: the 400 KB is because the doc is sent compressed. The real book.html file 
is nearly 1.8 MB (1 MB of text, 800 KB of HTML tags).

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"

2015-11-13 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-14810:
---

 Summary: Update Hadoop support description to explain "not tested" 
vs "not supported"
 Key: HBASE-14810
 URL: https://issues.apache.org/jira/browse/HBASE-14810
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Sean Busbey
Priority: Critical


from [~ndimiduk] in thread about hadoop 2.6.1+:

{quote}
While we're in there, we should also clarify the meaning of "Not Supported"
vs "Not Tested". It seems we don't say what we mean by these distinctions.
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14811) HBaseInterClusterReplicationEndpoint retry logic is broken

2015-11-13 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004717#comment-15004717
 ] 

Elliott Clark commented on HBASE-14811:
---

This is critical at least.

> HBaseInterClusterReplicationEndpoint retry logic is broken
> --
>
> Key: HBASE-14811
> URL: https://issues.apache.org/jira/browse/HBASE-14811
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
>
> In HBaseInterClusterReplicationEndpoint, we do something like this:
> {code}
> entryLists.remove(f.get());
> {code}
> where f.get() returns an ordinal number representing the index of the element 
> in entryLists that just succeeded replicating. We remove these entries because 
> we want to retry with the remaining elements in the list in case of a failure. 
> Since entryLists is an ArrayList, the subsequent elements are shifted left 
> when we remove an element. This breaks the intended functionality. The fix is 
> to reverse-sort the ordinals and then perform the deletion in one go.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14811) HBaseInterClusterReplicationEndpoint retry logic is broken

2015-11-13 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004723#comment-15004723
 ] 

Matteo Bertozzi commented on HBASE-14811:
-

isn't this HBASE-14777?

> HBaseInterClusterReplicationEndpoint retry logic is broken
> --
>
> Key: HBASE-14811
> URL: https://issues.apache.org/jira/browse/HBASE-14811
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
>
> In HBaseInterClusterReplicationEndpoint, we do something like this:
> {code}
> entryLists.remove(f.get());
> {code}
> where f.get() returns an ordinal number representing the index of the element 
> in entryLists that just succeeded replicating. We remove these entries because 
> we want to retry with the remaining elements in the list in case of a failure. 
> Since entryLists is an ArrayList, the subsequent elements are shifted left 
> when we remove an element. This breaks the intended functionality. The fix is 
> to reverse-sort the ordinals and then perform the deletion in one go.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14777) Replication fails with IndexOutOfBoundsException

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14777:
---
Affects Version/s: 1.0.2
   1.1.2
   0.98.16

> Replication fails with IndexOutOfBoundsException
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0, 0.98.16
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra Kumar Jain
>Priority: Critical
> Attachments: HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> Its happening due to incorrect removal of entries from the replication 
> entries list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004825#comment-15004825
 ] 

Elliott Clark commented on HBASE-14802:
---

Change the JUnit assert to Java's too, please.

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, 
> inconsistencies can arise.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004667#comment-15004667
 ] 

Nick Dimiduk commented on HBASE-14808:
--

A multi-page version would help with this quite a bit.

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14771) RpcServer.getRemoteAddress always returns null.

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004673#comment-15004673
 ] 

Hadoop QA commented on HBASE-14771:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772116/HBASE-14771-V2.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772116

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16515//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16515//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16515//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16515//console

This message is automatically generated.

> RpcServer.getRemoteAddress always returns null.
> ---
>
> Key: HBASE-14771
> URL: https://issues.apache.org/jira/browse/HBASE-14771
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 1.2.0
>Reporter: Abhishek Kumar
>Assignee: Abhishek Kumar
>Priority: Minor
> Attachments: HBASE-14771-V1.patch, HBASE-14771-V2.patch, 
> HBASE-14771.patch
>
>
> RpcServer.getRemoteAddress always returns null, because the Call object is 
> initialized with null. This seems to happen because RpcServer.getRemoteIp() is 
> used in the Call object constructor before the RpcServer thread-local 
> 'CurCall' is set in the CallRunner.run method:
> {noformat}
> // --- RpcServer.java ---
> protected void processRequest(byte[] buf) throws IOException, 
> InterruptedException {
>  .
> // Call object getting initialized here with address 
> // obtained from RpcServer.getRemoteIp()
> Call call = new Call(id, this.service, md, header, param, cellScanner, this, 
> responder,
>   totalRequestSize, traceInfo, RpcServer.getRemoteIp());
>   scheduler.dispatch(new CallRunner(RpcServer.this, call));
>  }
> // getRemoteIp gets the address from the thread-local 'CurCall', which is 
> // set in CallRunner.run; calling it before that, as in the case above, 
> // returns null
> // --- CallRunner.java ---
> public void run() {
>   .   
>   Pair resultPair = null;
>   RpcServer.CurCall.set(call);
>   ..
> }
> // Using 'this.addr' in place of the getRemoteIp call in RpcServer.java 
> // seems to fix this issue
> Call call = new Call(id, this.service, md, header, param, cellScanner, this, 
> responder,
>   totalRequestSize, traceInfo, this.addr);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-14802:
--
Attachment: HBASE-14802-2.patch

Modified the patch to print an error when numProcessing goes below zero in some 
unforeseen circumstance, reset it to zero, and continue.
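
In outline, the guard might look like this (a sketch, not the patch itself):

{code}
synchronized void finishedRecovery(ServerName sn) {
  numProcessing--;
  if (numProcessing < 0) {
    LOG.error("numProcessing went negative after recovering " + sn
        + "; resetting to 0 and continuing");
    numProcessing = 0;
  }
}
{code}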

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, 
> inconsistencies can arise.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14242) TestAccessController#testMergeRegions is flaky in branch-1.0

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14242:
---
Fix Version/s: (was: 1.0.3)
   1.0.4

> TestAccessController#testMergeRegions is flaky in branch-1.0
> 
>
> Key: HBASE-14242
> URL: https://issues.apache.org/jira/browse/HBASE-14242
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 1.0.4
>
>
> Flaked tests: 
> org.apache.hadoop.hbase.security.access.TestAccessController.testMergeRegions(org.apache.hadoop.hbase.security.access.TestAccessController)
>   Run 1: 
> TestAccessController.testMergeRegions:687->SecureTestUtil.verifyAllowed:176->SecureTestUtil.verifyAllowed:168
>  Expected action to pass for user 'owner' but was denied
>   Run 2: PASS
> {noformat}
> java.lang.AssertionError: Expected action to pass for user 'owner' but was 
> denied
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.hbase.security.access.SecureTestUtil.verifyAllowed(SecureTestUtil.java:168)
>   at 
> org.apache.hadoop.hbase.security.access.SecureTestUtil.verifyAllowed(SecureTestUtil.java:176)
>   at 
> org.apache.hadoop.hbase.security.access.TestAccessController.testMergeRegions(TestAccessController.java:687)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004761#comment-15004761
 ] 

Appy commented on HBASE-14808:
--

[~ndimiduk] What would be the impact of a multi-page version on searchability? 
I like how easy it is to search for a keyword across the whole doc right now.

A question out of curiosity: where are the web servers hosted? Who has access 
to them? Are they shared resources?
It would be interesting to know the request rate. If it's about 1-3 requests 
per minute, such a high load time would mean something is very wrong. Maybe 
simple tuning would be sufficient.

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14800) Expose checkAndMutate via Thrift2

2015-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004779#comment-15004779
 ] 

Andrew Purtell commented on HBASE-14800:


lgtm

Tiny spelling nit: the test is named "testeCheckAndMutate"; it should be 
"testCheckAndMutate", I think.



> Expose checkAndMutate via Thrift2
> -
>
> Key: HBASE-14800
> URL: https://issues.apache.org/jira/browse/HBASE-14800
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-14800.001.patch
>
>
> Had a user ask why checkAndMutate wasn't exposed via Thrift2.
> I see no good reason (since checkAndPut and checkAndDelete are already 
> there), so let's add it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-14802:
--
Attachment: HBASE-14802-3.patch

[~mbertozzi] Modified the patch, thanks for the suggestion.

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, 
> inconsistencies can arise.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Cosmin Lehene (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004754#comment-15004754
 ] 

Cosmin Lehene commented on HBASE-14808:
---

This may require more work, but I like this: 
https://databricks.gitbooks.io/databricks-spark-reference-applications/content/


> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14808) HBase Book takes 15 seconds to render

2015-11-13 Thread Cosmin Lehene (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004764#comment-15004764
 ] 

Cosmin Lehene commented on HBASE-14808:
---

[~appy] it's not the download that takes too long, it's the actual rendering in 
the browser (see attached screenshots).

> HBase Book takes 15 seconds to render
> -
>
> Key: HBASE-14808
> URL: https://issues.apache.org/jira/browse/HBASE-14808
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0, 1.1.2, 0.98.16
>Reporter: Cosmin Lehene
> Attachments: screenshot-1.png, screenshot1.png
>
>
> I'm not sure if it's because of a bad layout or just because it's a single 
> page and too large; however, when I load the page to get to something, I need 
> to wait 15 seconds until I can do anything. 
> As the book should be one of the main resources for all users, the user 
> experience needs to be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14811) HBaseInterClusterReplicationEndpoint retry logic is broken

2015-11-13 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004785#comment-15004785
 ] 

Ashu Pachauri commented on HBASE-14811:
---

[~eclark] On second thought, I see that it is not just the retry logic that is 
broken: this will cause HBaseInterClusterReplicationEndpoint.replicate to fail 
every single time, because the IndexOutOfBoundsException is not handled there. 
The ReplicationSource keeps sending the same batch again and again, and 
replication is completely stuck. This might make it a blocker rather than just 
critical.

[~mbertozzi] Looks like it is. Thanks for pointing it out.

> HBaseInterClusterReplicationEndpoint retry logic is broken
> --
>
> Key: HBASE-14811
> URL: https://issues.apache.org/jira/browse/HBASE-14811
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.0.2, 1.2.0, 1.2.1, 0.98.16
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
>
> In HBaseInterClusterReplicationEndpoint, we do something like this:
> {code}
> entryLists.remove(f.get());
> {code}
> where f.get() returns an ordinal number representing the index of the element 
> in entryLists that just succeeded replicating. We remove these entries because 
> we want to retry with the remaining elements in the list in case of a failure. 
> Since entryLists is an ArrayList, the subsequent elements are shifted left 
> when we remove an element. This breaks the intended functionality. The fix is 
> to reverse-sort the ordinals and then perform the deletion in one go.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004834#comment-15004834
 ] 

Ashu Pachauri commented on HBASE-14802:
---

Done, thanks for the catch.

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, 
> inconsistencies can arise.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14800) Expose checkAndMutate via Thrift2

2015-11-13 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005068#comment-15005068
 ] 

Josh Elser commented on HBASE-14800:


Thanks for the review, [~apurtell]. The fingers have a mind of their own 
sometimes. Will post a new patch shortly.

> Expose checkAndMutate via Thrift2
> -
>
> Key: HBASE-14800
> URL: https://issues.apache.org/jira/browse/HBASE-14800
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-14800.001.patch
>
>
> Had a user ask why checkAndMutate wasn't exposed via Thrift2.
> I see no good reason (since checkAndPut and checkAndDelete are already 
> there), so let's add it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14800) Expose checkAndMutate via Thrift2

2015-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005075#comment-15005075
 ] 

Andrew Purtell commented on HBASE-14800:


Oh, don't worry about it; I'm more than happy to fix minor nits like that upon 
commit as long as you acknowledge the change. You have, so I will. No need for 
a new patch. No problem if you want to provide a new one, either.

> Expose checkAndMutate via Thrift2
> -
>
> Key: HBASE-14800
> URL: https://issues.apache.org/jira/browse/HBASE-14800
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-14800.001.patch
>
>
> Had a user ask why checkAndMutate wasn't exposed via Thrift2.
> I see no good reason (since checkAndPut and checkAndDelete are already 
> there), so let's add it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14813) REST documentation under package.html should go to the book

2015-11-13 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14813:
--
Issue Type: Improvement  (was: Bug)

> REST documentation under package.html should go to the book
> ---
>
> Key: HBASE-14813
> URL: https://issues.apache.org/jira/browse/HBASE-14813
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Enis Soztutar
>
> It seems that we have more up-to-date and better documentation under 
> {{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than 
> in the book. We should merge these two. The package.html is only accessible 
> if you know where to look. 
> [~misty] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14813) REST documentation under package.html should go to the book

2015-11-13 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-14813:
-

 Summary: REST documentation under package.html should go to the 
book
 Key: HBASE-14813
 URL: https://issues.apache.org/jira/browse/HBASE-14813
 Project: HBase
  Issue Type: Bug
  Components: documentation, REST
Reporter: Enis Soztutar


It seems that we have more up-to-date and better documentation under 
{{hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/package.html}} than in 
the book. We should merge these two. The package.html is only accessible if you 
know where to look. 

[~misty] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14812) Fix ResultBoundedCompletionService deadlock

2015-11-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14812:
--
Summary: Fix ResultBoundedCompletionService deadlock  (was: 
ResultBoundedCompletionService deadlock)

> Fix ResultBoundedCompletionService deadlock
> ---
>
> Key: HBASE-14812
> URL: https://issues.apache.org/jira/browse/HBASE-14812
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Thrift
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>
> {code}
> "thrift2-worker-0" #31 daemon prio=5 os_prio=0 tid=0x7f5ad9c45000 
> nid=0x484 in Object.wait() [0x7f5aa3832000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService.take(ResultBoundedCompletionService.java:148)
> - locked <0x0006b6ae7670> (a 
> [Lorg.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture;)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:188)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:212)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:186)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1276)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1182)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:194)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:171)
> - locked <0x0006b6ae79c0> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
> at 
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1033)
> at 
> org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler.putMultiple(ThriftHBaseServiceHandler.java:268)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler$THBaseServiceMetricsProxy.invoke(ThriftHBaseServiceHandler.java:114)
> at com.sun.proxy.$Proxy10.putMultiple(Unknown Source)
> at 
> org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1637)
> at 
> org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1621)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
> at org.apache.thrift.server.Invocation.run(Invocation.java:18)
> at 
> org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute

2015-11-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004934#comment-15004934
 ] 

Ted Yu commented on HBASE-14804:


{code}
272 def parse_htd_args(htd, arg)
{code}
The above method sounds general.
Should the handling of arg[DURABILITY], etc. be moved inside the method as well?

> HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
> 
>
> Key: HBASE-14804
> URL: https://issues.apache.org/jira/browse/HBASE-14804
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Romil Choksi
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-14804.v0-trunk.patch, HBASE-14804.v1-trunk.patch
>
>
> I am trying to create a new table with NORMALIZATION_ENABLED set to true, but 
> it seems the NORMALIZATION_ENABLED argument is being ignored. And the 
> NORMALIZATION_ENABLED attribute is not displayed when running a desc command 
> on that table:
> {code}
> hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 
> 'true'}
> An argument ignored (unknown or overridden): NORMALIZATION_ENABLED
> 0 row(s) in 4.2670 seconds
> => Hbase::Table - test-table-4
> hbase(main):021:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC
> KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0430 seconds
> {code}
> However, on doing an alter command on that table we can set the 
> NORMALIZATION_ENABLED attribute for that table
> {code}
> hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'}
> Unknown argument ignored: NORMALIZATION_ENABLED
> Updating all regions with the new schema...
> 1/1 regions updated.
> Done.
> 0 row(s) in 2.3640 seconds
> hbase(main):023:0> desc 'test-table-4'
> Table test-table-4 is ENABLED 
>   
> 
> test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'}  
>   
> 
> COLUMN FAMILIES DESCRIPTION   
>   
> 
> {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC
> KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
>   
> 
> 1 row(s) in 0.0190 seconds
> {code}
> I think it would be better to have a single-step process to enable 
> normalization while creating the table itself, rather than a two-step process 
> of altering the table later on to enable normalization.
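
A hedged sketch of the one-step equivalent through the Java API, assuming the 
{{HTableDescriptor.setNormalizationEnabled}} setter from the normalizer work is 
present in the version in use (names are illustrative):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class CreateNormalizedTable {
  static void create(Admin admin) throws IOException {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test-table-4"));
    htd.addFamily(new HColumnDescriptor("cf"));
    // What the shell's NORMALIZATION_ENABLED argument should map to.
    htd.setNormalizationEnabled(true);
    admin.createTable(htd);
  }
}
{code}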



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005106#comment-15005106
 ] 

Elliott Clark commented on HBASE-14802:
---

lgtm. Let's see what the QA bot thinks and get this in.

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, this 
> can cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> The ramification for the balancer is that it ceases to run after such a 
> failover.
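
Schematically, the fragile pattern looks like the following (field and method 
names are illustrative, not the actual DeadServer code):
{code}
// Illustrative only -- not the real implementation.
class DeadServersSketch {
  private int numProcessing = 0; // implicitly reset to 0 on master failover

  synchronized void add(String serverName)    { numProcessing++; } // server declared dead
  synchronized void finish(String serverName) { numProcessing--; } // ServerCrashProcedure done

  synchronized boolean areDeadServersInProgress() {
    // After a failover the counter restarts at 0, but the replayed procedure
    // still calls finish(), so the counter lands at -1, this stays true
    // forever, and the balancer never runs again.
    return numProcessing != 0;
  }
}
{code}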



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14802:
--
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
   Status: Patch Available  (was: Open)

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, this 
> can cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> The ramification for the balancer is that it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14800) Expose checkAndMutate via Thrift2

2015-11-13 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-14800:
---
Attachment: HBASE-14800.002.patch

.002 fixes the test method name. Still need to get the Thrift situation figured 
out.

> Expose checkAndMutate via Thrift2
> -
>
> Key: HBASE-14800
> URL: https://issues.apache.org/jira/browse/HBASE-14800
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-14800.001.patch, HBASE-14800.002.patch
>
>
> Had a user ask why checkAndMutate wasn't exposed via Thrift2.
> I see no good reason (since checkAndPut and checkAndDelete are already 
> there), so let's add it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14806) Missing sources.jar for several modules when building HBase

2015-11-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005116#comment-15005116
 ] 

Duo Zhang commented on HBASE-14806:
---

The META-INF directory in the generated sources.jar contains the 
following files. Is that enough? [~busbey]

{noformat}
" zip.vim version v27
" Browsing zipfile 
/home/zhangduo/hbase/code/hbase-common/target/hbase-common-2.0.0-SNAPSHOT-sources.jar
" Select a file with cursor and press ENTER

META-INF/
META-INF/MANIFEST.MF
META-INF/LICENSE
META-INF/NOTICE
META-INF/DEPENDENCIES

" zip.vim version v27
" Browsing zipfile 
/home/zhangduo/hbase/code/hbase-external-blockcache/target/hbase-external-blockcache-2.0.0-SNAPSHOT-sources.jar
" Select a file with cursor and press ENTER

META-INF/
META-INF/MANIFEST.MF
META-INF/LICENSE
META-INF/NOTICE
META-INF/DEPENDENCIES
{noformat}



> Missing sources.jar for several modules when building HBase
> ---
>
> Key: HBASE-14806
> URL: https://issues.apache.org/jira/browse/HBASE-14806
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-14806.patch
>
>
> Introduced by HBASE-14085. The problem is, for example, in 
> hbase-common/pom.xml, we have
> {code:title=pom.xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-source-plugin</artifactId>
>   <configuration>
>     <includePom>true</includePom>
>     <includes>
>       <include>src/main/java</include>
>       <include>${project.build.outputDirectory}/META-INF</include>
>     </includes>
>   </configuration>
> </plugin>
> {code}
> But in fact, the path inside the {{<include>}} tag is relative to the source 
> directories, not the project directory. So the maven-source-plugin always ends 
> with
> {noformat}
> No sources in project. Archive not created.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14803) Add some debug logs to StoreFileScanner

2015-11-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005150#comment-15005150
 ] 

Anoop Sam John commented on HBASE-14803:


Yes, like this.
Is the test failure related to the patch?

> Add some debug logs to StoreFileScanner
> ---
>
> Key: HBASE-14803
> URL: https://issues.apache.org/jira/browse/HBASE-14803
> Project: HBase
>  Issue Type: Bug
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.0
>
> Attachments: HBASE-14803.v0-trunk.patch, HBASE-14803.v1-trunk.patch, 
> HBASE-14803.v2-trunk.patch
>
>
> To validate some behaviors I had to add some logs to StoreFileScanner.
> I think they can be interesting for other people debugging, so I'm sharing 
> the modifications here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-11-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004932#comment-15004932
 ] 

Appy commented on HBASE-14769:
--

Ping. Ready for review.
[~stack] [~mbertozzi]

> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we can remove the functions not being used 
> anywhere.
> Also, the javadocs of the overridden functions are the same as the 
> corresponding ones in Admin.java. Since javadocs are automatically inherited 
> from the interface, we can remove these hundreds of redundant lines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14812) ResultBoundedCompletionService deadlock

2015-11-13 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14812:
-

 Summary: ResultBoundedCompletionService deadlock
 Key: HBASE-14812
 URL: https://issues.apache.org/jira/browse/HBASE-14812
 Project: HBase
  Issue Type: Bug
  Components: Client, Thrift
Reporter: Elliott Clark
Assignee: Elliott Clark


{code}
"thrift2-worker-0" #31 daemon prio=5 os_prio=0 tid=0x7f5ad9c45000 nid=0x484 
in Object.wait() [0x7f5aa3832000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService.take(ResultBoundedCompletionService.java:148)
- locked <0x0006b6ae7670> (a 
[Lorg.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture;)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:188)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:212)
at 
org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:186)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1276)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1182)
at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:194)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:171)
- locked <0x0006b6ae79c0> (a 
org.apache.hadoop.hbase.client.BufferedMutatorImpl)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1033)
at 
org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler.putMultiple(ThriftHBaseServiceHandler.java:268)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler$THBaseServiceMetricsProxy.invoke(ThriftHBaseServiceHandler.java:114)
at com.sun.proxy.$Proxy10.putMultiple(Unknown Source)
at 
org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1637)
at 
org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1621)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
at org.apache.thrift.server.Invocation.run(Invocation.java:18)
at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14812) Fix ResultBoundedCompletionService deadlock

2015-11-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14812:
--
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
Affects Version/s: 1.2.0
   Status: Patch Available  (was: Open)

> Fix ResultBoundedCompletionService deadlock
> ---
>
> Key: HBASE-14812
> URL: https://issues.apache.org/jira/browse/HBASE-14812
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Thrift
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14812.patch
>
>
> {code}
> "thrift2-worker-0" #31 daemon prio=5 os_prio=0 tid=0x7f5ad9c45000 
> nid=0x484 in Object.wait() [0x7f5aa3832000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService.take(ResultBoundedCompletionService.java:148)
> - locked <0x0006b6ae7670> (a 
> [Lorg.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture;)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:188)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:212)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:186)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1276)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1182)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:194)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:171)
> - locked <0x0006b6ae79c0> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
> at 
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1033)
> at 
> org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler.putMultiple(ThriftHBaseServiceHandler.java:268)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler$THBaseServiceMetricsProxy.invoke(ThriftHBaseServiceHandler.java:114)
> at com.sun.proxy.$Proxy10.putMultiple(Unknown Source)
> at 
> org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1637)
> at 
> org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1621)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
> at org.apache.thrift.server.Invocation.run(Invocation.java:18)
> at 
> org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14809:
---
Attachment: 14809-v1.txt

Something like this?

> Namespace permission granted to group 
> --
>
> Key: HBASE-14809
> URL: https://issues.apache.org/jira/browse/HBASE-14809
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
> Attachments: 14809-v1.txt
>
>
> Hi, 
> We are looking to roll out HBase and are in the process of designing the 
> security model. 
> We are looking to implement global DBAs and namespace-specific 
> administrators. 
> So, for example, the global DBA would create a namespace and grant a 
> user/group admin privileges within that ns, so that a given ns admin can in 
> turn create objects and grant permissions within the given ns only. 
> We have run into some issues at the ns admin level. It appears that a ns 
> admin can NOT grant to a group unless it also has the global admin privilege. 
> But once it has the global admin privilege, it can grant in any NS, not just 
> the one where it has admin privileges. 
> Based on the HBase documentation at 
> http://hbase.apache.org/book.html#appendix_acl_matrix 
> Table 13. ACL Matrix 
> Interface Operation   Permissions 
> AccessController grant(global level) global(A) 
> grant(namespace level) global(A)|NS(A) 
> grant at a namespace level should be possible for someone with global A OR 
> (|) NS A permission. 
> As you will see in our test it does not work if NS A permission is granted 
> but global A permission is not. 
> Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 
> hbase(main):011:0> scan 'hbase:acl' 
> ROW COLUMN+CELL 
> @ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 
> However: 
> Here you can see that a user who is a member of the group 
> hbaseappltest_ns1admin cannot grant an RWX privilege to a group, as it is 
> missing the global A privilege. 
> $hbase shell 
> 15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available 
> HBase Shell; enter 'help' for list of supported commands. 
> Type "exit" to leave the HBase Shell 
> Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 
> hbase(main):001:0> whoami 
> ns1ad...@wlab.net (auth:KERBEROS) 
> groups: hbaseappltest_ns1admin 
> hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 
> ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'ns1admin' (global, action=ADMIN) 
> The way I read the documentation, a NS admin should be able to grant, as it 
> has the namespace-level A privilege, not only object-level permission.
> CDH is version 5.4.7 and HBase is version 1.0. 
> Regards, 
> Steven
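
For reference, a hedged sketch of the same namespace-level grant through the 
Java API, assuming the Connection-based {{AccessControlClient}} overloads 
available in recent 1.x releases (the {{@}} prefix marks a group grantee, as in 
the shell); the shell grant above takes the same server-side path:
{code}
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;

public class NamespaceGrantSketch {
  // Grants R/W/X on namespace ns1 to the group hbaseappltest_ns1funct.
  static void grantToGroup(Connection conn) throws Throwable {
    AccessControlClient.grant(conn, "ns1", "@hbaseappltest_ns1funct",
        Permission.Action.READ, Permission.Action.WRITE, Permission.Action.EXEC);
  }
}
{code}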



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14791) [0.98] CopyTable is extremely slow when moving delete markers

2015-11-13 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004957#comment-15004957
 ] 

Alex Araujo commented on HBASE-14791:
-

Thanks for reviewing, [~lhofhansl]! Changing HTable to HTableInterface does not 
break compatibility for subclasses of TableOutputFormat because HTable is 
private. Subclassing HTable instead of using delegation would save a fair 
amount of boilerplate in BufferedHTable, and allow both Mutation types to share 
the same buffer (addresses the ordering issue).

I'll upload a v2 that subclasses HTable and preserves ordering.
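
A rough sketch of the subclassing shape being described (buffer handling here 
is simplified and illustrative; the actual v2 shares HTable's internal write 
buffer so puts and deletes stay ordered):
{code}
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Row;

public class BufferedHTable extends HTable {
  private final List<Row> deleteBuffer = new ArrayList<Row>();
  private long bufferedBytes = 0;

  public BufferedHTable(Configuration conf, TableName tableName) throws IOException {
    super(conf, tableName);
  }

  @Override
  public void delete(Delete delete) throws IOException {
    deleteBuffer.add(delete); // buffer instead of issuing one RPC per delete
    bufferedBytes += delete.heapSize();
    if (bufferedBytes >= getWriteBufferSize()) {
      flushDeletes();
    }
  }

  // A real version must also hook flushCommits()/close() so nothing is lost.
  void flushDeletes() throws IOException {
    if (deleteBuffer.isEmpty()) {
      return;
    }
    try {
      batch(deleteBuffer); // one round trip for the whole batch
    } catch (InterruptedException e) {
      throw (InterruptedIOException) new InterruptedIOException().initCause(e);
    }
    deleteBuffer.clear();
    bufferedBytes = 0;
  }
}
{code}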

> [0.98] CopyTable is extremely slow when moving delete markers
> -
>
> Key: HBASE-14791
> URL: https://issues.apache.org/jira/browse/HBASE-14791
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.16
>Reporter: Lars Hofhansl
>Assignee: Alex Araujo
> Attachments: HBASE-14791-0.98-v1.patch
>
>
> We found that some of our copy table jobs run for many hours, even when there 
> isn't that much data to copy.
> [~vik.karma] did his magic and found that the issue is with copying delete 
> markers (we use raw mode to also move deletes across).
> Looking at the code in 0.98 it's immediately obvious that deletes (unlike 
> puts) are not batched and hence sent to the other side one by one, causing a 
> network RTT for each delete marker.
> Looks like in trunk it's doing the right thing (using BufferedMutators for 
> all mutations in TableOutputFormat). So likely only a 0.98 (and 1.0, 1.1, 
> 1.2?) issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14597) Fix Groups cache in multi-threaded env

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14597:
---
Fix Version/s: 0.98.17

> Fix Groups cache in multi-threaded env
> --
>
> Key: HBASE-14597
> URL: https://issues.apache.org/jira/browse/HBASE-14597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14597-v1.patch, HBASE-14597-v2.patch, 
> HBASE-14597-v4.patch, HBASE-14597-v5.patch, HBASE-14597-v6.patch, 
> HBASE-14597.patch
>
>
> UGI doesn't hash based on the user as expected, so since we potentially 
> create lots of UGI instances, the cache doesn't do its job.
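
A minimal illustration of the failure mode, assuming the usual UGI semantics 
(UserGroupInformation hashes by the identity of its Subject, so two UGIs for 
the same principal are distinct keys); names here are made up:
{code}
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiGroupCacheSketch {
  // Broken: a Map<UserGroupInformation, String[]> never hits, because each
  // freshly created UGI for the same user is a brand-new key.

  // Better: key by the string user name so identical principals share an entry.
  private final ConcurrentHashMap<String, String[]> byName =
      new ConcurrentHashMap<String, String[]>();

  String[] groupsFor(UserGroupInformation ugi) {
    String[] groups = byName.get(ugi.getShortUserName());
    if (groups == null) {
      groups = ugi.getGroupNames();
      byName.put(ugi.getShortUserName(), groups);
    }
    return groups;
  }
}
{code}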



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14512) Cache UGI groups

2015-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14512:
---
Fix Version/s: 0.98.17

> Cache UGI groups
> 
>
> Key: HBASE-14512
> URL: https://issues.apache.org/jira/browse/HBASE-14512
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, security
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14512-v1.patch, HBASE-14512-v2.patch, 
> HBASE-14512-v3.patch, HBASE-14512-v4.patch, HBASE-14512.patch
>
>
> Right now every call gets a new User object.
> We should keep the same user for the life of a connection. We should also 
> cache the group names. However, we can't cache the groups forever, as that 
> would mean groups don't get refreshed every 5 minutes.
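
A minimal sketch of a group cache with the refresh constraint above, assuming 
Guava (already an HBase dependency) is acceptable:
{code}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.apache.hadoop.security.UserGroupInformation;

public class GroupCacheSketch {
  private final LoadingCache<String, String[]> groups = CacheBuilder.newBuilder()
      .expireAfterWrite(5, TimeUnit.MINUTES) // don't cache forever; pick up refreshed groups
      .build(new CacheLoader<String, String[]>() {
        @Override
        public String[] load(String user) {
          return UserGroupInformation.createRemoteUser(user).getGroupNames();
        }
      });

  public String[] groupsFor(String user) {
    return groups.getUnchecked(user);
  }
}
{code}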



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14812) Fix ResultBoundedCompletionService deadlock

2015-11-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14812:
--
Attachment: HBASE-14812.patch

Here's my first stab at this. I'm still a little unsure how this is getting 
stuck.

I'm seeing the deadlock on a thrift server. There are no meta region replicas.
Things weren't all that busy.
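
Speculatively (the root cause is still being narrowed down above): a bare 
{{Object.wait()}} with no timeout hangs forever if the completing task ever 
fails to notify, so one defensive shape is a bounded wait that rechecks. Names 
below are made up and this is not the actual client code:
{code}
class BoundedTakeSketch<T> {
  private final Object lock = new Object();
  private T completed;

  void complete(T result) {
    synchronized (lock) {
      completed = result;
      lock.notifyAll(); // wake any taker
    }
  }

  T take(long deadlineMs) throws InterruptedException {
    long end = System.currentTimeMillis() + deadlineMs;
    synchronized (lock) {
      while (completed == null) {
        long remaining = end - System.currentTimeMillis();
        if (remaining <= 0) {
          return null; // give up and let the caller retry or fail
        }
        lock.wait(remaining); // bounded, unlike a bare wait()
      }
      return completed;
    }
  }
}
{code}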

> Fix ResultBoundedCompletionService deadlock
> ---
>
> Key: HBASE-14812
> URL: https://issues.apache.org/jira/browse/HBASE-14812
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Thrift
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14812.patch
>
>
> {code}
> "thrift2-worker-0" #31 daemon prio=5 os_prio=0 tid=0x7f5ad9c45000 
> nid=0x484 in Object.wait() [0x7f5aa3832000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService.take(ResultBoundedCompletionService.java:148)
> - locked <0x0006b6ae7670> (a 
> [Lorg.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture;)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:188)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:212)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:186)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1276)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1182)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:194)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:171)
> - locked <0x0006b6ae79c0> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
> at 
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1033)
> at 
> org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler.putMultiple(ThriftHBaseServiceHandler.java:268)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler$THBaseServiceMetricsProxy.invoke(ThriftHBaseServiceHandler.java:114)
> at com.sun.proxy.$Proxy10.putMultiple(Unknown Source)
> at 
> org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1637)
> at 
> org.apache.hadoop.hbase.thrift2.generated.THBaseService$Processor$putMultiple.getResult(THBaseService.java:1621)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
> at org.apache.thrift.server.Invocation.run(Invocation.java:18)
> at 
> org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14812) Fix ResultBoundedCompletionService deadlock

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005167#comment-15005167
 ] 

Hadoop QA commented on HBASE-14812:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772342/HBASE-14812.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772342

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16517//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16517//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16517//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16517//console

This message is automatically generated.

> Fix ResultBoundedCompletionService deadlock
> ---
>
> Key: HBASE-14812
> URL: https://issues.apache.org/jira/browse/HBASE-14812
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Thrift
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14812.patch
>
>
> {code}
> "thrift2-worker-0" #31 daemon prio=5 os_prio=0 tid=0x7f5ad9c45000 
> nid=0x484 in Object.wait() [0x7f5aa3832000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService.take(ResultBoundedCompletionService.java:148)
> - locked <0x0006b6ae7670> (a 
> [Lorg.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture;)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:188)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:212)
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:186)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1276)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1182)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:194)
> at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:171)
> - locked <0x0006b6ae79c0> (a 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl)
> at 
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
> at 

[jira] [Updated] (HBASE-14807) TestWALLockup is flakey

2015-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14807:
--
   Resolution: Fixed
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.2. Let's see how it does.

> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> testLockupWhenSyncInMiddleOfZigZagSetup(org.apache.hadoop.hbase.regionserver.TestWALLockup)
>   Time elapsed: 0.049 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out 

[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005219#comment-15005219
 ] 

Hudson commented on HBASE-14802:


SUCCESS: Integrated in HBase-1.3-IT #309 (See 
[https://builds.apache.org/job/HBase-1.3-IT/309/])
HBASE-14802 Replaying server crash recovery procedure after a failover (stack: 
rev bb9fbdb2d2967a001e3c3f2613b82c85c5125199)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java


> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, this 
> can cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> The ramification for the balancer is that it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14777) Replication fails with IndexOutOfBoundsException

2015-11-13 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005173#comment-15005173
 ] 

Ashish Singhi commented on HBASE-14777:
---

[~apurtell], HBASE-12988 was committed only to 1.2+, so this jira will not 
affect versions before it.

> Replication fails with IndexOutOfBoundsException
> 
>
> Key: HBASE-14777
> URL: https://issues.apache.org/jira/browse/HBASE-14777
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0, 0.98.16
>Reporter: Bhupendra Kumar Jain
>Assignee: Bhupendra Kumar Jain
>Priority: Critical
> Attachments: HBASE-14777.patch
>
>
> Replication fails with IndexOutOfBoundsException 
> {code}
> regionserver.ReplicationSource$ReplicationSourceWorkerThread(939): 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
>  threw unknown exception:java.lang.IndexOutOfBoundsException: Index: 1, Size: 
> 1
>   at java.util.ArrayList.rangeCheck(Unknown Source)
>   at java.util.ArrayList.remove(Unknown Source)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:222)
> {code}
> It's happening due to the incorrect removal of entries from the replication 
> entries list. 
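
The bug class is easy to reproduce in isolation; a minimal illustration (not 
the replication code itself) of why removing by index while walking forward 
throws exactly this exception, and the iterator-based fix:
{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIterating {
  public static void main(String[] args) {
    List<String> entries = new ArrayList<String>();
    entries.add("e1");
    entries.add("e2");

    // Broken: remove(0) shifts "e2" to index 0 and shrinks the list to size 1,
    // so the next iteration's remove(1) throws
    // java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
    // for (int i = 0; i < 2; i++) { entries.remove(i); }

    // Safe: structural removal through the iterator.
    for (Iterator<String> it = entries.iterator(); it.hasNext();) {
      it.next();
      it.remove();
    }
    System.out.println(entries.isEmpty()); // true
  }
}
{code}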



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14802:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Nice, [~ashu210890]. Thanks.

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, this 
> can cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> The ramification for the balancer is that it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14809:
---
Status: Patch Available  (was: Open)

> Namespace permission granted to group 
> --
>
> Key: HBASE-14809
> URL: https://issues.apache.org/jira/browse/HBASE-14809
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
> Attachments: 14809-v1.txt, 14809-v2.txt
>
>
> Hi, 
> We are looking to roll out HBase and are in the process of designing the 
> security model. 
> We are looking to implement global DBAs and namespace-specific 
> administrators. 
> So, for example, the global DBA would create a namespace and grant a 
> user/group admin privileges within that ns, so that a given ns admin can in 
> turn create objects and grant permissions within the given ns only. 
> We have run into some issues at the ns admin level. It appears that a ns 
> admin can NOT grant to a group unless it also has the global admin privilege. 
> But once it has the global admin privilege, it can grant in any NS, not just 
> the one where it has admin privileges. 
> Based on the HBase documentation at 
> http://hbase.apache.org/book.html#appendix_acl_matrix 
> Table 13. ACL Matrix 
> Interface Operation   Permissions 
> AccessController grant(global level) global(A) 
> grant(namespace level) global(A)|NS(A) 
> grant at a namespace level should be possible for someone with global A OR 
> (|) NS A permission. 
> As you will see in our test it does not work if NS A permission is granted 
> but global A permission is not. 
> Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 
> hbase(main):011:0> scan 'hbase:acl' 
> ROW COLUMN+CELL 
> @ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 
> However: 
> Here you can see that a user who is a member of the group 
> hbaseappltest_ns1admin cannot grant an RWX privilege to a group, as it is 
> missing the global A privilege. 
> $hbase shell 
> 15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available 
> HBase Shell; enter 'help' for list of supported commands. 
> Type "exit" to leave the HBase Shell 
> Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 
> hbase(main):001:0> whoami 
> ns1ad...@wlab.net (auth:KERBEROS) 
> groups: hbaseappltest_ns1admin 
> hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 
> ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'ns1admin' (global, action=ADMIN) 
> The way I read the documentation, a NS admin should be able to grant, as it 
> has the namespace-level A privilege, not only object-level permission.
> CDH is version 5.4.7 and HBase is version 1.0. 
> Regards, 
> Steven



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14809:
---
Attachment: 14809-v2.txt

> Namespace permission granted to group 
> --
>
> Key: HBASE-14809
> URL: https://issues.apache.org/jira/browse/HBASE-14809
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
> Attachments: 14809-v1.txt, 14809-v2.txt
>
>
> Hi, 
> We are looking to roll out HBase and are in the process of designing the 
> security model. 
> We are looking to implement global DBAs and namespace-specific 
> administrators. 
> So, for example, the global DBA would create a namespace and grant a 
> user/group admin privileges within that ns, so that a given ns admin can in 
> turn create objects and grant permissions within the given ns only. 
> We have run into some issues at the ns admin level. It appears that a ns 
> admin can NOT grant to a group unless it also has the global admin privilege. 
> But once it has the global admin privilege, it can grant in any NS, not just 
> the one where it has admin privileges. 
> Based on the HBase documentation at 
> http://hbase.apache.org/book.html#appendix_acl_matrix 
> Table 13. ACL Matrix 
> Interface Operation   Permissions 
> AccessController grant(global level) global(A) 
> grant(namespace level) global(A)|NS(A) 
> grant at a namespace level should be possible for someone with global A OR 
> (|) NS A permission. 
> As you will see in our test it does not work if NS A permission is granted 
> but global A permission is not. 
> Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 
> hbase(main):011:0> scan 'hbase:acl' 
> ROW COLUMN+CELL 
> @ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 
> However: 
> Here you can see that a user who is a member of the group 
> hbaseappltest_ns1admin cannot grant an RWX privilege to a group, as it is 
> missing the global A privilege. 
> $hbase shell 
> 15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available 
> HBase Shell; enter 'help' for list of supported commands. 
> Type "exit" to leave the HBase Shell 
> Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 
> hbase(main):001:0> whoami 
> ns1ad...@wlab.net (auth:KERBEROS) 
> groups: hbaseappltest_ns1admin 
> hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 
> ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'ns1admin' (global, action=ADMIN) 
> The way I read the documentation, a NS admin should be able to grant, as it 
> has the namespace-level A privilege, not only object-level permission.
> CDH is version 5.4.7 and HBase is version 1.0. 
> Regards, 
> Steven



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005188#comment-15005188
 ] 

stack commented on HBASE-14802:
---

Pushed to branch-1.2+

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, this 
> can cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> The ramification for the balancer is that it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-11-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005210#comment-15005210
 ] 

stack commented on HBASE-14355:
---

Pushed to branch-1 and branch-1.2. Oddly, I had to run protoc for 1.2. 
[~apurtell] Want to pull this into 0.98 for the lads?

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v2.patch, HBASE-14355-v3.patch, 
> HBASE-14355-v4.patch, HBASE-14355-v5.patch, HBASE-14355-v6.patch, 
> HBASE-14355-v7.patch, HBASE-14355-v8.patch, HBASE-14355-v9.patch, 
> HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only a table-level time range. We have 
> specific use cases that will benefit from a per-column-family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 
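
For illustration, a minimal sketch of the first proposal (per-family ranges with fallback to the table-level range); the names here are stand-ins, not the final API:

{code:title=PerFamilyTimeRange.java}
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: per-column-family time ranges with fallback to the
// table-level range. TimeRange is reduced to a [min, max) pair here.
public class PerFamilyTimeRange {
  static final class TimeRange {
    final long min, max;
    TimeRange(long min, long max) { this.min = min; this.max = max; }
  }

  // byte[] keys need an explicit comparator (HBase would use Bytes.BYTES_COMPARATOR).
  private static final Comparator<byte[]> LEX = (a, b) -> {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  };

  private final TimeRange tableLevel;
  private final Map<byte[], TimeRange> perFamily = new TreeMap<>(LEX);

  public PerFamilyTimeRange(long tableMin, long tableMax) {
    this.tableLevel = new TimeRange(tableMin, tableMax);
  }

  // Mirrors the proposed Scan.setTimeRange(byte[] family, long min, long max).
  public void setTimeRange(byte[] family, long min, long max) {
    perFamily.put(family, new TimeRange(min, max));
  }

  // At scan time: use the family's range if one was set, else the table's.
  public TimeRange timeRangeFor(byte[] family) {
    TimeRange tr = perFamily.get(family);
    return tr != null ? tr : tableLevel;
  }
}
{code}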



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005195#comment-15005195
 ] 

Elliott Clark commented on HBASE-14802:
---

Thanks [~stack]

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, it can 
> cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14809) Namespace permission granted to group

2015-11-13 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005199#comment-15005199
 ] 

Ashish Singhi commented on HBASE-14809:
---

1. Can we add the test in {{TestNamespaceCommands}}? We already have some 
grant and revoke operation tests there on namespaces, and we will be able to 
assert on global admin also.
2. Also suggest verifying that users who should be denied this action actually 
are.
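
For instance, roughly along these lines (a sketch only; it assumes {{SecureTestUtil}}-style helpers such as {{verifyAllowed}}/{{verifyDenied}} and users already defined in the suite):

{code}
// Sketch for TestNamespaceCommands; adjust to the helpers the suite actually uses.
@Test
public void testGrantOnNamespaceWithNamespaceAdminOnly() throws Exception {
  AccessTestAction grantAction = new AccessTestAction() {
    @Override
    public Object run() throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(UTIL.getConfiguration())) {
        // Grant to a group at namespace scope, as in the report.
        AccessControlClient.grant(conn, TEST_NAMESPACE, "@testgroup",
            Permission.Action.READ, Permission.Action.WRITE);
      }
      return null;
    }
  };
  // Per the ACL matrix, global(A) or NS(A) should each be sufficient...
  verifyAllowed(grantAction, SUPERUSER, USER_NS_ADMIN);
  // ...and users with neither should be denied.
  verifyDenied(grantAction, USER_NONE);
}
{code}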

> Namespace permission granted to group 
> --
>
> Key: HBASE-14809
> URL: https://issues.apache.org/jira/browse/HBASE-14809
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.2
>Reporter: Steven Hancz
> Attachments: 14809-v1.txt, 14809-v2.txt
>
>
> Hi, 
> We are looking to roll out HBase and are in the process to design the 
> security model. 
> We are looking to implement global DBAs and Namespace specific 
> administrators. 
> So for example the global dba would create a namespace and grant a user/group 
> admin privileges within that ns. 
> So that a given ns admin can in turn create objects and grant permission 
> within the given ns only. 
> We have run into some issues at the ns admin level. It appears that a ns 
> admin can NOT grant to a group unless it also has global admin privilege. But 
> once it has global admin privilege it can grant in any NS not just the one 
> where it has admin privileges. 
> Based on the HBase documentation at 
> http://hbase.apache.org/book.html#appendix_acl_matrix 
> Table 13. ACL Matrix 
> Interface Operation   Permissions 
> AccessController grant(global level) global(A) 
> grant(namespace level) global(A)|NS(A) 
> grant at a namespace level should be possible for someone with global A OR 
> (|) NS A permission. 
> As you will see in our test it does not work if NS A permission is granted 
> but global A permission is not. 
> Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1. 
> hbase(main):011:0> scan 'hbase:acl' 
> ROW COLUMN+CELL 
> @ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA 
> However: 
> Here you can see that a user who is a member of the group 
> hbaseappltest_ns1admin cannot grant an RWX privilege to a group as it is 
> missing global A privilege. 
> $hbase shell 
> 15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available 
> HBase Shell; enter 'help' for list of supported commands. 
> Type "exit" to leave the HBase Shell 
> Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015 
> hbase(main):001:0> whoami 
> ns1ad...@wlab.net (auth:KERBEROS) 
> groups: hbaseappltest_ns1admin 
> hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1' 
> ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'ns1admin' (global, action=ADMIN) 
> The way I read the documentation, a NS admin should be able to grant as it has 
> ns-level A privilege, not only object-level permission.
> CDH is version 5.4.7 and HBase is version 1.0. 
> Regards, 
> Steven



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005218#comment-15005218
 ] 

Hudson commented on HBASE-14807:


SUCCESS: Integrated in HBase-1.3-IT #309 (See 
[https://builds.apache.org/job/HBase-1.3-IT/309/])
HBASE-14807 TestWALLockup is flakey (stack: rev 
72fbfb589ede3786de1d635476bb24f51b1548da)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java


> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> 

[jira] [Commented] (HBASE-14800) Expose checkAndMutate via Thrift2

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005171#comment-15005171
 ] 

Hadoop QA commented on HBASE-14800:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772341/HBASE-14800.002.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772341

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1729 checkstyle errors (more than the master's current 1727 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+lastComparison = 
Boolean.valueOf(isSetBloomFilterType()).compareTo(other.isSetBloomFilterType());
+lastComparison = 
Boolean.valueOf(isSetBloomFilterVectorSize()).compareTo(other.isSetBloomFilterVectorSize());
+lastComparison = 
Boolean.valueOf(isSetBloomFilterNbHashes()).compareTo(other.isSetBloomFilterNbHashes());
+lastComparison = 
Boolean.valueOf(isSetBlockCacheEnabled()).compareTo(other.isSetBlockCacheEnabled());
+  public AsyncMethodCallback getResultHandler(final AsyncFrameBuffer 
fb, final int seqid) {
+  public AsyncMethodCallback getResultHandler(final AsyncFrameBuffer 
fb, final int seqid) {
+  public AsyncMethodCallback getResultHandler(final 
AsyncFrameBuffer fb, final int seqid) {
+  public AsyncMethodCallback getResultHandler(final AsyncFrameBuffer 
fb, final int seqid) {
+  public AsyncMethodCallback getResultHandler(final AsyncFrameBuffer 
fb, final int seqid) {
+  public AsyncMethodCallback getResultHandler(final 
AsyncFrameBuffer fb, final int seqid) {

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16516//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16516//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16516//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16516//console

This message is automatically generated.

> Expose checkAndMutate via Thrift2
> -
>
> Key: HBASE-14800
> URL: https://issues.apache.org/jira/browse/HBASE-14800
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-14800.001.patch, HBASE-14800.002.patch
>
>
> Had a user ask why checkAndMutate wasn't exposed via Thrift2.
> I see no good reason (since checkAndPut and checkAndDelete are already 
> there), so let's add it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005169#comment-15005169
 ] 

Hadoop QA commented on HBASE-14802:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772317/HBASE-14802-3.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772317

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16518//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16518//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16518//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16518//console

This message is automatically generated.

> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, it can 
> cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-11-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005202#comment-15005202
 ] 

stack commented on HBASE-14769:
---

So you keep this because it was not deprecated though it should have been? 
[~appy]

  public Pair<Integer, Integer> getAlterStatus(final byte[] tableName) throws IOException {

Want to add @deprecated as part of this patch, or do you want to do that in a 
new issue?
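
If so, a sketch of the shape it could take (deprecation text illustrative, not from the patch):

{code}
/**
 * @deprecated As of release 2.0.0. Use {@link #getAlterStatus(TableName)} instead.
 */
@Deprecated
public Pair<Integer, Integer> getAlterStatus(final byte[] tableName) throws IOException {
  return getAlterStatus(TableName.valueOf(tableName));
}
{code}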

What is parent doc in below?

  // See parent doc for deprecation timeline.

Ok to remove this one?

  public void snapshot(final String snapshotName, final String tableName)
      throws IOException, SnapshotCreationException, IllegalArgumentException {
    snapshot(snapshotName, TableName.valueOf(tableName), SnapshotDescription.Type.FLUSH);
  }

and a few of the other snapshot methods being removed?

This is a great cleanup patch. Let's get it in [~appy]



> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so removing the functions not being used 
> anywhere.
> Also, the javadocs of overridden functions are the same as the corresponding 
> ones in Admin.java. Since javadocs are automatically inherited from the 
> interface class, we can remove these hundreds of redundant lines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14798) NPE reporting server load causes regionserver abort; causes TestAcidGuarantee to fail

2015-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14798:
--
Attachment: 14798.patch

Retry.

I think the failed test is flakey. Will look at it next.

> NPE reporting server load causes regionserver abort; causes TestAcidGuarantee 
> to fail
> -
>
> Key: HBASE-14798
> URL: https://issues.apache.org/jira/browse/HBASE-14798
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Attachments: 14798.patch, 14798.patch
>
>
> Below crashed out a RS. Caused TestAcidGuarantees to fail because then there 
> were no RS to assign to... 
> {code}
> 2015-11-11 11:36:23,092 ERROR 
> [B.defaultRpcServer.handler=4,queue=0,port=58655] 
> master.MasterRpcServices(388): Region server 
> asf907.gq1.ygridcore.net,55184,1447241756717 reported a fatal error:
> ABORTING region server asf907.gq1.ygridcore.net,55184,1447241756717: 
> Unhandled: null
> Cause:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getOldestHfileTs(HRegion.java:1643)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionLoad(HRegionServer.java:1503)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1210)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:969)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Here is the failure: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Trunk_matrix/457/jdk=latest1.8,label=Hadoop/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt
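
Not the actual patch, but given the trace the defensive shape in {{HRegion#getOldestHfileTs}} would be roughly this ({{fileCreateTime}} is a hypothetical stand-in for reading the HFile creation timestamp):

{code}
// Sketch only: a store that is closing concurrently can hand back a null file
// collection (or a file whose reader is already closed), so skip instead of NPE-ing.
long oldest = Long.MAX_VALUE;
for (Store store : getStores()) {
  Collection<StoreFile> storeFiles = store.getStorefiles();
  if (storeFiles == null) {
    continue; // store closed underneath the load reporter
  }
  for (StoreFile file : storeFiles) {
    if (file.getReader() == null) {
      continue; // file already closed
    }
    oldest = Math.min(oldest, fileCreateTime(file)); // hypothetical helper
  }
}
{code}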



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-11-13 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005223#comment-15005223
 ] 

Ashish Singhi commented on HBASE-14769:
---

[~appy], nice cleanup.

One question: {{HBaseAdmin}} is marked for private audience while 
{{HBaseTestingUtility}} is marked as public, and 
{{HBaseTestingUtility#getHBaseAdmin}} returns a {{HBaseAdmin}} object. We have 
now removed most (if not all) of the public APIs in the {{HBaseAdmin}} class 
that take a table name as byte[] or String; some were never marked as 
deprecated, and some were marked as deprecated only in the 2.0 version. Will 
this not break users of the {{HBaseTestingUtility}} class moving from 1.x to 
2.0?

> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so removing the functions not being used 
> anywhere.
> Also, the javadocs of overridden functions are the same as the corresponding 
> ones in Admin.java. Since javadocs are automatically inherited from the 
> interface class, we can remove these hundreds of redundant lines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14807) TestWALLockup is flakey

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005234#comment-15005234
 ] 

Hudson commented on HBASE-14807:


SUCCESS: Integrated in HBase-1.2-IT #281 (See 
[https://builds.apache.org/job/HBase-1.2-IT/281/])
HBASE-14807 TestWALLockup is flakey (stack: rev 
4e6e93f26b417073fb7fb2d471ef2d8baf5422d9)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java


> TestWALLockup is flakey
> ---
>
> Key: HBASE-14807
> URL: https://issues.apache.org/jira/browse/HBASE-14807
> Project: HBase
>  Issue Type: Bug
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14807.patch
>
>
> Fails frequently. 
> Looks like this:
> {code}
> 2015-11-12 10:38:51,812 DEBUG [Time-limited test] regionserver.HRegion(3882): 
> Found 0 recovered edits file(s) under 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d
> 2015-11-12 10:38:51,821 DEBUG [Time-limited test] 
> regionserver.FlushLargeStoresPolicy(56): 
> hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use 
> global config(16777216) instead
> 2015-11-12 10:38:51,880 DEBUG [Time-limited test] wal.WALSplitter(729): Wrote 
> region 
> seqId=/home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad/data/default/testLockupWhenSyncInMiddleOfZigZagSetup/c8694b53368f3301a8d370089120388d/recovered.edits/2.seqid
>  to file, newSeqId=2, maxSeqId=0
> 2015-11-12 10:38:51,881 INFO  [Time-limited test] regionserver.HRegion(868): 
> Onlined c8694b53368f3301a8d370089120388d; next sequenceid=2
> 2015-11-12 10:38:51,994 ERROR [sync.1] wal.FSHLog$SyncRunner(1226): Error 
> syncing, request close of WAL
> java.io.IOException: FAKE! Failed to replace a bad datanode...SYNC
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup$1DodgyFSLog$1.sync(TestWALLockup.java:162)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1222)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-11-12 10:38:51,997 DEBUG [Thread-4] regionserver.LogRoller(139): WAL 
> roll requested
> 2015-11-12 10:38:52,019 DEBUG [flusher] 
> regionserver.FlushLargeStoresPolicy(100): Since none of the CFs were above 
> the size, flushing all.
> 2015-11-12 10:38:52,192 INFO  [Thread-4] 
> regionserver.TestWALLockup$1DodgyFSLog(129): LATCHED
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:146)
>   at 
> org.apache.hadoop.hbase.regionserver.TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup(TestWALLockup.java:245)
> 2015-11-12 10:39:18,609 INFO  [main] regionserver.TestWALLockup(91): Cleaning 
> test directory: 
> /home/jenkins/jenkins-slave/workspace/HBase-1.2/jdk/latest1.7/label/Hadoop/hbase-server/target/test-data/8b8f8f12-1819-47e3-b1f1-8ffa789438ad
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> ... then times out after being locked up for 30 seconds.  Writes 50+MB of 
> logs while spinning.
> Reported as this:
> {code}
> ---
> Test set: org.apache.hadoop.hbase.regionserver.TestWALLockup
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 198.23 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestWALLockup
> 

[jira] [Commented] (HBASE-14802) Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005236#comment-15005236
 ] 

Hudson commented on HBASE-14802:


SUCCESS: Integrated in HBase-1.2-IT #281 (See 
[https://builds.apache.org/job/HBase-1.2-IT/281/])
HBASE-14802 Replaying server crash recovery procedure after a failover (stack: 
rev b22feba7fe2b176ef2578264f8b95947df592bba)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java


> Replaying server crash recovery procedure after a failover causes incorrect 
> handling of deadservers
> ---
>
> Key: HBASE-14802
> URL: https://issues.apache.org/jira/browse/HBASE-14802
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.2.1
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14802-1.patch, HBASE-14802-2.patch, 
> HBASE-14802-3.patch, HBASE-14802.patch
>
>
> The way dead servers are processed is that a ServerCrashProcedure is launched 
> for a server after it is added to the dead servers list. 
> Every time a server is added to the dead list, a counter "numProcessing" is 
> incremented and it is decremented when a crash recovery procedure finishes. 
> Since adding a dead server and recovering it are two separate events, it can 
> cause inconsistencies.
> If a master failover occurs in the middle of the crash recovery, the 
> numProcessing counter resets but the ServerCrashProcedure is replayed by the 
> new master. This causes the counter to go negative and makes the master think 
> that dead servers are still in the process of recovery. 
> This has ramifications for the balancer: it ceases to run after such a 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14355) Scan different TimeRange for each column family

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005235#comment-15005235
 ] 

Hudson commented on HBASE-14355:


SUCCESS: Integrated in HBase-1.2-IT #281 (See 
[https://builds.apache.org/job/HBase-1.2-IT/281/])
HBASE-14355 Scan different TimeRange for each column family (stack: rev 
76187d116110c0d67d3a9b73c9e5c25cdfea5ea6)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/NonLazyKeyValueScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Get.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Query.java
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
* hbase-protocol/src/main/protobuf/HBase.proto
* hbase-protocol/src/main/protobuf/Client.proto


> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-v1.patch, HBASE-14355-v10.patch, 
> HBASE-14355-v11.patch, HBASE-14355-v2.patch, HBASE-14355-v3.patch, 
> HBASE-14355-v4.patch, HBASE-14355-v5.patch, HBASE-14355-v6.patch, 
> HBASE-14355-v7.patch, HBASE-14355-v8.patch, HBASE-14355-v9.patch, 
> HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map<byte[], TimeRange>.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14575) Reduce scope of compactions holding region lock

2015-11-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003838#comment-15003838
 ] 

Ted Yu commented on HBASE-14575:


In HRegion#doClose():
{code}
  writestate.writesEnabled = false;
  LOG.debug("Closing " + this + ": disabling compactions & flushes");
{code}

At the beginning of compact():
{code}
synchronized (writestate) {
  if (writestate.writesEnabled) {
wasStateSet = true;
++writestate.compacting;
  } else {
String msg = "NOT compacting region " + this + ". Writes disabled.";
LOG.info(msg);
status.abort(msg);
return false;
  }
{code}
I think the above already achieves [~ram_krish]'s suggestion.

> Reduce scope of compactions holding region lock
> ---
>
> Key: HBASE-14575
> URL: https://issues.apache.org/jira/browse/HBASE-14575
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 14575-v1.patch, 14575-v2.patch, 14575-v3.patch, 
> 14575-v4.patch, 14575-v5.patch, 14575.v00.patch
>
>
> Per [~devaraj]'s idea on parent issue, let's see if we can reduce the scope 
> of critical section under which compactions hold the region read lock.
> Here is summary from parent issue:
> Another idea is we can reduce the scope of when the read lock is held during 
> compaction. In theory the compactor only needs a region read lock while 
> deciding what files to compact and at the time of committing the compaction. 
> We're protected from the case of region close events because compactions are 
> checking (every 10k bytes written) if the store has been closed in order to 
> abort in such a case.
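
As a sketch of that narrowed scope (method names are loose stand-ins for the HBase internals, not the actual patch):

{code}
// Illustrative only: hold the region read lock for selection and commit,
// but not for the long file rewrite in between.
lock.readLock().lock();
CompactionContext compaction;
try {
  compaction = store.requestCompaction();   // 1. decide what files to compact
} finally {
  lock.readLock().unlock();
}

// 2. Rewrite the files with no region lock held; the compactor already checks
//    (every 10k bytes written) whether the store has been closed and aborts then.
List<Path> newFiles = compaction.compact();

lock.readLock().lock();
try {
  commitCompaction(store, compaction, newFiles); // 3. swap in results (stand-in helper)
} finally {
  lock.readLock().unlock();
}
{code}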



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14806) Missing sources.jar for several modules when building HBase

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003834#comment-15003834
 ] 

Hadoop QA commented on HBASE-14806:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12772162/HBASE-14806.patch
  against master branch at commit 789f8a5a70242c16ce10bc95401c51c7d04debfa.
  ATTACHMENT ID: 12772162

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.4.0.

Compilation errors resume:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-source-plugin:2.2.1:jar (default) on project 
hbase-server: Execution default of goal 
org.apache.maven.plugins:maven-source-plugin:2.2.1:jar failed: 
java.lang.reflect.InvocationTargetException: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase/hbase-server/target/MiniMRCluster_64588581/MiniMRCluster_64588581-logDir-nm-1_0/mLjP4
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16510//console

This message is automatically generated.

> Missing sources.jar for several modules when building HBase
> ---
>
> Key: HBASE-14806
> URL: https://issues.apache.org/jira/browse/HBASE-14806
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-14806.patch
>
>
> Introduced by HBASE-14085. The problem is, for example, in 
> hbase-common/pom.xml, we have
> {code:title=pom.xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-source-plugin</artifactId>
>   <configuration>
>     <excludeResources>true</excludeResources>
>     <includes>
>       <include>src/main/java</include>
>       <include>${project.build.outputDirectory}/META-INF</include>
>     </includes>
>   </configuration>
> </plugin>
> {code}
> But in fact, the path inside the {{<includes>}} tag is relative to the source 
> directories, not the project directory. So the maven-source-plugin always ends 
> with
> {noformat}
> No sources in project. Archive not created.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14806) Missing sources.jar for several modules when building HBase

2015-11-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003860#comment-15003860
 ] 

Duo Zhang commented on HBASE-14806:
---

I tried locally with the command

{noformat}
mvn clean install -DskipTests -DHBasePatchProcess -Dhadoop-two.version=2.4.0
{noformat}

passed.

Let me try again to see if jenkins fails at the same place...

> Missing sources.jar for several modules when building HBase
> ---
>
> Key: HBASE-14806
> URL: https://issues.apache.org/jira/browse/HBASE-14806
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-14806.patch
>
>
> Introduced by HBASE-14085. The problem is, for example, in 
> hbase-common/pom.xml, we have
> {code:title=pom.xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-source-plugin</artifactId>
>   <configuration>
>     <excludeResources>true</excludeResources>
>     <includes>
>       <include>src/main/java</include>
>       <include>${project.build.outputDirectory}/META-INF</include>
>     </includes>
>   </configuration>
> </plugin>
> {code}
> But in fact, the path inside the {{<includes>}} tag is relative to the source 
> directories, not the project directory. So the maven-source-plugin always ends 
> with
> {noformat}
> No sources in project. Archive not created.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

