[jira] [Commented] (HBASE-9984) AggregationClient creates a new Htable, HConnection,and ExecutorService in every CP call.

2013-11-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826286#comment-13826286
 ] 

Anoop Sam John commented on HBASE-9984:
---

Fine. Can you prepare a backport and attach it? +1 for doing this in 0.94.

> AggregationClient creates a new Htable, HConnection,and ExecutorService in 
> every CP call.
> -
>
> Key: HBASE-9984
> URL: https://issues.apache.org/jira/browse/HBASE-9984
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.13
>Reporter: Anil Gupta
>Priority: Minor
>  Labels: aggregate, client, coprocessors, hbase
>
> At present AggregationClient takes a Conf in its constructor and creates a new 
> HTable instance on every method call. The constructor of HTable used in 
> AggregationClient is very heavy, as it creates a new HConnection and 
> ExecutorService. 
> The above mechanism is not convenient where the application is managing the 
> HTable, HConnection, and ExecutorService by itself. So, I propose: 
> 1# AggregationClient should provide an additional constructor: 
> AggregationClient(HTable)
> 2# Provide methods that take an HTable.
> In this way we can avoid the creation of an HTable, HConnection, and 
> ExecutorService in every CP call. 
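[Editor's note] The resource-reuse pattern the proposal describes can be sketched in plain Java with hypothetical stand-ins; `FakeTable` below is not the HBase `HTable`, it only counts how often its heavyweight constructor runs, so the per-call cost of the 0.94-style client is visible next to the proposed shared-table style.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for HTable: the constructor represents building an
// HConnection plus an ExecutorService, so we count how often it runs.
class FakeTable {
  static final AtomicInteger CREATED = new AtomicInteger();
  FakeTable() { CREATED.incrementAndGet(); }
  long rowCount() { return 42L; } // stands in for one coprocessor call
}

// Per-call construction, mirroring the AggregationClient(conf) style:
// every aggregation call pays for a fresh table/connection/pool.
class PerCallClient {
  long rowCount() { return new FakeTable().rowCount(); }
}

// Proposed style: the caller supplies the table once; every call reuses it.
class SharedTableClient {
  private final FakeTable table;
  SharedTableClient(FakeTable table) { this.table = table; }
  long rowCount() { return table.rowCount(); }
}
```

With three calls, the per-call client constructs three tables while the shared-table client constructs one, which is the whole point of the proposed `AggregationClient(HTable)` constructor.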



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8937) createEphemeralNodeAndWatch don't set watcher if the node is created successfully

2013-11-18 Thread Aaron Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826276#comment-13826276
 ] 

Aaron Lei commented on HBASE-8937:
--

I also wonder how it works without registering a watcher when creating the 
master znode.

Is it this code:
{code}
byte [] bytes = ZKUtil.getDataAndWatch(this.watcher,
    this.watcher.masterAddressZNode);
{code}
that makes it work?

> createEphemeralNodeAndWatch don't set watcher if the node is created 
> successfully
> -
>
> Key: HBASE-8937
> URL: https://issues.apache.org/jira/browse/HBASE-8937
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Liu Shaohui
>Priority: Minor
>  Labels: master, wacter
>
> CreateEphemeralNodeAndWatch in ZKUtil does not set a watcher if the node is 
> created successfully. This is not consistent with the comment and may cause 
> the ActiveMasterManager to miss events when the master node is deleted or 
> changed.
> {code}
>   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
>   String znode, byte [] data)
>   throws KeeperException {
> try {
>   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
>   CreateMode.EPHEMERAL);
> } catch (KeeperException.NodeExistsException nee) {
>   if(!watchAndCheckExists(zkw, znode)) {
> // It did exist but now it doesn't, try again
> return createEphemeralNodeAndWatch(zkw, znode, data);
>   }
>   return false;
> } catch (InterruptedException e) {
>   LOG.info("Interrupted", e);
>   Thread.currentThread().interrupt();
> }
> return true;
>   }
> {code}
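[Editor's note] The fix the report implies — register the watch on the successful-create path too, not only under NodeExistsException — can be illustrated with a toy in-memory stand-in. `ToyZk` and every name below are hypothetical, not the ZooKeeper or HBase API; the toy only models nodes, per-node watchers, and delete notifications.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy in-memory "ZooKeeper": nodes plus per-node watchers, just enough to
// show which callers hear about a deletion.
class ToyZk {
  final Map<String, byte[]> nodes = new HashMap<>();
  final Map<String, Set<String>> watchers = new HashMap<>();

  boolean create(String znode, byte[] data) {
    if (nodes.containsKey(znode)) return false; // NodeExists
    nodes.put(znode, data);
    return true;
  }
  void watch(String owner, String znode) {
    watchers.computeIfAbsent(znode, k -> new HashSet<>()).add(owner);
  }
  // Deletes the node and returns the set of watchers that get notified.
  Set<String> delete(String znode) {
    nodes.remove(znode);
    return watchers.getOrDefault(znode, new HashSet<>());
  }
}

class EphemeralWatchDemo {
  // Mirrors the quoted method's shape: the watch is only registered on the
  // NodeExists path, so a successful creator never watches its own node.
  static boolean createBuggy(ToyZk zk, String owner, String znode) {
    if (!zk.create(znode, new byte[0])) {
      zk.watch(owner, znode); // watchAndCheckExists analogue
      return false;
    }
    return true; // created, but no watch registered
  }
  // Fix sketch: register the watch regardless of who created the node.
  static boolean createFixed(ToyZk zk, String owner, String znode) {
    boolean created = zk.create(znode, new byte[0]);
    zk.watch(owner, znode);
    return created;
  }
}
```

In the buggy variant the successful creator is absent from the notification set when the node is later deleted, which is exactly the missed master-gone event the description worries about.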





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826272#comment-13826272
 ] 

Hadoop QA commented on HBASE-9969:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12614566/KeyValueHeapBenchmark_v1.ods
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7926//console

This message is automatically generated.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9969-0.94.txt, KeyValueHeapBenchmark_v1.ods, 
> hbase-9969-pq-v1.patch, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap. It saves half of the 
> comparisons on each next(), though the time complexity is still O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code looks cleaner and simpler to understand.
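[Editor's note] For readers unfamiliar with the structure, a self-contained loser-tree k-way merge can be sketched as follows. This is an illustration only, not the patch's code: it assumes int keys, uses Integer.MAX_VALUE as an exhausted-run sentinel (so inputs must stay below it), and ignores KeyValue comparators entirely.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal loser-tree k-way merge over sorted int runs. Internal nodes hold
// the LOSER of each match; only the winner is replayed up the tree after a
// pop, so each next() costs one comparison per level, roughly half of what a
// binary-heap sift-down needs.
class LoserTreeMerge {
  static final int SENTINEL = Integer.MAX_VALUE;
  private final int[][] runs;
  private final int[] pos;   // next unread index in each run
  private final int[] tree;  // tree[0] = overall winner, tree[1..k-1] = losers
  private final int k;

  LoserTreeMerge(int[][] runs) {
    this.runs = runs;
    this.k = runs.length;
    this.pos = new int[k];
    this.tree = new int[k];
    Arrays.fill(tree, -1);                      // -1 = match not yet played
    for (int i = k - 1; i >= 0; i--) replay(i); // build by playing leaves up
  }

  private int key(int run) {
    return pos[run] < runs[run].length ? runs[run][pos[run]] : SENTINEL;
  }

  // Replay run s against the losers stored on its leaf-to-root path.
  private void replay(int s) {
    for (int t = (s + k) / 2; t > 0; t /= 2) {
      if (tree[t] == -1) { tree[t] = s; return; } // park until sibling arrives
      if (key(tree[t]) < key(s)) {                // stored loser beats s:
        int w = tree[t];                          //   s stays here as loser,
        tree[t] = s;                              //   old occupant moves up
        s = w;
      }
    }
    tree[0] = s;
  }

  // Pop the smallest remaining value, or SENTINEL when all runs are drained.
  int next() {
    int w = tree[0];
    int val = key(w);
    if (val == SENTINEL) return SENTINEL;
    pos[w]++;    // consume from the winning run
    replay(w);   // one comparison per level to find the new winner
    return val;
  }

  static List<Integer> merge(int[][] runs) {
    LoserTreeMerge lt = new LoserTreeMerge(runs);
    List<Integer> out = new ArrayList<>();
    for (int v = lt.next(); v != SENTINEL; v = lt.next()) out.add(v);
    return out;
  }
}
```

The "half the comparisons" claim in the description comes from this replay loop: a binary heap compares a sifting element against both children at each level, while the loser tree plays exactly one match per level.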





[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826270#comment-13826270
 ] 

Hudson commented on HBASE-9831:
---

SUCCESS: Integrated in hbase-0.96 #195 (See 
[https://builds.apache.org/job/hbase-0.96/195/])
HBASE-9831 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D 
option (Takeshi Miao) (jmhsieh: rev 1543138)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java


> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We pass the _'hbasefsck.numthreads'_ property to _'hbase hbck'_ as a generic 
> option, but the new setting value is not picked up:
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads than the 5 we set via the generic 
> option:
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}
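[Editor's note] The underlying bug pattern — a tool reading from a freshly constructed configuration instead of the one the generic-option parser populated — can be sketched with plain-Java stand-ins. `Conf` and both method names are hypothetical, not the HBaseFsck code; the sketch only shows why the `-D` value vanishes in one style and survives in the other.

```java
import java.util.HashMap;
import java.util.Map;

// Tiny stand-in for a Hadoop-style Configuration.
class Conf {
  private final Map<String, String> props = new HashMap<>();
  void set(String key, String value) { props.put(key, value); }
  int getInt(String key, int dflt) {
    String v = props.get(key);
    return v == null ? dflt : Integer.parseInt(v);
  }
}

class FsckConfDemo {
  // Buggy pattern: a fresh Conf silently drops the -D override that the
  // launcher already parsed into the injected Conf.
  static int buggyNumThreads(Conf injected) {
    Conf fresh = new Conf();
    return fresh.getInt("hbasefsck.numthreads", 50); // always the default
  }
  // Fixed pattern: read from the injected Conf carrying the override.
  static int fixedNumThreads(Conf injected) {
    return injected.getInt("hbasefsck.numthreads", 50);
  }
}
```

With `hbasefsck.numthreads=5` set on the injected configuration, the buggy path still reports the default thread count, matching the symptom in the log above where thread names exceed pool-2-thread-5.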





[jira] [Commented] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826269#comment-13826269
 ] 

Hudson commented on HBASE-9973:
---

SUCCESS: Integrated in hbase-0.96 #195 (See 
[https://builds.apache.org/job/hbase-0.96/195/])
HBASE-9973 Users with 'Admin' ACL permission will lose permissions after 
upgrade to 0.96.x from 0.94.x or 0.92.x (Himanshu Vashishtha) (mbertozzi: rev 
1543178)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java


> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: migration, security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9973-v2.patch, 9973-v2.patch, 9973.patch
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}
> hbase(main):002:0> scan 'hbase:acl'
> ROW         COLUMN+CELL
>  TestTable   column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable   column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_       column=l:root, timestamp=1384454767568, value=C
>  _acl_       column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl   column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_   column=l:tableAdmin, timestamp=1384454788035, value=A {code}
> As a result, the 'tableAdmin' user no longer holds the Admin permission after 
> the upgrade.
> Proposed fix:
> I see the fix being relatively straightforward. As part of the migration, 
> change any entries in the '_acl_' table with key '_acl_' into a new row with 
> key 'hbase:acl', all else being the same. And the old entry would be deleted.
> This can go into the standard migration script that we expect users to run.
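[Editor's note] The proposed row rewrite can be sketched over a toy in-memory copy of the ACL table, modeled as row -> (column -> permission) maps. The real change would live in the migration code (NamespaceUpgrade); the method and key names here are illustrative only, though the row keys mirror the scan output above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the migration step: move every cell from the old-style '_acl_'
// row into the namespaced 'hbase:acl' row, then drop the old row.
class AclRowMigration {
  static void migrateAclRow(Map<String, Map<String, String>> aclTable) {
    Map<String, String> oldRow = aclTable.remove("_acl_"); // delete old entry
    if (oldRow == null) return;                            // nothing to migrate
    // Copy cells into the new row, preserving any cells already there.
    aclTable.computeIfAbsent("hbase:acl", k -> new TreeMap<>()).putAll(oldRow);
  }
}
```

After the rewrite the 'tableAdmin' permission 'A' lives under the 'hbase:acl' row key, so the AccessController can resolve it again post-upgrade.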





[jira] [Commented] (HBASE-9924) Avoid potential filename conflict in region_mover.rb

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826268#comment-13826268
 ] 

Hudson commented on HBASE-9924:
---

SUCCESS: Integrated in hbase-0.96 #195 (See 
[https://builds.apache.org/job/hbase-0.96/195/])
HBASE-9924 Avoid potential filename conflict in region_mover.rb (tedyu: rev 
1543224)
* /hbase/branches/0.96/bin/region_mover.rb


> Avoid potential filename conflict in region_mover.rb
> 
>
> Key: HBASE-9924
> URL: https://issues.apache.org/jira/browse/HBASE-9924
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 0.96.0, 0.94.13
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBase-9924.txt
>
>
> When I worked on a shared/common box with my colleague, I found this error 
> while moving a region:
> NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj 
> (Permission denied)
>   writeFile at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283
>   unloadRegions at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354
>  (root) at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480
> 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed.
> The root cause is that getFilename in the region mover script currently 
> produces the same output for different users. One possible quick fix is to add 
> the username to the filename.
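[Editor's note] The suggested quick fix, qualifying the per-host state file with the invoking user, might look like the following. This is a hypothetical Java stand-in; the actual change belongs in region_mover.rb, and the naming scheme shown is only one reasonable choice.

```java
// Two users unloading the same host currently both write /tmp/<hostname>,
// and the second writer hits "Permission denied" on the first user's file.
// Prefixing the username gives each user a distinct state file.
class RegionMoverFileName {
  static String stateFile(String tmpDir, String user, String hostname) {
    return tmpDir + "/" + user + "-" + hostname;
  }
}
```

With this scheme, user `alice` and user `bob` unloading `hh-hadoop-srv-st01.bj` write two different files under /tmp instead of colliding on one.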





[jira] [Commented] (HBASE-9982) TestClientNoCluster should use random numbers

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826271#comment-13826271
 ] 

Hudson commented on HBASE-9982:
---

SUCCESS: Integrated in hbase-0.96 #195 (See 
[https://builds.apache.org/job/hbase-0.96/195/])
HBASE-9982 TestClientNoCluster should use random numbers (nkeywal: rev 1543052)
* 
/hbase/branches/0.96/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java


> TestClientNoCluster should use random numbers
> -
>
> Key: HBASE-9982
> URL: https://issues.apache.org/jira/browse/HBASE-9982
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.96.1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9982.v1.patch, 9982.v2.patch
>
>
> Using random numbers increases the number of calls to the meta scanner.





[jira] [Commented] (HBASE-9987) Remove some synchronisation points in HConnectionManager

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826267#comment-13826267
 ] 

Hudson commented on HBASE-9987:
---

SUCCESS: Integrated in hbase-0.96 #195 (See 
[https://builds.apache.org/job/hbase-0.96/195/])
HBASE-9987 Remove some synchronisation points in HConnectionManager (nkeywal: 
rev 1543050)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java


> Remove some synchronisation points in HConnectionManager
> 
>
> Key: HBASE-9987
> URL: https://issues.apache.org/jira/browse/HBASE-9987
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9987.v1.patch, 9987.v2.patch
>
>
> Change a Map to a ConcurrentMap.
> Removed the "cachedServer" (introduced in HBASE-4785). I suspect that this 
> function is not needed anymore, as we also have a list of dead servers, and 
> accessing that list is not blocking. I will dig into this more, however.
> The patch gives a 10% improvement with the NoClusterClient.
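[Editor's note] The Map-to-ConcurrentMap change can be illustrated with a plain-Java sketch (class and field names hypothetical, not the HConnectionManager code): a `ConcurrentHashMap` lets readers proceed without taking a lock, and `putIfAbsent` collapses the synchronized check-then-put into one atomic call.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Instead of guarding a HashMap with synchronized blocks, store the cache in
// a ConcurrentMap so lookups never serialize behind a single lock.
class RegionLocationCacheSketch {
  private final ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();

  // Atomic "insert if absent": if another thread cached a location first,
  // keep theirs; otherwise install ours. No synchronized block needed.
  long locate(String regionName, long freshlyLookedUp) {
    Long prev = cache.putIfAbsent(regionName, freshlyLookedUp);
    return prev != null ? prev : freshlyLookedUp;
  }
}
```

Removing the shared lock on this hot path is the kind of change that plausibly accounts for the 10% improvement the comment reports on a CPU-bound no-cluster benchmark.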





[jira] [Commented] (HBASE-9984) AggregationClient creates a new Htable, HConnection,and ExecutorService in every CP call.

2013-11-18 Thread Anil Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826262#comment-13826262
 ] 

Anil Gupta commented on HBASE-9984:
---

[~anoopsamjohn] Yes, AggregationClient is done that way in 0.96/trunk. 
However, the majority of users are still on 0.94, since 0.96 is not backward 
compatible. 

> AggregationClient creates a new Htable, HConnection,and ExecutorService in 
> every CP call.
> -
>
> Key: HBASE-9984
> URL: https://issues.apache.org/jira/browse/HBASE-9984
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Coprocessors
>Affects Versions: 0.94.13
>Reporter: Anil Gupta
>Priority: Minor
>  Labels: aggregate, client, coprocessors, hbase
>
> At present AggregationClient takes a Conf in its constructor and creates a new 
> HTable instance on every method call. The constructor of HTable used in 
> AggregationClient is very heavy, as it creates a new HConnection and 
> ExecutorService. 
> The above mechanism is not convenient where the application is managing the 
> HTable, HConnection, and ExecutorService by itself. So, I propose: 
> 1# AggregationClient should provide an additional constructor: 
> AggregationClient(HTable)
> 2# Provide methods that take an HTable.
> In this way we can avoid the creation of an HTable, HConnection, and 
> ExecutorService in every CP call. 





[jira] [Updated] (HBASE-9995) Not stoping ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-9995:
--

Status: Patch Available  (was: Open)

> Not stoping ReplicationSink when using custom implementation for the 
> ReplicationSink.
> -
>
> Key: HBASE-9995
> URL: https://issues.apache.org/jira/browse/HBASE-9995
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.13
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Minor
> Attachments: HBASE-9995.patch
>
>
> Missed this in HBASE-9975.
> Also solving a new javadoc warning induced by HBASE-9975





[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Matt Corgan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Corgan updated HBASE-9969:
---

Attachment: KeyValueHeapBenchmark_v1.ods

Attaching KeyValueHeapBenchmark_v1.ods with the benchmark output for both 1 
col/row and 16 cols/row.

Some of it is hard to explain.  It looks like LoserTree is often faster at 
next() when there is more heaping to do, but not when KVs are coming from the 
same scanner, such as when numScanners=1 or colsPerRow=16.  Maybe because it 
doesn't have an optimization for that case?

Anyway, just putting it up there for people to poke holes in or continue to 
test.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9969-0.94.txt, KeyValueHeapBenchmark_v1.ods, 
> hbase-9969-pq-v1.patch, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap. It saves half of the 
> comparisons on each next(), though the time complexity is still O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code looks cleaner and simpler to understand.





[jira] [Updated] (HBASE-9995) Not stoping ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-9995:
--

Attachment: HBASE-9995.patch

> Not stoping ReplicationSink when using custom implementation for the 
> ReplicationSink.
> -
>
> Key: HBASE-9995
> URL: https://issues.apache.org/jira/browse/HBASE-9995
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.13
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Minor
> Attachments: HBASE-9995.patch
>
>
> Missed this in HBASE-9975.
> Also solving a new javadoc warning induced by HBASE-9975





[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826254#comment-13826254
 ] 

Hudson commented on HBASE-9831:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #124 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/124/])
HBASE-9831 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D 
option (Takeshi Miao) (jmhsieh: rev 1543138)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java


> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We pass the _'hbasefsck.numthreads'_ property to _'hbase hbck'_ as a generic 
> option, but the new setting value is not picked up:
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads than the 5 we set via the generic 
> option:
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}





[jira] [Created] (HBASE-9995) Not stoping ReplicationSink when using custom implementation for the ReplicationSink.

2013-11-18 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-9995:
-

 Summary: Not stoping ReplicationSink when using custom 
implementation for the ReplicationSink.
 Key: HBASE-9995
 URL: https://issues.apache.org/jira/browse/HBASE-9995
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.13
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor


Missed this in HBASE-9975.
Also solving a new javadoc warning induced by HBASE-9975





[jira] [Commented] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826253#comment-13826253
 ] 

Hudson commented on HBASE-9973:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #124 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/124/])
HBASE-9973 Users with 'Admin' ACL permission will lose permissions after 
upgrade to 0.96.x from 0.94.x or 0.92.x (Himanshu Vashishtha) (mbertozzi: rev 
1543178)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java


> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: migration, security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9973-v2.patch, 9973-v2.patch, 9973.patch
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}
> hbase(main):002:0> scan 'hbase:acl'
> ROW         COLUMN+CELL
>  TestTable   column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable   column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_       column=l:root, timestamp=1384454767568, value=C
>  _acl_       column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl   column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_   column=l:tableAdmin, timestamp=1384454788035, value=A {code}
> As a result, the 'tableAdmin' user no longer holds the Admin permission after 
> the upgrade.
> Proposed fix:
> I see the fix being relatively straightforward. As part of the migration, 
> change any entries in the '_acl_' table with key '_acl_' into a new row with 
> key 'hbase:acl', all else being the same. And the old entry would be deleted.
> This can go into the standard migration script that we expect users to run.





[jira] [Commented] (HBASE-9924) Avoid potential filename conflict in region_mover.rb

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826252#comment-13826252
 ] 

Hudson commented on HBASE-9924:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #124 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/124/])
HBASE-9924 Avoid potential filename conflict in region_mover.rb (tedyu: rev 
1543224)
* /hbase/branches/0.96/bin/region_mover.rb


> Avoid potential filename conflict in region_mover.rb
> 
>
> Key: HBASE-9924
> URL: https://issues.apache.org/jira/browse/HBASE-9924
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 0.96.0, 0.94.13
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBase-9924.txt
>
>
> When I worked on a shared/common box with my colleague, I found this error 
> while moving a region:
> NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj 
> (Permission denied)
>   writeFile at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283
>   unloadRegions at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354
>  (root) at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480
> 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed.
> The root cause is that getFilename in the region mover script currently 
> produces the same output for different users. One possible quick fix is to add 
> the username to the filename.





[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Matt Corgan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Corgan updated HBASE-9969:
---

Attachment: hbase-9969-pq-v1.patch

Attaching hbase-9969-pq-v1.patch.

* adds KeyValueScannerPriorityQueue, which is a stripped down copy of 
PriorityQueue that we can play with
* adds KeyValueScannerHeap (almost identical to KeyValueHeap, but uses the 
above)
* includes LoserTreeKeyValueHeap and LoserTree
* each of the 3 heaps implements BenchmarkableKeyValueHeap (not complete)
* enhances KeyValueHeapBenchmark to benchmark all 3 implementations

* always tests 1M KVs, no matter how many scanners
* sorts the input KVs, though that doesn't seem to matter much
* does a few warmup runs

One problem is that this goes from 1 to 3 implementations behind the same 
interface, which may not get inlined as well.  Absolute performance will 
therefore be lower, but hopefully it will be comparable across the 3 
implementations.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9969-0.94.txt, hbase-9969-pq-v1.patch, 
> hbase-9969-v2.patch, hbase-9969-v3.patch, hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap: it saves half of the 
> comparisons on each next(), though the time complexity is still O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
> from multiple HFiles in a single store, the other merges results from 
> multiple stores. This patch should improve both cases whenever CPU is the 
> bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code looks cleaner and is simpler to understand.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9969:
-

Fix Version/s: (was: 0.94.15)

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap: it saves half of the 
> comparisons on each next(), though the time complexity is still O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
> from multiple HFiles in a single store, the other merges results from 
> multiple stores. This patch should improve both cases whenever CPU is the 
> bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code looks cleaner and is simpler to understand.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826238#comment-13826238
 ] 

Lars Hofhansl commented on HBASE-9969:
--

[~mcorgan]
Interestingly I also do not see any measurable performance gain from removing 
that check.

This does not seem too surprising. If current was the last scanner on the heap, 
adding it back causes no extra comparison, nor does pollRealKV do any 
appreciable work. If there is only one scanner it will be returned in the first 
iteration of the loop (and might do a real seek in that process, which is 
necessary for lazy seek).

Do you have data on the number of comparisons?

[~jmhsieh] I agree. We can let this stew (if we want) in trunk for a bit and 
then decide about backports. I'll remove 0.94 from the fix targets.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap: it saves half of the 
> comparisons on each next(), though the time complexity is still O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
> from multiple HFiles in a single store, the other merges results from 
> multiple stores. This patch should improve both cases whenever CPU is the 
> bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code looks cleaner and is simpler to understand.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-7663:
--

Attachment: HBASE-7663_V10.patch

This is what I applied to Trunk.

> [Per-KV security] Visibility labels
> ---
>
> Key: HBASE-7663
> URL: https://issues.apache.org/jira/browse/HBASE-7663
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-7663.patch, HBASE-7663_V10.patch, 
> HBASE-7663_V2.patch, HBASE-7663_V3.patch, HBASE-7663_V4.patch, 
> HBASE-7663_V5.patch, HBASE-7663_V6.patch, HBASE-7663_V7.patch, 
> HBASE-7663_V8.patch, HBASE-7663_V9.patch
>
>
> Implement Accumulo-style visibility labels. Consider the following design 
> principles:
> - Coprocessor based implementation
> - Minimal to no changes to core code
> - Use KeyValue tags (HBASE-7448) to carry labels
> - Use OperationWithAttributes# {get,set}Attribute for handling visibility 
> labels in the API
> - Implement a new filter for evaluating visibility labels as KVs are streamed 
> through.
> This approach would be consistent in deployment and API details with other 
> per-KV security work, supporting environments where they might both be 
> employed, even stacked on some tables.
> See the parent issue for more discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-7663:
--

  Resolution: Fixed
Release Note: 
The VisibilityController CP handles the visibility checks.
Visibility labels are stored as tags with the KVs.
Use Mutation#setCellVisibility(new CellVisibility(expression)) to add 
visibility expressions to cells.
The label expression can contain visibility labels joined with the logical 
operators &, | and !. Parentheses ( ) can be used to specify precedence.
Eg : SECRET & CONFIDENTIAL & !PUBLIC
Please note that passing CellVisibility in a Delete mutation is illegal.

During a read (Scan/Get) one can specify the labels associated with it in 
Authorizations:
scan.setAuthorizations(new Authorizations(SECRET, CONFIDENTIAL));


Visibility label admin operations

Labels can be added to the system using VisibilityClient#addLabels(); the 
add_labels shell command can also be used.
Only the super user (hbase.superuser) has permission to add labels to the 
system.
A set of labels can be associated with a user using 
VisibilityClient#setAuths().
Similarly, labels can be removed from a user's auths using clearAuths.
The getAuths API can be used to view a user's auths.
There is also support for the set_auths, clear_auths and get_auths shell 
commands.
As with addLabels, only the super user has permission for these operations.
When AccessController is ON, the permission checks are handled by AC.
Using AC along with Visibility is optional. When AC is not available, 
permission checks are done at the VisibilityController level itself.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)
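The expression semantics described in the release note (&, | and !, with parentheses for precedence, e.g. SECRET & CONFIDENTIAL & !PUBLIC) can be illustrated with a tiny stand-alone evaluator. This is not the VisibilityController's actual parser, just a sketch of how such an expression evaluates against a user's authorizations:

```java
import java.util.Set;

/** Toy evaluator for visibility expressions; not HBase's real parser. Precedence: ! > & > |. */
final class VisibilityExpr {
    private final String s;
    private final Set<String> auths;
    private int i;

    private VisibilityExpr(String s, Set<String> auths) { this.s = s; this.auths = auths; }

    /** True if a cell tagged with expr is visible to a reader holding auths. */
    static boolean visible(String expr, Set<String> auths) {
        VisibilityExpr p = new VisibilityExpr(expr, auths);
        boolean v = p.or();
        p.ws();
        if (p.i != p.s.length()) throw new IllegalArgumentException("trailing input at " + p.i);
        return v;
    }

    private boolean or()  { boolean v = and(); while (eat('|')) v |= and(); return v; }
    private boolean and() { boolean v = not(); while (eat('&')) v &= not(); return v; }

    private boolean not() {
        if (eat('!')) return !not();
        if (eat('(')) {
            boolean v = or();
            if (!eat(')')) throw new IllegalArgumentException("')' expected");
            return v;
        }
        return auths.contains(label());      // a bare label is visible iff the reader holds it
    }

    private String label() {
        ws();
        int start = i;
        while (i < s.length() && (Character.isLetterOrDigit(s.charAt(i)) || s.charAt(i) == '_')) i++;
        if (start == i) throw new IllegalArgumentException("label expected at " + start);
        return s.substring(start, i);
    }

    private boolean eat(char c) { ws(); if (i < s.length() && s.charAt(i) == c) { i++; return true; } return false; }
    private void ws() { while (i < s.length() && s.charAt(i) == ' ') i++; }
}
```

A reader holding {SECRET, CONFIDENTIAL} sees a cell tagged SECRET & CONFIDENTIAL & !PUBLIC, but not one tagged PUBLIC | TOPSECRET.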

> [Per-KV security] Visibility labels
> ---
>
> Key: HBASE-7663
> URL: https://issues.apache.org/jira/browse/HBASE-7663
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, 
> HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, 
> HBASE-7663_V6.patch, HBASE-7663_V7.patch, HBASE-7663_V8.patch, 
> HBASE-7663_V9.patch
>
>
> Implement Accumulo-style visibility labels. Consider the following design 
> principles:
> - Coprocessor based implementation
> - Minimal to no changes to core code
> - Use KeyValue tags (HBASE-7448) to carry labels
> - Use OperationWithAttributes# {get,set}Attribute for handling visibility 
> labels in the API
> - Implement a new filter for evaluating visibility labels as KVs are streamed 
> through.
> This approach would be consistent in deployment and API details with other 
> per-KV security work, supporting environments where they might both be 
> employed, even stacked on some tables.
> See the parent issue for more discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826218#comment-13826218
 ] 

Anoop Sam John commented on HBASE-7663:
---

Committed to Trunk..
Thanks to Ram for pairing with me in this implementation.
Thanks to Andrew for his suggestions, discussions, and reviews.
Thanks to Stack for his detailed review and comments.

> [Per-KV security] Visibility labels
> ---
>
> Key: HBASE-7663
> URL: https://issues.apache.org/jira/browse/HBASE-7663
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, 
> HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, 
> HBASE-7663_V6.patch, HBASE-7663_V7.patch, HBASE-7663_V8.patch, 
> HBASE-7663_V9.patch
>
>
> Implement Accumulo-style visibility labels. Consider the following design 
> principles:
> - Coprocessor based implementation
> - Minimal to no changes to core code
> - Use KeyValue tags (HBASE-7448) to carry labels
> - Use OperationWithAttributes# {get,set}Attribute for handling visibility 
> labels in the API
> - Implement a new filter for evaluating visibility labels as KVs are streamed 
> through.
> This approach would be consistent in deployment and API details with other 
> per-KV security work, supporting environments where they might both be 
> employed, even stacked on some tables.
> See the parent issue for more discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Fix Version/s: 0.98.0
 Hadoop Flags: Reviewed

Integrated to trunk.

Thanks for the review, Anoop.
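The motivation for this change — ByteString.copyFrom() duplicates the byte array, while a wrap shares the caller's array — can be illustrated without protobuf at all. The two helpers below mimic the semantics (they are stand-ins, not the protobuf/HBase classes):

```java
import java.util.Arrays;

/** Illustrates the copy vs. wrap semantics behind this change; plain Java, no protobuf. */
final class CopyVsWrap {
    /** Like ByteString.copyFrom: an O(n) defensive copy plus an allocation. */
    static byte[] copyFrom(byte[] src) {
        return Arrays.copyOf(src, src.length);
    }

    /** Like ZeroCopyLiteralByteString.wrap: shares the caller's backing array, no copy. */
    static byte[] wrap(byte[] src) {
        return src;
    }
}
```

The wrapped variant is only safe when the caller never mutates the array afterwards, which is the contract the "Literal" in the class name hints at; mutating a wrapped array is visible through the wrapper, while a copy is insulated.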

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826198#comment-13826198
 ] 

Hadoop QA commented on HBASE-9994:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614528/9994-v1.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7925//console

This message is automatically generated.

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8369) MapReduce over snapshot files

2013-11-18 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-8369:
-

Attachment: hbase-8369_v11.patch

Forgot to attach the latest committed version from RB. The only difference 
between v10 from RB and v11 is a javadoc fix.

> MapReduce over snapshot files
> -
>
> Key: HBASE-8369
> URL: https://issues.apache.org/jira/browse/HBASE-8369
> Project: HBase
>  Issue Type: New Feature
>  Components: mapreduce, snapshots
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0
>
> Attachments: HBASE-8369-0.94.patch, HBASE-8369-0.94_v2.patch, 
> HBASE-8369-0.94_v3.patch, HBASE-8369-0.94_v4.patch, HBASE-8369-0.94_v5.patch, 
> HBASE-8369-trunk_v1.patch, HBASE-8369-trunk_v2.patch, 
> HBASE-8369-trunk_v3.patch, hbase-8369_v0.patch, hbase-8369_v11.patch, 
> hbase-8369_v5.patch, hbase-8369_v6.patch, hbase-8369_v7.patch, 
> hbase-8369_v8.patch, hbase-8369_v9.patch
>
>
> The idea is to add an InputFormat, which can run the mapreduce job over 
> snapshot files directly, bypassing the HBase server layer. The IF is similar in 
> usage to TableInputFormat, taking a Scan object from the user, but instead of 
> running from an online table, it runs from a table snapshot. We do one split 
> per region in the snapshot, and open an HRegion inside the RecordReader. A 
> RegionScanner is used internally for doing the scan without any HRegionServer 
> bits. 
> Users have been asking and searching for ways to run MR jobs by reading 
> directly from hfiles, so this allows new use cases if reading from stale data 
> is ok:
>  - Take snapshots periodically, and run MR jobs only on snapshots.
>  - Export snapshots to remote hdfs cluster, run the MR jobs at that cluster 
> without HBase cluster.
>  - (Future use case) Combine snapshot data with online hbase data: Scan from 
> yesterday's snapshot, but read today's data from online hbase cluster. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8369) MapReduce over snapshot files

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826183#comment-13826183
 ] 

Hudson commented on HBASE-8369:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #843 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/843/])
HBASE-8369 MapReduce over snapshot files (enis: rev 1543195)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestTableSnapshotInputFormat.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/MapReduce.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/HDFSBlocksDistribution.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/ClientSideRegionScanner.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/AbstractHBaseTool.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java


> MapReduce over snapshot files
> -
>
> Key: HBASE-8369
> URL: https://issues.apache.org/jira/browse/HBASE-8369
> Project: HBase
>  Issue Type: New Feature
>  Components: mapreduce, snapshots
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0
>
> Attachments: HBASE-8369-0.94.patch, HBASE-8369-0.94_v2.patch, 
> HBASE-8369-0.94_v3.patch, HBASE-8369-0.94_v4.patch, HBASE-8369-0.94_v5.patch, 
> HBASE-8369-trunk_v1.patch, HBASE-8369-trunk_v2.patch, 
> HBASE-8369-trunk_v3.patch, hbase-8369_v0.patch, hbase-8369_v5.patch, 
> hbase-8369_v6.patch, hbase-8369_v7.patch, hbase-8369_v8.patch, 
> hbase-8369_v9.patch
>
>
> The idea is to add an InputFormat, which can run the mapreduce job over 
> snapshot files directly, bypassing the HBase server layer. The IF is similar in 
> usage to TableInputFormat, taking a Scan object from the user, but instead of 
> running from an online table, it runs from a table snapshot. We do one split 
> per region in the snapshot, and open an HRegion inside the RecordReader. A 
> RegionScanner is used internally for doing the scan without any HRegionServer 
> bits. 
> Users have been asking and searching for ways to run MR jobs by reading 
> directly from hfiles, so this allows new use cases if reading from stale data 
> is ok:
>  - Take snapshots periodically, and run MR jobs only on snapshots.
>  - Export snapshots to remote hdfs cluster, run the MR jobs at that cluster 
> without HBase cluster.
>  - (Future use case) Combine snapshot data with online hbase data: Scan from 
> yesterday's snapshot, but read today's data from online hbase cluster. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826184#comment-13826184
 ] 

Hudson commented on HBASE-9831:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #843 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/843/])
HBASE-9831 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D 
option (Takeshi Miao) (jmhsieh: rev 1543137)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java


> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic options mechanism to pass the _'hbasefsck.numthreads'_ 
> property to _'hbase hbck'_, but it does not pick up our new setting value
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads than the 5 we set via the generic 
> option
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826182#comment-13826182
 ] 

Hudson commented on HBASE-9973:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #843 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/843/])
HBASE-9973 Users with 'Admin' ACL permission will lose permissions after 
upgrade to 0.96.x from 0.94.x or 0.92.x (Himanshu Vashishtha) (mbertozzi: rev 
1543179)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java


> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: migration, security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9973-v2.patch, 9973-v2.patch, 9973.patch
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}hbase(main):002:0> scan 'hbase:acl'
> ROW   COLUMN+CELL 
>   
>  TestTablecolumn=l:hdfs, timestamp=1384454830701, value=RW
>   
>  TestTablecolumn=l:root, timestamp=1384455875586, value=RWCA  
>   
>  _acl_column=l:root, timestamp=1384454767568, value=C 
>   
>  _acl_column=l:tableAdmin, timestamp=1384454788035, value=A   
>   
>  hbase:aclcolumn=l:root, timestamp=1384455875786, value=C 
>   
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_column=l:tableAdmin, timestamp=1384454788035, 
> value=A {code}
> As a result, the tableAdmin user loses the Admin ('A') permission after the 
> upgrade.
> Proposed fix:
> I see the fix as relatively straightforward. As part of the migration, 
> change any entry in the '_acl_' table with row key '_acl_' into a new row 
> with key 'hbase:acl', all else being the same, and delete the old entry.
> This can go into the standard migration script that we expect users to run.
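The proposed migration amounts to re-keying one row. A sketch of the idea, with a plain map standing in for the acl table (the real change lives in the migration code, e.g. NamespaceUpgrade; names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the proposed fix: re-key the stale '_acl_' row as 'hbase:acl'. */
final class AclRowMigration {
    /** rowKey -> (column -> permission); a Map stands in for the acl table. */
    static void migrate(Map<String, Map<String, String>> aclTable) {
        Map<String, String> stale = aclTable.remove("_acl_");   // delete the old, now-meaningless row
        if (stale == null) {
            return;                                             // nothing to migrate
        }
        aclTable.computeIfAbsent("hbase:acl", k -> new HashMap<>())
                .putAll(stale);                                 // same cells, new row key
    }
}
```

After migration, an entry such as l:tableAdmin=A that lived under the '_acl_' row key is found under 'hbase:acl', so the tableAdmin permission holds again.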



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9924) Avoid potential filename conflict in region_mover.rb

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826181#comment-13826181
 ] 

Hudson commented on HBASE-9924:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #843 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/843/])
HBASE-9924 Avoid potential filename conflict in region_mover.rb (tedyu: rev 
1543225)
* /hbase/trunk/bin/region_mover.rb


> Avoid potential filename conflict in region_mover.rb
> 
>
> Key: HBASE-9924
> URL: https://issues.apache.org/jira/browse/HBASE-9924
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 0.96.0, 0.94.13
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBase-9924.txt
>
>
> When I worked on a shared/common box with my colleague, I found this error 
> while moving regions:
> NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj 
> (Permission denied)
>   writeFile at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283
>   unloadRegions at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354
>  (root) at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480
> 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed.
> The root cause is that getFilename in the region mover script currently 
> produces the same output for different users. One possible quick fix is to 
> just add the username to the filename.
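The quick fix described above targets bin/region_mover.rb (Ruby), but the per-user filename idea is language-independent; a hedged sketch in Java (method and class names are illustrative, not from the script):

```java
import java.io.File;

/** Sketch of the proposed fix: make the region mover's temp filename unique per user. */
final class RegionMoverFile {
    /** Appending the invoking user keeps two operators on a shared box from
     *  clobbering (or being denied access to) each other's /tmp file. */
    static String getFilename(String hostname) {
        return System.getProperty("java.io.tmpdir") + File.separator
                + hostname + "-" + System.getProperty("user.name");
    }
}
```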



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9949) Fix the race condition between Compaction and StoreScanner.init

2013-11-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826177#comment-13826177
 ] 

Ted Yu commented on HBASE-9949:
---

Checked in addendum to trunk.

> Fix the race condition between Compaction and StoreScanner.init
> ---
>
> Key: HBASE-9949
> URL: https://issues.apache.org/jira/browse/HBASE-9949
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
>Priority: Minor
> Fix For: 0.89-fb, 0.98.0
>
> Attachments: 9949-0.96.addendum, 9949-trunk-v1.txt, 
> 9949-trunk-v2.txt, 9949-trunk-v3.txt, 9949.addendum
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The StoreScanner constructor has multiple stages and there can be a race 
> between an ongoing compaction and the StoreScanner constructor, where we 
> might get the list of scanners before a compaction and seek on those 
> scanners after the compaction.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9949) Fix the race condition between Compaction and StoreScanner.init

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9949:
--

Attachment: 9949.addendum

> Fix the race condition between Compaction and StoreScanner.init
> ---
>
> Key: HBASE-9949
> URL: https://issues.apache.org/jira/browse/HBASE-9949
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
>Priority: Minor
> Fix For: 0.89-fb, 0.98.0
>
> Attachments: 9949-0.96.addendum, 9949-trunk-v1.txt, 
> 9949-trunk-v2.txt, 9949-trunk-v3.txt, 9949.addendum
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The StoreScanner constructor has multiple stages and there can be a race 
> between an ongoing compaction and the StoreScanner constructor where we might 
> get the list of scanners before a compaction and seek on those scanners after 
> the compaction.





[jira] [Commented] (HBASE-9949) Fix the race condition between Compaction and StoreScanner.init

2013-11-18 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826169#comment-13826169
 ] 

Jonathan Hsieh commented on HBASE-9949:
---

Sounds great.  Thanks!  Want to start the thread on dev list?

> Fix the race condition between Compaction and StoreScanner.init
> ---
>
> Key: HBASE-9949
> URL: https://issues.apache.org/jira/browse/HBASE-9949
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
>Priority: Minor
> Fix For: 0.89-fb, 0.98.0
>
> Attachments: 9949-0.96.addendum, 9949-trunk-v1.txt, 
> 9949-trunk-v2.txt, 9949-trunk-v3.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The StoreScanner constructor has multiple stages and there can be a race 
> between an ongoing compaction and the StoreScanner constructor where we might 
> get the list of scanners before a compaction and seek on those scanners after 
> the compaction.





[jira] [Commented] (HBASE-9961) [WINDOWS] Multicast should bind to local address

2013-11-18 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826168#comment-13826168
 ] 

Jonathan Hsieh commented on HBASE-9961:
---

[~enis] I believe this patch is responsible for 8-10 javadoc warnings. Please 
fix.

{code}
[WARNING] Javadoc Warnings
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:51:
 warning: sun.misc.Unsafe is Sun proprietary API and may be removed in a future 
release
[WARNING] import sun.misc.Unsafe;
[WARNING] ^
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:1099:
 warning: sun.misc.Unsafe is Sun proprietary API and may be removed in a future 
release
[WARNING] static final Unsafe theUnsafe;
[WARNING] ^
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "58" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "47" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "47" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "47" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "47" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "47" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see:illegal character: "45" in 
"https://issues.apache.org/jira/browse/HBASE-9961";
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:939:
 warning - Tag @see: reference not found: 
https://issues.apache.org/jira/browse/HBASE-9961
[INFO]  
{code}

> [WINDOWS] Multicast should bind to local address
> 
>
> Key: HBASE-9961
> URL: https://issues.apache.org/jira/browse/HBASE-9961
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9961_v1.patch, hbase-9961_v2.patch
>
>
> Binding to a multicast address (such as "hbase.status.multicast.address.ip") 
> seems to be the preferred method on most unix systems and linux (2, 3). At 
> least in RedHat, binding to the multicast address might not filter out other 
> traffic coming to the same port but for different multicast groups (2). 
> However, on windows, you cannot bind to a non-local (class D) address (1), 
> which seems to be correct according to the spec.
> # http://msdn.microsoft.com/en-us/library/ms737550%28v=vs.85%29.aspx
> # https://bugzilla.redhat.com/show_bug.cgi?id=231899
> # 
> http://stackoverflow.com/questions/10692956/what-does-it-mean-to-bind-a-multicast-udp-socket
> # https://issues.jboss.org/browse/JGRP-515
> The solution is to bind to mcast address on linux, but a local address on 
> windows. 
> TestHCM is also failing because of this. 
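The platform-conditional bind the issue proposes can be illustrated with a tiny helper; the OS check and the address literals below are illustrative, not the patch's actual code:

```java
// Sketch: bind the multicast group address on unix-likes, but fall back to
// the local wildcard address on Windows, where binding a class D address
// is rejected.
final class MulticastBindChoice {
    static String chooseBindAddress(String osName, String mcastAddress) {
        boolean windows = osName.toLowerCase().contains("windows");
        return windows ? "0.0.0.0" : mcastAddress;
    }

    public static void main(String[] args) {
        System.out.println(chooseBindAddress(System.getProperty("os.name"), "226.1.1.3"));
    }
}
```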





[jira] [Commented] (HBASE-9949) Fix the race condition between Compaction and StoreScanner.init

2013-11-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826166#comment-13826166
 ] 

Ted Yu commented on HBASE-9949:
---

How about I take out the new infrastructure through an addendum and keep the 
fix (since you're fine with it)?

That would give us more time to streamline proper test infrastructure for 
this.

> Fix the race condition between Compaction and StoreScanner.init
> ---
>
> Key: HBASE-9949
> URL: https://issues.apache.org/jira/browse/HBASE-9949
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
>Priority: Minor
> Fix For: 0.89-fb, 0.98.0
>
> Attachments: 9949-0.96.addendum, 9949-trunk-v1.txt, 
> 9949-trunk-v2.txt, 9949-trunk-v3.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The StoreScanner constructor has multiple stages and there can be a race 
> between an ongoing compaction and the StoreScanner constructor where we might 
> get the list of scanners before a compaction and seek on those scanners after 
> the compaction.





[jira] [Commented] (HBASE-9949) Fix the race condition between Compaction and StoreScanner.init

2013-11-18 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826162#comment-13826162
 ] 

Jonathan Hsieh commented on HBASE-9949:
---

Hey [~te...@apache.org], I commented on Thursday or Friday that I'm concerned 
about this infrastructure creeping in throughout the code.  Specifically, in 
the main review comment I noted that "This was not addressed", and then you 
committed without addressing the concern I had with the code in the review, and 
there were no +1's on review board (though Sergey had a conditional +1 in 
jira).

I'm assuming this was an oversight.

To be clear, I'm basically fine with the fix -- I'm mostly concerned about the 
new framework.

It seems like yet another piece of infrastructure, and one I'm not 
particularly fond of because it seems cumbersome and could impact performance 
in other areas if extended. It will take more work, but this can be done in a 
way that makes the code more readable and maintainable, and I'd rather we move 
in that direction instead of adding yet one more infrastructure.  
Can we instead use factory patterns + mocks to do this injection?  
Happy to move this discussion to the mailing list.  


> Fix the race condition between Compaction and StoreScanner.init
> ---
>
> Key: HBASE-9949
> URL: https://issues.apache.org/jira/browse/HBASE-9949
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
>Priority: Minor
> Fix For: 0.89-fb, 0.98.0
>
> Attachments: 9949-0.96.addendum, 9949-trunk-v1.txt, 
> 9949-trunk-v2.txt, 9949-trunk-v3.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The StoreScanner constructor has multiple stages and there can be a race 
> between an ongoing compaction and the StoreScanner constructor where we might 
> get the list of scanners before a compaction and seek on those scanners after 
> the compaction.





[jira] [Commented] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826142#comment-13826142
 ] 

Ted Yu commented on HBASE-9994:
---

The change in WALCellCodec resulted in the test failures below:

TestReplicationKillMasterRSCompressed,TestWALReplayCompressed,TestHLogSplitCompressed

This patch is limited to the two classes listed above.

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java
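The difference this task is after can be shown without the protobuf classes themselves; the two methods below only mimic the allocation behavior of ByteString.copyFrom() (allocates and copies) versus ZeroCopyLiteralByteString.wrap() (aliases the caller's array) and are not the real API:

```java
import java.util.Arrays;

// Mimics of the two calls' allocation behavior; not the real protobuf API.
final class ByteStringDemo {
    static byte[] copyFrom(byte[] src) {
        return Arrays.copyOf(src, src.length);  // full copy, extra allocation
    }

    static byte[] wrap(byte[] src) {
        return src;                             // zero-copy: same backing array
    }

    public static void main(String[] args) {
        byte[] row = "row-1".getBytes();
        System.out.println(wrap(row) == row);      // true: aliases the input
        System.out.println(copyFrom(row) == row);  // false: a new array
    }
}
```

The zero-copy form is only safe when the caller never mutates the array afterwards, which is why the real class restricts it to buffers the caller effectively hands over.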





[jira] [Commented] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding

2013-11-18 Thread He Liangliang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826134#comment-13826134
 ] 

He Liangliang commented on HBASE-9893:
--

lgtm, thanks.

> Incorrect assert condition in OrderedBytes decoding
> ---
>
> Key: HBASE-9893
> URL: https://issues.apache.org/jira/browse/HBASE-9893
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: He Liangliang
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-9893.00.patch, HBASE-9893.patch
>
>
> The following assert condition is incorrect when decoding blob var byte array.
> {code}
> assert t == 0 : "Unexpected bits remaining after decoding blob.";
> {code}
> When the number of bytes to decode is a multiple of 8 (i.e. the original number 
> of bytes is a multiple of 7), this assert may fail.
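The boundary case is easier to see from the encoding arithmetic: blob-var packs 7 payload bits per encoded byte, so n source bytes become ceil(8n/7) encoded bytes, and when n is a multiple of 7 the payload ends exactly on a byte boundary, leaving no residual bits for the assert to check. A sketch of that arithmetic (helper names are ours, not OrderedBytes'):

```java
// Blob-var length arithmetic: each encoded byte carries 7 payload bits,
// so the encoded length is ceil(8n/7).
final class BlobVarLength {
    static int encodedLen(int n) {
        return (8 * n + 6) / 7;
    }

    // True exactly when n is a multiple of 7: the payload bits then end
    // on a byte boundary, the case the original assert mishandled.
    static boolean endsOnByteBoundary(int n) {
        return (8 * n) % 7 == 0;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 14; n++) {
            System.out.println(n + " -> " + encodedLen(n)
                + " bytes, boundary=" + endsOnByteBoundary(n));
        }
    }
}
```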





[jira] [Commented] (HBASE-8465) Auto-drop rollback snapshot for snapshot restore

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826131#comment-13826131
 ] 

Hadoop QA commented on HBASE-8465:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614520/HBASE-8465-v6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7924//console

This message is automatically generated.

> Auto-drop rollback snapshot for snapshot restore
> 
>
> Key: HBASE-8465
> URL: https://issues.apache.org/jira/browse/HBASE-8465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Matteo Bertozzi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 8465-trunk-v1.txt, 8465-trunk-v2.txt, 
> HBASE-8465-v3.patch, HBASE-8465-v4.patch, HBASE-8465-v5.patch, 
> HBASE-8465-v6.patch
>
>
> Below is an excerpt from snapshot restore javadoc:
> {code}
>* Restore the specified snapshot on the original table. (The table must be 
> disabled)
>* Before restoring the table, a new snapshot with the current table state 
> is created.
>* In case of failure, the table will be rolled back to its original 
> state.
> {code}
> We can improve the handling of rollbackSnapshot in two ways:
> 1. give a better name to the rollbackSnapshot (adding 
> {code}'-for-rollback-'{code}). Currently the name is of the form:
> String rollbackSnapshot = snapshotName + "-" + 
> EnvironmentEdgeManager.currentTimeMillis();
> 2. drop rollbackSnapshot at the end of restoreSnapshot() if the restore is 
> successful. We can introduce new config param, named 
> 'hbase.snapshot.restore.drop.rollback', to keep compatibility with current 
> behavior.
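Point 1 amounts to making the rollback snapshot self-describing. A sketch of the proposed naming (the "-for-rollback-" marker is from the proposal; the surrounding code is ours):

```java
final class RollbackSnapshotNaming {
    // Current form is snapshotName + "-" + timestamp, which is
    // indistinguishable from a user snapshot. The proposed form inserts
    // a "-for-rollback-" marker so the snapshot's purpose is visible.
    static String proposedName(String snapshotName, long nowMillis) {
        return snapshotName + "-for-rollback-" + nowMillis;
    }

    public static void main(String[] args) {
        System.out.println(proposedName("mysnap", System.currentTimeMillis()));
    }
}
```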





[jira] [Commented] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826121#comment-13826121
 ] 

Hadoop QA commented on HBASE-9893:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614497/HBASE-9893.00.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestNamespaceCommands

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7923//console

This message is automatically generated.

> Incorrect assert condition in OrderedBytes decoding
> ---
>
> Key: HBASE-9893
> URL: https://issues.apache.org/jira/browse/HBASE-9893
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: He Liangliang
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-9893.00.patch, HBASE-9893.patch
>
>
> The following assert condition is incorrect when decoding blob var byte array.
> {code}
> assert t == 0 : "Unexpected bits remaining after decoding blob.";
> {code}
> When the number of bytes to decode is a multiple of 8 (i.e. the original number 
> of bytes is a multiple of 7), this assert may fail.





[jira] [Commented] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826117#comment-13826117
 ] 

Anoop Sam John commented on HBASE-9994:
---

LGTM
So you handle the above 2 classes in this patch. I can see some more places 
where we use copyFrom(), so we can do those in follow-up jiras?
+1 for this change.

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java





[jira] [Assigned] (HBASE-9736) Allow more than one log splitter per RS

2013-11-18 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong reassigned HBASE-9736:


Assignee: Jeffrey Zhong

> Allow more than one log splitter per RS
> --
>
> Key: HBASE-9736
> URL: https://issues.apache.org/jira/browse/HBASE-9736
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Reporter: stack
>Assignee: Jeffrey Zhong
>Priority: Critical
>
> IIRC, this is an idea that came from the lads at Xiaomi.
> I have a small cluster of 6 RSs and one went down.  It had a few WALs.  I see 
> this in logs:
> 2013-10-09 05:47:27,890 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: 
> total tasks = 25 unassigned = 21
> WAL splitting is held up for want of slots out on the cluster to split WALs.
> We need to be careful we don't overwhelm the foreground regionservers but 
> more splitters should help get all back online faster.





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826052#comment-13826052
 ] 

Lars Hofhansl commented on HBASE-9969:
--

bq. I can't figure out why we need to do a heap.add() and pollRealKV when 
topScanner==null.

Do we still have to enforce a seek if !current.realSeekDone()?
By the time we get there we know current.peek != null; if current has not been 
seeked, it seems we need to enforce that.


> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> A loser tree is a better data structure than a binary heap here. It saves half 
> of the comparisons on each next(), though the time complexity is still O(logN).
> Currently, a scan or get will go through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is cleaner and simpler to understand.
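The "half of the comparisons" claim comes from how the two structures replay a path after the winner is replaced: a binary heap sift-down costs roughly 2 comparisons per level (pick the smaller child, then compare it with the sifted element), while a loser tree makes one comparison per ancestor against the stored loser. A back-of-the-envelope sketch (the constants are approximations, not measurements):

```java
final class MergeComparisonCounts {
    static int log2(int n) {
        return 31 - Integer.numberOfLeadingZeros(n);
    }

    // Binary heap sift-down: ~2 comparisons per level (choose the smaller
    // child, then compare it with the element being sifted down).
    static int heapNextComparisons(int scanners) {
        return 2 * log2(scanners);
    }

    // Loser tree: one comparison per ancestor against the stored loser.
    static int loserTreeNextComparisons(int scanners) {
        return log2(scanners);
    }

    public static void main(String[] args) {
        for (int k : new int[] {2, 8, 64}) {
            System.out.println(k + " scanners: heap~" + heapNextComparisons(k)
                + ", loser tree~" + loserTreeNextComparisons(k));
        }
    }
}
```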





[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826048#comment-13826048
 ] 

Hudson commented on HBASE-9831:
---

SUCCESS: Integrated in HBase-0.94 #1207 (See 
[https://builds.apache.org/job/HBase-0.94/1207/])
HBASE-9831 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D 
option (Takeshi Miao) (jmhsieh: rev 1543139)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java


> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic option mechanism to pass the _'hbasefsck.numthreads'_ property 
> to _'hbase hbck'_, but it does not accept our new setting value:
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads than the 5 we already set via the generic 
> option:
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}
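The underlying problem is that -D key=value pairs only reach the Configuration when the entry point routes them through Hadoop's GenericOptionsParser. A minimal, standalone illustration of that parsing step (this is a simplified sketch, not HBaseFsck's actual code, and it only handles the space-separated "-D key=value" form):

```java
import java.util.HashMap;
import java.util.Map;

// Fold leading "-D key=value" arguments into a config map, the way
// GenericOptionsParser does before the tool sees its own arguments.
final class GenericOptionSketch {
    static Map<String, String> parseD(String[] args) {
        Map<String, String> conf = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i++) {
            if ("-D".equals(args[i])) {
                String[] kv = args[i + 1].split("=", 2);
                if (kv.length == 2) {
                    conf.put(kv[0], kv[1]);
                }
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(parseD(new String[] {"-D", "hbasefsck.numthreads=5"}));
    }
}
```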





[jira] [Commented] (HBASE-9865) Reused WALEdits in replication may cause RegionServers to go OOM

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826047#comment-13826047
 ] 

Hudson commented on HBASE-9865:
---

SUCCESS: Integrated in HBase-0.94 #1207 (See 
[https://builds.apache.org/job/HBase-0.94/1207/])
HBASE-9993 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit. 
(larsh: rev 1543220)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Reused WALEdits in replication may cause RegionServers to go OOM
> 
>
> Key: HBASE-9865
> URL: https://issues.apache.org/jira/browse/HBASE-9865
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5, 0.95.0
>Reporter: churro morales
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9865-0.94-v2.txt, 9865-0.94-v4.txt, 9865-sample-1.txt, 
> 9865-sample.txt, 9865-trunk-v2.txt, 9865-trunk-v3.txt, 9865-trunk-v4.txt, 
> 9865-trunk.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into gc storms and sometimes even OOM. 
> We noticed a case where it showed that there were around 25k WALEdits to 
> replicate, each one with an ArrayList of KeyValues.  The array list had a 
> capacity of around 90k (using 350KB of heap memory) but had around 6 non null 
> entries.
> When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a 
> WALEdit it removes all kv's that are scoped other than local.  
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or number of entries capacity.  
> Therefore if you have a WALEdit with 25k entries and suppose all are removed: 
> The size of the arrayList is 0 (we don't even count the collection's heap 
> size currently) but the capacity is ignored.
> This will yield a heapSize() of 0 bytes while in the best case it would be at 
> least 10 bytes (provided you pass initialCapacity and you have a 32-bit 
> JVM) 
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabalistic counter such as HyperLogLog and create something 
> like:
>   * class CapacityEstimateArrayList implements ArrayList
>   ** this class overrides all additive methods to update the 
> probabalistic counts
>   ** it includes one additional method called estimateCapacity 
> (we would take estimateCapacity - size() and fill in sizes for all references)
>   * Then we can do something like this in WALEdit.heapSize:
>   
> {code}
>   public long heapSize() {
> long ret = ClassSize.ARRAYLIST;
> for (KeyValue kv : kvs) {
>   ret += kv.heapSize();
> }
> long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
> ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
> if (scopes != null) {
>   ret += ClassSize.TREEMAP;
>   ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>   // TODO this isn't quite right, need help here
> }
> return ret;
>   }   
> {code}
> 2. In ReplicationSource.removeNonReplicableEdits() we know the size of the 
> array originally, and we provide some percentage threshold.  When that 
> threshold is met (50% of the entries have been removed) we can call 
> kvs.trimToSize()
> 3. in the heapSize() method for WALEdit we could use reflection (Please don't 
> shoot me for this) to grab the actual capacity of the list.  Doing something 
> like this:
> {code}
> public int getArrayListCapacity()  {
> try {
>   Field f = ArrayList.class.getDeclaredField("elementData");
>   f.setAccessible(true);
>   return ((Object[]) f.get(kvs)).length;
> } catch (Exception e) {
>   log.warn("Exception in trying to get capacity on ArrayList", e);
>   return kvs.size();
> }
> }
> {code}
> I am partial to (1) using HyperLogLog and creating a 
> CapacityEstimateArrayList, this is reusable throughout the code for other 
> classes that implement HeapSize and contain ArrayLists.  The memory 
> footprint is very small and it is very fast.  The issue is that this is an 
> estimate; although we can configure the precision, we will most likely always be 
> conservative.  The estimateCapacity will always be less than the 
> actualCapacity, but it will be close. I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario.  Solution 3 is

[jira] [Commented] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826049#comment-13826049
 ] 

Hudson commented on HBASE-9993:
---

SUCCESS: Integrated in HBase-0.94 #1207 (See 
[https://builds.apache.org/job/HBase-0.94/1207/])
HBASE-9993 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit. 
(larsh: rev 1543220)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 0.94.14
>
> Attachments: 9993.txt
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.
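The incompatibility comes from how the JVM links methods: a caller is bound to the full method descriptor, return type included, so code compiled against the `List`-returning signature fails with `NoSuchMethodError` against the `ArrayList`-returning one. A minimal sketch with stand-in classes (not HBase's actual WALEdit) that makes the descriptor difference visible:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in versions of the changed signature (NOT HBase's real WALEdit).
// The JVM links calls by the full method descriptor, return type included,
// so code compiled against the List version breaks against the ArrayList one.
public class ReturnTypeDemo {
    public static class EditV1 {
        public List<String> getKeyValues() { return new ArrayList<>(); }
    }
    public static class EditV2 {
        public ArrayList<String> getKeyValues() { return new ArrayList<>(); }
    }

    // Return type as seen by reflection; it is part of the binary descriptor.
    static String returnTypeOf(Class<?> c) {
        try {
            return c.getMethod("getKeyValues").getReturnType().getName();
        } catch (NoSuchMethodException e) {
            return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(returnTypeOf(EditV1.class)); // java.util.List
        System.out.println(returnTypeOf(EditV2.class)); // java.util.ArrayList
    }
}
```

A coprocessor jar compiled against the EditV1-style signature would fail at link time against the EditV2-style class, even though recompiling against it succeeds, which is why the change is source-compatible but not binary-compatible.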



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Attachment: (was: 9994-v1.txt)

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Attachment: 9994-v1.txt

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-9407) Online Schema Change causes Test Load and Verify to fail.

2013-11-18 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HBASE-9407.


Resolution: Fixed

The test looks ok to me now.

> Online Schema Change causes Test Load and Verify to fail.
> -
>
> Key: HBASE-9407
> URL: https://issues.apache.org/jira/browse/HBASE-9407
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Elliott Clark
>Assignee: Jimmy Xiang
>  Labels: online_schema_change
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Description: 
The following classes use ByteString.copyFrom() which should be replaced with 
ZeroCopyLiteralByteString.wrap() :

hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java

  was:
The following classes use ByteString.copyFrom() which should be replaced with 
ZeroCopyLiteralByteString.wrap() :

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java


> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826025#comment-13826025
 ] 

Hudson commented on HBASE-9993:
---

SUCCESS: Integrated in HBase-0.94-security #341 (See 
[https://builds.apache.org/job/HBase-0.94-security/341/])
HBASE-9993 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit. 
(larsh: rev 1543220)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 0.94.14
>
> Attachments: 9993.txt
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9865) Reused WALEdits in replication may cause RegionServers to go OOM

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826023#comment-13826023
 ] 

Hudson commented on HBASE-9865:
---

SUCCESS: Integrated in HBase-0.94-security #341 (See 
[https://builds.apache.org/job/HBase-0.94-security/341/])
HBASE-9993 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit. 
(larsh: rev 1543220)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Reused WALEdits in replication may cause RegionServers to go OOM
> 
>
> Key: HBASE-9865
> URL: https://issues.apache.org/jira/browse/HBASE-9865
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5, 0.95.0
>Reporter: churro morales
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9865-0.94-v2.txt, 9865-0.94-v4.txt, 9865-sample-1.txt, 
> 9865-sample.txt, 9865-trunk-v2.txt, 9865-trunk-v3.txt, 9865-trunk-v4.txt, 
> 9865-trunk.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into gc storms and sometimes even OOM. 
> We noticed a case where it showed that there were around 25k WALEdits to 
> replicate, each one with an ArrayList of KeyValues.  The array list had a 
> capacity of around 90k (using 350KB of heap memory) but had around 6 non null 
> entries.
> When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a 
> WALEdit it removes all kv's that are scoped other than local.  
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or number of entries capacity.  
> Therefore if we have a WALEdit with 25k entries and suppose all are removed: 
> The size of the ArrayList is 0 (we don't even count the collection's heap 
> size currently) but the capacity is ignored.
> This will yield a heapSize() of 0 bytes while in the best case it would be at 
> least 10 bytes (provided you pass initialCapacity and you have a 32-bit 
> JVM) 
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabilistic counter such as HyperLogLog and create something 
> like:
>   * class CapacityEstimateArrayList implements ArrayList
>   ** this class overrides all additive methods to update the 
> probabilistic counts
>   ** it includes one additional method called estimateCapacity 
> (we would take estimateCapacity - size() and fill in sizes for all references)
>   * Then we can do something like this in WALEdit.heapSize:
>   
> {code}
>   public long heapSize() {
> long ret = ClassSize.ARRAYLIST;
> for (KeyValue kv : kvs) {
>   ret += kv.heapSize();
> }
> long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
> ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
> if (scopes != null) {
>   ret += ClassSize.TREEMAP;
>   ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>   // TODO this isn't quite right, need help here
> }
> return ret;
>   }   
> {code}
> 2. In ReplicationSource.removeNonReplicableEdits() we know the size of the 
> array originally, and we provide some percentage threshold.  When that 
> threshold is met (50% of the entries have been removed) we can call 
> kvs.trimToSize()
> 3. In the heapSize() method for WALEdit we could use reflection (Please don't 
> shoot me for this) to grab the actual capacity of the list.  Doing something 
> like this:
> {code}
> public int getArrayListCapacity()  {
> try {
>   Field f = ArrayList.class.getDeclaredField("elementData");
>   f.setAccessible(true);
>   return ((Object[]) f.get(kvs)).length;
> } catch (Exception e) {
>   log.warn("Exception in trying to get capacity on ArrayList", e);
>   return kvs.size();
> }
> }
> {code}
> I am partial to (1) using HyperLogLog and creating a 
> CapacityEstimateArrayList, this is reusable throughout the code for other 
> classes that implement HeapSize and contain ArrayLists.  The memory 
> footprint is very small and it is very fast.  The issue is that this is an 
> estimate; although we can configure the precision, we will most likely always be 
> conservative.  The estimateCapacity will always be less than the 
> actualCapacity, but it will be close. I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario
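As a self-contained sketch of options 2 and 3 above (class and method names are illustrative only, not HBase's WALEdit; on JDK 9+ the reflective read may be blocked by the module system, in which case this falls back to size()):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;

public class CapacityProbe {
    // Option 3: read ArrayList's backing "elementData" array length via
    // reflection. On JDKs where java.util is not opened to this code,
    // setAccessible throws and we fall back to size(), a conservative
    // under-estimate of the real capacity.
    static int capacityOf(ArrayList<?> list) {
        try {
            Field f = ArrayList.class.getDeclaredField("elementData");
            f.setAccessible(true);
            return ((Object[]) f.get(list)).length;
        } catch (Exception e) {
            return list.size();
        }
    }

    public static void main(String[] args) {
        ArrayList<String> kvs = new ArrayList<>(1000);
        kvs.add("kv");
        // Where the field is readable, capacity stays 1000 while size is 1:
        // exactly the gap that heapSize() currently fails to account for.
        System.out.println("size=" + kvs.size() + " capacity~=" + capacityOf(kvs));
        kvs.trimToSize(); // option 2: reclaim the unused slots after bulk removal
        System.out.println("after trimToSize capacity~=" + capacityOf(kvs));
    }
}
```

Option 2's trimToSize() call is the cheap fix for the ReplicationSource path, while a capacity estimate like this would let heapSize() charge for the empty reference slots everywhere.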

[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826024#comment-13826024
 ] 

Hudson commented on HBASE-9831:
---

SUCCESS: Integrated in HBase-0.94-security #341 (See 
[https://builds.apache.org/job/HBase-0.94-security/341/])
HBASE-9831 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D 
option (Takeshi Miao) (jmhsieh: rev 1543139)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java


> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic option mechanism to pass the _'hbasefsck.numthreads'_ 
> property to _'hbase hbck'_, but it does not accept our new setting:
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads than the 5 we set via the generic 
> option
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9989) Add a test on get in TestClientNoCluster

2013-11-18 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825999#comment-13825999
 ] 

Nick Dimiduk commented on HBASE-9989:
-

protobuf-gcless looks very interesting.

+1 on the patch. Let's see what BuildBot says.

> Add a test on get in TestClientNoCluster
> 
>
> Key: HBASE-9989
> URL: https://issues.apache.org/jira/browse/HBASE-9989
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9989.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-8465) Auto-drop rollback snapshot for snapshot restore

2013-11-18 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-8465:
---

Attachment: HBASE-8465-v6.patch

> Auto-drop rollback snapshot for snapshot restore
> 
>
> Key: HBASE-8465
> URL: https://issues.apache.org/jira/browse/HBASE-8465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Matteo Bertozzi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 8465-trunk-v1.txt, 8465-trunk-v2.txt, 
> HBASE-8465-v3.patch, HBASE-8465-v4.patch, HBASE-8465-v5.patch, 
> HBASE-8465-v6.patch
>
>
> Below is an excerpt from snapshot restore javadoc:
> {code}
>* Restore the specified snapshot on the original table. (The table must be 
> disabled)
>* Before restoring the table, a new snapshot with the current table state 
> is created.
>* In case of failure, the table will be rolled back to its original 
> state.
> {code}
> We can improve the handling of rollbackSnapshot in two ways:
> 1. Give a better name to the rollbackSnapshot (adding 
> {code}'-for-rollback-'{code}). Currently the name is of the form:
> String rollbackSnapshot = snapshotName + "-" + 
> EnvironmentEdgeManager.currentTimeMillis();
> 2. Drop rollbackSnapshot at the end of restoreSnapshot() if the restore is 
> successful. We can introduce new config param, named 
> 'hbase.snapshot.restore.drop.rollback', to keep compatibility with current 
> behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Description: 
The following classes use ByteString.copyFrom() which should be replaced with 
ZeroCopyLiteralByteString.wrap() :

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java

  was:
The following classes use ByteString.copyFrom() which should be replaced with 
ZeroCopyLiteralByteString.wrap() :

Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java


> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Attachment: 9994-v1.txt

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9994:
--

Status: Patch Available  (was: Open)

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-9994:
-

Assignee: Ted Yu

> ZeroCopyLiteralByteString.wrap() should be used in place of 
> ByteString.copyFrom()
> -
>
> Key: HBASE-9994
> URL: https://issues.apache.org/jira/browse/HBASE-9994
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9994-v1.txt
>
>
> The following classes use ByteString.copyFrom() which should be replaced with 
> ZeroCopyLiteralByteString.wrap() :
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9994) ZeroCopyLiteralByteString.wrap() should be used in place of ByteString.copyFrom()

2013-11-18 Thread Ted Yu (JIRA)
Ted Yu created HBASE-9994:
-

 Summary: ZeroCopyLiteralByteString.wrap() should be used in place 
of ByteString.copyFrom()
 Key: HBASE-9994
 URL: https://issues.apache.org/jira/browse/HBASE-9994
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu


The following classes use ByteString.copyFrom() which should be replaced with 
ZeroCopyLiteralByteString.wrap() :

Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java
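The motivation can be sketched with stand-in methods (these are NOT the real protobuf ByteString or HBase ZeroCopyLiteralByteString APIs): copyFrom-style construction duplicates the byte array, while wrap-style construction aliases it.

```java
import java.util.Arrays;

// Minimal stand-ins for the two construction styles (NOT the real protobuf
// ByteString or HBase ZeroCopyLiteralByteString APIs).
public class ByteStringDemo {
    static byte[] copyFrom(byte[] src) {   // copyFrom: allocates and copies, O(n)
        return Arrays.copyOf(src, src.length);
    }

    static byte[] wrap(byte[] src) {       // wrap: shares the caller's buffer, O(1)
        return src;
    }

    public static void main(String[] args) {
        byte[] cell = {1, 2, 3};
        byte[] copied = copyFrom(cell);
        byte[] wrapped = wrap(cell);
        cell[0] = 9;                       // mutate the source afterwards
        // The copy is unaffected; the wrapper aliases the mutation.
        System.out.println(copied[0] + " " + wrapped[0]); // prints 1 9
    }
}
```

The zero-copy variant is only safe when the wrapped array is never mutated afterwards, which is why such wrapping is reserved for buffers the caller owns outright.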



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9992) [hbck] Refactor so that arbitrary -D cmdline options are included

2013-11-18 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825929#comment-13825929
 ] 

Jonathan Hsieh commented on HBASE-9992:
---

I was thinking something much more localized -- an inner class that extends 
Tool and removing the tool base class from HBaseFsck.  All we really want is 
the conf file parser that tool provides.  Fixing it this way should mean we 
don't need to change tests.
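The mechanism under discussion (what Hadoop's generic option parsing normally provides to a Tool) can be sketched with a hypothetical minimal parser, here with made-up names, that peels -D key=value pairs out of the argument list before the tool sees them:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical minimal stand-in for Hadoop's generic option parsing:
// peel "-D key=value" (and "-Dkey=value") pairs out of argv into the
// configuration map, leaving the remaining arguments for the tool itself.
public class DashDParser {
    final Map<String, String> conf = new HashMap<>();
    final List<String> remaining = new ArrayList<>();

    DashDParser(String[] args) {
        for (int i = 0; i < args.length; i++) {
            if ("-D".equals(args[i]) && i + 1 < args.length) {
                put(args[++i]);                       // "-D key=value"
            } else if (args[i].startsWith("-D") && args[i].length() > 2) {
                put(args[i].substring(2));            // "-Dkey=value"
            } else {
                remaining.add(args[i]);               // tool-specific flag
            }
        }
    }

    private void put(String pair) {
        String[] kv = pair.split("=", 2);
        conf.put(kv[0], kv.length > 1 ? kv[1] : "");
    }

    public static void main(String[] args) {
        DashDParser p = new DashDParser(
            new String[] {"-D", "hbasefsck.numthreads=5", "-fixAssignments"});
        System.out.println(p.conf + " " + p.remaining);
    }
}
```

With the Tool-based fix, pairs parsed this way would land in the Configuration before HBaseFsck reads hbasefsck.numthreads, so the override takes effect without per-option hooks.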


> [hbck] Refactor so that arbitrary -D cmdline options are included 
> --
>
> Key: HBASE-9992
> URL: https://issues.apache.org/jira/browse/HBASE-9992
> Project: HBase
>  Issue Type: Bug
>Reporter: Jonathan Hsieh
>
> A review of HBASE-9831 pointed out the fact that -D options aren't being 
> passed into the configuration object used by hbck.  This means overriding -D 
> options will not work unless special hooks are added for specific options.  A first 
> attempt to fix this was in HBASE-9831 but it affected many other files.
> The right approach would be to create a new HbckTool class that has the 
> Configured interface, change the existing HBaseFsck main to instantiate 
> it and have it parse args, and then create the HBaseFsck object inside run.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9924) Avoid potential filename conflict in region_mover.rb

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9924:
--

Fix Version/s: 0.96.1
   0.98.0
 Hadoop Flags: Reviewed

Integrated to 0.96 and trunk.

Thanks for the reviews.

> Avoid potential filename conflict in region_mover.rb
> 
>
> Key: HBASE-9924
> URL: https://issues.apache.org/jira/browse/HBASE-9924
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 0.96.0, 0.94.13
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBase-9924.txt
>
>
> When I worked on a shared/common box with my colleague, I found this error 
> while moving a region:
> NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj 
> (Permission denied)
>   writeFile at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283
>   unloadRegions at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354
>  (root) at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480
> 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed.
> The root cause is that getFilename in the region mover script currently 
> produces the same output for different users. One possible quick fix is to 
> add the username to the filename.
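The proposed quick fix can be sketched as follows (a Java stand-in for illustration only; the real change belongs in getFilename inside bin/region_mover.rb, and the class and method names here are hypothetical):

```java
import java.nio.file.Paths;

// Sketch of the proposed fix: qualify the per-host state file with the
// invoking user so two users on a shared box never collide on the same
// /tmp path. Names are hypothetical; the real logic is Ruby in
// bin/region_mover.rb.
public class RegionMoverFile {
    static String stateFileFor(String hostname) {
        String user = System.getProperty("user.name");
        return Paths.get(System.getProperty("java.io.tmpdir"),
                         user + "-" + hostname).toString();
    }

    public static void main(String[] args) {
        // e.g. /tmp/<user>-hh-hadoop-srv-st01.bj instead of the shared
        // /tmp/hh-hadoop-srv-st01.bj that triggered the Permission denied error
        System.out.println(stateFileFor("hh-hadoop-srv-st01.bj"));
    }
}
```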



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9924) Avoid potential filename conflict in region_mover.rb

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9924:
--

Summary: Avoid potential filename conflict in region_mover.rb  (was: avoid 
filename conflict in region_mover.rb)

> Avoid potential filename conflict in region_mover.rb
> 
>
> Key: HBASE-9924
> URL: https://issues.apache.org/jira/browse/HBASE-9924
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 0.96.0, 0.94.13
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9924.txt
>
>
> When I worked on a shared/common box with my colleague, I found this error 
> while moving a region:
> NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj 
> (Permission denied)
>   writeFile at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283
>   unloadRegions at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354
>  (root) at 
> /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480
> 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed.
> The root cause is that getFilename in the region mover script currently 
> produces the same output for different users. One possible quick fix is to 
> add the username to the filename.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9831:
-

Fix Version/s: (was: 0.94.15)
   0.94.14

> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic option mechanism to pass the _'hbasefsck.numthreads'_ 
> property to _'hbase hbck'_, but it does not accept our new setting:
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads than the 5 we set via the generic 
> option
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}
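As a hedged illustration of what honoring the property would look like (this is not the hbck code; the helper and use of java.util.Properties are hypothetical stand-ins for the Hadoop Configuration), the tool would size its worker pool from the configured value instead of a hard-coded default:

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class NumThreadsDemo {
    // Hypothetical helper: read the thread count the way hbck should,
    // falling back to a default only when the property is absent.
    static int numThreads(Properties conf, int defaultThreads) {
        return Integer.parseInt(
            conf.getProperty("hbasefsck.numthreads", String.valueOf(defaultThreads)));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Simulates passing -D hbasefsck.numthreads=5 on the command line.
        conf.setProperty("hbasefsck.numthreads", "5");
        ExecutorService pool = Executors.newFixedThreadPool(numThreads(conf, 50));
        // The pool is bounded by the configured value, not the default.
        System.out.println(((ThreadPoolExecutor) pool).getCorePoolSize());
        pool.shutdown();
    }
}
```

With this shape, the DEBUG log above would show at most 5 `pool-*-thread-*` workers.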



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9988) DOn't use HRI#getEncodedName in the client

2013-11-18 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825919#comment-13825919
 ] 

Nick Dimiduk commented on HBASE-9988:
-

Thanks for clarifying. +1

> DOn't use HRI#getEncodedName in the client
> --
>
> Key: HBASE-9988
> URL: https://issues.apache.org/jira/browse/HBASE-9988
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9988.v1.patch, 9988.v2.patch
>
>
> This function does a lazy initialisation. It costs memory and it creates a 
> synchronisation point.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-9993.
--

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to 0.94

> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 0.94.14
>
> Attachments: 9993.txt
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825909#comment-13825909
 ] 

Jesse Yates commented on HBASE-9993:


+0.75. It would be a little better to have a comment in there explaining why you 
are doing the casting, so reading the code is a little clearer. Fine if you want 
to commit without it, though.

> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 0.94.14
>
> Attachments: 9993.txt
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9993:
-

Priority: Blocker  (was: Major)

> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 0.94.14
>
> Attachments: 9993.txt
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9993:
-

Attachment: 9993.txt

Simple fix. Not as pretty, but OK I think.
If WALEdit ever changes in a way where this would not work, the replication 
tests would catch it.

If there are no objections I will commit this soon and roll a new RC.

> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.14
>
> Attachments: 9993.txt
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9993:
-

Description: 
{code}
  public List<KeyValue> getKeyValues() {
{code}
Was changed to 
{code}
  public ArrayList<KeyValue> getKeyValues() {
{code}

This breaks existing coprocessors (such as those used in Phoenix).
It's fine to change in 0.96+, but in 0.94 it should remain backwards compatible.

[~giacomotaylor], FYI.

> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.14
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9993:
-

Fix Version/s: 0.94.14

> 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.
> ---
>
> Key: HBASE-9993
> URL: https://issues.apache.org/jira/browse/HBASE-9993
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.14
>
>
> {code}
>   public List<KeyValue> getKeyValues() {
> {code}
> Was changed to 
> {code}
>   public ArrayList<KeyValue> getKeyValues() {
> {code}
> This breaks existing coprocessors (such as those used in Phoenix).
> It's fine to change in 0.96+, but in 0.94 it should remain backwards 
> compatible.
> [~giacomotaylor], FYI.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9993) 0.94: HBASE-9865 breaks coprocessor compatibility with WALEdit.

2013-11-18 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-9993:


 Summary: 0.94: HBASE-9865 breaks coprocessor compatibility with 
WALEdit.
 Key: HBASE-9993
 URL: https://issues.apache.org/jira/browse/HBASE-9993
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9865) Reused WALEdits in replication may cause RegionServers to go OOM

2013-11-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825881#comment-13825881
 ] 

Lars Hofhansl commented on HBASE-9865:
--

Could do what Churro was saying or just cast the returned List to an ArrayList.
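A minimal sketch of the cast approach being discussed, using a simplified hypothetical stand-in for WALEdit (not the real class): the public method keeps the 0.94 `List` return type so coprocessors compiled against the old method descriptor still link, while internal callers cast because the backing implementation is known to be an ArrayList.

```java
import java.util.ArrayList;
import java.util.List;

public class CastDemo {
    // Simplified, hypothetical stand-in for WALEdit, for illustration only.
    static class WalEditLike {
        private final List<String> kvs = new ArrayList<>();

        // Keeping the List return type preserves binary compatibility with
        // coprocessors compiled against the original 0.94 signature.
        public List<String> getKeyValues() {
            return kvs;
        }

        // Internal callers that need ArrayList-only methods (e.g. trimToSize)
        // cast, since the concrete type is known to be ArrayList here.
        void trim() {
            ((ArrayList<String>) kvs).trimToSize();
        }
    }

    public static void main(String[] args) {
        WalEditLike edit = new WalEditLike();
        edit.getKeyValues().add("kv1");
        edit.trim();
        System.out.println(edit.getKeyValues().size());
    }
}
```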

> Reused WALEdits in replication may cause RegionServers to go OOM
> 
>
> Key: HBASE-9865
> URL: https://issues.apache.org/jira/browse/HBASE-9865
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5, 0.95.0
>Reporter: churro morales
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9865-0.94-v2.txt, 9865-0.94-v4.txt, 9865-sample-1.txt, 
> 9865-sample.txt, 9865-trunk-v2.txt, 9865-trunk-v3.txt, 9865-trunk-v4.txt, 
> 9865-trunk.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into gc storms and sometimes even OOM. 
> We noticed a case where it showed that there were around 25k WALEdits to 
> replicate, each one with an ArrayList of KeyValues.  The array list had a 
> capacity of around 90k (using 350KB of heap memory) but had around 6 non null 
> entries.
> When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a 
> WALEdit it removes all kv's that are scoped other than local.  
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or number of entries capacity.  
> Therefore, if we have a WALEdit with 25k entries and suppose all are removed: 
> the size of the ArrayList is 0 (we don't even count the collection's heap 
> size currently) but the capacity is ignored.
> This will yield a heapSize() of 0 bytes while in the best case it would be at 
> least 10 bytes (provided you pass initialCapacity and you have a 32-bit 
> JVM) 
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabilistic counter such as HyperLogLog and create something 
> like:
>   * class CapacityEstimateArrayList implements ArrayList
>   ** this class overrides all additive methods to update the 
> probabilistic counts
>   ** it includes one additional method called estimateCapacity 
> (we would take estimateCapacity - size() and fill in sizes for all references)
>   * Then we can do something like this in WALEdit.heapSize:
>   
> {code}
>   public long heapSize() {
> long ret = ClassSize.ARRAYLIST;
> for (KeyValue kv : kvs) {
>   ret += kv.heapSize();
> }
> long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
> ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
> if (scopes != null) {
>   ret += ClassSize.TREEMAP;
>   ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>   // TODO this isn't quite right, need help here
> }
> return ret;
>   }   
> {code}
> 2. In ReplicationSource.removeNonReplicableEdits() we know the size of the 
> array originally, and we provide some percentage threshold.  When that 
> threshold is met (50% of the entries have been removed) we can call 
> kvs.trimToSize()
> 3. in the heapSize() method for WALEdit we could use reflection (Please don't 
> shoot me for this) to grab the actual capacity of the list.  Doing something 
> like this:
> {code}
> public int getArrayListCapacity()  {
> try {
>   Field f = ArrayList.class.getDeclaredField("elementData");
>   f.setAccessible(true);
>   return ((Object[]) f.get(kvs)).length;
> } catch (Exception e) {
>   log.warn("Exception in trying to get capacity on ArrayList", e);
>   return kvs.size();
> }
> }
> {code}
> I am partial to (1), using HyperLogLog and creating a 
> CapacityEstimateArrayList; this is reusable throughout the code for other 
> classes that implement HeapSize and contain ArrayLists.  The memory 
> footprint is very small and it is very fast.  The issue is that this is an 
> estimate, although we can configure the precision, and we will most likely 
> always be conservative.  The estimateCapacity will always be less than the 
> actualCapacity, but it will be close. I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario.  Solution 3 is slow and horrible but it gives 
> us the exact answer.
> I would love to hear whether anyone else has other ideas on how to remedy this 
> problem.  I have code for trunk and 0.94 for all 3 ideas and can provide a 
> patch if the community thinks any of these approaches is a viable one.
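Proposal (2) above has no code in the report, so here is a hedged sketch of the idea (method and filter names are hypothetical, and Strings stand in for KeyValues): after filtering, if at least half of the entries were removed, call trimToSize() so the ArrayList's backing array, and thus its real heap footprint, shrinks to fit.

```java
import java.util.ArrayList;
import java.util.Iterator;

public class TrimDemo {
    // Sketch of proposal 2: trim the backing array once a removal
    // threshold (here 50%) is crossed during filtering.
    static void removeNonReplicable(ArrayList<String> kvs) {
        int originalSize = kvs.size();
        Iterator<String> it = kvs.iterator();
        while (it.hasNext()) {
            // Hypothetical filter standing in for the replication-scope check.
            if (!it.next().startsWith("keep")) {
                it.remove();
            }
        }
        if (kvs.size() <= originalSize / 2) {
            kvs.trimToSize();  // shrink capacity from originalSize down to size()
        }
    }

    public static void main(String[] args) {
        // Mirrors the reported scenario: huge capacity, few surviving entries.
        ArrayList<String> kvs = new ArrayList<>(25000);
        for (int i = 0; i < 25000; i++) {
            kvs.add(i < 6 ? "keep-" + i : "drop-" + i);
        }
        removeNonReplicable(kvs);
        System.out.println(kvs.size());
    }
}
```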



--
This message was sent by Atlassian JIRA
(v6.1#6144)

[jira] [Commented] (HBASE-9865) Reused WALEdits in replication may cause RegionServers to go OOM

2013-11-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825879#comment-13825879
 ] 

Lars Hofhansl commented on HBASE-9865:
--

It turns out this change breaks Phoenix. An older compiled coprocessor is 
using WALEdit directly, and it still refers to the getKeyValues() that 
returned a List.

> Reused WALEdits in replication may cause RegionServers to go OOM
> 
>
> Key: HBASE-9865
> URL: https://issues.apache.org/jira/browse/HBASE-9865
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5, 0.95.0
>Reporter: churro morales
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9865-0.94-v2.txt, 9865-0.94-v4.txt, 9865-sample-1.txt, 
> 9865-sample.txt, 9865-trunk-v2.txt, 9865-trunk-v3.txt, 9865-trunk-v4.txt, 
> 9865-trunk.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into gc storms and sometimes even OOM. 
> We noticed a case where it showed that there were around 25k WALEdits to 
> replicate, each one with an ArrayList of KeyValues.  The array list had a 
> capacity of around 90k (using 350KB of heap memory) but had around 6 non null 
> entries.
> When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a 
> WALEdit it removes all kv's that are scoped other than local.  
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or number of entries capacity.  
> Therefore, if we have a WALEdit with 25k entries and suppose all are removed: 
> the size of the ArrayList is 0 (we don't even count the collection's heap 
> size currently) but the capacity is ignored.
> This will yield a heapSize() of 0 bytes while in the best case it would be at 
> least 10 bytes (provided you pass initialCapacity and you have a 32-bit 
> JVM) 
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabilistic counter such as HyperLogLog and create something 
> like:
>   * class CapacityEstimateArrayList implements ArrayList
>   ** this class overrides all additive methods to update the 
> probabilistic counts
>   ** it includes one additional method called estimateCapacity 
> (we would take estimateCapacity - size() and fill in sizes for all references)
>   * Then we can do something like this in WALEdit.heapSize:
>   
> {code}
>   public long heapSize() {
> long ret = ClassSize.ARRAYLIST;
> for (KeyValue kv : kvs) {
>   ret += kv.heapSize();
> }
> long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
> ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
> if (scopes != null) {
>   ret += ClassSize.TREEMAP;
>   ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>   // TODO this isn't quite right, need help here
> }
> return ret;
>   }   
> {code}
> 2. In ReplicationSource.removeNonReplicableEdits() we know the size of the 
> array originally, and we provide some percentage threshold.  When that 
> threshold is met (50% of the entries have been removed) we can call 
> kvs.trimToSize()
> 3. in the heapSize() method for WALEdit we could use reflection (Please don't 
> shoot me for this) to grab the actual capacity of the list.  Doing something 
> like this:
> {code}
> public int getArrayListCapacity()  {
> try {
>   Field f = ArrayList.class.getDeclaredField("elementData");
>   f.setAccessible(true);
>   return ((Object[]) f.get(kvs)).length;
> } catch (Exception e) {
>   log.warn("Exception in trying to get capacity on ArrayList", e);
>   return kvs.size();
> }
> }
> {code}
> I am partial to (1), using HyperLogLog and creating a 
> CapacityEstimateArrayList; this is reusable throughout the code for other 
> classes that implement HeapSize and contain ArrayLists.  The memory 
> footprint is very small and it is very fast.  The issue is that this is an 
> estimate, although we can configure the precision, and we will most likely 
> always be conservative.  The estimateCapacity will always be less than the 
> actualCapacity, but it will be close. I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario.  Solution 3 is slow and horrible but it gives 
> us the exact answer.
> I would love to hear whether anyone else has other ideas on how to remedy this 
> problem.  I have code for trunk and 0.94 for all 3 ideas and can provide a 
> patch if the community thinks any of these approaches is a viable one.

[jira] [Commented] (HBASE-9988) DOn't use HRI#getEncodedName in the client

2013-11-18 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825880#comment-13825880
 ] 

Nicolas Liochon commented on HBASE-9988:


We were just putting the failure in an internal array, without publishing it 
anywhere. Moreover, it happened only if log.trace was on, so it was not really 
useful in practice. We now log the errors (at the info level) after a 
configurable number of retries.

> DOn't use HRI#getEncodedName in the client
> --
>
> Key: HBASE-9988
> URL: https://issues.apache.org/jira/browse/HBASE-9988
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9988.v1.patch, 9988.v2.patch
>
>
> This function does a lazy initialisation. It costs memory and it creates a 
> synchronisation point.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-9358) Possible invalid iterator in ServerManager#processQueuedDeadServers()

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-9358.
---

Resolution: Later

> Possible invalid iterator in ServerManager#processQueuedDeadServers()
> -
>
> Key: HBASE-9358
> URL: https://issues.apache.org/jira/browse/HBASE-9358
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Trivial
>
> serverIterator is used this way:
> {code}
> Iterator<ServerName> serverIterator = queuedDeadServers.iterator();
> while (serverIterator.hasNext()) {
>   ServerName tmpServerName = serverIterator.next();
>   expireServer(tmpServerName);
>   serverIterator.remove();
> {code}
> expireServer() modifies the Iterable "this.queuedDeadServers", which invalidates 
> the iterator "serverIterator".
> The call to remove() is then applied to an invalidated iterator "serverIterator".
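One conventional way to avoid the invalid-iterator hazard described above is to iterate over a snapshot copy and clear the original afterwards, so the side-effecting call can never invalidate a live iterator. A hedged sketch (the collection and the mutation inside expireServer() are hypothetical simplifications, not the ServerManager code):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SnapshotIterationDemo {
    static final Set<String> queuedDeadServers = new HashSet<>();

    // Pretend expireServer() can mutate queuedDeadServers as a side effect,
    // which would invalidate any live iterator over that set.
    static void expireServer(String server) {
        queuedDeadServers.remove(server + "-replica");  // hypothetical mutation
    }

    public static void main(String[] args) {
        queuedDeadServers.add("rs1");
        queuedDeadServers.add("rs1-replica");
        // Safe pattern: iterate over a snapshot copy, then clear the original,
        // so expireServer() cannot invalidate the iterator we are using.
        List<String> snapshot = new ArrayList<>(queuedDeadServers);
        for (String server : snapshot) {
            expireServer(server);
        }
        queuedDeadServers.clear();
        System.out.println(queuedDeadServers.size());
    }
}
```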



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding

2013-11-18 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825873#comment-13825873
 ] 

Nick Dimiduk commented on HBASE-9893:
-

FYI, I retained the assertion and preserved the value of 't' in the general case 
for fear of some other bug cropping up and producing invalid data. It is my 
intention to retain these assertions for a few more releases while the new code 
is exercised more thoroughly.

> Incorrect assert condition in OrderedBytes decoding
> ---
>
> Key: HBASE-9893
> URL: https://issues.apache.org/jira/browse/HBASE-9893
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: He Liangliang
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-9893.00.patch, HBASE-9893.patch
>
>
> The following assert condition is incorrect when decoding blob var byte array.
> {code}
> assert t == 0 : "Unexpected bits remaining after decoding blob.";
> {code}
> When the number of bytes to decode is a multiple of 8 (i.e. the original number 
> of bytes is a multiple of 7), this assert may fail.
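The 8-to-7 boundary case can be sketched numerically. Assuming the blob-var encoding packs 7 payload bits per encoded byte (ignoring any terminator byte; the helper name here is hypothetical), n input bytes carry 8n bits and expand to ceil(8n/7) encoded bytes, so an input length divisible by 7 yields an encoded length divisible by 8, which is exactly the case the report describes:

```java
public class BlobVarLenDemo {
    // Assumed model: 7 payload bits per encoded byte, so n input bytes
    // (8n bits) expand to ceil(8n / 7) encoded bytes.
    static int encodedLength(int inputBytes) {
        return (8 * inputBytes + 6) / 7;  // integer form of ceil(8n/7)
    }

    public static void main(String[] args) {
        System.out.println(encodedLength(7));   // input multiple of 7 -> encoded multiple of 8
        System.out.println(encodedLength(14));
        System.out.println(encodedLength(6));
    }
}
```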



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HBASE-9812) Intermittent TestSplitLogManager#testMultipleResubmits failure

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-9812.
---

Resolution: Cannot Reproduce

> Intermittent TestSplitLogManager#testMultipleResubmits failure
> --
>
> Key: HBASE-9812
> URL: https://issues.apache.org/jira/browse/HBASE-9812
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From 
> https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/801/testReport/org.apache.hadoop.hbase.master/TestSplitLogManager/testMultipleResubmits/
>  :
> {code}
> junit.framework.AssertionFailedError: Waiting timed out after [9,600] msec
>   at junit.framework.Assert.fail(Assert.java:57)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:193)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:146)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitFor(HBaseTestingUtility.java:3220)
>   at 
> org.apache.hadoop.hbase.master.TestSplitLogManager.waitForCounter(TestSplitLogManager.java:164)
>   at 
> org.apache.hadoop.hbase.master.TestSplitLogManager.waitForCounter(TestSplitLogManager.java:157)
>   at 
> org.apache.hadoop.hbase.master.TestSplitLogManager.testMultipleResubmits(TestSplitLogManager.java:280)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> 2013-10-21 11:52:15,148 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(430): 
> split-log-manager-tests145fa180-5cc5-4165-9d67-4073ab3f921b-0x141dadbb932 
> Set watcher on znode that does not yet exist, /hbase/splitWAL/foo%2F1
> 2013-10-21 11:52:15,148 DEBUG [pool-1-thread-1] 
> master.TestSplitLogManager(186): waiting for task node creation
> 2013-10-21 11:52:15,164 DEBUG [pool-1-thread-1-EventThread] 
> zookeeper.ZooKeeperWatcher(310): 
> split-log-manager-tests145fa180-5cc5-4165-9d67-4073ab3f921b-0x141dadbb932 
> Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
> path=/hbase/splitWAL/foo%2F1
> 2013-10-21 11:52:15,164 DEBUG [pool-1-thread-1-EventThread] 
> regionserver.TestMasterAddressManager$NodeCreationListener(107): 
> nodeCreated(/hbase/splitWAL/foo%2F1)
> 2013-10-21 11:52:15,164 DEBUG [pool-1-thread-1] 
> master.TestSplitLogManager(188): task created
> 2013-10-21 11:52:15,164 DEBUG [pool-1-thread-1-EventThread] 
> master.SplitLogManager(711): put up splitlog task at znode 
> /hbase/splitWAL/foo%2F1
> 2013-10-21 11:52:15,166 DEBUG [pool-1-thread-1-EventThread] 
> master.SplitLogManager(753): task not yet acquired /hbase/splitWAL/foo%2F1 
> ver = 0
> 2013-10-21 11:52:15,193 DEBUG [pool-1-thread-1-EventThread] 
> zookeeper.ZooKeeperWatcher(310): 
> split-log-manager-tests145fa180-5cc5-4165-9d67-4073ab3f921b-0x141dadbb932 
> Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
> path=/hbase/splitWAL/foo%2F1
> 2013-10-21 11:52:15,194 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
> to [3,200] milli-secs(wait.for.ratio=[1])
> 2013-10-21 11:52:15,194 INFO  [pool-1-thread-1-EventThread] 
> master.SplitLogManager(826): task /hbase/splitWAL/foo%2F1 acquired by 
> worker1,1,1
> 2013-10-21 11:52:15,204 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
> to [9,600] milli-secs(wait.for.ratio=[1])
> 2013-10-21 11:52:15,247 INFO  
> [dummy-master,1,1.splitLogManagerTimeoutMonitor] 
> master.SplitLogManager$TimeoutMonitor(1408): total tasks = 1 unassigned = 0 
> tasks={/hbase/splitWAL/foo%2F1=last_update = 1382356335195 last_version = 1 
> cur_worker_name = worker1,1,1 status = in_progress incarnation = 0 resubmits 
> = 0 batch = installed = 1 done = 0 error = 0}
> 2013-10-21 11:52:20,250 INFO  
> [dummy-master,1,1.splitLogManagerTimeoutMonitor] 
> master.SplitLogManager$TimeoutMonitor(1408): total tasks = 1 unassigned = 0 
> tasks={/hbase/splitWAL/foo%2F1=last_update = 1382356335195 last_version = 1 
> cur_worker_name = worker1,1,1 status = in_progress incarnation = 0 resubmits 
> = 0 batch = installed = 1 done = 0 error = 0}
> 2013-10-21 11:52:21,251 INFO  
> [dummy-master,1,1.splitLogManagerTimeoutMonitor] master.SplitLogManager(875): 
> resubmitting task /hbase/splitWAL/foo%2F1
> 2013-10-21 11:52:24,808 DEBUG 
> [dummy-master,1,1.splitLogManagerTimeoutMonitor] 
> zookeeper.ZooKeeperWatcher(458): 
> split-log-manager-tests145fa180-5cc5-4165-9d67-4073ab3f921b-0x141dadbb932 
> Received InterruptedException, doing nothing here
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:485)
>   at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
>   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1264)
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:414)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:881)
>  

[jira] [Assigned] (HBASE-9629) SnapshotReferenceUtil#snapshot should catch RemoteWithExtrasException

2013-11-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-9629:
-

Assignee: (was: Ted Yu)

> SnapshotReferenceUtil#snapshot should catch RemoteWithExtrasException
> -
>
> Key: HBASE-9629
> URL: https://issues.apache.org/jira/browse/HBASE-9629
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 9629.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/7329//testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/
>  :
> {code}
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=snapshotAfterMerge table=test type=FLUSH } had an error.  Procedure 
> snapshotAfterMerge { waiting=[] done=[] }
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:208)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:219)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:123)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:94)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3156)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2705)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2638)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2645)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.snapshot(SnapshotTestingUtils.java:260)
>   at 
> org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:318)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=snapshotAfterMerge table=test type=FLUSH } had an error.  Procedure 
> snapshotAfterMerge { waiting=[] done=[] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via 
> Failed taking snapshot { ss=snapshotAfterMerge table=test type=FLUSH } due to 
> exception:Missing parent hfile for: 
> 9592c67505ab4cdc9d95a943

[jira] [Updated] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding

2013-11-18 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9893:


Status: Patch Available  (was: Open)

> Incorrect assert condition in OrderedBytes decoding
> ---
>
> Key: HBASE-9893
> URL: https://issues.apache.org/jira/browse/HBASE-9893
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: He Liangliang
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-9893.00.patch, HBASE-9893.patch
>
>
> The following assert condition is incorrect when decoding blob var byte array.
> {code}
> assert t == 0 : "Unexpected bits remaining after decoding blob.";
> {code}
> When the number of bytes to decode is a multiple of 8 (i.e. the original number 
> of bytes is a multiple of 7), this assert may fail.





[jira] [Updated] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding

2013-11-18 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9893:


Attachment: HBASE-9893.00.patch

Excellent catch. Attached is a patch that also includes updated test cases. 
Please let me know if you have other value permutations you'd like to see 
tested. It'd be nice to have a more thorough test suite around this code, a la 
the suite Orderly has.

From the commit message:

{quote}
Correct an invalid assumption in remaining assertion code around 
OrderedBytes#decodeVarBlob. When an encoded value contains a 1-bit in its LSB 
position and the length of the encoded byte array is divisible by 7, the value 
remaining in variable t will be 0x80, resulting in the failed assertion coming 
out of the decoding loop. This patch preserves the assertion for the general 
case by resetting 't' at the conclusion of the 7-byte cycle.
{quote}
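The mechanics are easier to see in a simplified sketch of 7-bit blob-var packing (illustrative Python, not the actual OrderedBytes implementation; function names are mine): each encoded byte carries 7 payload bits with the MSB as a continuation flag, so 7 source bytes expand to exactly 8 encoded bytes, and the decoder's leftover-bits accumulator `t` must be cleared as bits are consumed or residue survives a full cycle.

```python
def encode_blob_var(data: bytes) -> bytes:
    """Pack 7 payload bits per encoded byte; MSB marks continuation."""
    if not data:
        return b""
    out, bits, nbits = [], 0, 0
    for b in data:
        bits = (bits << 8) | b
        nbits += 8
        while nbits >= 7:
            nbits -= 7
            out.append(0x80 | ((bits >> nbits) & 0x7F))
    if nbits:  # left-pad the final partial group with zero bits
        out.append(0x80 | ((bits << (7 - nbits)) & 0x7F))
    out[-1] &= 0x7F  # last byte carries no continuation flag
    return bytes(out)

def decode_blob_var(enc: bytes) -> bytes:
    """Unpack; `t` plays the role of the accumulator in HBASE-9893."""
    out, t, nbits = [], 0, 0
    for e in enc:
        t = (t << 7) | (e & 0x7F)
        nbits += 7
        if nbits >= 8:
            nbits -= 8
            out.append((t >> nbits) & 0xFF)
            t &= (1 << nbits) - 1  # drop consumed bits, so t == 0 at cycle end
    return bytes(out)
```

Without the final masking line, decoding an input whose encoded length is a multiple of 8 can leave a stale bit in `t`, which is exactly what tripped the assertion.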

> Incorrect assert condition in OrderedBytes decoding
> ---
>
> Key: HBASE-9893
> URL: https://issues.apache.org/jira/browse/HBASE-9893
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: He Liangliang
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-9893.00.patch, HBASE-9893.patch
>
>
> The following assert condition is incorrect when decoding blob var byte array.
> {code}
> assert t == 0 : "Unexpected bits remaining after decoding blob.";
> {code}
> When the number of bytes to decode is a multiple of 8 (i.e. the original number 
> of bytes is a multiple of 7), this assert may fail.





[jira] [Updated] (HBASE-8369) MapReduce over snapshot files

2013-11-18 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-8369:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've finally committed this to trunk. Thanks everyone for reviews and 
discussions. 

Unfortunately, I did not have any time for the 0.94 backport. Not sure about the 
0.96 branch either. If we do the 0.94 backport, it seems that we should have it 
in 0.96 as well. 

> MapReduce over snapshot files
> -
>
> Key: HBASE-8369
> URL: https://issues.apache.org/jira/browse/HBASE-8369
> Project: HBase
>  Issue Type: New Feature
>  Components: mapreduce, snapshots
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0
>
> Attachments: HBASE-8369-0.94.patch, HBASE-8369-0.94_v2.patch, 
> HBASE-8369-0.94_v3.patch, HBASE-8369-0.94_v4.patch, HBASE-8369-0.94_v5.patch, 
> HBASE-8369-trunk_v1.patch, HBASE-8369-trunk_v2.patch, 
> HBASE-8369-trunk_v3.patch, hbase-8369_v0.patch, hbase-8369_v5.patch, 
> hbase-8369_v6.patch, hbase-8369_v7.patch, hbase-8369_v8.patch, 
> hbase-8369_v9.patch
>
>
> The idea is to add an InputFormat, which can run the mapreduce job over 
> snapshot files directly bypassing hbase server layer. The IF is similar in 
> usage to TableInputFormat, taking a Scan object from the user, but instead of 
> running from an online table, it runs from a table snapshot. We do one split 
> per region in the snapshot, and open an HRegion inside the RecordReader. A 
> RegionScanner is used internally for doing the scan without any HRegionServer 
> bits. 
> Users have been asking and searching for ways to run MR jobs by reading 
> directly from hfiles, so this allows new use cases if reading from stale data 
> is ok:
>  - Take snapshots periodically, and run MR jobs only on snapshots.
>  - Export snapshots to remote hdfs cluster, run the MR jobs at that cluster 
> without HBase cluster.
>  - (Future use case) Combine snapshot data with online hbase data: Scan from 
> yesterday's snapshot, but read today's data from online hbase cluster. 
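The "one split per region, driven by a Scan" planning step can be sketched as follows (illustrative Python; `plan_snapshot_splits` and its region map are hypothetical stand-ins, not the real TableSnapshotInputFormat API):

```python
def plan_snapshot_splits(regions, scan_start=b"", scan_stop=b""):
    """One input split per snapshot region whose key range overlaps the
    scan range; empty start/stop keys mean 'unbounded', as in HBase."""
    splits = []
    for name, (start, stop) in regions.items():
        # region [start, stop) overlaps scan [scan_start, scan_stop)
        starts_before_scan_end = not scan_stop or start < scan_stop
        ends_after_scan_start = not stop or stop > scan_start
        if starts_before_scan_end and ends_after_scan_start:
            splits.append(name)
    return splits
```

Each resulting split would then open the region's snapshot HFiles directly in the RecordReader, with no RegionServer involved.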





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825824#comment-13825824
 ] 

Ted Yu commented on HBASE-9969:
---

bq.  I actually removed the topScanner==null check from the above and the 
single file scanner was 50% faster.

This optimization would benefit both PriorityQueue and LoserTree 
implementations, right ?

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> A loser tree is a better data structure than a binary heap: it saves half of the 
> comparisons on each next(), though the time complexity is still O(log N).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is also cleaner and simpler to understand.
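For intuition, here is a minimal loser-tree k-way merge (an illustrative Python sketch of the data structure, not the KeyValueHeap patch itself): after each pop, only the winner's leaf-to-root path is replayed, costing one comparison per level, whereas a binary heap's sift-down can cost two comparisons per level.

```python
class LoserTree:
    """Tournament tree that stores match *losers*; ls[0] holds the winner."""
    def __init__(self, iterators):
        self.k = len(iterators)
        self.iters = iterators
        self.heads = [next(it, None) for it in iterators]
        self.ls = [self.k] * self.k  # index k acts as a -infinity sentinel
        for s in range(self.k - 1, -1, -1):
            self._adjust(s)

    def _key(self, i):
        if i == self.k:
            return (0,)                      # sentinel beats every real head
        h = self.heads[i]
        return (2,) if h is None else (1, h)  # exhausted iterators always lose

    def _adjust(self, s):
        t = (s + self.k) // 2                # parent of leaf s
        while t > 0:
            if self._key(s) > self._key(self.ls[t]):
                s, self.ls[t] = self.ls[t], s  # loser stays, winner climbs
            t //= 2
        self.ls[0] = s

    def pop(self):
        w = self.ls[0]
        v = self.heads[w]
        if v is None:
            return None                      # all inputs exhausted
        self.heads[w] = next(self.iters[w], None)
        self._adjust(w)                      # one leaf-to-root pass, log2(k) compares
        return v
```

A binary heap must compare both children at each level while sifting down; the loser tree only compares the climbing winner against the stored loser, which is where the ~50% comparison saving comes from.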





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-18 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825816#comment-13825816
 ] 

Matt Corgan commented on HBASE-9969:


Regarding adding 10 cols/row:
{quote}I tried this on my laptop, but it seems your case above is even faster than 
before. Maybe there is something wrong with my environment. I will try it on my 
devbox tomorrow.{quote}
Yes, you are right. I was profiling this weekend and confirmed the current heap 
is handling that situation favorably. Still good to test to make sure we don't 
lose this aspect!

I made a stripped down version of the PriorityQueue based heap to compare with 
the LoserTree.  It adds some counters to track the number of KV comparisons 
which is interesting to see.  I was seeing that PQ is faster for next(), 
especially with just 1 file, and LT is faster for reseek().  I'll try to post a 
patch tonight.

I was paying particular attention to this code at KeyValueHeap:103
{code}
  KeyValueScanner topScanner = this.heap.peek();
  if (topScanner == null ||
  this.comparator.compare(kvNext, topScanner.peek()) >= 0) {
this.heap.add(this.current);
this.current = pollRealKV();
  }
{code}
I can't figure out why we need to do a heap.add() and pollRealKV when 
topScanner==null.  I actually removed the topScanner==null check from the above 
and the single file scanner was 50% faster.  The whole test suite passed, so 
either it's not necessary, or we could use another unit test.  Maybe it has 
something to do with LazySeek?


> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: 9969-0.94.txt, hbase-9969-v2.patch, hbase-9969-v3.patch, 
> hbase-9969.patch, hbase-9969.patch, kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> A loser tree is a better data structure than a binary heap: it saves half of the 
> comparisons on each next(), though the time complexity is still O(log N).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is also cleaner and simpler to understand.





[jira] [Updated] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-18 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9973:
---

   Resolution: Fixed
Fix Version/s: 0.98.0
   Status: Resolved  (was: Patch Available)

committed to trunk and 96, thanks for the patch and the reviews!

> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: migration, security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9973-v2.patch, 9973-v2.patch, 9973.patch
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}hbase(main):002:0> scan 'hbase:acl'
> ROW         COLUMN+CELL
>  TestTable   column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable   column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_       column=l:root, timestamp=1384454767568, value=C
>  _acl_       column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl   column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_  column=l:tableAdmin, timestamp=1384454788035, value=A {code}
> As a result, users holding the 'A' permission lose it after the upgrade.
> Proposed fix:
> I see the fix being relatively straightforward. As part of the migration, 
> change any entries in the '_acl_' table with key '_acl_' into a new row with 
> key 'hbase:acl', all else being the same. And the old entry would be deleted.
> This can go into the standard migration script that we expect users to run.
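The proposed rewrite is a simple row-key rename; a sketch of the transformation (hypothetical helper operating on an in-memory row map, not the actual migration script):

```python
def migrate_acl_rows(rows):
    """Rewrite the legacy '_acl_' row key to 'hbase:acl', preserving the
    permission cells; every other row is carried over unchanged."""
    migrated = {}
    for row_key, cells in rows.items():
        new_key = "hbase:acl" if row_key == "_acl_" else row_key
        # merge, in case a 'hbase:acl' row already exists post-upgrade
        migrated.setdefault(new_key, {}).update(cells)
    return migrated
```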





[jira] [Commented] (HBASE-9992) [hbck] Refactor so that arbitrary -D cmdline options are included

2013-11-18 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825773#comment-13825773
 ] 

Elliott Clark commented on HBASE-9992:
--

AbstractHBaseTool ?

> [hbck] Refactor so that arbitrary -D cmdline options are included 
> --
>
> Key: HBASE-9992
> URL: https://issues.apache.org/jira/browse/HBASE-9992
> Project: HBase
>  Issue Type: Bug
>Reporter: Jonathan Hsieh
>
> A review of HBASE-9831 pointed out that -D options aren't being 
> passed into the configuration object used by hbck. This means overriding -D 
> options will not work unless special hooks are added for specific options. A first 
> attempt to fix this was in HBASE-9831, but it affected many other files.
> The right approach would be to create a new HbckTool class that has the 
> configured interface, change the existing HBaseFsck main to instantiate it and 
> have it parse args, and then create the HBaseFsck object inside run().





[jira] [Commented] (HBASE-9976) Don't create duplicated TableName objects

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825762#comment-13825762
 ] 

Hadoop QA commented on HBASE-9976:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614451/9976.v7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.master.balancer.TestFavoredNodeAssignmentHelper.testSecondaryAndTertiaryPlacementWithMultipleRacks(TestFavoredNodeAssignmentHelper.java:187)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7922//console

This message is automatically generated.

> Don't create duplicated TableName objects
> -
>
> Key: HBASE-9976
> URL: https://issues.apache.org/jira/browse/HBASE-9976
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9976.v1.patch, 9976.v4.patch, 9976.v6.patch, 
> 9976.v7.patch
>
>
> A profiling session shows that the table name is responsible for 25% of the 
> memory needed to keep the region locations. As well, comparisons will be faster 
> if two identical table names are a single Java object.
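The fix amounts to interning: hand out one shared object per distinct name. A minimal sketch in Python (HBase's actual TableName.valueOf cache differs in detail, and this version is not thread-safe):

```python
class TableName:
    """Interned table names: equal names share one object, so identity
    checks replace byte-wise comparison and duplicates cost no memory."""
    _cache = {}

    def __new__(cls, name: str):
        obj = cls._cache.get(name)
        if obj is None:
            obj = super().__new__(cls)
            obj.name = name
            cls._cache[name] = obj  # intern: future lookups reuse this object
        return obj
```

With interning, `a is b` suffices wherever `a == b` was needed, and region-location maps hold one TableName per table rather than one per region entry.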





[jira] [Commented] (HBASE-9992) [hbck] Refactor so that arbitrary -D cmdline options are included

2013-11-18 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825754#comment-13825754
 ] 

Nick Dimiduk commented on HBASE-9992:
-

Sounds like good cleanup.

> [hbck] Refactor so that arbitrary -D cmdline options are included 
> --
>
> Key: HBASE-9992
> URL: https://issues.apache.org/jira/browse/HBASE-9992
> Project: HBase
>  Issue Type: Bug
>Reporter: Jonathan Hsieh
>
> A review of HBASE-9831 pointed out that -D options aren't being 
> passed into the configuration object used by hbck. This means overriding -D 
> options will not work unless special hooks are added for specific options. A first 
> attempt to fix this was in HBASE-9831, but it affected many other files.
> The right approach would be to create a new HbckTool class that has the 
> configured interface, change the existing HBaseFsck main to instantiate it and 
> have it parse args, and then create the HBaseFsck object inside run().





[jira] [Created] (HBASE-9992) [hbck] Refactor so that arbitrary -D cmdline options are included

2013-11-18 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-9992:
-

 Summary: [hbck] Refactor so that arbitrary -D cmdline options are 
included 
 Key: HBASE-9992
 URL: https://issues.apache.org/jira/browse/HBASE-9992
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh


A review of HBASE-9831 pointed out that -D options aren't being passed 
into the configuration object used by hbck. This means overriding -D options 
will not work unless special hooks are added for specific options. A first 
attempt to fix this was in HBASE-9831, but it affected many other files.

The right approach would be to create a new HbckTool class that has the 
configured interface, change the existing HBaseFsck main to instantiate it and 
have it parse args, and then create the HBaseFsck object inside run().
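What's being asked for is a GenericOptionsParser-style pass that folds -D key=value pairs into the configuration before the tool sees the remaining arguments. An illustrative sketch (hypothetical helper, not the actual AbstractHBaseTool code):

```python
def parse_d_options(argv, conf):
    """Fold -D pairs (either '-D k=v' or '-Dk=v') into conf and return
    the remaining, non-generic arguments for the tool to parse."""
    rest = []
    args = iter(argv)
    for a in args:
        if a == "-D":
            key, _, val = next(args).partition("=")
            conf[key] = val
        elif a.startswith("-D"):
            key, _, val = a[2:].partition("=")
            conf[key] = val
        else:
            rest.append(a)
    return rest
```

Run before the tool's own argument parsing, this makes any property (e.g. hbasefsck.numthreads) overridable without per-option hooks.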





[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825748#comment-13825748
 ] 

Jonathan Hsieh commented on HBASE-9831:
---

Filed a follow on issue to fix this class of problem and for all at HBASE-9992.

> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic option mechanism to pass the _'hbasefsck.numthreads'_ 
> property to _'hbase hbck'_, but it does not pick up the new value
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads running than the 5 we set via the generic 
> option
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}





[jira] [Commented] (HBASE-9990) HTable uses the conf for each "newCaller"

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825744#comment-13825744
 ] 

Hadoop QA commented on HBASE-9990:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614448/9990.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7921//console

This message is automatically generated.

> HTable uses the conf for each "newCaller"
> -
>
> Key: HBASE-9990
> URL: https://issues.apache.org/jira/browse/HBASE-9990
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9990.v1.patch
>
>
> You can construct a RpcRetryingCallerFactory, but the conf is actually read 
> for each caller creation. Reading the conf is expensive, and a 
> profiling session shows it. If we want to send hundreds of thousands of 
> queries per second, we should not do that.
> RpcRetryingCallerFactory.newCaller is called for each get, for example.
> This is not a regression; we have something similar in 0.94.
> On 0.96, we see the creation of 15739712 bytes of java.util.regex.Matcher 
> objects after a few thousand calls to "get".
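The direction of the fix is to read the configuration once, at factory construction, so newCaller() becomes a cheap allocation. A sketch under that assumption (illustrative Python with hypothetical setting names; not the committed patch):

```python
class RpcRetryingCaller:
    def __init__(self, pause_ms, retries):
        self.pause_ms = pause_ms
        self.retries = retries

class RpcRetryingCallerFactory:
    """Hoist conf lookups out of the per-call hot path: Configuration.get*
    parses strings (and runs variable-substitution regexes) on every call."""
    def __init__(self, conf):
        self.pause_ms = int(conf.get("hbase.client.pause", "100"))
        self.retries = int(conf.get("hbase.client.retries.number", "31"))

    def new_caller(self):
        # no conf access here: just copy the pre-parsed values
        return RpcRetryingCaller(self.pause_ms, self.retries)
```

Since every get() goes through new_caller(), caching the parsed values also eliminates the regex Matcher churn seen in the profile.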





[jira] [Commented] (HBASE-9988) Don't use HRI#getEncodedName in the client

2013-11-18 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825738#comment-13825738
 ] 

Nick Dimiduk commented on HBASE-9988:
-

The change in retry logic -- you're no longer gathering the failures? That 
feature is being dropped?

> Don't use HRI#getEncodedName in the client
> --
>
> Key: HBASE-9988
> URL: https://issues.apache.org/jira/browse/HBASE-9988
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9988.v1.patch, 9988.v2.patch
>
>
> This function does lazy initialisation. It costs memory and creates a 
> synchronisation point.
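The fix direction (compute the value eagerly so there is no synchronised lazy-init check on every read) can be sketched like this; illustrative Python, where the MD5-hex derivation mirrors how HBase encodes region names but the class itself is a stand-in:

```python
import hashlib

class RegionInfo:
    """Compute the encoded name once in the constructor: no lock and no
    'is it initialised yet?' branch on each access."""
    def __init__(self, region_name: bytes):
        self.region_name = region_name
        self.encoded_name = hashlib.md5(region_name).hexdigest()
```

Compared with a lazily initialised field, this trades one unconditional hash at construction for lock-free, branch-free reads everywhere else.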





[jira] [Commented] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825735#comment-13825735
 ] 

Jonathan Hsieh commented on HBASE-9831:
---

[~takeshi.miao] Thanks for the patch. committed to 0.94, 0.96 and trunk.

> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic option mechanism to pass the _'hbasefsck.numthreads'_ 
> property to _'hbase hbck'_, but it does not pick up the new value
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more threads running than the 5 we set via the generic 
> option
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}





[jira] [Updated] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9831:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option
> --
>
> Key: HBASE-9831
> URL: https://issues.apache.org/jira/browse/HBASE-9831
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.94.12
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
>  Labels: hbck
> Fix For: 0.98.0, 0.96.1, 0.94.15
>
> Attachments: HBASE-9831-0.94-v02.patch, HBASE-9831-0.94-v03.patch, 
> HBASE-9831-trunk-v01.patch, HBASE-9831-trunk-v02.patch, 
> HBASE-9831-trunk-v03.patch, HBASE-9831.v01.patch
>
>
> We use the generic option mechanism to pass the _'hbasefsck.numthreads'_ property to 
> _'hbase hbck'_, but hbck does not honor the new value
> {code}
> hbase hbck -D hbasefsck.numthreads=5
> {code}
> We can still see more than the 5 threads we set via the generic 
> option
> {code}
> [2013-10-24 
> 09:25:02,561][pool-2-thread-6][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,562][pool-2-thread-10][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,565][pool-2-thread-13][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,566][pool-2-thread-11][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,567][pool-2-thread-9][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,568][pool-2-thread-12][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,570][pool-2-thread-7][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> [2013-10-24 
> 09:25:02,571][pool-2-thread-14][DEBUG][org.apache.hadoop.security.UserGroupInformation]:
>  PrivilegedAction as:hbase/spn-d-hdn1.s...@ispn.trendmicro.com 
> (auth:KERBEROS) from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) (UserGroupInformation.java:1430)
> {code}
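The failure mode described above is the classic one for Hadoop-style tools: a `-D key=value` generic option only takes effect if the command line is parsed into the tool's configuration *before* the code that reads the property runs (in Hadoop this is what `GenericOptionsParser`/`ToolRunner` do). The sketch below is a minimal, pure-Java illustration of that idea, not HBase's actual hbck code; the class, method, and default value are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: "-D key=value" arguments must be folded into the tool's
// configuration before anything (such as a thread pool) reads the property.
// This is a sketch of the generic-option mechanism, not HBase's hbck code.
public class GenericOptionSketch {
    static Map<String, String> parseGenericOptions(String[] args,
                                                   Map<String, String> defaults) {
        Map<String, String> conf = new HashMap<>(defaults);
        for (int i = 0; i < args.length; i++) {
            if ("-D".equals(args[i]) && i + 1 < args.length) {
                String[] kv = args[++i].split("=", 2);
                if (kv.length == 2) {
                    conf.put(kv[0], kv[1]); // command line overrides the default
                }
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = new HashMap<>();
        defaults.put("hbasefsck.numthreads", "50"); // hypothetical default
        Map<String, String> conf = parseGenericOptions(
                new String[] {"-D", "hbasefsck.numthreads=5"}, defaults);
        // The thread pool must be sized from conf, not from the defaults map.
        System.out.println(conf.get("hbasefsck.numthreads"));
    }
}
```

The bug pattern the issue describes is sizing the pool from the defaults (or an unparsed `Configuration`) instead of the merged one.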



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9831:
--

Assignee: takeshi.miao




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9831) 'hbasefsck.numthreads' property isn't passed to hbck via cmdline -D option

2013-11-18 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9831:
--

Summary: 'hbasefsck.numthreads' property isn't passed to hbck via cmdline 
-D option  (was: 'hbasefsck.numthreads' property can not pass to hbck via 
generic option)




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825693#comment-13825693
 ] 

Hadoop QA commented on HBASE-9973:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12614439/9973-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
10 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7919//console

This message is automatically generated.

> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: migration, security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.96.1
>
> Attachments: 9973-v2.patch, 9973-v2.patch, 9973.patch
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}
> hbase(main):002:0> scan 'hbase:acl'
> ROW          COLUMN+CELL
>  TestTable    column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable    column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_        column=l:root, timestamp=1384454767568, value=C
>  _acl_        column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl    column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless

[jira] [Commented] (HBASE-9989) Add a test on get in TestClientNoCluster

2013-11-18 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825649#comment-13825649
 ] 

Nicolas Liochon commented on HBASE-9989:


And as a side note, it seems the protobuf team considers byte copies and 
object creation a non-issue in Java:
array copy is cheap:  http://code.google.com/p/protobuf/issues/detail?id=374
creating objects is cheap:  
http://comments.gmane.org/gmane.comp.lib.protocol-buffers.general/2667

someone created this: http://code.google.com/p/protobuf-gcless/ :-)



> Add a test on get in TestClientNoCluster
> 
>
> Key: HBASE-9989
> URL: https://issues.apache.org/jira/browse/HBASE-9989
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9989.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-18 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-9973:
---

Component/s: migration

> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: migration, security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.96.1
>
> Attachments: 9973-v2.patch, 9973-v2.patch, 9973.patch
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}
> hbase(main):002:0> scan 'hbase:acl'
> ROW          COLUMN+CELL
>  TestTable    column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable    column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_        column=l:root, timestamp=1384454767568, value=C
>  _acl_        column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl    column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_column=l:tableAdmin, timestamp=1384454788035, 
> value=A {code}
> As a result, 
> Proposed fix:
> I see the fix being relatively straightforward. As part of the migration, 
> change any entries in the '_acl_' table with key '_acl_' into a new row with 
> key 'hbase:acl', all else being the same. And the old entry would be deleted.
> This can go into the standard migration script that we expect users to run.
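The proposed rewrite is easy to model: delete the row keyed `_acl_` and re-insert its cells under the new key `hbase:acl`. The sketch below simulates that step over an in-memory model of the ACL table (row key → qualifier → permission string); the real migration would perform the same rewrite with Scan/Put/Delete against the `hbase:acl` table, and the class and method names here are illustrative, not HBase API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// In-memory simulation of the proposed ACL migration step: re-key the stale
// '_acl_' row as 'hbase:acl' and drop the old row. Not real HBase client code.
public class AclRowMigration {
    static void migrateAclRow(Map<String, Map<String, String>> aclTable) {
        Map<String, String> old = aclTable.remove("_acl_"); // delete stale row
        if (old != null) {
            // Re-key under the namespaced name, merging with any permissions
            // already written against 'hbase:acl'.
            aclTable.computeIfAbsent("hbase:acl", k -> new LinkedHashMap<>())
                    .putAll(old);
        }
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> table = new LinkedHashMap<>();
        Map<String, String> oldRow = new LinkedHashMap<>();
        oldRow.put("l:root", "C");
        oldRow.put("l:tableAdmin", "A"); // the admin grant that was being lost
        table.put("_acl_", oldRow);
        Map<String, String> newRow = new LinkedHashMap<>();
        newRow.put("l:root", "C");
        table.put("hbase:acl", newRow);

        migrateAclRow(table);
        System.out.println(table);
    }
}
```

After the rewrite the `l:tableAdmin=A` grant lives under `hbase:acl`, so the admin user keeps the permission post-upgrade.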



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9989) Add a test on get in TestClientNoCluster

2013-11-18 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825633#comment-13825633
 ] 

Nicolas Liochon commented on HBASE-9989:


I suppose that the solution to this is to create the builders with the HTable 
objects, then to pass them all along...

> Add a test on get in TestClientNoCluster
> 
>
> Key: HBASE-9989
> URL: https://issues.apache.org/jira/browse/HBASE-9989
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9989.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9976) Don't create duplicated TableName objects

2013-11-18 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825628#comment-13825628
 ] 

Nicolas Liochon commented on HBASE-9976:


v7, without the new fancy 1.7 features :-)

> Don't create duplicated TableName objects
> -
>
> Key: HBASE-9976
> URL: https://issues.apache.org/jira/browse/HBASE-9976
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9976.v1.patch, 9976.v4.patch, 9976.v6.patch, 
> 9976.v7.patch
>
>
> Profiling shows that the table name is responsible for 25% of the memory 
> needed to keep the region locations. In addition, comparisons will be faster if 
> two identical table names are a single Java object.
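The idea behind the patch is interning: keep one canonical object per distinct table name so region locations share it (saving memory) and equality can become a reference check. A minimal sketch of such an interner, with illustrative names rather than the actual HBase `TableName` API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of name interning: one canonical instance per distinct table name.
// Class and method names are illustrative, not HBase's TableName API.
public class TableNameInterner {
    private static final ConcurrentMap<String, TableNameInterner> CACHE =
            new ConcurrentHashMap<>();
    private final String name;

    private TableNameInterner(String name) { this.name = name; }

    public static TableNameInterner valueOf(String name) {
        // computeIfAbsent guarantees exactly one instance per distinct name,
        // even under concurrent callers.
        return CACHE.computeIfAbsent(name, TableNameInterner::new);
    }

    public String getName() { return name; }

    public static void main(String[] args) {
        // Two lookups of the same name yield the same object, so '==' works.
        System.out.println(valueOf("t1") == valueOf("t1"));
    }
}
```

One design caveat: an unbounded strong cache like this never releases entries, so a production interner might cap its size or hold entries weakly if tables can be dropped.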



--
This message was sent by Atlassian JIRA
(v6.1#6144)
