[jira] [Commented] (HBASE-8565) stop-hbase.sh clean up: backup master

2013-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728471#comment-13728471
 ] 

Hadoop QA commented on HBASE-8565:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12595716/HBASE-8565-v1-0.94.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6587//console

This message is automatically generated.

> stop-hbase.sh clean up: backup master
> -
>
> Key: HBASE-8565
> URL: https://issues.apache.org/jira/browse/HBASE-8565
> Project: HBase
>  Issue Type: Bug
>  Components: master, scripts
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-8565-v1-0.94.patch, HBASE-8565-v1-trunk.patch
>
>
> In stop-hbase.sh:
> {code}
>   # TODO: store backup masters in ZooKeeper and have the primary send them a 
> shutdown message
>   # stop any backup masters
>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
> --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
> {code}
> After HBASE-5213, stop-hbase.sh -> hbase master stop will bring down the 
> backup master too via the cluster status znode.
> We should not need the above code anymore.
> Another issue arises when the current master has died and the backup master 
> has become the active master.
> {code}
> nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
>--config "${HBASE_CONF_DIR}" \
>master stop "$@" > "$logout" 2>&1 < /dev/null &
> waitForProcessEnd `cat $pid` 'stop-master-command'
> {code}
> We can still issue 'stop-hbase.sh' from the old master.
> stop-hbase.sh -> hbase master stop -> look for active master -> request 
> shutdown
> This process still works.
> But the waitForProcessEnd statement will not work, since the local master pid 
> is no longer relevant.
> What is the best way to handle this case?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8816) Add support of loading multiple tables into LoadTestTool

2013-08-02 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-8816:
-

Attachment: hbase-8816-v3-trunk.patch

Uploaded the trunk patch; will check it in soon after 0.94.11 is out.

> Add support of loading multiple tables into LoadTestTool
> 
>
> Key: HBASE-8816
> URL: https://issues.apache.org/jira/browse/HBASE-8816
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.94.9
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: hbase-8816.patch, hbase-8816-v1.patch, 
> hbase-8816-v2.patch, hbase-8816-v3.patch, hbase-8816-v3-trunk.patch
>
>
> Introducing an optional parameter 'num_tables' into LoadTestTool. When it is 
> specified with a positive integer n, LoadTestTool will load n tables in 
> parallel. The -tn parameter value becomes the table name prefix; tables are 
> created with names in the format _1..._n. A sample command line "-tn test 
> -num_tables 2" will create & load the tables "test_1" and "test_2".
> The motivation is to add a handy way to load multiple tables concurrently. In 
> addition, we could use this option to test resource leakage of long-running 
> clients.
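As a hedged illustration of the naming scheme described in the issue (the class and helper name below are mine, not LoadTestTool's actual code), the prefix-plus-suffix expansion amounts to:

```java
// Illustrative sketch only: LoadTestTool's real implementation differs.
// This just demonstrates the "-tn <prefix> -num_tables <n>" naming scheme
// (prefix_1 .. prefix_n) described above.
public class TableNameSketch {
    static String[] expand(String prefix, int numTables) {
        String[] names = new String[numTables];
        for (int i = 0; i < numTables; i++) {
            // Suffixes are 1-based, matching "test_1" and "test_2" in the example.
            names[i] = prefix + "_" + (i + 1);
        }
        return names;
    }

    public static void main(String[] args) {
        // Equivalent of "-tn test -num_tables 2"
        System.out.println(String.join(", ", expand("test", 2))); // test_1, test_2
    }
}
```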



[jira] [Commented] (HBASE-8561) [replication] Don't instantiate a ReplicationSource if the passed implementation isn't found

2013-08-02 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728465#comment-13728465
 ] 

Gabriel Reid commented on HBASE-8561:
-

Wouldn't it be easier to just throw an exception up the stack and then (I 
assume) abort the regionserver? That way it would be immediately clear that 
there is an issue, and it would remove the need for null checks throughout the 
replication code.
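A minimal sketch of the fail-fast behavior suggested here (the class and method names are hypothetical, not HBase's actual ReplicationSourceManager API): instead of catching ClassNotFoundException and silently falling back to the default source, rethrow so the misconfiguration surfaces immediately:

```java
// Hypothetical sketch: fail fast on a missing replication source class
// instead of logging a WARN and defaulting to ReplicationSource.
public class SourceFactory {
    public static Object newSource(String implClassName) throws Exception {
        try {
            return Class.forName(implClassName).getDeclaredConstructor().newInstance();
        } catch (ClassNotFoundException e) {
            // Surfacing the error here would abort the regionserver with a
            // clear message, rather than failing much later with an
            // unrelated-looking ZooKeeper NoNodeException.
            throw new IllegalStateException(
                "Configured replication source class not found: " + implClassName, e);
        }
    }

    public static void main(String[] args) {
        try {
            newSource("Some.Other.ReplicationSource.Implementation");
        } catch (Exception e) {
            System.out.println("fail-fast: " + e.getMessage());
        }
    }
}
```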

> [replication] Don't instantiate a ReplicationSource if the passed 
> implementation isn't found
> 
>
> Key: HBASE-8561
> URL: https://issues.apache.org/jira/browse/HBASE-8561
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6.1
>Reporter: Jean-Daniel Cryans
> Fix For: 0.98.0, 0.95.2
>
>
> I was debugging a case where the region servers were dying with:
> {noformat}
> ABORTING region server someserver.com,60020,1368123702806: Writing 
> replication status 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for 
> /hbase/replication/rs/someserver.com,60020,1368123702806/etcetcetc/somserver.com%2C60020%2C1368123702740.1368123705091
>  
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) 
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266) 
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:354)
>  
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:846) 
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:898) 
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:892) 
> at 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper.writeReplicationStatus(ReplicationZookeeper.java:558)
>  
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:154)
>  
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:638)
>  
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:387)
> {noformat}
> Turns out the problem really was:
> {noformat}
> 2013-05-09 11:21:45,625 WARN 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager: 
> Passed replication source implementation throws errors, defaulting to 
> ReplicationSource
> java.lang.ClassNotFoundException: Some.Other.ReplicationSource.Implementation
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:186)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.getReplicationSource(ReplicationSourceManager.java:324)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.addSource(ReplicationSourceManager.java:202)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.init(ReplicationSourceManager.java:174)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.Replication.startReplicationService(Replication.java:171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1583)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1042)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:698)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
> So I think instantiating a ReplicationSource here is wrong and makes it 
> harder to debug.



[jira] [Updated] (HBASE-8816) Add support of loading multiple tables into LoadTestTool

2013-08-02 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-8816:
-

Description: 
Introducing an optional parameter 'num_tables' into LoadTestTool. When it is 
specified with a positive integer n, LoadTestTool will load n tables in 
parallel. The -tn parameter value becomes the table name prefix; tables are 
created with names in the format _1..._n. A sample command line "-tn test 
-num_tables 2" will create & load the tables "test_1" and "test_2".

The motivation is to add a handy way to load multiple tables concurrently. In 
addition, we could use this option to test resource leakage of long-running 
clients.

  was:
Introducing an optional parameter 'concurrent_factor' into LoadTestTool. When 
it's specified with positive integer n, LoadTestTool will load n tables 
parallely. -tn parameter value becomes table name prefix. Tables are created 
with name in format _1..._n. A sample command line "-tn test 
-concurrent_factor 2" will create & load tables:"test_1" and "test_2"

The motivation is to add a handy way to load multiple tables concurrently. In 
addition, we could use this option to test resource leakage of long running 
clients.


> Add support of loading multiple tables into LoadTestTool
> 
>
> Key: HBASE-8816
> URL: https://issues.apache.org/jira/browse/HBASE-8816
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.94.9
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: hbase-8816.patch, hbase-8816-v1.patch, 
> hbase-8816-v2.patch, hbase-8816-v3.patch, hbase-8816-v3-trunk.patch
>
>



[jira] [Updated] (HBASE-8816) Add support of loading multiple tables into LoadTestTool

2013-08-02 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-8816:
-

Fix Version/s: (was: 0.94.11)
   0.94.12

> Add support of loading multiple tables into LoadTestTool
> 
>
> Key: HBASE-8816
> URL: https://issues.apache.org/jira/browse/HBASE-8816
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.94.9
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: hbase-8816.patch, hbase-8816-v1.patch, 
> hbase-8816-v2.patch, hbase-8816-v3.patch
>
>
> Introducing an optional parameter 'concurrent_factor' into LoadTestTool. When 
> it is specified with a positive integer n, LoadTestTool will load n tables in 
> parallel. The -tn parameter value becomes the table name prefix; tables are 
> created with names in the format _1..._n. A sample command line "-tn test 
> -concurrent_factor 2" will create & load the tables "test_1" and "test_2".
> The motivation is to add a handy way to load multiple tables concurrently. In 
> addition, we could use this option to test resource leakage of long-running 
> clients.



[jira] [Updated] (HBASE-8816) Add support of loading multiple tables into LoadTestTool

2013-08-02 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-8816:
-

Attachment: hbase-8816-v3.patch

> Add support of loading multiple tables into LoadTestTool
> 
>
> Key: HBASE-8816
> URL: https://issues.apache.org/jira/browse/HBASE-8816
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.94.9
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: hbase-8816.patch, hbase-8816-v1.patch, 
> hbase-8816-v2.patch, hbase-8816-v3.patch
>
>



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728461#comment-13728461
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Wouldn't the safety net be just as well served (or even better - as in safer) 
by the server?

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.addendum2, 9115-trunk.txt
>
>
> I use the HBase Java API to append the values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") to 3 columns.
> Only 2 of these 3 columns end up with the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Byte

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728458#comment-13728458
 ] 

Ted Yu commented on HBASE-9115:
---

Append functionality has been available for a while.
This is the first report of a client sending unsorted KeyValues to the server.

That suggests most clients adhere to sorting on their side.
My patch is just a safety net for the above.
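A self-contained sketch of the ordering invariant in question (compareBytes mirrors the unsigned lexicographic comparison HBase uses; the class itself is illustrative, not the actual client code): a client-side "safety net" would just sort the qualifiers before building the Append, so the row's cells arrive in the order the server-side merge expects:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: the append path expects a row's cells sorted by
// family/qualifier, using unsigned lexicographic byte[] ordering.
public class ClientSortSketch {
    // Unsigned lexicographic byte[] comparison, like HBase's Bytes.compareTo.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // The three qualifiers from the bug report, in the order the test added them.
        List<byte[]> qualifiers = new ArrayList<>(Arrays.asList(
            "ulbytes".getBytes(), "dlbytes".getBytes(), "tbytes".getBytes()));
        qualifiers.sort(ClientSortSketch::compareBytes);
        for (byte[] q : qualifiers) System.out.println(new String(q));
        // prints dlbytes, tbytes, ulbytes -- sorted, so no column's append
        // is silently skipped or overwritten on the server.
    }
}
```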


[jira] [Commented] (HBASE-8663) a HBase Shell command to list the tables replicated from current cluster

2013-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728459#comment-13728459
 ] 

Hadoop QA commented on HBASE-8663:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12595708/HBASE-8663-trunk-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning message.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6586//console

This message is automatically generated.

> a HBase Shell command to list the tables replicated from current cluster
> 
>
> Key: HBASE-8663
> URL: https://issues.apache.org/jira/browse/HBASE-8663
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, shell
> Environment: clusters setup as Master and Slave for replication of 
> tables 
>Reporter: Demai Ni
>Assignee: Demai Ni
>Priority: Critical
> Attachments: HBASE-8663.PATCH, HBASE-8663-trunk-v0.patch, 
> HBASE-8663-trunk-v1.patch, HBASE-8663-trunk-v2.patch, HBASE-8663-v2.PATCH
>
>
> Thanks for the discussion and the very good suggestions. I'd reduce the scope 
> of this jira to only display the tables replicated from the current cluster. 
> Since there is currently no good (accurate and consistent) way to flag a 
> table on the slave cluster, this jira will not cover that scenario; instead, 
> the patch will be flexible enough to adapt to it, and a follow-up JIRA will 
> be opened to address it. 
> The shell command and output will look like the following. Since all 
> replication is 'global', there is no need to display the cluster name here. 
> In the future, the command can be extended to other scenarios, such as 1) 
> replication to only selected peers or 2) indicating table:colfam on the 
> slave side
> {code: title=hbase shell command:list_replicated_tables |borderStyle=solid}
> hbase(main):001:0> list_replicated_tables
> TABLE:COLUMNFAMILY   ReplicationType  
>  
>  t1_dn:cf1   GLOBAL   
>  
>  t2_dn:cf2   GLOBAL 

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728457#comment-13728457
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Addendum 2 is good too.
Why do you prefer sorting on the client?


[jira] [Updated] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9115:
--

Attachment: 9115-trunk.addendum2

How about addendum 2?

I prefer the sorting to be done by the client.

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: Mac OS X 10.8.4, HBase in pseudo-distributed mode, 
> Hadoop v1.2.0, client based on the HBase Java API.
> *hdfs-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.addendum2, 9115-trunk.txt
>
>
> Using the HBase Java API, I append the values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") to 3 columns.
> Only 2 of the 3 columns end up with the result "one two three".
> *Output from the hbase shell:*
> {noformat}
> hbase(main):008:0* scan "mytesttable"
> ROW            COLUMN+CELL
>  mytestRowKey  column=TestA:dlbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:tbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:ulbytes, timestamp=1375436156140, value= three
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
>     byte[] rowKey = Bytes.toBytes("mytestRowKey");
>     byte[] column1 = Bytes.toBytes("ulbytes");
>     byte[] column2 = Bytes.toBytes("dlbytes");
>     byte[] column3 = Bytes.toBytes("tbytes");
>     String part11 = "one two";
>     String part12 = " three";
>     String cFamily = "TestA";
>     String TABLE = "mytesttable";
>     Configuration conf = HBaseConfiguration.create();
>     HTablePool pool = new HTablePool(conf, 10);
>     HBaseAdmin admin = new HBaseAdmin(conf);
> 
>     if (admin.tableExists(TABLE)) {
>         admin.disableTable(TABLE);
>         admin.deleteTable(TABLE);
>     }
> 
>     HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
>     HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
>     hcd.setMaxVersions(1);
>     tableDescriptor.addFamily(hcd);
>     admin.createTable(tableDescriptor);
>     HTableInterface table = pool.getTable(TABLE);
> 
>     // First append: "one two" to all three columns.
>     Append a = new Append(rowKey);
>     a.setReturnResults(false);
>     a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
>     a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
>     a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
>     table.append(a);
>     // Second append: " three" to the same three columns.
>     a = new Append(rowKey);
>     a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
>     a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
>     a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
>     Result result = table.append(a);
>     byte[] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), column1);
>     byte[] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), column2);
>     byte[] resul

[jira] [Comment Edited] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728451#comment-13728451
 ] 

Lars Hofhansl edited comment on HBASE-9115 at 8/3/13 5:00 AM:
--

And (last comment, I promise :) ): it is better if the server sorts anyway; 
that way it does not rely on the client providing data in the right order:
{code}
Index: src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
===
--- src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
(revision 1509936)
+++ src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
(working copy)
@@ -5098,6 +5098,7 @@
 
   // Get previous values for all columns in this family
   Get get = new Get(row);
+  Collections.sort(family.getValue(), store.getComparator());
   for (KeyValue kv : family.getValue()) {
 get.addColumn(family.getKey(), kv.getQualifier());
   }
{code}
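The effect of that one-line server-side sort can be sketched in plain Java. The lexicographic byte comparator below is a simplified stand-in for the store's KVComparator, and all names here are illustrative, not HBase code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SortBeforeGet {
    // Unsigned lexicographic byte-array comparison: a simplified stand-in
    // for store.getComparator() in the patch above.
    static final Comparator<byte[]> BYTES_LEX = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    };

    // Sort the qualifiers in whatever order the client sent them,
    // mirroring Collections.sort(family.getValue(), store.getComparator()).
    static List<String> sortedQualifiers(List<String> clientOrder) {
        List<byte[]> quals = new ArrayList<>();
        for (String q : clientOrder) quals.add(q.getBytes());
        Collections.sort(quals, BYTES_LEX);
        return quals.stream().map(String::new).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The client in the bug report added ulbytes, dlbytes, tbytes in
        // that (unsorted) order.
        System.out.println(sortedQualifiers(
            java.util.Arrays.asList("ulbytes", "dlbytes", "tbytes")));
    }
}
```

With this in place, the server no longer depends on the client's insertion order.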

  was (Author: lhofhansl):
And (last comment, promised :) ), it is better if the server would sort 
anyway, that way it does not rely on the client providing data in the right 
order.

  

[jira] [Comment Edited] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728449#comment-13728449
 ] 

Lars Hofhansl edited comment on HBASE-9115 at 8/3/13 4:54 AM:
--

We are sorting either way; it does not matter how it is implemented. 

Sorting should be O(n log n). Collections.sort has O(n) complexity when the 
data is already sorted and does better than n log n when the data is partially 
sorted. Why invent our own sorting when we have support for this in the JDK?

Or... if the List in question here were a sorted set (using 
KVComparator), we would just add to that set and be done.


  was (Author: lhofhansl):
We are sorting, it does not matter how it is implemented. 

Sorting should be O(n*log(n)). Collections.sort as O(n) complexity when the 
data is sorted and is better than n*log(n) when the data is partially sorted.
Why invent our own sorting when we have support for this in the JDK.

Or... If the List in question question here would be a sorted set (using 
KVComparator) then we just add to that set and we're done.
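The sorted-set alternative can be sketched standalone: a TreeSet with a byte comparator (an illustrative stand-in for KVComparator) keeps cells ordered as they are added, so no separate sort pass is needed:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class SortedOnInsert {
    // Illustrative stand-in for KVComparator: unsigned lexicographic order.
    static final Comparator<byte[]> BYTES_LEX = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    };

    // Insert qualifiers in arrival order; the TreeSet maintains sort order
    // on every add, so iteration is already in comparator order.
    static String insertAll(String... quals) {
        TreeSet<byte[]> cells = new TreeSet<>(BYTES_LEX);
        for (String q : quals) cells.add(q.getBytes());
        StringBuilder sb = new StringBuilder();
        for (byte[] c : cells) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(new String(c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(insertAll("ulbytes", "dlbytes", "tbytes"));
    }
}
```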

  

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728451#comment-13728451
 ] 

Lars Hofhansl commented on HBASE-9115:
--

And (last comment, I promise :) ): it is better if the server sorts anyway; 
that way it does not rely on the client providing data in the right order.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728450#comment-13728450
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Although the last is hard, because the familyMap is maintained in Mutation.

I do not feel super strongly about this.

The current patch is fine. The addendum is fine too, but it is a band-aid: it 
only helps if the caller tends to add columns in sorted order; otherwise we 
are approaching n^2 again.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728449#comment-13728449
 ] 

Lars Hofhansl commented on HBASE-9115:
--

We are sorting either way; it does not matter how it is implemented. 

Sorting should be O(n log n). Collections.sort has O(n) complexity when the 
data is already sorted and does better than n log n when the data is partially 
sorted. Why invent our own sorting when we have support for this in the JDK?

Or... if the List in question here were a sorted set (using 
KVComparator), we would just add to that set and be done.


[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728446#comment-13728446
 ] 

Ted Yu commented on HBASE-9115:
---

After each call to Append.add(), order is maintained, so we don't need to 
sort again.

In the normal case, the number of comparisons across the Append.add() calls 
is linear in the number of columns added.

Can you elaborate on why a sorted collection is needed?
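The maintain-order-on-add idea can be sketched as binary-search insertion into a sorted list. This is a hypothetical illustration of the behavior Ted describes, not the actual Append.add code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OrderedAdd {
    // Keep the qualifier list sorted as each qualifier is added: find the
    // insertion point by binary search, then insert there.
    static void addSorted(List<String> quals, String q) {
        int pos = Collections.binarySearch(quals, q);
        if (pos < 0) pos = -pos - 1;  // insertion point for a new qualifier
        quals.add(pos, q);
    }

    static List<String> addAll(String... qs) {
        List<String> quals = new ArrayList<>();
        for (String q : qs) addSorted(quals, q);
        return quals;
    }

    public static void main(String[] args) {
        // When the caller already adds in sorted order, every binary search
        // lands at the tail, so the comparison work stays close to linear.
        System.out.println(addAll("dlbytes", "tbytes", "ulbytes"));
    }
}
```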


[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728441#comment-13728441
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Why not do it right then and use Collections.sort before serialization or after 
deserialization (or both)?


> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), 
> column3);
> {code}

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728442#comment-13728442
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Or use a sorted collection in the first place?
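A minimal sketch of that alternative: keep the family map in a TreeMap with an unsigned byte[] comparator (an illustrative stand-in for HBase's Bytes.BYTES_COMPARATOR), so qualifiers stay sorted regardless of insertion order and no explicit sort step is needed before serialization:

```java
import java.util.*;

public class SortedFamilyMap {
    // Same unsigned byte[] ordering HBase uses for qualifiers
    // (illustrative stand-in for Bytes.BYTES_COMPARATOR).
    static final Comparator<byte[]> UNSIGNED_BYTES = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    };

    public static void main(String[] args) {
        // Qualifier -> value map that is always sorted, no matter the
        // order in which columns were added to the Append.
        NavigableMap<byte[], byte[]> cells = new TreeMap<>(UNSIGNED_BYTES);
        cells.put("ulbytes".getBytes(), " three".getBytes());
        cells.put("dlbytes".getBytes(), " three".getBytes());
        cells.put("tbytes".getBytes(), " three".getBytes());
        for (byte[] q : cells.keySet()) System.out.println(new String(q));
        // prints dlbytes, tbytes, ulbytes (one per line)
    }
}
```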


[jira] [Commented] (HBASE-9095) AssignmentManager's handleRegion should respect the single threaded nature of the processing

2013-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728443#comment-13728443
 ] 

Hadoop QA commented on HBASE-9095:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595700/9095-1.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6585//console

This message is automatically generated.

> AssignmentManager's handleRegion should respect the single threaded nature of 
> the processing
> 
>
> Key: HBASE-9095
> URL: https://issues.apache.org/jira/browse/HBASE-9095
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.95.2
>
> Attachments: 9095-1.txt, 9095-1.txt, 9095-1.txt
>
>
> While debugging a case where a region was getting opened on a RegionServer 
> and then closed soon after (and then never re-opened anywhere thereafter), it 
> seemed like the processing in handleRegion to do with deletion of ZK nodes 
> should be synchronous. This achieves two things:
> 1. The synchronous deletion prevents processing the same event data more than 
> once. Assuming that we do get more than one notification (on 
> let's say, region OPENED event), the subsequent processing(s) in handleRegion 
> for the same znode would end up with a zookeeper node not found exception. 
> The return value of the data read would be null and that's already handled. 
> If it is asynchronous, it leads to issues like - master opens a region on a 
> certain RegionServer and soon after it sends that RegionServer a close for 
> the same region, and then the znode is deleted.
> 2. The deletion is currently handled in an executor service. This is 
> problematic since by design the events for a given region should be processed 
> in order. By delegating part of the processing to an executor service we are 
> somewhat violating this.
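The ordering argument in point 2 can be sketched as follows (event names are hypothetical, not the actual AssignmentManager API): a single-threaded executor completes tasks strictly in submission order, which is exactly the guarantee a multi-threaded pool does not give.

```java
import java.util.*;
import java.util.concurrent.*;

public class RegionEventOrdering {
    // Handle all events for one region on a single thread: tasks then
    // complete strictly in submission order. A multi-threaded executor
    // gives no such guarantee, which is the violation described above.
    static List<String> processInOrder(List<String> events) throws InterruptedException {
        ExecutorService single = Executors.newSingleThreadExecutor();
        List<String> handled = Collections.synchronizedList(new ArrayList<>());
        for (String e : events) {
            single.submit(() -> handled.add(e));
        }
        single.shutdown();
        single.awaitTermination(5, TimeUnit.SECONDS);
        return handled;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processInOrder(Arrays.asList("OPENING", "OPENED", "DELETE_ZNODE")));
        // prints [OPENING, OPENED, DELETE_ZNODE]
    }
}
```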

[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728439#comment-13728439
 ] 

Ted Yu commented on HBASE-9115:
---

The addendum reduces additional comparisons.

bq. nobody will Append to a million columns of the same row.
A high number of columns in the same row should be supported.

I plan to integrate the addendum if QA comes back clean.


[jira] [Commented] (HBASE-6580) Deprecate HTablePool in favor of HConnection.getTable(...)

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728435#comment-13728435
 ] 

Lars Hofhansl commented on HBASE-6580:
--

I'll work on some test changes to use the new APIs, and spend some time on the 
JavaDoc in HCM.
Going to commit to all branches (including 0.94) early next week.

> Deprecate HTablePool in favor of HConnection.getTable(...)
> --
>
> Key: HBASE-6580
> URL: https://issues.apache.org/jira/browse/HBASE-6580
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 6580-trunk.txt, HBASE-6580_v1.patch, HBASE-6580_v2.patch
>
>
> Update:
> I now propose deprecating HTablePool and instead introduce a getTable method 
> on HConnection and allow HConnection to manage the ThreadPool.
> Initial proposal:
> Here I propose a very simple TablePool.
> It could be called LightHTablePool (or something - if you have a better name).
> Internally it would maintain an HConnection and an ExecutorService, and each 
> invocation of getTable(...) would create a new HTable; close() would just 
> close it.
> In testing I find this more lightweight than HTablePool and easier to 
> monitor in terms of resources used.
> It would hardly be more than a few dozen lines of code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9120) ClassFinder logs errors that are not

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728433#comment-13728433
 ] 

Lars Hofhansl commented on HBASE-9120:
--

lgtm

> ClassFinder logs errors that are not
> 
>
> Key: HBASE-9120
> URL: https://issues.apache.org/jira/browse/HBASE-9120
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.94.10
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.94.11
>
> Attachments: HBASE-9120.patch
>
>
> ClassFinder logs error messages that are not actionable, so they just cause 
> distraction.



[jira] [Comment Edited] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728432#comment-13728432
 ] 

Lars Hofhansl edited comment on HBASE-7709 at 8/3/13 4:10 AM:
--

In fact we will have the following setup:

A <\-> B, C <\-> D, E <\-> F, ... (where these are all pairs of DR clusters. We 
keep them both as master so that a failover for other reasons, even just as an 
exercise, does not need further configuration.)
We sometimes migrate an entire cluster, say A. In that case we'd also replicate 
A -> C. Currently we can't do that, because the data from A would bounce 
between C and D forever.


  was (Author: lhofhansl):
In fact we will have the following setup:

A <-> B, C <-> D, E <-> F, ... (where these are all pairs of DR clusters. We 
keep them both as master so that a failover for other reasons, even just as 
exercise does not need further configuration).
We sometime migrate an entire cluster, say A. In that case we'd also replicate 
A -> C. Currently we can't do that, because the data from A would bounce 
between C and D forever.

  
> Infinite loop possible in Master/Master replication
> ---
>
> Key: HBASE-7709
> URL: https://issues.apache.org/jira/browse/HBASE-7709
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.6, 0.95.1
>Reporter: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
>
>  We just discovered the following scenario:
> # Cluster A and B are set up in master/master replication
> # By accident we had Cluster C replicate to Cluster A.
> Now all edits originating from C will bounce between A and B. Forever!
> The reason is that when the edits come in from C the cluster ID is already 
> set and won't be reset.
> We have a couple of options here:
> # Optionally only support master/master (not cycles of more than two 
> clusters). In that case we can always reset the cluster ID in the 
> ReplicationSource. That means that cycles > 2 will have the data cycle 
> forever. This is the only option that requires no changes in the HLog format.
> # Instead of a single cluster ID per edit, maintain an (unordered) set of 
> cluster IDs that have seen this edit. Then in ReplicationSource we drop any 
> edit that the sink has already seen. This is the cleanest approach, but it 
> might need a lot of data stored per edit if many clusters are involved.
> # Maintain a configurable counter of the maximum cycle size we want to 
> support. It could default to 10 (maybe even lower). Store a hop count in the 
> WAL and have the ReplicationSource increase that hop count on each hop. If 
> we're over the max, just drop the edit.
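Options 2 and 3 can be sketched as follows (types and method names are hypothetical, not the actual ReplicationSource API): ship an edit only if the sink has not already seen it, or only while its hop count stays under the configured maximum.

```java
import java.util.*;

public class ReplicationLoopGuard {
    // Option 2, sketched: each edit carries the set of cluster UUIDs that
    // have already seen it; the source skips any edit the sink has already
    // consumed, which breaks cycles of any length.
    static boolean shouldShip(Set<UUID> clustersThatSawEdit, UUID sinkClusterId) {
        return !clustersThatSawEdit.contains(sinkClusterId);
    }

    // Option 3, sketched: drop the edit once its hop count exceeds the
    // configured maximum cycle size.
    static boolean shouldShip(int hopCount, int maxHops) {
        return hopCount <= maxHops;
    }

    public static void main(String[] args) {
        UUID clusterA = UUID.randomUUID(), clusterB = UUID.randomUUID();
        Set<UUID> seen = new HashSet<>(Collections.singleton(clusterA));
        System.out.println(shouldShip(seen, clusterB)); // true: B hasn't seen it yet
        System.out.println(shouldShip(seen, clusterA)); // false: would loop back to A
        System.out.println(shouldShip(11, 10));         // false: over the hop limit
    }
}
```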



[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-02 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7709:
-

Fix Version/s: (was: 0.94.11)
   0.94.12

> Infinite loop possible in Master/Master replication
> ---
>
> Key: HBASE-7709
> URL: https://issues.apache.org/jira/browse/HBASE-7709
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.6, 0.95.1
>Reporter: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.12
>



[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728432#comment-13728432
 ] 

Lars Hofhansl commented on HBASE-7709:
--

In fact we will have the following setup:

A <-> B, C <-> D, E <-> F, ... (where these are all pairs of DR clusters. We 
keep them both as master so that a failover for other reasons, even just as an 
exercise, does not need further configuration.)
We sometimes migrate an entire cluster, say A. In that case we'd also replicate 
A -> C. Currently we can't do that, because the data from A would bounce 
between C and D forever.


> Infinite loop possible in Master/Master replication
> ---
>
> Key: HBASE-7709
> URL: https://issues.apache.org/jira/browse/HBASE-7709
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.6, 0.95.1
>Reporter: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728430#comment-13728430
 ] 

Lars Hofhansl commented on HBASE-9115:
--

That'd be better in more cases. I guess we could just sort either before we 
serialize or after we deserialize.

Or maybe we do not need to bother since nobody will Append to a million columns 
of the same row.



[jira] [Updated] (HBASE-8496) Implement tags and the internals of how a tag should look like

2013-08-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-8496:
--

Attachment: Tag design_updated.pdf

Updated design document. Patch to follow based on this.  
The optional part of writing tags could be done in a follow-up JIRA.

> Implement tags and the internals of how a tag should look like
> --
>
> Key: HBASE-8496
> URL: https://issues.apache.org/jira/browse/HBASE-8496
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.98.0, 0.95.2
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Attachments: Comparison.pdf, HBASE-8496_2.patch, HBASE-8496.patch, 
> Tag design.pdf, Tag design_updated.pdf, Tag_In_KV_Buffer_For_reference.patch
>
>
> The intent of this JIRA comes from HBASE-7897.
> This would help us to decide on the structure and format of how the tags 
> should look.



[jira] [Updated] (HBASE-8408) Implement namespace

2013-08-02 Thread Francis Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Liu updated HBASE-8408:
---

Attachment: HBASE-8015_13.patch

patch till page 2

> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_13.patch, HBASE-8015_1.patch, HBASE-8015_2.patch, 
> HBASE-8015_3.patch, HBASE-8015_4.patch, HBASE-8015_5.patch, 
> HBASE-8015_6.patch, HBASE-8015_7.patch, HBASE-8015_8.patch, 
> HBASE-8015_9.patch, HBASE-8015.patch, TestNamespaceMigration.tgz, 
> TestNamespaceUpgrade.tgz
>
>




[jira] [Commented] (HBASE-9083) Downstreamers have to include a load of runtime dependencies

2013-08-02 Thread Michael Webster (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728426#comment-13728426
 ] 

Michael Webster commented on HBASE-9083:


I can work on this.  Correct me if I am wrong, but this change should consist 
of removing dependencies HBase doesn't directly use and letting Maven figure 
out what to pull in for ZooKeeper and the rest?  Sorry if that restates 
previous comments; I just wanted to make sure I understand.

> Downstreamers have to include a load of runtime dependencies
> 
>
> Key: HBASE-9083
> URL: https://issues.apache.org/jira/browse/HBASE-9083
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: stack
>  Labels: noob
> Fix For: 0.98.0
>
>
> Here is an example from 0.95.  A downstream project includes hbase-client ONLY.  
> To run the downstream project, here are the current runtime dependencies.  
> This is hadoop1.
> {code}
>  java -cp 
> target/client-1.0-SNAPSHOT.jar:/Users/stack/.m2/repository/org/apache/hbase/hbase-client/0.95.2-hadoop1-SNAPSHOT/hbase-client-0.95.2-hadoop1-SNAPSHOT.jar:/Users/stack/.m2/repository/org/apache/hbase/hbase-common/0.95.2-hadoop1-SNAPSHOT/hbase-common-0.95.2-hadoop1-SNAPSHOT.jar:/Users/stack/.m2/repository/org/apache/hadoop/hadoop-core/1.1.2/hadoop-core-1.1.2.jar:/Users/stack/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/Users/stack/.m2/repository/com/google/protobuf/protobuf-java/2.4.1/protobuf-java-2.4.1.jar:/Users/stack/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/Users/stack/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/Users/stack/.m2/repository/org/apache/hbase/hbase-protocol/0.95.2-hadoop1-SNAPSHOT/hbase-protocol-0.95.2-hadoop1-SNAPSHOT.jar:/Users/stack/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/Users/stack/.m2/repository/org/slf4j/slf4j-api/1.6.4/slf4j-api-1.6.4.jar:/Users/stack/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar:/Users/stack/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/Users/stack/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/Users/stack/.m2/repository/org/cloudera/htrace/htrace/1.50/htrace-1.50.jar:/Users/stack/.m2/repository/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar:/Users/stack/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar
>   org.hbase.downstream.Client
> {code}
> That's:
> {code}
> hbase-client
> hbase-common
> hbase-protocol
> hadoop-core
> commons-logging
> protobuf
> commons-lang
> commons-configuration
> zookeeper
> slf4j-api (AND commons-logging!)
> guava
> jackson-mapper-asl
> jackson-core-asl
> htrace
> slf4j-log4j12
> log4j
> {code}
> Most of the above come in because of hadoop and zk (zk wants slf4j).
> Can we shed any of these?



[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728409#comment-13728409
 ] 

Hudson commented on HBASE-9087:
---

SUCCESS: Integrated in HBase-TRUNK #4336 (See 
[https://builds.apache.org/job/HBase-TRUNK/4336/])
HBASE-9087 Handlers being blocked during reads (eclark: rev 1509886)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90-300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the changedReaderObserver collection 
> from a CopyOnWriteArraySet to a ConcurrentHashMap-backed set.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   
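The contention in the dump above, and the suggested remedy, can be sketched with plain JDK collections (class and variable names here are mine, not HBase's): `CopyOnWriteArraySet.add` serializes all writers on one lock and copies the backing array per insert, while a set backed by `ConcurrentHashMap` (via `Collections.newSetFromMap`) keeps the same set semantics with only per-bin locking.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class ObserverSetSketch {
    public static void main(String[] args) throws InterruptedException {
        // CopyOnWriteArraySet.add takes a single ReentrantLock and copies
        // the backing array on every insert, so many handlers registering
        // scanner observers at once park on that lock (as in the dump above).
        Set<String> cowSet = new CopyOnWriteArraySet<>();

        // A ConcurrentHashMap-backed set preserves add-if-absent semantics
        // without the global lock, so registrations rarely contend.
        Set<String> chmSet = Collections.newSetFromMap(new ConcurrentHashMap<>());

        Runnable register = () -> {
            for (int i = 0; i < 1000; i++) {
                String observer = "scanner-" + Thread.currentThread().getName() + "-" + i;
                cowSet.add(observer);
                chmSet.add(observer);
            }
        };
        Thread t1 = new Thread(register, "h1");
        Thread t2 = new Thread(register, "h2");
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Both sets end up with the same contents; only contention differs.
        System.out.println(cowSet.size() == chmSet.size());
    }
}
```

This only illustrates the data-structure trade-off; the actual patch lives in Store/HStore's addChangedReaderObserver path.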



[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728406#comment-13728406
 ] 

Hudson commented on HBASE-9087:
---

SUCCESS: Integrated in hbase-0.95 #398 (See 
[https://builds.apache.org/job/hbase-0.95/398/])
HBASE-9087 Handlers being blocked during reads (eclark: rev 1509887)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90-300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the changedReaderObserver collection 
> from a CopyOnWriteArraySet to a ConcurrentHashMap-backed set.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   



[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728401#comment-13728401
 ] 

Hudson commented on HBASE-9087:
---

SUCCESS: Integrated in HBase-0.94 #1092 (See 
[https://builds.apache.org/job/HBase-0.94/1092/])
HBASE-9087 Handlers being blocked during reads (Elliott) (larsh: rev 1509922)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90-300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the changedReaderObserver collection 
> from a CopyOnWriteArraySet to a ConcurrentHashMap-backed set.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728399#comment-13728399
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in HBase-0.94 #1092 (See 
[https://builds.apache.org/job/HBase-0.94/1092/])
HBASE-9115 HTableInterface.append operation may overwrites values (Ted Yu) 
(larsh: rev 1509921)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.txt
>
>
> I use the HBase Java API to append the values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") to 3 columns.
> Only for 2 of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBy

[jira] [Commented] (HBASE-8949) hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize of a table

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728400#comment-13728400
 ] 

Hudson commented on HBASE-8949:
---

SUCCESS: Integrated in HBase-0.94 #1092 (See 
[https://builds.apache.org/job/HBase-0.94/1092/])
HBASE-8949 hbase.mapreduce.hfileoutputformat.blocksize should configure with 
blocksize of a table (rajeshbabu) (larsh: rev 1509923)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
* /hbase/branches/0.94/src/main/resources/hbase-default.xml


> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize 
> of a table
> --
>
> Key: HBASE-8949
> URL: https://issues.apache.org/jira/browse/HBASE-8949
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.94.11
>
> Attachments: HBASE-8949_94_2.patch, HBASE-8949_94.patch, 
> HBASE-8949_trunk_2.patch, HBASE-8949_trunk.patch
>
>
> While initializing the mapreduce job we are not configuring 
> hbase.mapreduce.hfileoutputformat.blocksize, so hfiles are always created 
> with the 64 KB default block size even though tables may have different block sizes.
> We need to configure it with the block size from the table descriptor.
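The fallback logic the fix needs can be sketched in plain Java. This is a minimal illustration with hypothetical names (a `Map` standing in for the table descriptor's per-family block sizes, `DEFAULT_BLOCKSIZE` for the HFile default); it is not the HBase API itself: prefer the column family's configured block size and fall back to 64 KB only when none is set.

```java
import java.util.HashMap;
import java.util.Map;

public class BlocksizeFallbackSketch {
    // HFile's default block size, which the job currently always uses.
    static final int DEFAULT_BLOCKSIZE = 64 * 1024;

    // Hypothetical helper: the value that
    // hbase.mapreduce.hfileoutputformat.blocksize should be set to for a
    // given family, per the issue description.
    static int blocksizeFor(Map<String, Integer> familyBlocksizes, String family) {
        Integer configured = familyBlocksizes.get(family);
        return configured != null ? configured : DEFAULT_BLOCKSIZE;
    }

    public static void main(String[] args) {
        Map<String, Integer> families = new HashMap<>();
        families.put("cf", 8 * 1024); // table created with an 8 KB block size
        System.out.println(blocksizeFor(families, "cf"));    // 8192
        System.out.println(blocksizeFor(families, "other")); // 65536
    }
}
```

In the actual patch this lookup would come from the table's column descriptors when the job is initialized.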



[jira] [Commented] (HBASE-8322) Re-enable hbase checksums by default

2013-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728396#comment-13728396
 ] 

Hadoop QA commented on HBASE-8322:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595682/HBASE-8322-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestKeepDeletes.testRanges(TestKeepDeletes.java:554)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6583//console

This message is automatically generated.

> Re-enable hbase checksums by default
> 
>
> Key: HBASE-8322
> URL: https://issues.apache.org/jira/browse/HBASE-8322
> Project: HBase
>  Issue Type: Improvement
>  Components: Filesystem Integration
>Reporter: Enis Soztutar
>Assignee: Jean-Daniel Cryans
>Priority: Critical
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-8322_v1.patch, HBASE-8322-v2.patch
>
>
> The double-checksumming issue with HBase-level checksums (HBASE-5074) was 
> fixed in HBASE-6868. However, that patch also disabled HBase checksums by 
> default. I think we should re-enable them by default and document the 
> interaction with short-circuit reads. 



[jira] [Commented] (HBASE-8949) hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize of a table

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728391#comment-13728391
 ] 

Hudson commented on HBASE-8949:
---

SUCCESS: Integrated in HBase-0.94-security #243 (See 
[https://builds.apache.org/job/HBase-0.94-security/243/])
HBASE-8949 hbase.mapreduce.hfileoutputformat.blocksize should configure with 
blocksize of a table (rajeshbabu) (larsh: rev 1509923)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
* /hbase/branches/0.94/src/main/resources/hbase-default.xml


> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize 
> of a table
> --
>
> Key: HBASE-8949
> URL: https://issues.apache.org/jira/browse/HBASE-8949
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.94.11
>
> Attachments: HBASE-8949_94_2.patch, HBASE-8949_94.patch, 
> HBASE-8949_trunk_2.patch, HBASE-8949_trunk.patch
>
>
> While initializing the mapreduce job we are not configuring 
> hbase.mapreduce.hfileoutputformat.blocksize, so hfiles are always created 
> with the 64 KB default block size even though tables may have different block sizes.
> We need to configure it with the block size from the table descriptor.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728390#comment-13728390
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in HBase-0.94-security #243 (See 
[https://builds.apache.org/job/HBase-0.94-security/243/])
HBASE-9115 HTableInterface.append operation may overwrites values (Ted Yu) 
(larsh: rev 1509921)
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.txt
>
>
> I use the HBase Java API to append the values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") to 3 columns.
> Only for 2 of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> 

[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728392#comment-13728392
 ] 

Hudson commented on HBASE-9087:
---

SUCCESS: Integrated in HBase-0.94-security #243 (See 
[https://builds.apache.org/job/HBase-0.94-security/243/])
HBASE-9087 Handlers being blocked during reads (Elliott) (larsh: rev 1509922)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90-300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the changedReaderObserver collection 
> from a CopyOnWriteArraySet to a ConcurrentHashMap-backed set.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   
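The stack trace above bottoms out in CopyOnWriteArraySet.add, which acquires a
ReentrantLock on every registration, so all handlers queue on one lock. A
minimal sketch of the suggested direction, using a concurrent set backed by
ConcurrentHashMap so registration avoids that lock (class and method names here
are illustrative, not the actual HBase Store internals):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ObserverRegistry {
    // A concurrent Set view over a ConcurrentHashMap: add/remove do not take
    // the single ReentrantLock that CopyOnWriteArraySet.add acquires.
    private final Set<Runnable> changedReaderObservers =
        Collections.newSetFromMap(new ConcurrentHashMap<Runnable, Boolean>());

    public void addChangedReaderObserver(Runnable o) {
        changedReaderObservers.add(o);    // no global lock under contention
    }

    public void deleteChangedReaderObserver(Runnable o) {
        changedReaderObservers.remove(o);
    }

    public void notifyChangedReadersObservers() {
        // Iteration is weakly consistent, so observers may register concurrently
        // with notification without throwing ConcurrentModificationException.
        for (Runnable o : changedReaderObservers) {
            o.run();
        }
    }

    public int size() {
        return changedReaderObservers.size();
    }
}
```

Collections.newSetFromMap gives set semantics on top of ConcurrentHashMap's
striped insertion, so many IPC handlers can register observers concurrently
instead of parking on a single lock as in the trace above.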

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9121) Add tracing into interesting parts of HBase

2013-08-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728383#comment-13728383
 ] 

Nick Dimiduk commented on HBASE-9121:
-

Now you're talking! I guess I don't understand the HTrace business then. Why 
not s/htrace/zipkin?

> Add tracing into interesting parts of HBase
> ---
>
> Key: HBASE-9121
> URL: https://issues.apache.org/jira/browse/HBASE-9121
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-9121-PRE.patch, Zipkin - Trace 536c785021658f1d.png
>
>
> Add tracing to zk, hfile reader, and hlog.



[jira] [Commented] (HBASE-7183) print WARN message if hbase.replication.sizeOfLogQueue is too big

2013-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728381#comment-13728381
 ] 

Hadoop QA commented on HBASE-7183:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595681/HBASE_7183-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6582//console

This message is automatically generated.

> print WARN message if hbase.replication.sizeOfLogQueue is too big
> -
>
> Key: HBASE-7183
> URL: https://issues.apache.org/jira/browse/HBASE-7183
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Sho Shimauchi
>  Labels: noob
> Attachments: HBASE_7183.patch, HBASE_7183-v2.patch
>
>
> The metric hbase.replication.sizeOfLogQueue may become large when replication 
> is lagging.
> It would be useful if HBase printed a WARN log indicating that 
> hbase.replication.sizeOfLogQueue is too big.
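The requested behavior amounts to a threshold check on the metric. A hedged
sketch of the idea with a hypothetical class and threshold (the real patch
would wire this into the replication source and use the project's logger):

```java
public class LogQueueMonitor {
    private final int warnThreshold;

    public LogQueueMonitor(int warnThreshold) {
        this.warnThreshold = warnThreshold;
    }

    /** Returns a WARN line when the queue size crosses the threshold, else null. */
    public String check(int sizeOfLogQueue) {
        if (sizeOfLogQueue > warnThreshold) {
            return "WARN: replication log queue size " + sizeOfLogQueue
                    + " exceeds threshold " + warnThreshold
                    + "; replication may be lagging";
        }
        return null;
    }
}
```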



[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-02 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728378#comment-13728378
 ] 

Francis Liu commented on HBASE-8408:


I'm on page 2 right now. Will upload a new patch for testing when it's done. 
Strangely, hadoopqa did not run patch 12, which I submitted last night. Luckily 
I ran the tests myself as well; so far one failure is left, a zombie test: 
TestDistributedLogSplitting. The odd problem is that the cluster won't start 
up; it seems the previous cluster didn't shut down either. The class restarts 
the cluster after every method. Hopefully just some cleanup is missing.

> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_1.patch, HBASE-8015_2.patch, HBASE-8015_3.patch, 
> HBASE-8015_4.patch, HBASE-8015_5.patch, HBASE-8015_6.patch, 
> HBASE-8015_7.patch, HBASE-8015_8.patch, HBASE-8015_9.patch, HBASE-8015.patch, 
> TestNamespaceMigration.tgz, TestNamespaceUpgrade.tgz
>
>




[jira] [Updated] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9115:
--

Status: Patch Available  (was: Reopened)

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> For only 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), 
> column3);
> if (resultForColumn1 

[jira] [Reopened] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reopened HBASE-9115:
---


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> For only 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), 
> column3);
> if (resultForColumn1 == null || resultForColumn2 == null || 
> r

[jira] [Updated] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9115:
--

Attachment: 9115-trunk.addendum

How about this addendum?

It reduces the additional comparisons to linear scale.
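The addendum itself is not inlined here, but "linear scale" suggests indexing
one side by column qualifier so each appended value costs a single map lookup
instead of a scan over all existing cells. A hypothetical sketch of that idea
on plain string maps (not the actual HBase KeyValue/cell types):

```java
import java.util.HashMap;
import java.util.Map;

public class AppendMerge {
    /**
     * Merge appended values into existing values per qualifier in one linear
     * pass: start from a map of existing values, then concatenate each appended
     * value onto its qualifier with an O(1) lookup. This stands in for the
     * quadratic existing-vs-appended comparison the addendum avoids.
     */
    public static Map<String, String> merge(Map<String, String> existing,
                                            Map<String, String> appended) {
        Map<String, String> merged = new HashMap<String, String>(existing);
        for (Map.Entry<String, String> e : appended.entrySet()) {
            String old = merged.get(e.getKey());   // single lookup per append
            merged.put(e.getKey(), old == null ? e.getValue() : old + e.getValue());
        }
        return merged;
    }
}
```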

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.addendum, 
> 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> For only 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = re

[jira] [Updated] (HBASE-8565) stop-hbase.sh clean up: backup master

2013-08-02 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-8565:


Status: Patch Available  (was: Open)

> stop-hbase.sh clean up: backup master
> -
>
> Key: HBASE-8565
> URL: https://issues.apache.org/jira/browse/HBASE-8565
> Project: HBase
>  Issue Type: Bug
>  Components: master, scripts
>Affects Versions: 0.95.0, 0.94.7
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-8565-v1-0.94.patch, HBASE-8565-v1-trunk.patch
>
>
> In stop-hbase.sh:
> {code}
>   # TODO: store backup masters in ZooKeeper and have the primary send them a 
> shutdown message
>   # stop any backup masters
>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
> --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
> {code}
> After HBASE-5213, stop-hbase.sh -> hbase master stop will bring down the 
> backup master too via the cluster status znode.
> We should not need the above code anymore.
> Another issue happens when the current master died and the backup master 
> became the active master.
> {code}
> nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
>--config "${HBASE_CONF_DIR}" \
>master stop "$@" > "$logout" 2>&1 < /dev/null &
> waitForProcessEnd `cat $pid` 'stop-master-command'
> {code}
> We can still issue 'stop-hbase.sh' from the old master:
> stop-hbase.sh -> hbase master stop -> look for active master -> request 
> shutdown
> This process still works.
> But the waitForProcessEnd statement will not work, since the local master pid 
> is no longer relevant.
> What is the best way to handle this case?



[jira] [Assigned] (HBASE-8565) stop-hbase.sh clean up: backup master

2013-08-02 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He reassigned HBASE-8565:
---

Assignee: Jerry He

> stop-hbase.sh clean up: backup master
> -
>
> Key: HBASE-8565
> URL: https://issues.apache.org/jira/browse/HBASE-8565
> Project: HBase
>  Issue Type: Bug
>  Components: master, scripts
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-8565-v1-0.94.patch, HBASE-8565-v1-trunk.patch
>
>
> In stop-hbase.sh:
> {code}
>   # TODO: store backup masters in ZooKeeper and have the primary send them a 
> shutdown message
>   # stop any backup masters
>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
> --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
> {code}
> After HBASE-5213, stop-hbase.sh -> hbase master stop will bring down the 
> backup master too via the cluster status znode.
> We should not need the above code anymore.
> Another issue happens when the current master died and the backup master 
> became the active master.
> {code}
> nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
>--config "${HBASE_CONF_DIR}" \
>master stop "$@" > "$logout" 2>&1 < /dev/null &
> waitForProcessEnd `cat $pid` 'stop-master-command'
> {code}
> We can still issue 'stop-hbase.sh' from the old master:
> stop-hbase.sh -> hbase master stop -> look for active master -> request 
> shutdown
> This process still works.
> But the waitForProcessEnd statement will not work, since the local master pid 
> is no longer relevant.
> What is the best way to handle this case?



[jira] [Commented] (HBASE-8565) stop-hbase.sh clean up: backup master

2013-08-02 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728365#comment-13728365
 ] 

Jerry He commented on HBASE-8565:
-

Attached an initial patch.

> stop-hbase.sh clean up: backup master
> -
>
> Key: HBASE-8565
> URL: https://issues.apache.org/jira/browse/HBASE-8565
> Project: HBase
>  Issue Type: Bug
>  Components: master, scripts
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-8565-v1-0.94.patch, HBASE-8565-v1-trunk.patch
>
>
> In stop-hbase.sh:
> {code}
>   # TODO: store backup masters in ZooKeeper and have the primary send them a 
> shutdown message
>   # stop any backup masters
>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
> --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
> {code}
> After HBASE-5213, stop-hbase.sh -> hbase master stop will bring down the 
> backup master too via the cluster status znode.
> We should not need the above code anymore.
> Another issue happens when the current master died and the backup master 
> became the active master.
> {code}
> nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
>--config "${HBASE_CONF_DIR}" \
>master stop "$@" > "$logout" 2>&1 < /dev/null &
> waitForProcessEnd `cat $pid` 'stop-master-command'
> {code}
> We can still issue 'stop-hbase.sh' from the old master:
> stop-hbase.sh -> hbase master stop -> look for active master -> request 
> shutdown
> This process still works.
> But the waitForProcessEnd statement will not work, since the local master pid 
> is no longer relevant.
> What is the best way to handle this case?



[jira] [Updated] (HBASE-8565) stop-hbase.sh clean up: backup master

2013-08-02 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-8565:


Attachment: HBASE-8565-v1-0.94.patch
HBASE-8565-v1-trunk.patch

> stop-hbase.sh clean up: backup master
> -
>
> Key: HBASE-8565
> URL: https://issues.apache.org/jira/browse/HBASE-8565
> Project: HBase
>  Issue Type: Bug
>  Components: master, scripts
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-8565-v1-0.94.patch, HBASE-8565-v1-trunk.patch
>
>
> In stop-hbase.sh:
> {code}
>   # TODO: store backup masters in ZooKeeper and have the primary send them a 
> shutdown message
>   # stop any backup masters
>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
> --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
> {code}
> After HBASE-5213, stop-hbase.sh -> hbase master stop will bring down the 
> backup master too via the cluster status znode.
> We should not need the above code anymore.
> Another issue happens when the current master died and the backup master 
> became the active master.
> {code}
> nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
>--config "${HBASE_CONF_DIR}" \
>master stop "$@" > "$logout" 2>&1 < /dev/null &
> waitForProcessEnd `cat $pid` 'stop-master-command'
> {code}
> We can still issue 'stop-hbase.sh' from the old master:
> stop-hbase.sh -> hbase master stop -> look for active master -> request 
> shutdown
> This process still works.
> But the waitForProcessEnd statement will not work, since the local master pid 
> is no longer relevant.
> What is the best way to handle this case?



[jira] [Commented] (HBASE-9121) Add tracing into interesting parts of HBase

2013-08-02 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728364#comment-13728364
 ] 

Elliott Clark commented on HBASE-9121:
--

[~ndimiduk] https://github.com/twitter/zipkin/pull/274 

:-)

bq. Can't you put up a picture that would give others a clue as to why we should 
check this in?
Sure can.

> Add tracing into interesting parts of HBase
> ---
>
> Key: HBASE-9121
> URL: https://issues.apache.org/jira/browse/HBASE-9121
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-9121-PRE.patch, Zipkin - Trace 536c785021658f1d.png
>
>
> Add tracing to zk, hfile reader, and hlog.



[jira] [Updated] (HBASE-9121) Add tracing into interesting parts of HBase

2013-08-02 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9121:
-

Attachment: Zipkin - Trace 536c785021658f1d.png

> Add tracing into interesting parts of HBase
> ---
>
> Key: HBASE-9121
> URL: https://issues.apache.org/jira/browse/HBASE-9121
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-9121-PRE.patch, Zipkin - Trace 536c785021658f1d.png
>
>
> Add tracing to zk, hfile reader, and hlog.



[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728359#comment-13728359
 ] 

Hudson commented on HBASE-9087:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #650 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/650/])
HBASE-9087 Handlers being blocked during reads (eclark: rev 1509886)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90-300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the implementation of changedReaderObserver 
> from CopyOnWriteArrayList to ConcurrentHashMap.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   
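The stack trace above bottoms out in CopyOnWriteArraySet.add, which acquires a ReentrantLock on every call, even when the element is already present, so hundreds of handlers serialize on that one lock. A minimal sketch of the ConcurrentHashMap-backed alternative being discussed (class and method names here are illustrative, not HBase's actual Store code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical observer registry: a Set view backed by ConcurrentHashMap
// admits concurrent add/remove without a single global lock, unlike
// CopyOnWriteArraySet, whose add() locks even for already-present elements.
public class ObserverRegistry {
    private final Set<String> observers = ConcurrentHashMap.newKeySet();

    // Returns true only if the observer was newly registered.
    public boolean register(String observer) {
        return observers.add(observer);
    }

    public boolean unregister(String observer) {
        return observers.remove(observer);
    }

    public int size() {
        return observers.size();
    }
}
```

The trade-off versus copy-on-write is cheaper writes at the cost of weakly consistent (rather than snapshot) iteration, which is usually acceptable for notifying observers.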



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728357#comment-13728357
 ] 

Lars Hofhansl commented on HBASE-9115:
--

In that case the loop goes through the entire list.

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: Mac OS X 10.8.4, HBase in pseudo-distributed mode, 
> Hadoop v1.2.0, HBase Java API based client.
> *hdfs-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use the HBase Java API and I try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat}
> hbase(main):008:0* scan "mytesttable"
> ROW            COLUMN+CELL
>  mytestRowKey  column=TestA:dlbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:tbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:ulbytes, timestamp=1375436156140, value= three
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), column3);

[jira] [Commented] (HBASE-9121) Add tracing into interesting parts of HBase

2013-08-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728350#comment-13728350
 ] 

Nick Dimiduk commented on HBASE-9121:
-

This tracing stuff is useful for users too. Have you looked at 
[Zipkin|http://twitter.github.io/zipkin/] as a tracing tool that users might 
use in production?

> Add tracing into interesting parts of HBase
> ---
>
> Key: HBASE-9121
> URL: https://issues.apache.org/jira/browse/HBASE-9121
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-9121-PRE.patch
>
>
> Add tracing to zk, hfile reader, and hlog.



[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728349#comment-13728349
 ] 

stack commented on HBASE-8408:
--

I finished my review.  I think this is really close.  How long to address the 
reviews [~toffer]?  Suggest you keep running patches against hadoopqa in the 
meantime so problematic tests shine through.

> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_1.patch, HBASE-8015_2.patch, HBASE-8015_3.patch, 
> HBASE-8015_4.patch, HBASE-8015_5.patch, HBASE-8015_6.patch, 
> HBASE-8015_7.patch, HBASE-8015_8.patch, HBASE-8015_9.patch, HBASE-8015.patch, 
> TestNamespaceMigration.tgz, TestNamespaceUpgrade.tgz
>
>




[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728348#comment-13728348
 ] 

Ted Yu commented on HBASE-9115:
---

bq. We're doing n^2 sorting here
For most users of append(), the patch doesn't change much because each KeyValue 
added should have come in sorted order.
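The point about already-sorted input can be illustrated with a plain insert-into-sorted-position helper (a hypothetical sketch, not the actual Append.java code, with strings standing in for KeyValues): scanning backwards from the end makes each in-order insert O(1), so only out-of-order input approaches the quadratic worst case.

```java
import java.util.List;

// Hypothetical: insert each element at its sorted position by walking back
// from the end of the list. Input that already arrives in sorted order stops
// the scan immediately, keeping the common case linear overall.
public class SortedInsert {
    public static void addSorted(List<String> list, String kv) {
        int i = list.size();
        // walk back past entries that should sort after the new one
        while (i > 0 && list.get(i - 1).compareTo(kv) > 0) {
            i--;
        }
        list.add(i, kv);
    }
}
```

In the real patch the ordering would come from the KeyValue comparator rather than String's natural order.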


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: Mac OS X 10.8.4, HBase in pseudo-distributed mode, 
> Hadoop v1.2.0, HBase Java API based client.
> *hdfs-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use the HBase Java API and I try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat}
> hbase(main):008:0* scan "mytesttable"
> ROW            COLUMN+CELL
>  mytestRowKey  column=TestA:dlbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:tbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:ulbytes, timestamp=1375436156140, value= three
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), column2);

[jira] [Commented] (HBASE-8224) Publish hbase build against h1 and h2 adding '-hadoop1' or '-hadoop2' to version string

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728337#comment-13728337
 ] 

Hudson commented on HBASE-8224:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/])
HBASE-8224 Publish hbase build against h1 and h2 adding '-hadoop1' or 
'-hadoop2' to version string (stack: rev 1509811)
* /hbase/branches/0.95/hbase-client/pom.xml
* /hbase/branches/0.95/hbase-common/pom.xml
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/util/JVM.java
* /hbase/branches/0.95/hbase-examples/pom.xml
* /hbase/branches/0.95/hbase-hadoop1-compat/pom.xml
* /hbase/branches/0.95/hbase-hadoop2-compat/pom.xml
* /hbase/branches/0.95/hbase-it/pom.xml
* /hbase/branches/0.95/hbase-prefix-tree/pom.xml
* /hbase/branches/0.95/hbase-server/pom.xml
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/package-info.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/thrift/HThreadedSelectorServerArgs.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java
* /hbase/branches/0.95/pom.xml


> Publish hbase build against h1 and h2 adding '-hadoop1' or '-hadoop2' to 
> version string
> ---
>
> Key: HBASE-8224
> URL: https://issues.apache.org/jira/browse/HBASE-8224
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.0, 0.95.2
>
> Attachments: 8224-adding.classifiers.txt, 8224.gen.script.txt, 
> 8224.gen.scriptv3.txt, 8224.gen.scriptv3.txt, 8224v5.txt, 
> hbase-8224-proto1.patch
>
>
> So we can publish both the hadoop1 and the hadoop2 jars to a maven 
> repository, and so we can publish two packages, one for hadoop1 and one for 
> hadoop2, given how maven works, our only alternative (to the best of my 
> knowledge and after consulting others) is to amend the version string to 
> include hadoop1 or hadoop2.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728339#comment-13728339
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/])
HBASE-9115 HTableInterface.append operation may overwrites values (Ted Yu) 
(tedyu: rev 1509854)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: Mac OS X 10.8.4, HBase in pseudo-distributed mode, 
> Hadoop v1.2.0, HBase Java API based client.
> *hdfs-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml}
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use the HBase Java API and I try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") to 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat}
> hbase(main):008:0* scan "mytesttable"
> ROW            COLUMN+CELL
>  mytestRowKey  column=TestA:dlbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:tbytes, timestamp=1375436156140, value=one two three
>  mytestRowKey  column=TestA:ulbytes, timestamp=1375436156140, value= three
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
>  

[jira] [Updated] (HBASE-8949) hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize of a table

2013-08-02 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-8949:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94

> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize 
> of a table
> --
>
> Key: HBASE-8949
> URL: https://issues.apache.org/jira/browse/HBASE-8949
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.94.11
>
> Attachments: HBASE-8949_94_2.patch, HBASE-8949_94.patch, 
> HBASE-8949_trunk_2.patch, HBASE-8949_trunk.patch
>
>
> While initializing the mapreduce job we are not configuring 
> hbase.mapreduce.hfileoutputformat.blocksize, so hfiles are always created 
> with the 64 KB default block size even though tables may have a different 
> block size. We need to configure it with the block size from the table 
> descriptor.
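A minimal sketch of the fix described above, assuming plain string maps in place of Hadoop's Configuration and the table descriptor (the per-family key suffix below is illustrative, not HBase's actual key layout):

```java
import java.util.Map;

// Hypothetical: copy each column family's configured block size from the
// table descriptor into the job configuration before writing HFiles, so the
// writer does not fall back to the 64 KB default for every family.
public class BlocksizeConfig {
    static final int DEFAULT_BLOCKSIZE = 64 * 1024;

    // jobConf stands in for a Hadoop Configuration; familyBlocksizes for the
    // per-family values held by the table descriptor (null = unset).
    public static void configureBlocksize(Map<String, String> jobConf,
                                          Map<String, Integer> familyBlocksizes) {
        for (Map.Entry<String, Integer> e : familyBlocksizes.entrySet()) {
            int size = e.getValue() != null ? e.getValue() : DEFAULT_BLOCKSIZE;
            // Illustrative per-family key naming.
            jobConf.put("hbase.mapreduce.hfileoutputformat.blocksize." + e.getKey(),
                        Integer.toString(size));
        }
    }
}
```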



[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728340#comment-13728340
 ] 

Hudson commented on HBASE-9087:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/])
HBASE-9087 Handlers being blocked during reads (eclark: rev 1509887)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90 - 300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the implementation of changedReaderObserver 
> from CopyOnWriteList to ConcurrentHashMap.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   



[jira] [Commented] (HBASE-8983) HBaseConnection#deleteAllConnections does not always delete

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728338#comment-13728338
 ] 

Hudson commented on HBASE-8983:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/])
HBASE-8983 HBaseConnection#deleteAllConnections does not always delete (Nicolas 
Liochon via JD) (jdcryans: rev 1509845)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java


> HBaseConnection#deleteAllConnections does not always delete
> ---
>
> Key: HBASE-8983
> URL: https://issues.apache.org/jira/browse/HBASE-8983
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.95.1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.95.2
>
> Attachments: 8983.v1.patch, 8983.v2.patch, 8983.v3.patch, 8983-v4.txt
>
>
> Cf; mailing list 
> http://search-hadoop.com/m/wurpu1s8Fhs/liochon&subj=Re+Connection+reference+counting



[jira] [Commented] (HBASE-9119) hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize of a table

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728336#comment-13728336
 ] 

Hudson commented on HBASE-9119:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/])
HBASE-9119 hbase.mapreduce.hfileoutputformat.blocksize should configure with 
blocksize of a table (stack: rev 1509836)
* /hbase/branches/0.95/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java


> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize 
> of a table
> --
>
> Key: HBASE-9119
> URL: https://issues.apache.org/jira/browse/HBASE-9119
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.95.2
>
>
> Forward port the HBASE-8949 0.94 issue.



[jira] [Commented] (HBASE-9031) ImmutableBytesWritable.toString() should downcast the bytes before converting to hex string

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728341#comment-13728341
 ] 

Hudson commented on HBASE-9031:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/])
HBASE-9031 ImmutableBytesWritable.toString() should downcast the bytes before 
converting to hex string (stack: rev 1509840)
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java


> ImmutableBytesWritable.toString() should downcast the bytes before converting 
> to hex string
> ---
>
> Key: HBASE-9031
> URL: https://issues.apache.org/jira/browse/HBASE-9031
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.95.1, 0.94.9
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Minor
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9031.patch, HBASE-9031.patch, HBASE-9031.patch
>
>
> The attached patch addresses a few issues.
> # We need only (3*this.length) capacity in ByteBuffer and not 
> (3*this.bytes.length).
> # Do not calculate (offset + length) at every iteration.
> # No per-iteration test is required to add a space (' ') before every byte 
> other than the first one; use {{sb.substring(1)}} instead.
> # Finally and most importantly (the original issue of this report), downcast 
> the promoted int (the parameter to {{Integer.toHexString()}}) to byte range.
> Without #4, the byte array \{54,125,64, -1, -45\} is transformed to "36 7d 40 
>  ffd3" instead of "36 7d 40 ff d3".
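Points 1 through 4 above can be condensed into a short sketch (a hypothetical helper, not ImmutableBytesWritable itself); the `& 0xFF` mask is the downcast that keeps a negative byte from sign-extending into `ffffffd3`:

```java
// Hex-dumps a byte range. Without the 0xFF mask, a negative byte promoted to
// int sign-extends, and Integer.toHexString(-45) prints "ffffffd3".
public class HexDump {
    public static String toHex(byte[] bytes, int offset, int length) {
        StringBuilder sb = new StringBuilder(3 * length); // 3*length, not 3*bytes.length
        int end = offset + length;                        // computed once, not per iteration
        for (int i = offset; i < end; i++) {
            sb.append(' ').append(Integer.toHexString(bytes[i] & 0xFF));
        }
        return length > 0 ? sb.substring(1) : "";         // drop the single leading space
    }
}
```

With the mask, the array {54, 125, 64, -1, -45} renders as "36 7d 40 ff d3", matching the expected output in the report.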



[jira] [Updated] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9087:
-

Fix Version/s: (was: 0.94.12)
   0.94.11

> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90 - 300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the implementation of changedReaderObserver 
> from CopyOnWriteList to ConcurrentHashMap.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   
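The suggested fix, replacing the CopyOnWriteArrayList-backed observer collection with a ConcurrentHashMap-backed set, can be sketched in isolation as follows. This is a minimal stand-alone sketch, not the actual HBase patch; the ObserverSetSketch class and the String observer type are illustrative stand-ins:

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ObserverSetSketch {
    // Observer registry backed by a ConcurrentHashMap: add() is CAS-based
    // per hash bin, so threads opening scanners do not all park on a single
    // shared ReentrantLock the way CopyOnWriteArraySet.add() makes them.
    static final Set<String> observers =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    // Idempotent registration, mirroring addIfAbsent() semantics:
    // returns true only the first time a given observer is added.
    static boolean register(String observer) {
        return observers.add(observer);
    }

    public static void main(String[] args) {
        System.out.println(register("scanner-1")); // true  (newly added)
        System.out.println(register("scanner-1")); // false (already present)
    }
}
```

The trade-off is that iteration over a ConcurrentHashMap-backed set is weakly consistent rather than snapshot-based, which notification code must tolerate.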

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728331#comment-13728331
 ] 

Lars Hofhansl commented on HBASE-9087:
--

Committed to 0.94 as well.

> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90 - 300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the implementation of changedReaderObserver 
> from CopyOnWriteList to ConcurrentHashMap.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   



[jira] [Commented] (HBASE-9121) Add tracing into interesting parts of HBase

2013-08-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728326#comment-13728326
 ] 

stack commented on HBASE-9121:
--

Can't you put up a picture that would give others a clue as to why we should 
check this in? [~eclark]

> Add tracing into interesting parts of HBase
> ---
>
> Key: HBASE-9121
> URL: https://issues.apache.org/jira/browse/HBASE-9121
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-9121-PRE.patch
>
>
> Add tracing to zk, hfile reader, and hlog.



[jira] [Resolved] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-9115.
--

   Resolution: Fixed
Fix Version/s: 0.98.0

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: Mac OS X 10.8.4, HBase in pseudo-distributed mode, 
> Hadoop v1.2.0, HBase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml} 
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") in 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROW                           COLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.toBytes(cFamily), 
> column2);
> byte [] resultForColumn3 = result.getValue(Bytes.toBytes(cFamily), 
> column3);
> if (resultForColumn1 ==

[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728324#comment-13728324
 ] 

Lars Hofhansl commented on HBASE-9087:
--

Please don't mark an issue fixed if it has not been committed to all branches. 
We can either leave it open or remove (in this case) the 0.94.11 tag.

> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90 - 300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests changing the implementation of changedReaderObserver 
> from CopyOnWriteList to ConcurrentHashMap.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   



[jira] [Commented] (HBASE-9098) During recovery use ZK as the source of truth for region state

2013-08-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728318#comment-13728318
 ] 

stack commented on HBASE-9098:
--

Patch lgtm.  I skimmed it.  +1 if it works for you fellas.

> During recovery use ZK as the source of truth for region state 
> ---
>
> Key: HBASE-9098
> URL: https://issues.apache.org/jira/browse/HBASE-9098
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.0
>Reporter: Devaraj Das
>Assignee: Jeffrey Zhong
>Priority: Blocker
> Fix For: 0.95.2
>
> Attachments: hbase-9098.patch, hbase-9098-v1.patch
>
>
> In HLogSplitter:locateRegionAndRefreshLastFlushedSequenceId(HConnection, 
> byte[], byte[], String), we talk to the replayee regionserver to figure out 
> whether a region is in recovery or not. We should look at ZK only for this 
> piece of information (since that is the source of truth for recovery 
> otherwise).



[jira] [Commented] (HBASE-9099) logReplay could trigger double region assignment

2013-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728316#comment-13728316
 ] 

Hadoop QA commented on HBASE-9099:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12595688/hbase-9099-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6581//console

This message is automatically generated.

> logReplay could trigger double region assignment
> 
>
> Key: HBASE-9099
> URL: https://issues.apache.org/jira/browse/HBASE-9099
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9099.patch, hbase-9099-v1.patch
>
>
> The symptom: the first region assignment submitted in SSH is still in progress 
> when am.waitOnRegionToClearRegionsInTransition times out, at which point we 
> re-submit another SSH, which invokes another region assignment for the 
> region. This causes the region to get stuck in RIT status.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728315#comment-13728315
 ] 

Lars Hofhansl commented on HBASE-9115:
--

Well... We're doing n^2 sorting here. But changing the collection type (from 
List to SortedSet) and fixing protobufs is probably not worth it.
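The quadratic behavior Lars mentions, and the SortedSet alternative, can be illustrated with a plain-Java sketch. This is illustrative only; the class and method names below are not from HBase:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class SortedInsertSketch {
    // Keeping an ArrayList sorted by inserting each element at its
    // binarySearch position: each insert shifts O(n) elements, so n
    // inserts cost O(n^2) overall.
    static List<Integer> sortedListInsert(int[] values) {
        List<Integer> list = new ArrayList<>();
        for (int v : values) {
            int i = Collections.binarySearch(list, v);
            list.add(i < 0 ? -i - 1 : i, v); // negative result encodes the insertion point
        }
        return list;
    }

    // A SortedSet maintains order with O(log n) per insert instead,
    // at the cost of a different wire/collection type.
    static SortedSet<Integer> sortedSetInsert(int[] values) {
        SortedSet<Integer> set = new TreeSet<>();
        for (int v : values) {
            set.add(v);
        }
        return set;
    }

    public static void main(String[] args) {
        int[] vals = {5, 1, 4, 2, 3};
        System.out.println(sortedListInsert(vals));                 // [1, 2, 3, 4, 5]
        System.out.println(new ArrayList<>(sortedSetInsert(vals))); // [1, 2, 3, 4, 5]
    }
}
```

For the small per-row cell counts typical of an Append, the O(n^2) list path is cheap in absolute terms, which supports the "probably not worth it" call above.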

> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: Mac OS X 10.8.4, HBase in pseudo-distributed mode, 
> Hadoop v1.2.0, HBase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}
> *hbase-site.xml*:
> {code:xml} 
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use the HBase Java API and try to append the values Bytes.toBytes("one two") 
> and Bytes.toBytes(" three") in 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROW                           COLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part12));
> Result result = table.append(a);
> byte [] resultForColumn1 = result.getValue(Bytes.toBytes(cFamily), 
> column1);
> byte [] resultForColumn2 = result.getValue(Bytes.to

[jira] [Updated] (HBASE-9121) Add tracing into interesting parts of HBase

2013-08-02 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9121:
-

Attachment: HBASE-9121-PRE.patch

Stack was asking what the tracing patch looked like.  It still needs a new 
version of HTrace to be released.  That should be coming shortly.  Here's a 
patch (that won't work without HTrace from my GitHub).

> Add tracing into interesting parts of HBase
> ---
>
> Key: HBASE-9121
> URL: https://issues.apache.org/jira/browse/HBASE-9121
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-9121-PRE.patch
>
>
> Add tracing to zk, hfile reader, and hlog.



[jira] [Updated] (HBASE-8663) a HBase Shell command to list the tables replicated from current cluster

2013-08-02 Thread Demai Ni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Demai Ni updated HBASE-8663:


Attachment: HBASE-8663-trunk-v2.patch

Many thanks for JD's comments. I updated the patch accordingly. 

About the failure from HadoopQA, I didn't find the root cause. The failure 
was due to a table-already-exists error when TestAdmin tried to create a new 
table for RPC-timeout testing. This patch adds a new method in ReplicationAdmin 
and modifies some Ruby code to use the new method; the logic doesn't cross paths. 
Maybe there is some other reason? Anyway, sorry that I have to 'abuse' the testing 
process one more time; hopefully, the problem goes away. 

> a HBase Shell command to list the tables replicated from current cluster
> 
>
> Key: HBASE-8663
> URL: https://issues.apache.org/jira/browse/HBASE-8663
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, shell
> Environment: clusters setup as Master and Slave for replication of 
> tables 
>Reporter: Demai Ni
>Assignee: Demai Ni
>Priority: Critical
> Attachments: HBASE-8663.PATCH, HBASE-8663-trunk-v0.patch, 
> HBASE-8663-trunk-v1.patch, HBASE-8663-trunk-v2.patch, HBASE-8663-v2.PATCH
>
>
> Thanks for the discussion and the very good suggestions; I'd reduce the scope of 
> this jira to only display the tables replicated from the current cluster. Since 
> there is currently no good (accurate and consistent) way to flag a table on the 
> slave cluster, this jira will not cover that scenario. Instead, the patch will be 
> flexible enough to adapt to it, and a follow-up JIRA will be opened to address it. 
> The shell command and output will look like the following. Since all replication 
> is 'global', there is no need to display the cluster name here. In the future, the 
> command will be extended for other scenarios, such as 1) replicating only to 
> selected peers or 2) indicating table:colfam on the slave side.
> {code: title=hbase shell command:list_replicated_tables |borderStyle=solid}
> hbase(main):001:0> list_replicated_tables
> TABLE:COLUMNFAMILY   ReplicationType  
>  
>  t1_dn:cf1   GLOBAL   
>  
>  t2_dn:cf2   GLOBAL   
>  
>  usertable:familyGLOBAL   
>  
> 3 row(s) in 0.4110 seconds
> hbase(main):003:0> list_replicated_tables "dn"
> TABLE:COLUMNFAMILY   ReplicationType  
>  
>  t1_dn:cf1   GLOBAL   
>  
>  t2_dn:cf2   GLOBAL   
>  
> 2 row(s) in 0.0280 seconds
> {code} 
> -- The original JIRA description, kept as the history of 
> the discussion ---
> This jira is to provide an HBase shell command that gives users an 
> overview of the tables/columnfamilies currently being replicated. The 
> information will help system administrators with design and planning, and also 
> help application programmers know which tables/columns to watch out for 
> (for example, not to modify a replicated columnfamily on the slave 
> cluster).
> Currently there is no easy way to tell which table(s)/columnfamily(ies) are 
> replicated from or to a particular cluster. 
>   
> On the Master cluster, an indirect method can be used by combining two steps: 1) 
> $describe 'usertable'  and 2)  $list_peers to map the REPLICATION_SCOPE to the 
> target (aka slave) cluster.   
>   
> On the slave cluster, there is no existing API/method to list all the tables 
> replicated to this cluster.
> Here is an example, and prototype for Master cluster
> {code: title=hbase shell command:list_replicated_tables |borderStyle=solid}
> hbase(main):001:0> list_replicated_tables
>  TABLE  COLUMNFAMILY   TARGET_CLUSTER
>  scores  coursehdtest017.svl.ibm.com:2181:/hbase
>  t3_dn   cf1   hdtest017.svl.ibm.com:2181:/hbase
>  usertable   familyhdtest017.svl.ibm.com:2181:/hbase
> 3 row(s) in 0.3380 seconds
> {code}
> -- end of original description 


[jira] [Updated] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on TableMapper

2013-08-02 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9112:


Affects Version/s: (was: 0.2.0)
   0.94.6.1

> Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on 
> TableMapper
> -
>
> Key: HBASE-9112
> URL: https://issues.apache.org/jira/browse/HBASE-9112
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, mapreduce
>Affects Versions: 0.94.6.1
> Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
>Reporter: Debanjan Bhattacharyya
>
> When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
> in the following way
> TableMapReduceUtil.initTableMapperJob("mytable", 
>   MyScan, 
>   MyMapper.class,
>   MyKey.class, 
>   MyValue.class, 
>   myJob,true,  
> MyTableInputFormat.class);
> I get error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.mapreduce.TableMapper
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
>   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> If I do not use the last two parameters, there is no error.
> What is going wrong here?
> Thanks
> Regards
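A ClassNotFoundException for TableMapper means the jar that defines it was not visible to the classloader resolving the custom input format, typically because the HBase jar is missing from the job's task classpath. The failure mode can be reproduced in miniature with an intentionally empty classloader (illustrative only; no HBase code is involved, and the class is looked up by name only):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class MissingDependencyDemo {
    public static void main(String[] args) {
        // No URLs and a null parent (bootstrap only): nothing outside the
        // core java.* classes resolves -- the same failure mode as a task
        // JVM whose classpath lacks the jar defining TableMapper.
        ClassLoader empty = new URLClassLoader(new URL[0], null);
        try {
            empty.loadClass("org.apache.hadoop.hbase.mapreduce.TableMapper");
            System.out.println("loaded");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException: " + e.getMessage());
        }
    }
}
```

In the real job the usual remedy is ensuring the HBase client jars are shipped with the job (which is what the addDependencyJars flag is meant to arrange) or are present on the cluster classpath.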



[jira] [Commented] (HBASE-9093) Hbase client API: Restore the writeToWal method

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728296#comment-13728296
 ] 

Hudson commented on HBASE-9093:
---

SUCCESS: Integrated in HBase-TRUNK #4335 (See 
[https://builds.apache.org/job/HBase-TRUNK/4335/])
HBASE-9093 Hbase client API: Restore the writeToWal method; REVERT (stack: rev 
1509853)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestPutWriteToWal.java


> Hbase client API: Restore the writeToWal method
> ---
>
> Key: HBASE-9093
> URL: https://issues.apache.org/jira/browse/HBASE-9093
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Usability
>Affects Versions: 0.95.0
>Reporter: Hari Shreedharan
> Fix For: 0.95.2
>
> Attachments: HBASE-9093.patch, HBASE-9093.patch, HBASE-9093.patch
>
>
> The writeToWal method is used by downstream projects like Flume to disable writes 
> to the WAL, to improve performance when durability is not strictly required. But 
> renaming this method to setDurability forces us to use reflection to support 
> hbase versions < 95 - which in turn hurts performance, as this method needs to 
> be called on every single write. I recommend adding the old method back as 
> deprecated and removing it once hbase-95/96 becomes the popular version used 
> in prod.
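The reflection workaround described above follows a familiar pattern: probe once for the old method, then invoke it reflectively on every write. Below is a self-contained sketch with a local stand-in Put class; it is not Flume's or HBase's actual code, and the real class would be org.apache.hadoop.hbase.client.Put:

```java
import java.lang.reflect.Method;

public class WalMethodProbe {
    // Local stand-in for the pre-0.95 client API surface.
    public static class Put {
        public boolean writeToWal = true;
        public void setWriteToWAL(boolean b) { writeToWal = b; }
    }

    // Probe once at startup for the old method. The per-write
    // Method.invoke() overhead is the performance cost the report
    // complains about.
    static final Method SET_WRITE_TO_WAL = probe();

    static Method probe() {
        try {
            return Put.class.getMethod("setWriteToWAL", boolean.class);
        } catch (NoSuchMethodException e) {
            return null; // on >= 0.95 one would fall back to setDurability
        }
    }

    public static void main(String[] args) throws Exception {
        Put put = new Put();
        if (SET_WRITE_TO_WAL != null) {
            SET_WRITE_TO_WAL.invoke(put, false); // disable WAL for this write
        }
        System.out.println(put.writeToWal); // false
    }
}
```

Restoring the old method as a deprecated direct call would let this hot path skip reflection entirely.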



[jira] [Commented] (HBASE-9031) ImmutableBytesWritable.toString() should downcast the bytes before converting to hex string

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728300#comment-13728300
 ] 

Hudson commented on HBASE-9031:
---

SUCCESS: Integrated in HBase-TRUNK #4335 (See 
[https://builds.apache.org/job/HBase-TRUNK/4335/])
HBASE-9031 ImmutableBytesWritable.toString() should downcast the bytes before 
converting to hex string (stack: rev 1509842)
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java


> ImmutableBytesWritable.toString() should downcast the bytes before converting 
> to hex string
> ---
>
> Key: HBASE-9031
> URL: https://issues.apache.org/jira/browse/HBASE-9031
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.95.1, 0.94.9
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Minor
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9031.patch, HBASE-9031.patch, HBASE-9031.patch
>
>
> The attached patch addresses a few issues.
> # We need only (3*this.length) capacity in ByteBuffer and not 
> (3*this.bytes.length).
> # Do not calculate (offset + length) at every iteration.
> # No test is required at every iteration to add space (' ') before every byte 
> other than the first one. Uses {{sb.substring(1)}} instead.
> # Finally and most importantly (the original issue of this report), downcast 
> the promoted int (the parameter to {{Integer.toHexString()}}) to byte range.
> Without #4, the byte array \{54,125,64, -1, -45\} is transformed to "36 7d 40 
>  ffd3" instead of "36 7d 40 ff d3".
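The fix in point #4 can be sketched in plain Java. toHex below is a hypothetical helper, not the actual ImmutableBytesWritable code; the numbered comments map to the points above.

```java
public class HexDump {
    // Sketch of fix #4 above: mask with 0xff to downcast the sign-extended
    // int before formatting, so -1 renders as "ff" rather than the
    // "ffffffff" that Integer.toHexString(-1) would produce.
    static String toHex(byte[] bytes, int offset, int length) {
        StringBuilder sb = new StringBuilder(3 * length); // 3 chars per byte (#1)
        int end = offset + length;                        // computed once (#2)
        for (int i = offset; i < end; i++) {
            sb.append(String.format(" %02x", bytes[i] & 0xff)); // downcast (#4)
        }
        return sb.substring(1); // drop the leading space, no per-byte test (#3)
    }

    public static void main(String[] args) {
        System.out.println(toHex(new byte[] {54, 125, 64, -1, -45}, 0, 5));
        // prints "36 7d 40 ff d3"
    }
}
```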



[jira] [Updated] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on TableMapper

2013-08-02 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9112:


Component/s: mapreduce

> Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on 
> TableMapper
> -
>
> Key: HBASE-9112
> URL: https://issues.apache.org/jira/browse/HBASE-9112
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, mapreduce
>Affects Versions: 0.2.0
> Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
>Reporter: Debanjan Bhattacharyya
>
> When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
> in the following way
> TableMapReduceUtil.initTableMapperJob("mytable", 
>   MyScan, 
>   MyMapper.class,
>   MyKey.class, 
>   MyValue.class, 
>   myJob,true,  
> MyTableInputFormat.class);
> I get error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.mapreduce.TableMapper
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
>   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> If I do not use the last two parameters, there is no error.
> What is going wrong here?
> Thanks
> Regards
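A common cause of this kind of ClassNotFoundException is that the HBase jars never reach the task classpath when a custom input format is supplied. The following is a hedged, non-compiling sketch of one workaround: explicitly calling TableMapReduceUtil.addDependencyJars(Job) after job setup. MyMapper, MyKey, MyValue, MyTableInputFormat, and myScan are the reporter's placeholder names, and the job name is illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class JobSetup {
    public static Job createJob(Scan myScan) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Job myJob = new Job(conf, "my-scan-job");

        // Same call as in the report, with addDependencyJars left true so the
        // jars referenced via MyTableInputFormat are shipped with the job.
        TableMapReduceUtil.initTableMapperJob("mytable", myScan,
            MyMapper.class, MyKey.class, MyValue.class, myJob,
            true, MyTableInputFormat.class);

        // Belt and braces: also ship the jars containing the job's own
        // classes plus the HBase/ZooKeeper dependencies.
        TableMapReduceUtil.addDependencyJars(myJob);
        return myJob;
    }
}
```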



[jira] [Commented] (HBASE-8983) HBaseConnection#deleteAllConnections does not always delete

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728298#comment-13728298
 ] 

Hudson commented on HBASE-8983:
---

SUCCESS: Integrated in HBase-TRUNK #4335 (See 
[https://builds.apache.org/job/HBase-TRUNK/4335/])
HBASE-8983 HBaseConnection#deleteAllConnections does not always delete (Nicolas 
Liochon via JD) (jdcryans: rev 1509846)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java


> HBaseConnection#deleteAllConnections does not always delete
> ---
>
> Key: HBASE-8983
> URL: https://issues.apache.org/jira/browse/HBASE-8983
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.95.1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.95.2
>
> Attachments: 8983.v1.patch, 8983.v2.patch, 8983.v3.patch, 8983-v4.txt
>
>
> Cf; mailing list 
> http://search-hadoop.com/m/wurpu1s8Fhs/liochon&subj=Re+Connection+reference+counting



[jira] [Commented] (HBASE-9119) hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize of a table

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728297#comment-13728297
 ] 

Hudson commented on HBASE-9119:
---

SUCCESS: Integrated in HBase-TRUNK #4335 (See 
[https://builds.apache.org/job/HBase-TRUNK/4335/])
HBASE-9119 hbase.mapreduce.hfileoutputformat.blocksize should configure with 
blocksize of a table (stack: rev 1509835)
* /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java


> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize 
> of a table
> --
>
> Key: HBASE-9119
> URL: https://issues.apache.org/jira/browse/HBASE-9119
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.95.2
>
>
> Forward port the HBASE-8949 0.94 issue.



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728299#comment-13728299
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in HBase-TRUNK #4335 (See 
[https://builds.apache.org/job/HBase-TRUNK/4335/])
HBASE-9115 HTableInterface.append operation may overwrites values (Ted Yu) 
(tedyu: rev 1509849)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use Hbase Java API and I try to append values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") in 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), colum

[jira] [Created] (HBASE-9124) _acl_ table should be migrated to system namespace

2013-08-02 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-9124:


 Summary: _acl_ table should be migrated to system namespace
 Key: HBASE-9124
 URL: https://issues.apache.org/jira/browse/HBASE-9124
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar


The _acl_ table is an (optional) system table by definition. We can migrate it 
to the system namespace. 

We should also handle pre-existing data in the _acl_ table. 



[jira] [Commented] (HBASE-9115) HTableInterface.append operation may overwrites values

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728289#comment-13728289
 ] 

Hudson commented on HBASE-9115:
---

SUCCESS: Integrated in hbase-0.95 #397 (See 
[https://builds.apache.org/job/hbase-0.95/397/])
HBASE-9115 HTableInterface.append operation may overwrites values (Ted Yu) 
(tedyu: rev 1509854)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> HTableInterface.append operation may overwrites values
> --
>
> Key: HBASE-9115
> URL: https://issues.apache.org/jira/browse/HBASE-9115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.10
> Environment: MAC OS X 10.8.4, Hbase in the pseudo-distributed mode, 
> hadoop v1.2.0, Hbase Java API based client.
> *hdfs-site.xml*:
> {code:xml} 
> 
>  
>  dfs.replication
>  1
>  
> 
> dfs.support.append
> true
> 
> 
> {code}
> *hbase-site.xml*:
> {code:xml} 
> 
>   
> hbase.rootdir
> hdfs://localhost:9000/hbase
>   
> 
> hbase.cluster.distributed
> true
> 
> 
> hbase.zookeeper.quorum
> localhost
> 
> 
> dfs.support.append
> true
> 
> 
> {code} 
>Reporter: Aleksandr B
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.95.2, 0.94.11
>
> Attachments: 9115-0.94.txt, 9115-0.94-v2.txt, 9115-trunk.txt
>
>
> I use Hbase Java API and I try to append values Bytes.toBytes("one two") and 
> Bytes.toBytes(" three") in 3 columns.
> Only for 2 out of these 3 columns is the result "one two three".
> *Output from the hbase shell:*
> {noformat} 
> hbase(main):008:0* scan "mytesttable"
> ROWCOLUMN+CELL
> 
>  mytestRowKey  column=TestA:dlbytes, 
> timestamp=1375436156140, value=one two three  
>
>  mytestRowKey  column=TestA:tbytes, 
> timestamp=1375436156140, value=one two three  
> 
>  mytestRowKey  column=TestA:ulbytes, 
> timestamp=1375436156140, value= three 
>
> 1 row(s) in 0.0280 seconds
> {noformat}
> *My test code:*
> {code:title=Database.java|borderStyle=solid}
> import static org.junit.Assert.*;
> import java.io.IOException;
>  
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Append;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.junit.Test;
> ...
> @Test
> public void testAppend() throws IOException {
> byte [] rowKey = Bytes.toBytes("mytestRowKey");
> byte [] column1 = Bytes.toBytes("ulbytes");
> byte [] column2 = Bytes.toBytes("dlbytes");
> byte [] column3 = Bytes.toBytes("tbytes");
> String part11 = "one two";
> String part12 = " three";
> String cFamily = "TestA";
> String TABLE = "mytesttable";
> Configuration conf = HBaseConfiguration.create();
> HTablePool pool = new HTablePool(conf, 10);
> HBaseAdmin admin = new HBaseAdmin(conf);
> 
> if(admin.tableExists(TABLE)){
> admin.disableTable(TABLE);
> admin.deleteTable(TABLE);
> }
> 
> HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE);
> HColumnDescriptor hcd = new HColumnDescriptor(cFamily);
> hcd.setMaxVersions(1);
> tableDescriptor.addFamily(hcd);
> admin.createTable(tableDescriptor);
> HTableInterface table = pool.getTable(TABLE);
> 
> Append a = new Append(rowKey);
> a.setReturnResults(false);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part11));
> a.add(Bytes.toBytes(cFamily), column3, Bytes.toBytes(part11));
> table.append(a);
> a = new Append(rowKey);
> a.add(Bytes.toBytes(cFamily), column1, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFamily), column2, Bytes.toBytes(part12));
> a.add(Bytes.toBytes(cFa

[jira] [Commented] (HBASE-9119) hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize of a table

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728287#comment-13728287
 ] 

Hudson commented on HBASE-9119:
---

SUCCESS: Integrated in hbase-0.95 #397 (See 
[https://builds.apache.org/job/hbase-0.95/397/])
HBASE-9119 hbase.mapreduce.hfileoutputformat.blocksize should configure with 
blocksize of a table (stack: rev 1509836)
* /hbase/branches/0.95/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java


> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize 
> of a table
> --
>
> Key: HBASE-9119
> URL: https://issues.apache.org/jira/browse/HBASE-9119
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.95.2
>
>
> Forward port the HBASE-8949 0.94 issue.



[jira] [Commented] (HBASE-8983) HBaseConnection#deleteAllConnections does not always delete

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728288#comment-13728288
 ] 

Hudson commented on HBASE-8983:
---

SUCCESS: Integrated in hbase-0.95 #397 (See 
[https://builds.apache.org/job/hbase-0.95/397/])
HBASE-8983 HBaseConnection#deleteAllConnections does not always delete (Nicolas 
Liochon via JD) (jdcryans: rev 1509845)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java


> HBaseConnection#deleteAllConnections does not always delete
> ---
>
> Key: HBASE-8983
> URL: https://issues.apache.org/jira/browse/HBASE-8983
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.95.1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 0.98.0, 0.95.2
>
> Attachments: 8983.v1.patch, 8983.v2.patch, 8983.v3.patch, 8983-v4.txt
>
>
> Cf; mailing list 
> http://search-hadoop.com/m/wurpu1s8Fhs/liochon&subj=Re+Connection+reference+counting



[jira] [Commented] (HBASE-9087) Handlers being blocked during reads

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728291#comment-13728291
 ] 

Lars Hofhansl commented on HBASE-9087:
--

Meh... I'm just gonna commit this to 0.94 as well.

> Handlers being blocked during reads
> ---
>
> Key: HBASE-9087
> URL: https://issues.apache.org/jira/browse/HBASE-9087
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 0.94.7, 0.95.1
>Reporter: Pablo Medina
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.2, 0.94.12
>
> Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch
>
>
> I'm having a lot of handlers (90-300 approx.) being blocked when reading 
> rows. They are blocked during changedReaderObserver registration.
> Lars Hofhansl suggests to change the implementation of changedReaderObserver 
> from CopyOnWriteList to ConcurrentHashMap.
> Here is a stack trace: 
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 
> nid=0x2244 waiting on condition [0x7ff51fefd000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc5c13ae8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
> at 
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138)
> at 
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3755)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>   
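The suggested change can be sketched with a ConcurrentHashMap-backed set (Collections.newSetFromMap, available since Java 6). CopyOnWriteArraySet takes a ReentrantLock and copies the backing array on every add, which is what the blocked handlers in the stack trace above are queued on; ConcurrentHashMap is lock-striped and copies nothing. The Observers class and the String element type are illustrative, not the actual Store code.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class Observers {
    // Add/remove no longer contend on a single lock, and add performs no
    // array copy, unlike CopyOnWriteArraySet.add.
    private final Set<String> observers =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    public boolean register(String observer)   { return observers.add(observer); }
    public boolean deregister(String observer) { return observers.remove(observer); }

    public static void main(String[] args) {
        Observers o = new Observers();
        System.out.println(o.register("scanner-1")); // prints "true": newly added
        System.out.println(o.register("scanner-1")); // prints "false": already present
    }
}
```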



[jira] [Commented] (HBASE-9031) ImmutableBytesWritable.toString() should downcast the bytes before converting to hex string

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728290#comment-13728290
 ] 

Hudson commented on HBASE-9031:
---

SUCCESS: Integrated in hbase-0.95 #397 (See 
[https://builds.apache.org/job/hbase-0.95/397/])
HBASE-9031 ImmutableBytesWritable.toString() should downcast the bytes before 
converting to hex string (stack: rev 1509840)
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java


> ImmutableBytesWritable.toString() should downcast the bytes before converting 
> to hex string
> ---
>
> Key: HBASE-9031
> URL: https://issues.apache.org/jira/browse/HBASE-9031
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.95.1, 0.94.9
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Minor
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9031.patch, HBASE-9031.patch, HBASE-9031.patch
>
>
> The attached patch addresses a few issues.
> # We need only (3*this.length) capacity in ByteBuffer and not 
> (3*this.bytes.length).
> # Do not calculate (offset + length) at every iteration.
> # No test is required at every iteration to add space (' ') before every byte 
> other than the first one. Uses {{sb.substring(1)}} instead.
> # Finally and most importantly (the original issue of this report), downcast 
> the promoted int (the parameter to {{Integer.toHexString()}}) to byte range.
> Without #4, the byte array \{54,125,64, -1, -45\} is transformed to "36 7d 40 
>  ffd3" instead of "36 7d 40 ff d3".



[jira] [Commented] (HBASE-9075) [0.94] Backport HBASE-5760 Unit tests should write only under /target to 0.94

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728285#comment-13728285
 ] 

Hudson commented on HBASE-9075:
---

SUCCESS: Integrated in HBase-0.94 #1091 (See 
[https://builds.apache.org/job/HBase-0.94/1091/])
HBASE-9075 [0.94] Backport HBASE-5760 Unit tests should write only under 
/target to 0.94 (addendum patch to fix Hadoop2 build) (enis: rev 1509873)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/MapreduceTestingShim.java


> [0.94] Backport HBASE-5760 Unit tests should write only under /target to 0.94
> -
>
> Key: HBASE-9075
> URL: https://issues.apache.org/jira/browse/HBASE-9075
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.94.11
>
> Attachments: hbase-9075_addendum.patch, hbase-9075_v1.patch
>
>
> Backporting HBASE-5760 is a good idea. 0.94 tests mess up the root level 
> directory a lot.



[jira] [Commented] (HBASE-5760) Unit tests should write only under /target

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728286#comment-13728286
 ] 

Hudson commented on HBASE-5760:
---

SUCCESS: Integrated in HBase-0.94 #1091 (See 
[https://builds.apache.org/job/HBase-0.94/1091/])
HBASE-9075 [0.94] Backport HBASE-5760 Unit tests should write only under 
/target to 0.94 (addendum patch to fix Hadoop2 build) (enis: rev 1509873)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/MapreduceTestingShim.java


> Unit tests should write only under /target
> --
>
> Key: HBASE-5760
> URL: https://issues.apache.org/jira/browse/HBASE-5760
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.95.2
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Minor
> Fix For: 0.95.0
>
> Attachments: HBASE-5760_v1.patch
>
>
> Some of the unit test runs result in files under $hbase_home/test, 
> $hbase_home/build, or $hbase_home/. We should ensure that all tests use 
> target as their data location.  



[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728280#comment-13728280
 ] 

Lars Hofhansl commented on HBASE-7709:
--

It would allow an A -> B <-> C scenario, which is currently not possible.
At the same time it would break setups like A -> B -> C -> A.


> Infinite loop possible in Master/Master replication
> ---
>
> Key: HBASE-7709
> URL: https://issues.apache.org/jira/browse/HBASE-7709
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.6, 0.95.1
>Reporter: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
>
>  We just discovered the following scenario:
> # Clusters A and B are set up in master/master replication
> # By accident we had Cluster C replicate to Cluster A.
> Now all edits originating from C will be bouncing between A and B. Forever!
> The reason is that when the edits come in from C the cluster ID is already 
> set and won't be reset.
> We have a couple of options here:
> # Optionally only support master/master (not cycles of more than two 
> clusters). In that case we can always reset the cluster ID in the 
> ReplicationSource. That means that now cycles > 2 will have the data cycle 
> forever. This is the only option that requires no changes in the HLog format.
> # Instead of a single cluster id per edit maintain a (unordered) set of 
> cluster id that have seen this edit. Then in ReplicationSource we drop any 
> edit that the sink has seen already. This is the cleanest approach, but it 
> might need a lot of data stored per edit if there are many clusters involved.
> # Maintain a configurable counter of the maximum cycle size we want to 
> support. Could default to 10 (even maybe even just). Store a hop-count in the 
> WAL and the ReplicationSource increases that hop-count on each hop. If we're 
> over the max, just drop the edit.
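Option #3 above can be sketched as follows. The Edit class, the forward helper, and the MAX_HOPS default of 10 are illustrative, not the actual HLog format or ReplicationSource code.

```java
public class HopCount {
    static final int MAX_HOPS = 10; // the configurable maximum cycle size

    // Illustrative stand-in for a WAL edit carrying a hop count.
    static class Edit {
        final int hops;
        Edit(int hops) { this.hops = hops; }
    }

    // Called by a replication source before shipping an edit to the next
    // sink: bump the hop count, or drop the edit once it exceeds the max.
    static Edit forward(Edit e) {
        if (e.hops + 1 > MAX_HOPS) {
            return null; // cycle longer than MAX_HOPS: stop the loop
        }
        return new Edit(e.hops + 1);
    }

    public static void main(String[] args) {
        Edit e = new Edit(0);
        int shipped = 0;
        while ((e = forward(e)) != null) {
            shipped++;
        }
        System.out.println(shipped); // prints "10": the edit dies after 10 hops
    }
}
```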



[jira] [Comment Edited] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728280#comment-13728280
 ] 

Lars Hofhansl edited comment on HBASE-7709 at 8/2/13 11:51 PM:
---

It would allow a A \-> B <-> C scenario, which is currently not possible.
At the same time it would break setups like A -> B -> C -> A


  was (Author: lhofhansl):
It would allow a A -> B <-> C scenario, which is currently not possible.
At the same time it would break setups like A -> B -> C -> A

  
> Infinite loop possible in Master/Master replication
> ---
>
> Key: HBASE-7709
> URL: https://issues.apache.org/jira/browse/HBASE-7709
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.6, 0.95.1
>Reporter: Lars Hofhansl
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
>
>  We just discovered the following scenario:
> # Clusters A and B are set up in master/master replication
> # By accident we had Cluster C replicate to Cluster A.
> Now all edits originating from C will be bouncing between A and B. Forever!
> The reason is that when the edits come in from C the cluster ID is already 
> set and won't be reset.
> We have a couple of options here:
> # Optionally only support master/master (not cycles of more than two 
> clusters). In that case we can always reset the cluster ID in the 
> ReplicationSource. That means that now cycles > 2 will have the data cycle 
> forever. This is the only option that requires no changes in the HLog format.
> # Instead of a single cluster id per edit maintain a (unordered) set of 
> cluster id that have seen this edit. Then in ReplicationSource we drop any 
> edit that the sink has seen already. This is the cleanest approach, but it 
> might need a lot of data stored per edit if there are many clusters involved.
> # Maintain a configurable counter of the maximum cycle size we want to 
> support. Could default to 10 (even maybe even just). Store a hop-count in the 
> WAL and the ReplicationSource increases that hop-count on each hop. If we're 
> over the max, just drop the edit.



[jira] [Commented] (HBASE-7325) Replication reacts slowly on a lightly-loaded cluster

2013-08-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728274#comment-13728274
 ] 

Lars Hofhansl commented on HBASE-7325:
--

If you feel strongly, fine by me. We can always jack up sleepforretries.

> Replication reacts slowly on a lightly-loaded cluster
> -
>
> Key: HBASE-7325
> URL: https://issues.apache.org/jira/browse/HBASE-7325
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gabriel Reid
>Priority: Minor
> Attachments: HBASE-7325.patch, HBASE-7325.v2.patch
>
>
> ReplicationSource uses a backing-off algorithm to sleep for an increasing 
> duration when an error is encountered in the replication run loop. However, 
> this backing-off is also performed when there is nothing found to replicate 
> in the HLog.
> Assuming default settings (1 second base retry sleep time, and maximum 
> multiplier of 10), this means that replication takes up to 10 seconds to 
> occur when there is a break of about 55 seconds without anything being 
> written. As there is no error condition, and there is apparently no 
> substantial load on the regionserver in this situation, it would probably 
> make more sense to not back off in non-error situations.
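The proposed change amounts to growing the sleep multiplier only on errors. A hedged sketch (the class is an illustrative stand-in, not the actual ReplicationSource code; the constants mirror the 1-second base and 10x cap quoted above):

```java
// Illustrative sketch: back off only on errors, reset on a quiet HLog.
// Not the actual ReplicationSource implementation.
public class ReplicationBackoff {
  static final long SLEEP_FOR_RETRIES_MS = 1000; // 1 second base sleep
  static final int MAX_RETRIES_MULTIPLIER = 10;  // cap from the defaults above

  private int multiplier = 1;

  /** Next sleep duration: back off only when the last attempt hit an error. */
  long nextSleepMs(boolean lastAttemptFailed) {
    if (lastAttemptFailed) {
      multiplier = Math.min(multiplier + 1, MAX_RETRIES_MULTIPLIER);
    } else {
      multiplier = 1; // nothing to replicate is not an error: stay responsive
    }
    return SLEEP_FOR_RETRIES_MS * multiplier;
  }

  public static void main(String[] args) {
    ReplicationBackoff b = new ReplicationBackoff();
    System.out.println(b.nextSleepMs(true));  // 2000: an error bumps the multiplier
    System.out.println(b.nextSleepMs(false)); // 1000: an empty HLog resets to base
  }
}
```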



[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-02 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728275#comment-13728275
 ] 

Jean-Daniel Cryans commented on HBASE-7709:
---

{quote}
So I'd like to introduce a config option: hbase.enable.cyclic.replication. The 
default is "true" to maintain the current functionality.
If set to false we'd reset the cluster id at each source and hence would only 
support master-master replication (cycles involving more than 2 nodes would 
lead to infinite loops).
{quote}

This seems like a lose-lose. The current functionality has the problem that 
HBASE-7709 is about, and setting the config to false would just make it worse?




[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728270#comment-13728270
 ] 

Ted Yu commented on HBASE-8408:
---

Paging through review board was somehow very slow.

{code}
+    for (Iterator iter = allRegions.keySet().iterator(); iter.hasNext();) {
+      if (HTableDescriptor.isSystemTable(iter.next().getTableName())) {
+        iter.remove();
+      }
+    }
{code}
I see 3 places in the patch where the above construct is used. It would be nice 
to extract it into a util method.
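Such a util method might look like the following sketch. The stand-in types (TableName, RegionInfo) and the method name removeSystemTables are assumptions for illustration; only the iterator-removal construct comes from the quoted patch:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class RegionFilterUtil {
  /** Minimal stand-ins for the HBase types in the quoted patch. */
  static class TableName {
    final boolean system;
    TableName(boolean system) { this.system = system; }
  }

  static class RegionInfo {
    final TableName table;
    RegionInfo(TableName table) { this.table = table; }
    TableName getTableName() { return table; }
  }

  /** Hypothetical util method: drop all regions of system tables, in place. */
  static <V> void removeSystemTables(Map<RegionInfo, V> allRegions) {
    for (Iterator<RegionInfo> iter = allRegions.keySet().iterator(); iter.hasNext();) {
      if (iter.next().getTableName().system) {
        iter.remove();
      }
    }
  }

  public static void main(String[] args) {
    Map<RegionInfo, String> regions = new HashMap<>();
    regions.put(new RegionInfo(new TableName(true)), "system");
    regions.put(new RegionInfo(new TableName(false)), "user");
    removeSystemTables(regions);
    System.out.println(regions.size()); // 1: only the user region is left
  }
}
```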
In TableNamespaceManager:
{code}
+      zkNamespaceManager.update(ns);
+    }
+    scanner.close();
{code}
Can you place the close() call in a finally block?
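The review comment asks for the standard try/finally idiom so the scanner is closed even when the scan throws. A generic sketch, with a simple stand-in class instead of a real HBase ResultScanner:

```java
import java.io.Closeable;

public class ScannerCleanup {
  /** Stand-in for an HBase ResultScanner; only close() matters for the idiom. */
  static class FakeScanner implements Closeable {
    boolean closed = false;
    @Override public void close() { closed = true; }
  }

  /** close() runs in finally, so the scanner is released even if the scan throws. */
  static boolean scanAndClose(FakeScanner scanner, boolean failMidScan) {
    try {
      if (failMidScan) {
        throw new RuntimeException("error during scan");
      }
      // ... iterate over scanner results here ...
    } catch (RuntimeException e) {
      // swallowed for the demo; real code would rethrow or wrap
    } finally {
      scanner.close();
    }
    return scanner.closed;
  }

  public static void main(String[] args) {
    System.out.println(scanAndClose(new FakeScanner(), true)); // true: closed despite the failure
  }
}
```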

In upsert(NamespaceDescriptor ns), is the IOException from table.put(p) not 
considered fatal?

Should we disallow splitting the namespace table?

> Implement namespace
> ---
>
> Key: HBASE-8408
> URL: https://issues.apache.org/jira/browse/HBASE-8408
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_11.patch, HBASE-8015_12.patch, 
> HBASE-8015_1.patch, HBASE-8015_2.patch, HBASE-8015_3.patch, 
> HBASE-8015_4.patch, HBASE-8015_5.patch, HBASE-8015_6.patch, 
> HBASE-8015_7.patch, HBASE-8015_8.patch, HBASE-8015_9.patch, HBASE-8015.patch, 
> TestNamespaceMigration.tgz, TestNamespaceUpgrade.tgz
>
>




[jira] [Commented] (HBASE-8408) Implement namespace

2013-08-02 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728267#comment-13728267
 ] 

Francis Liu commented on HBASE-8408:


{quote}
Francis Liu So, regards system tables, they are all treated like meta table? 
They all get assigned out before user tables? Or does meta go out first before 
everything else and other system tables are a new tier of assigning?
{quote}
Currently system tables are a new tier: meta first, then system, then user 
tables. Though it's missing the open handler queue and WAL; let's add that as a 
follow-on patch.

{quote}
(I wonder if the meta log should become a system tables log now?)
{quote}
That would be the next step. We need to add the notion of priorities to system 
tables, which we need anyway, as long as we keep the sizes of system tables 
minimal; otherwise we risk slowing down MTTR. Though that should be a general 
principle for system tables in any case?
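The meta-first, then-system, then-user assignment ordering could be modeled as a simple tier comparator. A hypothetical sketch (the Tier enum and table names are illustrative, not HBase APIs):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class AssignmentTiers {
  /** Hypothetical tiers: lower ordinal is assigned first. */
  enum Tier { META, SYSTEM, USER }

  static class Table {
    final String name;
    final Tier tier;
    Table(String name, Tier tier) { this.name = name; this.tier = tier; }
  }

  /** Order tables for assignment by tier; a real implementation would also
   *  rank priorities within a tier. */
  static List<Table> assignmentOrder(List<Table> tables) {
    List<Table> sorted = new ArrayList<>(tables);
    sorted.sort(Comparator.comparing(t -> t.tier));
    return sorted;
  }

  public static void main(String[] args) {
    List<Table> order = assignmentOrder(List.of(
        new Table("usertable", Tier.USER),
        new Table("hbase:meta", Tier.META),
        new Table("namespace", Tier.SYSTEM)));
    for (Table t : order) {
      System.out.println(t.name); // hbase:meta, namespace, usertable
    }
  }
}
```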





[jira] [Commented] (HBASE-9095) AssignmentManager's handleRegion should respect the single threaded nature of the processing

2013-08-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728265#comment-13728265
 ] 

stack commented on HBASE-9095:
--

Since these handlers no longer run in executors, are there executors we start 
in the RS that are now unused?  If so, should these be removed as part of this 
patch?  Otherwise, nice one lads.

> AssignmentManager's handleRegion should respect the single threaded nature of 
> the processing
> 
>
> Key: HBASE-9095
> URL: https://issues.apache.org/jira/browse/HBASE-9095
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.95.2
>
> Attachments: 9095-1.txt, 9095-1.txt, 9095-1.txt
>
>
> While debugging a case where a region was getting opened on a RegionServer 
> and then closed soon after (and then never re-opened anywhere thereafter), it 
> seemed like the processing in handleRegion that deals with deletion of ZK 
> nodes should be synchronous. This achieves two things:
> 1. The synchronous deletion prevents the same event data from being processed 
> more than once. Assuming that we do get more than one notification (on, 
> let's say, a region OPENED event), the subsequent processing(s) in 
> handleRegion for the same znode would end up with a zookeeper node-not-found 
> exception. The return value of the data read would be null, and that's 
> already handled. If it is asynchronous, it leads to issues like this: the 
> master opens a region on a certain RegionServer and soon after sends that 
> RegionServer a close for the same region, and then the znode is deleted.
> 2. The deletion is currently handled in an executor service. This is 
> problematic since by design the events for a given region should be processed 
> in order. By delegating a part of the processing to the executor service we 
> are somewhat violating this contract, since there is no guarantee of 
> ordering in the executor service executions...
> Thanks to [~jeffreyz] and [~enis] for the discussions on this issue.
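The ordering concern in point 2 can be illustrated with plain java.util.concurrent executors: a single-threaded executor processes events in submission order, whereas a multi-threaded pool gives no such guarantee. A hedged sketch, not the actual AssignmentManager code:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RegionEventOrdering {
  /** Handle state events for one region on a single thread: order is preserved. */
  static List<String> processInOrder(List<String> events) throws InterruptedException {
    List<String> handled = new CopyOnWriteArrayList<>();
    ExecutorService single = Executors.newSingleThreadExecutor();
    for (String event : events) {
      single.submit(() -> handled.add(event)); // runs in submission order
    }
    single.shutdown();
    single.awaitTermination(10, TimeUnit.SECONDS);
    return handled;
  }

  public static void main(String[] args) throws InterruptedException {
    // Submitting the same events to a multi-threaded pool could interleave
    // them arbitrarily, which is exactly the contract violation described above.
    System.out.println(processInOrder(List.of("OPENING", "OPENED", "CLOSED")));
  }
}
```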



[jira] [Commented] (HBASE-9095) AssignmentManager's handleRegion should respect the single threaded nature of the processing

2013-08-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728266#comment-13728266
 ] 

Devaraj Das commented on HBASE-9095:


Thanks folks for the reviews. [~stack] the failure is a real one; I can 
reproduce it. The problem is that some tests rely on the executor service as a 
way to register listeners, and the AM executes the listener as part of running 
the handler code within the executor. Removing the executor invocation makes 
those tests hang (for example, TestZKBasedOpenCloseRegion#testCloseRegion). I 
am trying to see how to make such tests work.




[jira] [Commented] (HBASE-7325) Replication reacts slowly on a lightly-loaded cluster

2013-08-02 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728263#comment-13728263
 ] 

Jean-Daniel Cryans commented on HBASE-7325:
---

[~lhofhansl] I'd like to commit this. Any comments regarding what I said in 
https://issues.apache.org/jira/browse/HBASE-7325?focusedCommentId=13724423&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13724423
 ?




[jira] [Commented] (HBASE-9075) [0.94] Backport HBASE-5760 Unit tests should write only under /target to 0.94

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728260#comment-13728260
 ] 

Hudson commented on HBASE-9075:
---

SUCCESS: Integrated in HBase-0.94-security #242 (See 
[https://builds.apache.org/job/HBase-0.94-security/242/])
HBASE-9075 [0.94] Backport HBASE-5760 Unit tests should write only under 
/target to 0.94 (addendum patch to fix Hadoop2 build) (enis: rev 1509873)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/MapreduceTestingShim.java


> [0.94] Backport HBASE-5760 Unit tests should write only under /target to 0.94
> -
>
> Key: HBASE-9075
> URL: https://issues.apache.org/jira/browse/HBASE-9075
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.94.11
>
> Attachments: hbase-9075_addendum.patch, hbase-9075_v1.patch
>
>
> Backporting HBASE-5760 is a good idea. 0.94 tests mess up the root level 
> directory a lot.



[jira] [Commented] (HBASE-5760) Unit tests should write only under /target

2013-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728261#comment-13728261
 ] 

Hudson commented on HBASE-5760:
---

SUCCESS: Integrated in HBase-0.94-security #242 (See 
[https://builds.apache.org/job/HBase-0.94-security/242/])
HBASE-9075 [0.94] Backport HBASE-5760 Unit tests should write only under 
/target to 0.94 (addendum patch to fix Hadoop2 build) (enis: rev 1509873)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/MapreduceTestingShim.java


> Unit tests should write only under /target
> --
>
> Key: HBASE-5760
> URL: https://issues.apache.org/jira/browse/HBASE-5760
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.95.2
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Minor
> Fix For: 0.95.0
>
> Attachments: HBASE-5760_v1.patch
>
>
> Some of the unit test runs result in files under $hbase_home/test, 
> $hbase_home/build, or $hbase_home/. We should ensure that all tests use 
> target as their data location.  



[jira] [Commented] (HBASE-9095) AssignmentManager's handleRegion should respect the single threaded nature of the processing

2013-08-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728258#comment-13728258
 ] 

stack commented on HBASE-9095:
--

Please rerun this patch a few times before committing.  We got a zombie above.  
I've not seen that in a long time.  Let me rerun for you now.



