[jira] [Updated] (HBASE-5792) HLog Performance Evaluation Tool

2012-04-18 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5792:
-

Fix Version/s: 0.94.0

 HLog Performance Evaluation Tool
 

 Key: HBASE-5792
 URL: https://issues.apache.org/jira/browse/HBASE-5792
 Project: HBase
  Issue Type: Test
  Components: wal
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: performance, wal
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5792-v0.patch, HBASE-5792-v1.patch, 
 HBASE-5792-v2.patch, verify.txt, verify.txt


 Related to HDFS-3280 and the HBase WAL slowdown on 0.23+.
 It would be nice to have a simple tool, like HFilePerformanceEvaluation, ...
 to easily check the HLog performance.
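
 A rough sketch of the kind of timing loop such a tool could run (hypothetical code, not the attached patch; the append signature used is the assumed 0.94-era one):
 {code}
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.regionserver.wal.HLog;
 import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
 import org.apache.hadoop.hbase.util.Bytes;

 public class HLogAppendTimer {
   /** Times numEdits appends followed by one sync against an existing HLog. */
   public static long timeAppends(HLog hlog, HRegionInfo info, HTableDescriptor htd,
       int numEdits) throws java.io.IOException {
     byte[] table = htd.getName();
     long start = System.nanoTime();
     for (int i = 0; i < numEdits; i++) {
       WALEdit edit = new WALEdit();
       edit.add(new KeyValue(Bytes.toBytes("row-" + i), Bytes.toBytes("f"),
           Bytes.toBytes("q"), System.currentTimeMillis(), Bytes.toBytes(i)));
       // 0.94-era append signature assumed here.
       hlog.append(info, table, edit, System.currentTimeMillis(), htd);
     }
     hlog.sync();
     return (System.nanoTime() - start) / 1000000L;  // elapsed milliseconds
   }
 }
 {code}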

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-18 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5782:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96 (including test)

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-lars-v2.txt, 5782-sketch.txt, 5782-v3.txt, 
 5782.txt, 5782.unfinished-stack.txt, 5782.unittest.txt, HBASE-5782.patch, 
 hbase-5782.txt


 Create a table with 1000 splits. After the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are then missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5787) Table owner can't disable/delete its own table

2012-04-18 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5787:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

I would like to see some input from the folks who did the security 
implementation first. Andrew? Gary? Any comments?

 Table owner can't disable/delete its own table
 --

 Key: HBASE-5787
 URL: https://issues.apache.org/jira/browse/HBASE-5787
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: acl, security
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5787-tests-wrong-names.patch, HBASE-5787-v0.patch, 
 HBASE-5787-v1.patch


 A user with CREATE privileges can create a table, but cannot disable it, 
 because the disable operation requires ADMIN privileges. Also, if a table is 
 already disabled, anyone can remove it.
 {code}
 public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c,
     byte[] tableName) throws IOException {
   requirePermission(Permission.Action.CREATE);
 }
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
     byte[] tableName) throws IOException {
   /* TODO: Allow for users with global CREATE permission and the table owner */
   requirePermission(Permission.Action.ADMIN);
 }
 {code}
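
 One possible direction for the TODO above, sketched only (not the attached patch; isTableOwner and getActiveUser are assumed helpers for this sketch, not existing API):
 {code}
 // Hypothetical sketch of what the TODO suggests: let the table owner pass,
 // otherwise fall back to requiring ADMIN as today.
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
     byte[] tableName) throws IOException {
   if (!isTableOwner(getActiveUser(), tableName)) {  // isTableOwner/getActiveUser: assumed helpers
     requirePermission(Permission.Action.ADMIN);
   }
 }
 {code}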

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5545) region can't be opened for a long time. Because the creating File failed.

2012-04-18 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5545:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96

 region can't be opened for a long time. Because the creating File failed.
 -

 Key: HBASE-5545
 URL: https://issues.apache.org/jira/browse/HBASE-5545
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.6
Reporter: gaojinchao
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.90.7, 0.92.2, 0.94.0

 Attachments: HBASE-5545.patch, HBASE-5545.patch


 Scenario:
 
 1. The file is created.
 2. While writing data, all datanodes might have crashed, so writing the data 
 will fail.
 3. Even if close is called in the finally block, close will also fail and 
 throw an exception because writing the data failed.
 4. If the RS then tries to create the same file again, an 
 AlreadyBeingCreatedException is thrown.
 Suggestion to handle this scenario:
 ---
 1. Check for the existence of the file; if it exists, delete the file and 
 create a new one (see the sketch below).
 The delete call for the file does not check whether the file is open or 
 closed.
 Overwrite Option:
 
 1. The overwrite option is applicable if you are trying to overwrite a 
 closed file.
 2. If the file is not closed, then even with the overwrite option the same 
 AlreadyBeingCreatedException will be thrown.
 This is the expected behaviour, to avoid multiple clients writing to the same 
 file.
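
 A minimal sketch of suggestion 1 (illustrative only, not the attached patch; fs and regioninfoPath are assumed to be in scope):
 {code}
 // If a previous attempt left the file behind, delete it before recreating it.
 if (fs.exists(regioninfoPath)) {
   // delete() does not care whether the half-written file is still "open".
   fs.delete(regioninfoPath, false);
 }
 FSDataOutputStream out = fs.create(regioninfoPath);
 {code}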
 Region server logs:
 org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to 
 create file /hbase/test1/12c01902324218d14b17a5880f24f64b/.tmp/.regioninfo 
 for 
 DFSClient_hb_rs_158-1-131-48,20020,1331107668635_1331107669061_-252463556_25 
 on client 158.1.132.19 because current leaseholder is trying to recreate file.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1570)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1440)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1382)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:658)
 at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:547)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1137)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1133)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1131)
 at org.apache.hadoop.ipc.Client.call(Client.java:961)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:245)
 at $Proxy6.create(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at $Proxy6.create(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.init(DFSClient.java:3643)
 at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:778)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:364)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:630)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:611)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:518)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:424)
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:340)
 at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2672)
 at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2658)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:330)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:116)
 at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 [2012-03-07 20:51:45,858] [WARN ] 
 [RS_OPEN_REGION-158-1-131-48,20020,1331107668635-23] 
 [com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker 131] Retrying the 
 method 

[jira] [Updated] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-17 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5782:
-

Attachment: 5782-lars-v2.txt

In the interest of keeping this simple, here's another simple patch.

This patch assumes that:
# It is only the appends that need to happen in order.
# The sync is the time-consuming operation.

So a special lock is held only to enforce the ordering of the appends (appends 
are single-threaded), and syncs can happen in parallel without this lock held.

This should give us the maximum concurrency possible, while keeping the change 
small and palatable for 0.94.

Please let me know whether I am smoking something.
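
Roughly, the shape of the idea (a sketch only, not the attached patch; doWrite and syncer are stand-ins for internal helpers):
{code}
// Sketch of the approach described above; names are made up for illustration.
private final Object appendLock = new Object();

long append(HRegionInfo info, WALEdit edit) throws IOException {
  long txid;
  synchronized (appendLock) {
    // Appends are serialized so edits hit the log in seqid order.
    txid = doWrite(info, edit);   // doWrite: assumed internal helper
  }
  // The expensive HDFS sync happens outside the lock, so multiple handler
  // threads can wait on their syncs concurrently.
  syncer(txid);                    // syncer: assumed internal helper
  return txid;
}
{code}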

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-lars-v2.txt, 5782-sketch.txt, 5782.txt, 
 5782.unfinished-stack.txt, HBASE-5782.patch, hbase-5782.txt


 Create a table with 1000 splits. After the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are then missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5545) region can't be opened for a long time. Because the creating File failed.

2012-04-17 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5545:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 region can't be opened for a long time. Because the creating File failed.
 -

 Key: HBASE-5545
 URL: https://issues.apache.org/jira/browse/HBASE-5545
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.6
Reporter: gaojinchao
Assignee: gaojinchao
 Fix For: 0.90.7, 0.92.2, 0.94.0

 Attachments: HBASE-5545.patch


 Scenario:
 
 1. The file is created.
 2. While writing data, all datanodes might have crashed, so writing the data 
 will fail.
 3. Even if close is called in the finally block, close will also fail and 
 throw an exception because writing the data failed.
 4. If the RS then tries to create the same file again, an 
 AlreadyBeingCreatedException is thrown.
 Suggestion to handle this scenario:
 ---
 1. Check for the existence of the file; if it exists, delete the file and 
 create a new one.
 The delete call for the file does not check whether the file is open or 
 closed.
 Overwrite Option:
 
 1. The overwrite option is applicable if you are trying to overwrite a 
 closed file.
 2. If the file is not closed, then even with the overwrite option the same 
 AlreadyBeingCreatedException will be thrown.
 This is the expected behaviour, to avoid multiple clients writing to the same 
 file.
 Region server logs:
 org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to 
 create file /hbase/test1/12c01902324218d14b17a5880f24f64b/.tmp/.regioninfo 
 for 
 DFSClient_hb_rs_158-1-131-48,20020,1331107668635_1331107669061_-252463556_25 
 on client 158.1.132.19 because current leaseholder is trying to recreate file.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1570)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1440)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1382)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:658)
 at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:547)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1137)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1133)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1131)
 at org.apache.hadoop.ipc.Client.call(Client.java:961)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:245)
 at $Proxy6.create(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at $Proxy6.create(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.init(DFSClient.java:3643)
 at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:778)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:364)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:630)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:611)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:518)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:424)
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:340)
 at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2672)
 at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2658)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:330)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:116)
 at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 [2012-03-07 20:51:45,858] [WARN ] 
 [RS_OPEN_REGION-158-1-131-48,20020,1331107668635-23] 
 [com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker 131] Retrying the 
 method call: public abstract void 
 

[jira] [Updated] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-17 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5782:
-

Attachment: 5782-v3.txt

This is what I would commit to 0.94 (plus a test that either Stack or I would 
write).

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-lars-v2.txt, 5782-sketch.txt, 5782-v3.txt, 
 5782.txt, 5782.unfinished-stack.txt, HBASE-5782.patch, hbase-5782.txt


 Create a table with 1000 splits. After the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are then missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5782) Not all the regions are getting assigned after the log splitting.

2012-04-16 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5782:
-

Attachment: 5782.txt

Simple patch to ensure only one thread flushes the log.
Don't hate me, just throwing this out there.


 Not all the regions are getting assigned after the log splitting.
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782.txt, HBASE-5782.patch


 Create a table with 1000 splits. After the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are then missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) HServerLoad$RegionLoad breaks 0.92-0.94 compatibility

2012-04-16 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5795:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

One down.

 HServerLoad$RegionLoad breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Zhihong Yu
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795-v3.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}
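
 For context, VersionedWritable rejects any serialized version that differs from the reader's getVersion(), which is why a 0.94 master expecting v2 fails on the v1 RegionLoad sent by a 0.92 regionserver. A toy illustration (not HBase code):
 {code}
 import java.io.*;
 import org.apache.hadoop.io.VersionedWritable;

 // Toy Writable: bumping VERSION from 1 to 2 makes readFields() throw
 // VersionMismatchException ("A record version mismatch occured...") when it
 // reads bytes written by an older peer, exactly as in the trace above.
 class ToyLoad extends VersionedWritable {
   private static final byte VERSION = 2;   // was 1 before the new field was added
   private int uncompressedSizeMB;          // the newly added field

   public byte getVersion() { return VERSION; }

   public void write(DataOutput out) throws IOException {
     super.write(out);                      // writes VERSION first
     out.writeInt(uncompressedSizeMB);
   }

   public void readFields(DataInput in) throws IOException {
     super.readFields(in);                  // throws if the stream's version != VERSION
     uncompressedSizeMB = in.readInt();
   }
 }
 {code}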

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-16 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5780:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

RC1 was sunk, so this is against 0.94.0 again.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, 
 TestReplicationPeer-Security-output.log, TestReplicationPeer-output.log, 
 testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-16 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5780:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, 
 TestReplicationPeer-Security-output.log, TestReplicationPeer-output.log, 
 testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5256) Use WritableUtils.readVInt() in RegionLoad.readFields()

2012-04-15 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5256:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

This was committed, marking it accordingly.

 Use WritableUtils.readVInt() in RegionLoad.readFields()
 ---

 Key: HBASE-5256
 URL: https://issues.apache.org/jira/browse/HBASE-5256
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Mubarak Seyed
 Fix For: 0.94.0

 Attachments: HBASE-5256.trunk.v1.patch


 Currently in.readInt() is used in RegionLoad.readFields().
 Since more metrics will be added to RegionLoad in the future, we should utilize 
 WritableUtils.readVInt() to reduce the amount of data exchanged between the 
 Master and the region servers.
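
 A minimal sketch of the suggested change (illustrative field names, not the committed patch):
 {code}
 // Illustrative only: variable-length encoding writes small counts in one
 // byte instead of the fixed four bytes of readInt()/writeInt().
 import java.io.*;
 import org.apache.hadoop.io.WritableUtils;

 void writeCounts(DataOutput out, int storefiles, int requests) throws IOException {
   WritableUtils.writeVInt(out, storefiles);
   WritableUtils.writeVInt(out, requests);
 }

 void readCounts(DataInput in) throws IOException {
   int storefiles = WritableUtils.readVInt(in);   // was in.readInt()
   int requests = WritableUtils.readVInt(in);
 }
 {code}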

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-15 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5795:
-

Fix Version/s: 0.94.0

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 5794.txt, 5795-v1.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-15 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5795:
-

Attachment: (was: 5794.txt)

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 5795-v1.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly

2012-04-14 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-2214:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

 Do HBASE-1996 -- setting size to return in scan rather than count of rows -- 
 properly
 -

 Key: HBASE-2214
 URL: https://issues.apache.org/jira/browse/HBASE-2214
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Ferdy Galema
 Fix For: 0.94.1

 Attachments: HBASE-2214-0.94.txt, HBASE-2214_with_broken_TestShell.txt


 The notion that you set a size rather than a row count to specify how many rows a 
 scanner should return in each cycle was raised over in HBASE-1996.  It's a 
 good one, making HBase regular even though the data under it may vary.  
 HBASE-1996 was committed, but that patch was constrained by the fact that it 
 needed to not change the RPC interface.  This issue is about doing HBASE-1996 for 
 0.21 in a clean, unconstrained way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5782) Not all the regions are getting assigned after the log splitting.

2012-04-14 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5782:
-

Fix Version/s: 0.94.0

Assigning to 0.94.0 for now

 Not all the regions are getting assigned after the log splitting.
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Priority: Critical
 Fix For: 0.94.0


 Create a table with 1000 splits. After the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are then missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-04-13 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5778:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

 Turn on WAL compression by default
 --

 Key: HBASE-5778
 URL: https://issues.apache.org/jira/browse/HBASE-5778
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Daniel Cryans
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.96.0, 0.94.1

 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch


 I ran some tests to verify whether WAL compression should be turned on by default.
 For a use case where it's not very useful (values two orders of magnitude 
 bigger than the keys), the insert time wasn't different and the CPU usage was 
 15% higher (150% CPU usage vs. 130% when not compressing the WAL).
 When values are smaller than the keys, I saw a 38% improvement in insert 
 run time, and CPU usage was 33% higher (600% CPU usage vs. 450%). I'm not sure 
 WAL compression accounts for all the additional CPU usage; it might just be 
 that we're able to insert faster and we spend more time in the MemStore per 
 second (because our MemStores are bad when they contain tens of thousands of 
 values).
 Those are two extremes, but it shows that for the price of some CPU we can 
 save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
 CPUs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-13 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5677:
-

Fix Version/s: (was: 0.96.0)
   (was: 0.94.0)

Removed 0.94 and 0.96 from Fix Version/s

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2

 Attachments: 5677-proposal.txt, 5677-proposal.txt, 5677-proposal.txt, 
 HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, 
 surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover runs), the region will be open-handled twice,
 because the unassigned node in ZooKeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region stuck in RIT, and thus the master never balances.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4608) HLog Compression

2012-04-13 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4608:
-

Release Note: 
Adds a custom dictionary-based compression on WAL.  Off by default.  To enable, 
set hbase.regionserver.wal.enablecompression to true in hbase-site.xml.
Note that replication is currently broken when WAL compression is enabled.

  was:Adds a custom dictionary-based compression on WAL.  Off by default.  To 
enable, set hbase.regionserver.wal.enablecompression to true in hbase-site.xml.


 HLog Compression
 

 Key: HBASE-4608
 URL: https://issues.apache.org/jira/browse/HBASE-4608
 Project: HBase
  Issue Type: New Feature
Reporter: Li Pi
Assignee: Li Pi
 Fix For: 0.94.0

 Attachments: 4608-v19.txt, 4608-v20.txt, 4608-v22.txt, 4608v1.txt, 
 4608v13.txt, 4608v13.txt, 4608v14.txt, 4608v15.txt, 4608v16.txt, 4608v17.txt, 
 4608v18.txt, 4608v23.txt, 4608v24.txt, 4608v25.txt, 4608v27.txt, 4608v29.txt, 
 4608v30.txt, 4608v5.txt, 4608v6.txt, 4608v7.txt, 4608v8fixed.txt, 
 hbase-4608-v28-delta.txt, hbase-4608-v28.txt, hbase-4608-v28.txt


 The current bottleneck to HBase write speed is replicating the WAL appends 
 across different datanodes. We can speed up this process by compressing the 
 HLog. The current plan involves using a dictionary to compress the table name, 
 region id, cf name, and possibly other bits of repeated data. Also, the HLog 
 format may be changed in other ways to produce a smaller HLog.
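
 For reference, the compression can also be enabled programmatically (a minimal sketch; the property name is the one from the release note above):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;

 // Enables WAL (HLog) compression; equivalent to setting the property
 // in hbase-site.xml as described in the release note above.
 Configuration conf = HBaseConfiguration.create();
 conf.setBoolean("hbase.regionserver.wal.enablecompression", true);
 {code}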

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5717) Scanner metrics are only reported if you get to the end of a scanner

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5717:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96.
Thanks for the patch Ian.
Thanks for the review Ted.

 Scanner metrics are only reported if you get to the end of a scanner
 

 Key: HBASE-5717
 URL: https://issues.apache.org/jira/browse/HBASE-5717
 Project: HBase
  Issue Type: Bug
  Components: client, metrics
Reporter: Ian Varley
Assignee: Ian Varley
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: 5717-v4.patch, ClientScanner_HBASE_5717-v2.patch, 
 ClientScanner_HBASE_5717-v3.patch, ClientScanner_HBASE_5717.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 When you turn on Scanner Metrics, the metrics are currently only made 
 available if you run over all records available in the scanner. If you stop 
 iterating before the end, the values are never flushed into the metrics 
 object (in the Scan attribute).
 Will supply a patch with fix and test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5741:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

Moving to 0.94.1, since we're still discussing (again, sorry about the late 
chime in)

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0, 0.94.1

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 "Note: if you do not use this option, then the target table must already 
 exist in HBase" (in reference to the importtsv.bulk.output command-line 
 option).
 The truth is, the table must exist no matter what; importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check whether the table already exists, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable constructor in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkload command does:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading; the table in fact must exist prior to 
 running importtsv. We should check whether it exists rather than assume it's 
 already there and throw the exception below:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...
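
 A sketch of the kind of guard the description asks for (illustrative; HBaseAdmin.tableExists() is existing API, the surrounding wiring and message are made up):
 {code}
 // Illustrative check placed before the HTable is opened in createSubmittableJob.
 HBaseAdmin admin = new HBaseAdmin(conf);
 if (!admin.tableExists(tableName)) {
   throw new TableNotFoundException("Table " + tableName + " does not exist; " +
       "create it before running importtsv.");
   // ...or create it here, as LoadIncrementalHFiles does for completebulkload.
 }
 HTable table = new HTable(conf, tableName);
 {code}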

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5774) Add documentation for WALPlayer to HBase reference guide.

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5774:
-

Attachment: 5774.txt

How's this?

 Add documentation for WALPlayer to HBase reference guide.
 -

 Key: HBASE-5774
 URL: https://issues.apache.org/jira/browse/HBASE-5774
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5774.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tool to replay WAL files

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: 5604-v10.txt

All arc lint problems fixed.

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5604-v10.txt, 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, 
 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restoring a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.
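
 A rough sketch of such a mapper (hypothetical, not the attached patch; it assumes an input format that hands the mapper HLogKey/WALEdit pairs, and the configuration keys are made up):
 {code}
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.HashSet;
 import java.util.Set;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
 import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.mapreduce.Mapper;

 // Hypothetical mapper: one HLogKey/WALEdit pair per record is assumed.
 public class WALReplayMapper
     extends Mapper<HLogKey, WALEdit, ImmutableBytesWritable, KeyValue> {

   private long startTime;
   private long endTime;
   private Set<String> tables;

   @Override
   protected void setup(Context context) {
     Configuration conf = context.getConfiguration();
     // Configuration keys below are made up for this sketch.
     startTime = conf.getLong("wal.replay.start.time", 0L);
     endTime = conf.getLong("wal.replay.end.time", Long.MAX_VALUE);
     tables = new HashSet<String>(
         Arrays.asList(conf.getStrings("wal.replay.tables", new String[0])));
   }

   @Override
   protected void map(HLogKey key, WALEdit edit, Context context)
       throws IOException, InterruptedException {
     long writeTime = key.getWriteTime();
     String table = Bytes.toString(key.getTablename());
     if (writeTime < startTime || writeTime > endTime || !tables.contains(table)) {
       return;  // skip edits outside the time range or for other tables
     }
     for (KeyValue kv : edit.getKeyValues()) {
       // Emit row-keyed KeyValues so HFileOutputFormat can produce HFiles.
       context.write(new ImmutableBytesWritable(kv.getRow()), kv);
     }
   }
 }
 {code}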

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3443) ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3443:
-

Fix Version/s: 0.96.0

 ICV optimization to look in memstore first and then store files (HBASE-3082) 
 does not work when deletes are in the mix
 --

 Key: HBASE-3443
 URL: https://issues.apache.org/jira/browse/HBASE-3443
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.0, 0.90.1, 0.90.2, 0.90.3, 0.90.4, 0.90.5, 0.90.6, 
 0.92.0, 0.92.1
Reporter: Kannan Muthukkaruppan
Assignee: Lars Hofhansl
Priority: Critical
  Labels: corruption
 Fix For: 0.96.0

 Attachments: 3443.txt


 For incrementColumnValue(), HBASE-3082 adds an optimization to check the memstores 
 first, and to check the store files only if the column is not present in the 
 memstore. In the presence of deletes, the above optimization is not reliable.
 If the column is marked as deleted in the memstore, one should not look 
 further into the store files. But currently, the code does so.
 Sample test code outline:
 {code}
 admin.createTable(desc)
 table = HTable.new(conf, tableName)
 table.incrementColumnValue(Bytes.toBytes(row), cf1name, 
 Bytes.toBytes(column), 5);
 admin.flush(tableName)
 sleep(2)
 del = Delete.new(Bytes.toBytes(row))
 table.delete(del)
 table.incrementColumnValue(Bytes.toBytes(row), cf1name, 
 Bytes.toBytes(column), 5);
 get = Get.new(Bytes.toBytes(row))
 keyValues = table.get(get).raw()
 keyValues.each do |keyValue|
   puts "Expect 5; Got Value=#{Bytes.toLong(keyValue.getValue())}"
 end
 {code}
 The above prints:
 {code}
 Expect 5; Got Value=10
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3443) ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3443:
-

Release Note: 
This is a correctness fix and will incur a 10-20% performance penalty for ICV 
and Increment operations. Other operations are not affected.


 ICV optimization to look in memstore first and then store files (HBASE-3082) 
 does not work when deletes are in the mix
 --

 Key: HBASE-3443
 URL: https://issues.apache.org/jira/browse/HBASE-3443
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.0, 0.90.1, 0.90.2, 0.90.3, 0.90.4, 0.90.5, 0.90.6, 
 0.92.0, 0.92.1
Reporter: Kannan Muthukkaruppan
Assignee: Lars Hofhansl
Priority: Critical
  Labels: corruption
 Fix For: 0.96.0

 Attachments: 3443.txt


 For incrementColumnValue(), HBASE-3082 adds an optimization to check the memstores 
 first, and to check the store files only if the column is not present in the 
 memstore. In the presence of deletes, the above optimization is not reliable.
 If the column is marked as deleted in the memstore, one should not look 
 further into the store files. But currently, the code does so.
 Sample test code outline:
 {code}
 admin.createTable(desc)
 table = HTable.new(conf, tableName)
 table.incrementColumnValue(Bytes.toBytes(row), cf1name, 
 Bytes.toBytes(column), 5);
 admin.flush(tableName)
 sleep(2)
 del = Delete.new(Bytes.toBytes(row))
 table.delete(del)
 table.incrementColumnValue(Bytes.toBytes(row), cf1name, 
 Bytes.toBytes(column), 5);
 get = Get.new(Bytes.toBytes(row))
 keyValues = table.get(get).raw()
 keyValues.each do |keyValue|
   puts "Expect 5; Got Value=#{Bytes.toLong(keyValue.getValue())}"
 end
 {code}
 The above prints:
 {code}
 Expect 5; Got Value=10
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3443) ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3443:
-

Fix Version/s: 0.94.0

A'right, committed to 0.94 as well, and added release notes.

 ICV optimization to look in memstore first and then store files (HBASE-3082) 
 does not work when deletes are in the mix
 --

 Key: HBASE-3443
 URL: https://issues.apache.org/jira/browse/HBASE-3443
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.0, 0.90.1, 0.90.2, 0.90.3, 0.90.4, 0.90.5, 0.90.6, 
 0.92.0, 0.92.1
Reporter: Kannan Muthukkaruppan
Assignee: Lars Hofhansl
Priority: Critical
  Labels: corruption
 Fix For: 0.94.0, 0.96.0

 Attachments: 3443.txt


 For incrementColumnValue(), HBASE-3082 adds an optimization to check the memstores 
 first, and to check the store files only if the column is not present in the 
 memstore. In the presence of deletes, the above optimization is not reliable.
 If the column is marked as deleted in the memstore, one should not look 
 further into the store files. But currently, the code does so.
 Sample test code outline:
 {code}
 admin.createTable(desc)
 table = HTable.new(conf, tableName)
 table.incrementColumnValue(Bytes.toBytes(row), cf1name, 
 Bytes.toBytes(column), 5);
 admin.flush(tableName)
 sleep(2)
 del = Delete.new(Bytes.toBytes(row))
 table.delete(del)
 table.incrementColumnValue(Bytes.toBytes(row), cf1name, 
 Bytes.toBytes(column), 5);
 get = Get.new(Bytes.toBytes(row))
 keyValues = table.get(get).raw()
 keyValues.each do |keyValue|
   puts "Expect 5; Got Value=#{Bytes.toLong(keyValue.getValue())}"
 end
 {code}
 The above prints:
 {code}
 Expect 5; Got Value=10
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tool to replay WAL files

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: 5604-v11.txt

Patch that includes the documentation from HBASE-5774.

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 
 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restoring a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5775) ZKUtil doesn't handle deleteRecurisively cleanly

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5775:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96.

 ZKUtil doesn't handle deleteRecurisively cleanly
 

 Key: HBASE-5775
 URL: https://issues.apache.org/jira/browse/HBASE-5775
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.0, 0.96.0

 Attachments: java_HBASE-5775.patch


 ZKUtil.deleteNodeRecursively()'s contract says that it handles deletion of 
 the node and all its children. However, nothing is mentioned as to what 
 happens if the node you are attempting to delete doesn't actually exist. 
 Turns out, it throws a NullPointerException.
 I'm proposing that we change the code s.t. it handles the case where the 
 parent is already gone and exits cleanly, rather than failing horribly.
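
 A sketch of the proposed behaviour (illustrative; the ZKUtil calls are existing API, but this is a guess at the shape of the fix, not the attached patch):
 {code}
 // Illustrative sketch: treat "node already gone" as success and exit cleanly
 // instead of dereferencing a null stat.
 public static void deleteNodeRecursively(ZooKeeperWatcher zkw, String node)
     throws KeeperException {
   if (ZKUtil.checkExists(zkw, node) == -1) {
     return;  // nothing to delete; exit cleanly
   }
   List<String> children = ZKUtil.listChildrenNoWatch(zkw, node);
   if (children != null) {
     for (String child : children) {
       deleteNodeRecursively(zkw, ZKUtil.joinZNode(node, child));
     }
   }
   ZKUtil.deleteNode(zkw, node);
 }
 {code}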

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tool to replay WAL files

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Fix Version/s: 0.96.0
   0.94.0

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0, 0.96.0

 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 
 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tool to replay WAL files

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96.
Thanks for the reviews Stack and Ted!

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0, 0.96.0

 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 
 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5677:
-

Status: Open  (was: Patch Available)

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, 5677-proposal.txt, 
 HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, 
 surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover() runs), the region will be open-handled twice, because the 
 unassigned node in ZooKeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5677:
-

Attachment: 5677-proposal.txt

Reattaching for new test run.

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, 5677-proposal.txt, 
 HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, 
 surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover() runs), the region will be open-handled twice, because the 
 unassigned node in ZooKeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5677:
-

Status: Patch Available  (was: Open)

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, 5677-proposal.txt, 
 HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, 
 surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover() runs), the region will be open-handled twice, because the 
 unassigned node in ZooKeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5677:
-

Attachment: 5677-proposal.txt

One more time

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, 5677-proposal.txt, 5677-proposal.txt, 
 HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, 
 surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover() runs), the region will be open-handled twice, because the 
 unassigned node in ZooKeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5778:
-

Status: Patch Available  (was: Reopened)

Running through HadoopQA to see if there are other problems left.

 Turn on WAL compression by default
 --

 Key: HBASE-5778
 URL: https://issues.apache.org/jira/browse/HBASE-5778
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Daniel Cryans
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.94.0, 0.96.0

 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch


 I ran some tests to verify if WAL compression should be turned on by default.
 For a use case where it's not very useful (values two orders of magnitude 
 bigger than the keys), the insert time wasn't different and the CPU usage was 
 15% higher (150% CPU usage vs. 130% when not compressing the WAL).
 When values are smaller than the keys, I saw a 38% improvement for the insert 
 run time and CPU usage was 33% higher (600% CPU usage vs. 450%). I'm not sure 
 WAL compression accounts for all the additional CPU usage; it might just be 
 that we're able to insert faster and we spend more time in the MemStore per 
 second (because our MemStores are bad when they contain tens of thousands of 
 values).
 Those are two extremes, but it shows that for the price of some CPU we can 
 save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
 CPUs.
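
For reference, the switch under discussion is the hbase.regionserver.wal.enablecompression property (name assumed from the WAL-compression work; check your release). A trivial sketch of setting it explicitly, e.g. in a test configuration, regardless of what the default ends up being:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: opt in to (or out of) WAL compression explicitly.
Configuration conf = HBaseConfiguration.create();
conf.setBoolean("hbase.regionserver.wal.enablecompression", true);
{code}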

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-04-12 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5778:
-

Attachment: 5778-addendum.txt

Fixes the issues I found. It's not too surprising that a compressed HLog is a 
bit more susceptible to corruption, as there is less redundancy.

 Turn on WAL compression by default
 --

 Key: HBASE-5778
 URL: https://issues.apache.org/jira/browse/HBASE-5778
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Daniel Cryans
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.94.0, 0.96.0

 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch


 I ran some tests to verify if WAL compression should be turned on by default.
 For a use case where it's not very useful (values two orders of magnitude 
 bigger than the keys), the insert time wasn't different and the CPU usage was 
 15% higher (150% CPU usage vs. 130% when not compressing the WAL).
 When values are smaller than the keys, I saw a 38% improvement for the insert 
 run time and CPU usage was 33% higher (600% CPU usage vs. 450%). I'm not sure 
 WAL compression accounts for all the additional CPU usage; it might just be 
 that we're able to insert faster and we spend more time in the MemStore per 
 second (because our MemStores are bad when they contain tens of thousands of 
 values).
 Those are two extremes, but it shows that for the price of some CPU we can 
 save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
 CPUs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tools to replay WAL files

2012-04-10 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: 5604-v9.txt

Added date parsing; if you hate it, I'll pull it. Might be useful, as the date 
is human-readable so the operator can double-check.
Includes a simple date parsing test.


 M/R tools to replay WAL files
 -

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, 5604-v8.txt, 
 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5615) the master never does balance because of balancing the parent region

2012-04-10 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5615:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

Moving to 0.94.1 at Ram's recommendation.

 the master never does balance because of balancing the parent region
 

 Key: HBASE-5615
 URL: https://issues.apache.org/jira/browse/HBASE-5615
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7
Reporter: xufeng
Assignee: xufeng
Priority: Critical
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: 5615-trunk.txt, HBASE-5615-90.patch, HBASE-5615.patch, 
 NoPatched-surefire-report-5615-90.html, Patched_surefire-report-5615-90.html


 The master never does balance because, when the master does 
 rebuildUserRegions(), it will add the parent region into 
 AssignmentManager#servers;
 if the balancer lets the parent region move, the parent will stay in RIT 
 forever, so balance will never be executed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tool to replay WAL files

2012-04-10 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Summary: M/R tool to replay WAL files  (was: M/R tools to replay WAL files)

Any objections to the latest patch?

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, 5604-v8.txt, 
 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.

2012-04-09 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: 5604-v7.txt

This should be close to what I would like to commit.
Added a simple end-to-end test for WALPlayer.

 HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.

2012-04-09 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Status: Patch Available  (was: Open)

 HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.

2012-04-09 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Release Note: 
Tool to replay WAL files using an M/R job.

The WAL can be replayed for a set of tables or all tables, and a timerange can 
be provided (in milliseconds).
The WAL is filtered to this set of tables; the output can optionally be mapped 
to another set of tables.

WAL replay can also generate HFiles for later bulk importing; in that case the 
WAL is replayed for a single table only.
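
For context, the tool is invoked from the command line roughly as below. The argument order follows the tool's usage string; the -D properties for the time range and for bulk (HFile) output are left out here because their exact names vary by version, so check the tool's help output.
{code}
$ hbase org.apache.hadoop.hbase.mapreduce.WALPlayer <WAL input dir> <tables> [<tableMappings>]
{code}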


 HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) M/R tools to replay WAL files

2012-04-09 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: 5604-v8.txt

Addressed Ted's comment (except for Put.heapSize()). Also actually included the 
WALPlayer end-to-end test this time.

 M/R tools to replay WAL files
 -

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, 5604-v8.txt, 
 HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5656) LoadIncrementalHFiles createTable should detect and set compression algorithm

2012-04-08 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5656:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 0.92, 0.94, and 0.96.

 LoadIncrementalHFiles createTable should detect and set compression algorithm
 -

 Key: HBASE-5656
 URL: https://issues.apache.org/jira/browse/HBASE-5656
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.92.1
Reporter: Cosmin Lehene
Assignee: Cosmin Lehene
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: 5656-simple.txt, HBASE-5656-0.92.patch, 
 HBASE-5656-0.92.patch, HBASE-5656-0.92.patch, HBASE-5656-0.92.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 LoadIncrementalHFiles doesn't set compression when creating the table.
 This can be detected from the files within each family dir. 
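
A rough illustration of the detection idea (not the attached patch; it assumes HFile.Reader exposes getCompressionAlgorithm(), so verify against your version):
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;

// Sketch only: copy the compression of an existing HFile onto the
// column descriptor before the table is created.
public final class CompressionFromHFileSketch {
  private CompressionFromHFileSketch() {}

  public static void setCompressionFromHFile(Configuration conf, FileSystem fs,
      Path hfilePath, HColumnDescriptor family) throws IOException {
    HFile.Reader reader = HFile.createReader(fs, hfilePath, new CacheConfig(conf));
    try {
      family.setCompressionType(reader.getCompressionAlgorithm());
    } finally {
      reader.close();
    }
  }
}
{code}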

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.

2012-04-08 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: 5604-v6.txt

Supports multiple tables or all tables now (unless bulk output is used).
Added more tests.

This is still just for parking; I plan to add some high-level end-to-end tests.

 HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
 Attachments: 5604-v4.txt, 5604-v6.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5746) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (0.96)

2012-04-07 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5746:
-

Attachment: 5720-trunk-v2.txt

Ted's proposed patch for trunk.

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums (0.96)
 -

 Key: HBASE-5746
 URL: https://issues.apache.org/jira/browse/HBASE-5746
 Project: HBase
  Issue Type: Sub-task
  Components: io, regionserver
Reporter: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 5720-trunk-v2.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5720) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums

2012-04-07 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5720:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94 only.

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums
 --

 Key: HBASE-5720
 URL: https://issues.apache.org/jira/browse/HBASE-5720
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.94.0
Reporter: Matt Corgan
Assignee: Matt Corgan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5720-trunk-v2.txt, 5720-trunk.txt, 5720v4.txt, 
 5720v4.txt, 5720v4.txt, HBASE-5720-v1.patch, HBASE-5720-v2.patch, 
 HBASE-5720-v3.patch


 When reading a .92 HFile without checksums, encoding it, and storing in the 
 block cache, the HFileDataBlockEncoderImpl always allocates a dummy header 
 appropriate for checksums even though there are none.  This corrupts the 
 byte[].
 Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case 
 which I think is the desired behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5711) Tests are failing with incorrect data directory permissions.

2012-04-06 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5711:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

We had this problem forever. I posted a message on this to the mailing list on 
October 27th.
This does not stand in the way of an RC1 for 0.94.0.

 Tests are failing with incorrect data directory permissions.
 

 Key: HBASE-5711
 URL: https://issues.apache.org/jira/browse/HBASE-5711
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Fix For: 0.92.2, 0.94.1

 Attachments: HBASE-5711.patch


 When we run some tests in Hbase (TestAdmin), it is failing with following 
 error.
 {quote}
 Starting DataNode 0 with dfs.data.dir: 
 E:\Repositories\Hbase\target\test-data\5ff23198-892e-4f1c-8022-b3d9969fcf0b\dfscluster_0ecc6984-1925-4870-ac7c-439fceede4cb\dfs\data\data1,E:\Repositories\Hbase\target\test-data\5ff23198-892e-4f1c-8022-b3d9969fcf0b\dfscluster_0ecc6984-1925-4870-ac7c-439fceede4cb\dfs\data\data2
 2012-04-04 18:04:51,036 WARN  [main] impl.MetricsSystemImpl(137): Metrics 
 system not started: Cannot locate configuration: tried 
 hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
 2012-04-04 18:04:51,255 WARN  [main] datanode.DataNode(1548): Invalid 
 directory in dfs.data.dir: Incorrect permission for 
 E:/Repositories/Hbase/target/test-data/5ff23198-892e-4f1c-8022-b3d9969fcf0b/dfscluster_0ecc6984-1925-4870-ac7c-439fceede4cb/dfs/data/data1,
  expected: rwxr-xr-x, while actual: rwx--
 2012-04-04 18:04:51,411 WARN  [main] datanode.DataNode(1548): Invalid 
 directory in dfs.data.dir: Incorrect permission for 
 E:/Repositories/Hbase/target/test-data/5ff23198-892e-4f1c-8022-b3d9969fcf0b/dfscluster_0ecc6984-1925-4870-ac7c-439fceede4cb/dfs/data/data2,
  expected: rwxr-xr-x, while actual: rwx--
 2012-04-04 18:04:51,411 ERROR [main] datanode.DataNode(1554): All directories 
 in dfs.data.dir are invalid.
 2012-04-04 18:04:51,411 INFO  [main] hbase.HBaseTestingUtility(684): Shutting 
 down minicluster
 2012-04-04 18:04:51,646 WARN  [main] hbase.HBaseTestingUtility(696): Failed 
 delete of 
 E:\Repositories\Hbase\target\test-data\5ff23198-892e-4f1c-8022-b3d9969fcf0b\dfscluster_0ecc6984-1925-4870-ac7c-439fceede4cb
 2012-04-04 18:04:51,646 INFO  [main] hbase.HBaseTestingUtility(700): 
 Minicluster is down
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5721) Update bundled hadoop to be 1.0.2 (it was just released)

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5721:
-

Fix Version/s: 0.94.0

Let's do it for 0.94 as well.

 Update bundled hadoop to be 1.0.2 (it was just released)
 

 Key: HBASE-5721
 URL: https://issues.apache.org/jira/browse/HBASE-5721
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 1.0.2.txt, 5721.txt, 5721.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5710) NPE in MiniCluster during metadata scan for a pre-split table with multiple column families

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5710:
-

Fix Version/s: (was: 0.94.0)

 NPE in MiniCluster during metadata scan for a pre-split table with multiple 
 column families
 ---

 Key: HBASE-5710
 URL: https://issues.apache.org/jira/browse/HBASE-5710
 Project: HBase
  Issue Type: Bug
  Components: test, util
Affects Versions: 0.94.0
 Environment: MiniCluster
Reporter: James Taylor
Priority: Minor

 In the MiniCluster test environment, an NPE occurs while scanning regions
 of a pre-split table with multiple column families. Without this working
 in the test environment, you cannot write unit tests for these types of
 scenarios.
 Add the following to TestMetaScanner to repro:
@Test
public void testMultiFamilyMultiRegionMetaScanner() throws Exception {
  LOG.info("Starting testMetaScanner");
  final byte[] TABLENAME = Bytes.toBytes("testMetaScanner");
  final byte[] FAMILY1 = Bytes.toBytes("family1");
  final byte[] FAMILY2 = Bytes.toBytes("family2");
  TEST_UTIL.createTable(TABLENAME, new byte[][] {FAMILY1, FAMILY2});
  Configuration conf = TEST_UTIL.getConfiguration();
  HTable table = new HTable(conf, TABLENAME);
  TEST_UTIL.createMultiRegions(conf, table, FAMILY1,
      new byte[][]{
          HConstants.EMPTY_START_ROW,
          Bytes.toBytes("region_a"),
          Bytes.toBytes("region_b")});
  TEST_UTIL.createMultiRegions(conf, table, FAMILY2,
      new byte[][]{
          HConstants.EMPTY_START_ROW,
          Bytes.toBytes("region_a"),
          Bytes.toBytes("region_b")});
  // Make sure all the regions are deployed
  TEST_UTIL.countRows(table);
  // This fails with an NPE currently
  MetaScanner.allTableRegions(conf, TABLENAME, false).keySet();
  table.close();
}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: HLog-5604-v2.txt

This is what I have so far. Pretty simple.

It's not finished, and I only sporadically get to work on this.

Did some testing; both playing directly into a table and writing to HFiles 
worked fine.
I have not tested time-based filtering yet.

Repeat: this is not ready; I just need to store this somewhere :)


 HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
 Attachments: HLog-5604-v2.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3968) HLog Pretty Printer

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3968:
-

Attachment: (was: HBASE-3968.patch)

 HLog Pretty Printer
 ---

 Key: HBASE-3968
 URL: https://issues.apache.org/jira/browse/HBASE-3968
 Project: HBase
  Issue Type: New Feature
  Components: io, regionserver, util
Reporter: Nicolas Spiegelberg
Assignee: Riley Patterson
Priority: Minor
  Labels: hbase
 Fix For: 0.90.4


 We currently have a rudimentary way to print HLog data, but it is limited and 
 currently prints key-only information. We need to extend this functionality, 
 similar to how we developed HFile's pretty printer. Ideas for functionality:
 - filter by sequence_id
 - filter by row / region
 - option to print values in addition to key info
 - option to print output in JSON format (so scripts can easily parse for 
 analysis)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5604) HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5604:
-

Attachment: HLog-5604-v3.txt

 HLog replay tool that generates HFiles for use by LoadIncrementalHFiles.
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
 Attachments: HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could be an M/R job (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter out all WALEdits that don't 
 fit into the time range or don't belong to any of the tables, and then use 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5720) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5720:
-

Attachment: 5720v4.txt

Trying again.

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums
 --

 Key: HBASE-5720
 URL: https://issues.apache.org/jira/browse/HBASE-5720
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.94.0
Reporter: Matt Corgan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5720v4.txt, 5720v4.txt, 5720v4.txt, HBASE-5720-v1.patch, 
 HBASE-5720-v2.patch, HBASE-5720-v3.patch


 When reading a .92 HFile without checksums, encoding it, and storing in the 
 block cache, the HFileDataBlockEncoderImpl always allocates a dummy header 
 appropriate for checksums even though there are none.  This corrupts the 
 byte[].
 Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case 
 which I think is the desired behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5720) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums

2012-04-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5720:
-

Status: Patch Available  (was: Open)

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums
 --

 Key: HBASE-5720
 URL: https://issues.apache.org/jira/browse/HBASE-5720
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.94.0
Reporter: Matt Corgan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5720v4.txt, 5720v4.txt, 5720v4.txt, HBASE-5720-v1.patch, 
 HBASE-5720-v2.patch, HBASE-5720-v3.patch


 When reading a .92 HFile without checksums, encoding it, and storing in the 
 block cache, the HFileDataBlockEncoderImpl always allocates a dummy header 
 appropriate for checksums even though there are none.  This corrupts the 
 byte[].
 Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case 
 which I think is the desired behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-04-04 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5689:
-

Attachment: HBASE-5689.patch

Reattaching for test run.

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-testcase.patch, HBASE-5689.patch, HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1. The edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2. When we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3. When we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4. However, RecoveredEdits file f2 will be skipped when initializing the region 
 in HRegion#replayRecoveredEditsIfAny:
 {code}
   for (Path edits: files) {
     if (edits == null || !this.fs.exists(edits)) {
       LOG.warn("Null or non-existent edits file: " + edits);
       continue;
     }
     if (isZeroLengthThenDelete(this.fs, edits)) continue;
     if (checkSafeToSkip) {
       Path higher = files.higher(edits);
       long maxSeqId = Long.MAX_VALUE;
       if (higher != null) {
         // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: "-?[0-9]+"
         String fileName = higher.getName();
         maxSeqId = Math.abs(Long.parseLong(fileName));
       }
       if (maxSeqId <= minSeqId) {
         String msg = "Maximum possible sequenceid for this log is " + maxSeqId
             + ", skipped the whole file, path=" + edits;
         LOG.debug(msg);
         continue;
       } else {
         checkSafeToSkip = false;
       }
     }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-04-04 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5689:
-

Status: Patch Available  (was: Open)

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-testcase.patch, HBASE-5689.patch, HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1. The edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2. When we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3. When we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4. However, RecoveredEdits file f2 will be skipped when initializing the region 
 in HRegion#replayRecoveredEditsIfAny:
 {code}
   for (Path edits: files) {
     if (edits == null || !this.fs.exists(edits)) {
       LOG.warn("Null or non-existent edits file: " + edits);
       continue;
     }
     if (isZeroLengthThenDelete(this.fs, edits)) continue;
     if (checkSafeToSkip) {
       Path higher = files.higher(edits);
       long maxSeqId = Long.MAX_VALUE;
       if (higher != null) {
         // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: "-?[0-9]+"
         String fileName = higher.getName();
         maxSeqId = Math.abs(Long.parseLong(fileName));
       }
       if (maxSeqId <= minSeqId) {
         String msg = "Maximum possible sequenceid for this log is " + maxSeqId
             + ", skipped the whole file, path=" + edits;
         LOG.debug(msg);
         continue;
       } else {
         checkSafeToSkip = false;
       }
     }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-04-02 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Attachment: 5682-all-v3.txt

Patch that removes the log statement Stack mentioned (had it in there for 
earlier debugging, forgot to remove it).

Also adds a simple test with an HConnection that is created before the 
mini-cluster is started to prove that initialization is indeed lazy.
(We can't test by stopping and restarting the minicluster, as new random ports 
are used each time.)

 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5682-all-v2.txt, 5682-all-v3.txt, 5682-all.txt, 
 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-04-02 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Attachment: 5682-all-v4.txt

I think this is as good as we can get in 0.94.
# Removed the exception handling from ensureZookeeperTrackers; none of these 
methods throw.
# Added getZookeeperWatcher to two methods that just need a ZKW.

The key is that an HConnection will never be left in a permanently useless 
state. We can file another JIRA for better timeouts.

 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5682-all-v2.txt, 5682-all-v3.txt, 5682-all-v4.txt, 
 5682-all.txt, 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-04-02 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3134:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96.
Thanks for the patch Teruyoshi.
Thanks for the reviews.

 [replication] Add the ability to enable/disable streams
 ---

 Key: HBASE-3134
 URL: https://issues.apache.org/jira/browse/HBASE-3134
 Project: HBase
  Issue Type: New Feature
  Components: replication
Reporter: Jean-Daniel Cryans
Assignee: Teruyoshi Zenmyo
Priority: Minor
  Labels: replication
 Fix For: 0.94.0

 Attachments: 3134-v2.txt, 3134-v3.txt, 3134-v4.txt, 3134.txt, 
 HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch


 This jira was initially in the scope of HBASE-2201, but was pushed out since 
 it has low value compared to the required effort (and we wanted to ship 
 0.90.0 rather soonish).
 We need to design a way to enable/disable replication streams in a 
 determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5656) LoadIncrementalHFiles createTable should detect and set compression algorithm

2012-04-02 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5656:
-

Attachment: 5656-simple.txt

How about this? Super simple.

 LoadIncrementalHFiles createTable should detect and set compression algorithm
 -

 Key: HBASE-5656
 URL: https://issues.apache.org/jira/browse/HBASE-5656
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.92.1
Reporter: Cosmin Lehene
Assignee: Cosmin Lehene
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: 5656-simple.txt, HBASE-5656-0.92.patch, 
 HBASE-5656-0.92.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 LoadIncrementalHFiles doesn't set compression when creating the table.
 This can be detected from the files within each family dir. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5671) hbase.metrics.showTableName should be true by default

2012-04-01 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5671:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 hbase.metrics.showTableName should be true by default
 -

 Key: HBASE-5671
 URL: https://issues.apache.org/jira/browse/HBASE-5671
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5671_v1.patch


 HBASE-4768 added per-cf metrics and a new configuration option 
 hbase.metrics.showTableName. We should switch the conf option to true by 
 default, since it is not intuitive (at least to me) to aggregate per-cf 
 across tables by default, and it seems confusing to report on cf's without 
 table names. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5638) Backport to 0.90 and 0.92 - NPE reading ZK config in HBase

2012-04-01 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5638:
-

Fix Version/s: (was: 0.94.1)
   0.96.0

 Backport to 0.90 and 0.92 - NPE reading ZK config in HBase
 --

 Key: HBASE-5638
 URL: https://issues.apache.org/jira/browse/HBASE-5638
 Project: HBase
  Issue Type: Sub-task
  Components: zookeeper
Affects Versions: 0.90.6, 0.92.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5633-0.90.patch, HBASE-5633-0.92.patch, 
 HBASE-5638-0.90-v1.patch, HBASE-5638-0.90-v2.patch, HBASE-5638-0.92-v1.patch, 
 HBASE-5638-0.92-v2.patch, HBASE-5638-trunk-v1.patch, HBASE-5638-trunk-v2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5671) hbase.metrics.showTableName should be true by default

2012-04-01 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5671:
-

Fix Version/s: (was: 0.96.0)

 hbase.metrics.showTableName should be true by default
 -

 Key: HBASE-5671
 URL: https://issues.apache.org/jira/browse/HBASE-5671
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.94.0

 Attachments: HBASE-5671_v1.patch


 HBASE-4768 added per-cf metrics and a new configuration option 
 hbase.metrics.showTableName. We should switch the conf option to true by 
 default, since it is not intuitive (at least to me) to aggregate per-cf 
 across tables by default, and it seems confusing to report on cf's without 
 table names. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5671) hbase.metrics.showTableName should be true by default

2012-04-01 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5671:
-

Fix Version/s: 0.96.0

 hbase.metrics.showTableName should be true by default
 -

 Key: HBASE-5671
 URL: https://issues.apache.org/jira/browse/HBASE-5671
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5671_v1.patch


 HBASE-4768 added per-cf metrics and a new configuration option 
 hbase.metrics.showTableName. We should switch the conf option to true by 
 default, since it is not intuitive (at least to me) to aggregate per-cf 
 across tables by default, and it seems confusing to report on cf's without 
 table names. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5689:
-

Fix Version/s: 0.94.0

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-simplified.txt, 5689-testcase.patch, 
 HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyze the above scenario from the code:
 1. The edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2. When we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3. When we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4. However, RecoveredEdits file f2 will be skipped when initializing the region in
 HRegion#replayRecoveredEditsIfAny:
 {code}
  for (Path edits: files) {
    if (edits == null || !this.fs.exists(edits)) {
      LOG.warn("Null or non-existent edits file: " + edits);
      continue;
    }
    if (isZeroLengthThenDelete(this.fs, edits)) continue;
    if (checkSafeToSkip) {
      Path higher = files.higher(edits);
      long maxSeqId = Long.MAX_VALUE;
      if (higher != null) {
        // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
        String fileName = higher.getName();
        maxSeqId = Math.abs(Long.parseLong(fileName));
      }
      if (maxSeqId <= minSeqId) {
        String msg = "Maximum possible sequenceid for this log is " + maxSeqId
            + ", skipped the whole file, path=" + edits;
        LOG.debug(msg);
        continue;
      } else {
        checkSafeToSkip = false;
      }
    }
 {code}
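
 To make the failure mode concrete, here is a small self-contained illustration (the
 file names, sequence ids and minSeqId below are made up): the skip check uses the name
 of the *next* recovered-edits file as an upper bound for the current one, so a file
 that also contains newer edits can be skipped entirely.

 {code}
 // Illustration only, with assumed names and ids -- not taken from a real cluster.
 import java.util.NavigableSet;
 import java.util.TreeSet;
 import org.apache.hadoop.fs.Path;

 public class SkipCheckIllustration {
   public static void main(String[] args) {
     NavigableSet<Path> files = new TreeSet<Path>();
     // f2 from splitting server A's log: named by its lowest seqid, but it also holds
     // the later edit r3-v3 (assume seqid 8000).
     files.add(new Path("/recovered.edits/0000000000000005000"));
     // f1 from splitting server B's log.
     files.add(new Path("/recovered.edits/0000000000000007000"));
     long minSeqId = 7500;  // assume the region has already flushed up to here

     for (Path edits : files) {
       Path higher = files.higher(edits);
       long maxSeqId = higher != null
           ? Math.abs(Long.parseLong(higher.getName())) : Long.MAX_VALUE;
       if (maxSeqId <= minSeqId) {
         // f2 lands here (7000 <= 7500) and is skipped, so its seqid-8000 edit is lost.
         System.out.println("skipping " + edits);
       }
     }
   }
 }
 {code}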
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5690) compression does not work in Store.java of 0.94

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5690:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 compression does not work in Store.java of 0.94
 ---

 Key: HBASE-5690
 URL: https://issues.apache.org/jira/browse/HBASE-5690
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
 Environment: all
Reporter: honghua zhu
Assignee: honghua zhu
Priority: Critical
 Fix For: 0.94.0

 Attachments: Store.patch


 HBASE-5442 The store.createWriterInTmp method missing compression

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3134:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 [replication] Add the ability to enable/disable streams
 ---

 Key: HBASE-3134
 URL: https://issues.apache.org/jira/browse/HBASE-3134
 Project: HBase
  Issue Type: New Feature
  Components: replication
Reporter: Jean-Daniel Cryans
Assignee: Teruyoshi Zenmyo
Priority: Minor
  Labels: replication
 Fix For: 0.94.0

 Attachments: 3134-v2.txt, 3134-v3.txt, 3134-v4.txt, 3134.txt, 
 HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch


 This jira was initially in the scope of HBASE-2201, but was pushed out since 
 it has low value compared to the required effort (and when want to ship 
 0.90.0 rather soonish).
 We need to design a way to enable/disable replication streams in a 
 determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5097) RegionObserver implementation whose preScannerOpen and postScannerOpen Impl return null can stall the system initialization through NPE

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5097:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 RegionObserver implementation whose preScannerOpen and postScannerOpen Impl 
 return null can stall the system initialization through NPE
 ---

 Key: HBASE-5097
 URL: https://issues.apache.org/jira/browse/HBASE-5097
 Project: HBase
  Issue Type: Bug
  Components: coprocessors
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5097.patch, HBASE-5097_1.patch, HBASE-5097_2.patch


 In HRegionServer.java openScanner()
 {code}
   r.prepareScanner(scan);
   RegionScanner s = null;
   if (r.getCoprocessorHost() != null) {
 s = r.getCoprocessorHost().preScannerOpen(scan);
   }
   if (s == null) {
 s = r.getScanner(scan);
   }
   if (r.getCoprocessorHost() != null) {
 s = r.getCoprocessorHost().postScannerOpen(scan, s);
   }
 {code}
 If we don't have an implementation for postScannerOpen, the RegionScanner stays null, 
 and so we get a NullPointerException:
 {code}
 java.lang.NullPointerException
   at 
 java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:881)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.addScanner(HRegionServer.java:2282)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2272)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
 {code}
 Making this defect a blocker. Please feel free to change the priority if I am 
 wrong. Also correct me if my way of trying out coprocessors without 
 implementing postScannerOpen is wrong. I am just a learner.
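
 For reference, a minimal hypothetical observer that would reproduce this (the class
 name is made up; the hook signature follows the RegionObserver API): it extends
 BaseRegionObserver but returns null from postScannerOpen, so openScanner() above ends
 up handing null to addScanner() and fails with the NPE shown.

 {code}
 // Hypothetical reproducer sketch, not a fix.
 import java.io.IOException;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;

 public class NullReturningObserver extends BaseRegionObserver {
   @Override
   public RegionScanner postScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
       Scan scan, RegionScanner s) throws IOException {
     return null;  // should return s (or a wrapping scanner); null triggers the NPE
   }
 }
 {code}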

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5656) LoadIncrementalHFiles createTable should detect and set compression algorithm

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5656:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 LoadIncrementalHFiles createTable should detect and set compression algorithm
 -

 Key: HBASE-5656
 URL: https://issues.apache.org/jira/browse/HBASE-5656
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.92.1
Reporter: Cosmin Lehene
Assignee: Cosmin Lehene
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5656-0.92.patch, HBASE-5656-0.92.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 LoadIncrementalHFiles doesn't set compression when creating the table.
 This can be detected from the files within each family dir. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.0, 0.96.0

 Attachments: 5670-0.94.txt, 5670-trunk.txt, 5670-trunk.txt


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.
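
 A short usage sketch of what this enables (table, family and values are made up):

 {code}
 // Hypothetical usage sketch: with Mutation implementing Row and
 // HTable.batch(List<? extends Row>), Puts and Deletes can share one typed list.
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.util.Bytes;

 public class BatchMutations {
   public static void main(String[] args) throws Exception {
     Configuration conf = HBaseConfiguration.create();
     HTable table = new HTable(conf, "mytable");
     List<Mutation> mutations = new ArrayList<Mutation>();
     Put put = new Put(Bytes.toBytes("row1"));
     put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
     mutations.add(put);
     mutations.add(new Delete(Bytes.toBytes("row2")));
     table.batch(mutations);  // compiles because List<Mutation> is a List<? extends Row>
     table.close();
   }
 }
 {code}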

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5084) Allow different HTable instances to share one ExecutorService

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5084:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 Allow different HTable instances to share one ExecutorService
 -

 Key: HBASE-5084
 URL: https://issues.apache.org/jira/browse/HBASE-5084
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5084-0.94.txt, 5084-trunk.txt


 This came out of Lily 1.1.1 release:
 Use a shared ExecutorService for all HTable instances, leading to better (or 
 actual) thread reuse
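
 A minimal sketch of the intent (pool size, table names and the pool-taking HTable
 constructor are illustrative assumptions): create one ExecutorService up front and
 hand it to every HTable instead of letting each instance spin up its own threads.

 {code}
 // Hypothetical sketch: one shared thread pool for several HTable instances.
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.util.Bytes;

 public class SharedPoolTables {
   public static void main(String[] args) throws Exception {
     Configuration conf = HBaseConfiguration.create();
     ExecutorService pool = Executors.newFixedThreadPool(10);
     HTable t1 = new HTable(conf, Bytes.toBytes("table1"), pool);
     HTable t2 = new HTable(conf, Bytes.toBytes("table2"), pool);  // same pool, threads reused
     // ... use t1 / t2 ...
     t1.close();
     t2.close();
     pool.shutdown();
   }
 }
 {code}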

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4398) If HRegionPartitioner is used in MapReduce, client side configurations are overwritten by hbase-site.xml.

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4398:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 If HRegionPartitioner is used in MapReduce, client side configurations are 
 overwritten by hbase-site.xml.
 -

 Key: HBASE-4398
 URL: https://issues.apache.org/jira/browse/HBASE-4398
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Takuya Ueshin
Assignee: Takuya Ueshin
 Fix For: 0.92.2, 0.94.0

 Attachments: HBASE-4398.patch


 If HRegionPartitioner is used in MapReduce, client side configurations are 
 overwritten by hbase-site.xml.
 We can reproduce the problem by the following instructions:
 {noformat}
 - Add HRegionPartitioner.class to the 4th argument of
 TableMapReduceUtil#initTableReducerJob()
 at line around 133
 in src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
 - Change or remove hbase.zookeeper.property.clientPort property
 in hbase-site.xml ( for example, changed to 12345 ).
 - run testMultiRegionTable()
 {noformat}
 Then I got error messages as following:
 {noformat}
 2011-09-12 22:28:51,020 DEBUG [Thread-832] zookeeper.ZKUtil(93): hconnection 
 opening connection to ZooKeeper with ensemble (localhost:12345)
 2011-09-12 22:28:51,022 INFO  [Thread-832] 
 zookeeper.RecoverableZooKeeper(89): The identifier of this process is 
 43200@imac.local
 2011-09-12 22:28:51,123 WARN  [Thread-832] 
 zookeeper.RecoverableZooKeeper(161): Possibly transient ZooKeeper exception: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/master
 2011-09-12 22:28:51,123 INFO  [Thread-832] 
 zookeeper.RecoverableZooKeeper(173): The 1 times to retry ZooKeeper after 
 sleeping 1000 ms
  =
 2011-09-12 22:29:02,418 ERROR [Thread-832] mapreduce.HRegionPartitioner(125): 
 java.io.IOException: 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@2e54e48d
  closed
 2011-09-12 22:29:02,422 WARN  [Thread-832] mapred.LocalJobRunner$Job(256): 
 job_local_0001
 java.lang.NullPointerException
at 
 org.apache.hadoop.hbase.mapreduce.HRegionPartitioner.setConf(HRegionPartitioner.java:128)
at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.init(MapTask.java:527)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
 {noformat}
 I think HTable should connect to ZooKeeper at port 21818, as configured on the client 
 side, instead of port 12345 from hbase-site.xml.
 It might be caused by HBaseConfiguration.addHbaseResources(conf); in 
 HRegionPartitioner#setConf(Configuration).
 This would mean that all client-side configurations that are also set 
 in hbase-site.xml get overwritten because of this problem.
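
 One possible direction, shown only as a hypothetical sketch (not the attached patch):
 keep the job/client Configuration authoritative by re-applying its entries after
 hbase-site.xml has been loaded, e.g. in HRegionPartitioner#setConf.

 {code}
 // Hypothetical sketch of the idea: load hbase-default/-site first, then copy the
 // job/client Configuration over it so client-side values (such as
 // hbase.zookeeper.property.clientPort) win.
 import java.util.Map;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;

 public class ClientConfWins {
   public static Configuration mergeClientLast(Configuration clientConf) {
     Configuration merged = HBaseConfiguration.create();  // hbase-default.xml + hbase-site.xml
     for (Map.Entry<String, String> entry : clientConf) {
       merged.set(entry.getKey(), entry.getValue());       // client-side entry takes precedence
     }
     return merged;
   }
 }
 {code}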

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5669) AggregationClient fails validation for open stoprow scan

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5669:
-

Fix Version/s: (was: 0.94.1)
   0.94.0

 AggregationClient fails validation for open stoprow scan
 

 Key: HBASE-5669
 URL: https://issues.apache.org/jira/browse/HBASE-5669
 Project: HBase
  Issue Type: Bug
  Components: coprocessors
Affects Versions: 0.92.1
 Environment: n/a
Reporter: Brian Rogers
Assignee: Mubarak Seyed
Priority: Minor
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5669.trunk.v1.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 AggregationClient.validateParameters throws an exception when the Scan has a 
 valid startrow but an unset endrow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Attachment: 5682-all.txt

Here's a patch that always attempts reconnecting to ZK when a ZK connection is 
needed.

 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5682-all.txt, 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Attachment: 5682-all-v2.txt

Found the problem.
The ClusterId could remain null permanently if 
HConnection.getZookeeperWatcher() was called. That would initialize 
HConnectionImplementation.zookeeper, and hence not reset clusterid in 
ensureZookeeperTrackers.
TestZookeeper.testClientSessionExpired does that.

Also in TestZookeeper.testClientSessionExpired the state might be CONNECTING 
rather than CONNECTED depending on timing.

Upon inspection I also made clusterId, rootRegionTracker, masterAddressTracker, 
and zooKeeper volatile, because they can be modified by a different thread, but 
are not exclusively accessed in a synchronized block (existing problem).

New patch that fixes the problem, passes all tests.

TestZookeeper seems to have good coverage. If I can think of more tests, I'll 
add them there.

 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5682-all-v2.txt, 5682-all.txt, 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-03-31 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Priority: Critical  (was: Major)

Upped to critical. Without this the HBase client is pretty much useless in an 
AppServer setting, where the client can outlive the HBase cluster and ZK ensemble.
(Testing within the Salesforce AppServer is how I noticed the problem 
initially.)


 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5682-all-v2.txt, 5682-all.txt, 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5084) Allow different HTable instances to share one ExecutorService

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5084:
-

Fix Version/s: 0.94.1

Going to work on this for 0.94.x.

 Allow different HTable instances to share one ExecutorService
 -

 Key: HBASE-5084
 URL: https://issues.apache.org/jira/browse/HBASE-5084
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Lars Hofhansl
 Fix For: 0.94.1


 This came out of Lily 1.1.1 release:
 Use a shared ExecutorService for all HTable instances, leading to better (or 
 actual) thread reuse

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5084) Allow different HTable instances to share one ExecutorService

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5084:
-

Status: Patch Available  (was: Open)

 Allow different HTable instances to share one ExecutorService
 -

 Key: HBASE-5084
 URL: https://issues.apache.org/jira/browse/HBASE-5084
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5084-trunk.txt


 This came out of Lily 1.1.1 release:
 Use a shared ExecutorService for all HTable instances, leading to better (or 
 actual) thread reuse

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5084) Allow different HTable instances to share one ExecutorService

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5084:
-

Attachment: 5084-trunk.txt

 Allow different HTable instances to share one ExecutorService
 -

 Key: HBASE-5084
 URL: https://issues.apache.org/jira/browse/HBASE-5084
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5084-trunk.txt


 This came out of Lily 1.1.1 release:
 Use a shared ExecutorService for all HTable instances, leading to better (or 
 actual) thread reuse

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (port to 0.94)

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Fix Version/s: (was: 0.90.6)
   0.94.1

 Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (port to 
 0.94)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Sub-task
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.1


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (port to 0.94)

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Attachment: 5682.txt

Here's a patch.
Please have a careful look. I can upload to RB too.

The idea is that if this is an unmanaged Connection (see HBASE-5399), a ZK 
connection is re-established whenever needed (if it was lost before).

This patch is somewhat more complicated than I'd like, because I did not want 
to change the behavior for managed (default) connections.
If we'd like, I can make this the default behavior... It seems much more robust than 
the current behavior.

I tested this manually, and the connection (if created with 
HConnectionManager.createConnection, and hence unmanaged) recovers from losing 
both the HBase and ZK connections.

(Interestingly in plain HBase 0.94 the client *never* recovers from this - even 
with the default connection behavior.)
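
For context, a hypothetical usage sketch of the unmanaged-connection case this targets 
(table name and pool are made up): the application keeps one connection for its 
lifetime, and with this patch a temporary ZK outage should no longer leave it 
permanently broken.

{code}
// Hypothetical usage sketch, not part of the patch.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class LongLivedClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HConnection connection = HConnectionManager.createConnection(conf);  // unmanaged
    ExecutorService pool = Executors.newCachedThreadPool();
    HTable table = new HTable(Bytes.toBytes("mytable"), connection, pool);
    // Even if the ZK session is lost and later re-established, the same connection
    // and table instances should stay usable instead of becoming permanently dead.
    table.get(new Get(Bytes.toBytes("row1")));
    table.close();
    pool.shutdown();
    connection.close();
  }
}
{code}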


 Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (port to 
 0.94)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Sub-task
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5084) Allow different HTable instances to share one ExecutorService

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5084:
-

Status: Open  (was: Patch Available)

 Allow different HTable instances to share one ExecutorService
 -

 Key: HBASE-5084
 URL: https://issues.apache.org/jira/browse/HBASE-5084
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5084-0.94.txt, 5084-trunk.txt


 This came out of Lily 1.1.1 release:
 Use a shared ExecutorService for all HTable instances, leading to better (or 
 actual) thread reuse

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (for 0.94 only)

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Issue Type: Improvement  (was: Sub-task)
Parent: (was: HBASE-5153)

 Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (for 0.94 
 only)
 ---

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (for 0.94 only)

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Summary: Add retry logic in 
HConnectionImplementation#resetZooKeeperTrackers (for 0.94 only)  (was: Add 
retry logic in HConnectionImplementation#resetZooKeeperTrackers (port to 0.94))

 Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (for 0.94 
 only)
 ---

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Sub-task
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (for 0.94 only)

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Attachment: 5682-v2.txt

Slightly better patch.
No need for waitForRootRegion to wait on ZK longer than RecoverableZookeeper retries.

With this, recovery is clean. The client can control via timeouts how soon it 
surfaces a connection problem up to the application layer.

Since this patch is only against 0.94 I'll run tests locally.

 Add retry logic in HConnectionImplementation#resetZooKeeperTrackers (for 0.94 
 only)
 ---

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)

2012-03-30 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5682:
-

Summary: Allow HConnectionImplementation to recover from ZK connection loss 
(for 0.94 only)  (was: Add retry logic in 
HConnectionImplementation#resetZooKeeperTrackers (for 0.94 only))

 Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 
 only)
 --

 Key: HBASE-5682
 URL: https://issues.apache.org/jira/browse/HBASE-5682
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.1

 Attachments: 5682-v2.txt, 5682.txt


 Just realized that without this HBASE-4805 is broken.
 I.e. there's no point keeping a persistent HConnection around if it can be 
 rendered permanently unusable if the ZK connection is lost temporarily.
 Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to 
 backport)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-29 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.96.0, 0.94.1

 Attachments: 5670-0.94.txt, 5670-trunk.txt, 5670-trunk.txt


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5671) hbase.metrics.showTableName should be true by default

2012-03-29 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5671:
-

Fix Version/s: 0.96.0

 hbase.metrics.showTableName should be true by default
 -

 Key: HBASE-5671
 URL: https://issues.apache.org/jira/browse/HBASE-5671
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-5671_v1.patch


 HBASE-4768 added per-cf metrics and a new configuration option 
 hbase.metrics.showTableName. We should switch the conf option to true by 
 default, since it is not intuitive (at least to me) to aggregate per-cf 
 across tables by default, and it seems confusing to report on cf's without 
 table names. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-28 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

Fix Version/s: 0.94.1

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.1


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-28 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

Attachment: 5670-0.94.txt

Simple patch that does just that. Mutation implements Row, and HTable.batch 
takes List<? extends Row>.

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.1

 Attachments: 5670-0.94.txt, 5670-trunk.txt


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-28 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

Attachment: 5670-trunk.txt

Same for trunk

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.1

 Attachments: 5670-0.94.txt, 5670-trunk.txt


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-28 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

Status: Patch Available  (was: Open)

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.1

 Attachments: 5670-0.94.txt, 5670-trunk.txt


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5670) Have Mutation implement the Row interface.

2012-03-28 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5670:
-

Attachment: 5670-trunk.txt

Attaching again to see if I can get a clean run.

 Have Mutation implement the Row interface.
 --

 Key: HBASE-5670
 URL: https://issues.apache.org/jira/browse/HBASE-5670
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Trivial
 Fix For: 0.94.1

 Attachments: 5670-0.94.txt, 5670-trunk.txt, 5670-trunk.txt


 In HBASE-4347 I factored some code from Put/Delete/Append in Mutation.
 In a discussion with a co-worker I noticed that Put/Delete/Append still 
 implement the Row interface, but Mutation does not.
 In a trivial change I would like to move that interface up to Mutation, along 
 with changing HTable.batch(List<Row>) to HTable.batch(List<? extends Row>) 
 (HConnection.processBatch takes List<? extends Row> already anyway), so that 
 HTable.batch can be used with a list of Mutations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5659) TestAtomicOperation.testMultiRowMutationMultiThreads is still failing occasionally

2012-03-27 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5659:
-

Fix Version/s: (was: 0.94.0)

 TestAtomicOperation.testMultiRowMutationMultiThreads is still failing 
 occasionally
 --

 Key: HBASE-5659
 URL: https://issues.apache.org/jira/browse/HBASE-5659
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0


 See run here: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/1318//testReport/org.apache.hadoop.hbase.regionserver/TestAtomicOperation/testMultiRowMutationMultiThreads/
 {quote}
 2012-03-27 04:36:12,627 DEBUG [Thread-118] regionserver.StoreScanner(499): 
 Storescanner.peek() is changed where before = 
 rowB/colfamily11:qual1/7202/Put/vlen=6/ts=7922,and after = 
 rowB/colfamily11:qual1/7199/DeleteColumn/vlen=0/ts=0
 2012-03-27 04:36:12,629 INFO  [Thread-121] regionserver.HRegion(1558): 
 Finished memstore flush of ~2.9k/2952, currentsize=1.6k/1640 for region 
 testtable,,1332822963417.7cd30e219714cfc5e91f69def66e7f81. in 14ms, 
 sequenceid=7927, compaction requested=true
 2012-03-27 04:36:12,629 DEBUG [Thread-126] 
 regionserver.TestAtomicOperation$2(362): flushing
 2012-03-27 04:36:12,630 DEBUG [Thread-126] regionserver.HRegion(1426): 
 Started memstore flush for 
 testtable,,1332822963417.7cd30e219714cfc5e91f69def66e7f81., current region 
 memstore size 1.9k
 2012-03-27 04:36:12,630 DEBUG [Thread-126] regionserver.HRegion(1474): 
 Finished snapshotting 
 testtable,,1332822963417.7cd30e219714cfc5e91f69def66e7f81., commencing wait 
 for mvcc, flushsize=1968
 2012-03-27 04:36:12,630 DEBUG [Thread-126] regionserver.HRegion(1484): 
 Finished snapshotting, commencing flushing stores
 2012-03-27 04:36:12,630 DEBUG [Thread-126] util.FSUtils(153): Creating 
 file=/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/.tmp/61954619003e469baf1a34be5ff2ec57
  with permission=rwxrwxrwx
 2012-03-27 04:36:12,631 DEBUG [Thread-126] hfile.HFileWriterV2(143): 
 Initialized with CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false]
 2012-03-27 04:36:12,631 INFO  [Thread-126] 
 regionserver.StoreFile$Writer(997): Delete Family Bloom filter type for 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/.tmp/61954619003e469baf1a34be5ff2ec57:
  CompoundBloomFilterWriter
 2012-03-27 04:36:12,632 INFO  [Thread-126] 
 regionserver.StoreFile$Writer(1220): NO General Bloom and NO DeleteFamily was 
 added to HFile 
 (/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/.tmp/61954619003e469baf1a34be5ff2ec57)
  
 2012-03-27 04:36:12,632 INFO  [Thread-126] regionserver.Store(770): Flushed , 
 sequenceid=7934, memsize=1.9k, into tmp file 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/.tmp/61954619003e469baf1a34be5ff2ec57
 2012-03-27 04:36:12,632 DEBUG [Thread-126] regionserver.Store(795): Renaming 
 flushed file at 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/.tmp/61954619003e469baf1a34be5ff2ec57
  to 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/colfamily11/61954619003e469baf1a34be5ff2ec57
 2012-03-27 04:36:12,634 INFO  [Thread-126] regionserver.Store(818): Added 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/target/test-data/b9091c3c-961e-4035-850a-83ad14d517cc/TestAtomicOperationtestMultiRowMutationMultiThreads/testtable/7cd30e219714cfc5e91f69def66e7f81/colfamily11/61954619003e469baf1a34be5ff2ec57,
  entries=12, sequenceid=7934, filesize=1.3k
 2012-03-27 04:36:12,642 DEBUG [Thread-118] 
 regionserver.TestAtomicOperation$2(392): []
 Exception in thread "Thread-118" junit.framework.AssertionFailedError
   at junit.framework.Assert.fail(Assert.java:48)
 

[jira] [Updated] (HBASE-5656) LoadIncrementalHFiles createTable should detect and set compression algorithm

2012-03-27 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5656:
-

Fix Version/s: (was: 0.94.0)
   0.94.1

It seems it has always been like this. Moving out of 0.94.0. Will pull it back in if I 
need to do another RC (which is likely).

 LoadIncrementalHFiles createTable should detect and set compression algorithm
 -

 Key: HBASE-5656
 URL: https://issues.apache.org/jira/browse/HBASE-5656
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.92.1
Reporter: Cosmin Lehene
Assignee: Cosmin Lehene
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5656-0.92.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 LoadIncrementalHFiles doesn't set compression when creating the table.
 This can be detected from the files within each family dir. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5633) NPE reading ZK config in HBase

2012-03-26 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5633:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Let's move the 0.90 and 0.92 backports into a sub-task, so that this issue can be kept 
closed.

 NPE reading ZK config in HBase
 --

 Key: HBASE-5633
 URL: https://issues.apache.org/jira/browse/HBASE-5633
 Project: HBase
  Issue Type: Bug
  Components: zookeeper
Reporter: Matteo Bertozzi
Priority: Minor
 Fix For: 0.94.0

 Attachments: HBASE-5633-0.90.patch, HBASE-5633-0.92.patch, 
 HBASE-5633-v1.patch, HBASE-5633-v2.patch


 If zoo.cfg contains server.* (server.0=server0:2888:3888\n) and 
 the hbase.cluster.distributed property (in hbase-site.xml) is empty, we get an NPE in 
 parseZooCfg().
 The easy way to reproduce the bug is running 
 org.apache.hbase.zookeeper.TestHQuorumPeer with hbase-site.xml containing:
 {code}
 <property>
   <name>hbase.cluster.distributed</name>
   <value></value>
 </property>
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5615) the master never does balance because of balancing the parent region

2012-03-26 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5615:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 the master never does balance because of balancing the parent region
 

 Key: HBASE-5615
 URL: https://issues.apache.org/jira/browse/HBASE-5615
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7
Reporter: xufeng
Assignee: xufeng
Priority: Critical
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5615-trunk.txt, HBASE-5615-90.patch, HBASE-5615.patch, 
 NoPatched-surefire-report-5615-90.html, Patched_surefire-report-5615-90.html


 The master never does balance because, when the master does rebuildUserRegions(), it 
 adds the parent region into AssignmentManager#servers.
 If the balancer lets the parent region move, the parent stays in RIT forever, and thus 
 balance will never be executed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush

2012-03-26 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5623:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96.

 Race condition when rolling the HLog and hlogFlush
 --

 Key: HBASE-5623
 URL: https://issues.apache.org/jira/browse/HBASE-5623
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5623-suggestion.txt, 5623-v7.txt, 5623-v8.txt, 5623.txt, 
 5623v2.txt, HBASE-5623_v0.patch, HBASE-5623_v4.patch, HBASE-5623_v5.patch, 
 HBASE-5623_v6-alt.patch, HBASE-5623_v6-alt.patch


 When doing a ycsb test with a large number of handlers 
 (regionserver.handler.count=60), I get the following exceptions:
 {code}
 Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
 java.lang.NullPointerException
   at 
 org.apache.hadoop.io.SequenceFile$Writer.getLength(SequenceFile.java:1099)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.getLength(SequenceFileLogWriter.java:314)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1291)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1388)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
   at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:920)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:152)
   at $Proxy1.multi(Unknown Source)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1691)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1689)
   at 
 org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:214)
 {code}
 and 
 {code}
   java.lang.NullPointerException
   at 
 org.apache.hadoop.io.SequenceFile$Writer.checkAndWriteSync(SequenceFile.java:1026)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1068)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1035)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.append(SequenceFileLogWriter.java:279)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.hlogFlush(HLog.java:1237)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1271)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1391)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
 {code}
 It seems the root cause of the issue is that we open a new log writer and 
 close the old one at HLog#rollWriter() holding the updateLock, but the other 
 threads doing syncer() calls
 {code} 
 logSyncerThread.hlogFlush(this.writer);
 {code}
 without holding the updateLock. LogSyncer only synchronizes against 
 concurrent appends and flush(), but not on the passed writer, which can be 
 closed already by rollWriter(). In this case, since 
 SequenceFile#Writer.close() sets its out field to null, we get the NPE. 
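
 A stripped-down, self-contained sketch of that race (names are illustrative, not 
 HLog's actual fields): the flush path captures the writer reference outside the lock, 
 while rollWriter() swaps and closes that same writer under the lock.

 {code}
 // Illustration only: the minimal shape of the race described above.
 public class RollRaceSketch {
   static class Writer {
     private StringBuilder out = new StringBuilder();
     void append(String edit) { out.append(edit); }  // NPE if close() ran concurrently
     void close() { out = null; }                    // like SequenceFile.Writer.close()
   }

   private final Object updateLock = new Object();
   private volatile Writer writer = new Writer();

   void rollWriter() {
     synchronized (updateLock) {   // writer swap and close happen under the lock ...
       Writer old = this.writer;
       this.writer = new Writer();
       old.close();
     }
   }

   void syncer() {
     Writer w = this.writer;       // ... but the flush path grabs the reference without it,
     w.append("edit");             // so it may append to an already-closed writer
   }
 }
 {code}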

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
