[jira] [Updated] (HBASE-5833) 0.92 build has been failing pretty consistently on TestMasterFailover....

2012-04-20 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5833:
--

Comment: was deleted

(was: In build #380, TestMasterFailover hung again:
{code}
Running org.apache.hadoop.hbase.master.TestMasterFailover
Running org.apache.hadoop.hbase.master.TestClockSkewDetection
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec
{code})

 0.92 build has been failing pretty consistently on TestMasterFailover
 -

 Key: HBASE-5833
 URL: https://issues.apache.org/jira/browse/HBASE-5833
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.2

 Attachments: 5833.txt


 Trunk seems fine, but 0.92 fails on this test pretty regularly.  Running it 
 locally, it seems to hang for me.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5824:
--

Attachment: 5824-addendum-v2.txt

Proposed addendum that matches the original intent of the JIRA.
There is no strong reason for the change in HTable.doPut(final List<Put> puts).

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 5824-addendum-v2.txt, hbase-5824.patch, 
 hbase-5824_v2.patch, hbase_5824.addendum


 On 0.94, a call to client.HTable#incrementColumnValue will invoke 
 HRegion#incrementColumnValue.  On trunk, all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.
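 For illustration, here is a minimal client-side sketch (it assumes a connected 
 HTable named 'table' with column family 'f'; it uses 
 org.apache.hadoop.hbase.client.HTable, Increment, Result and 
 org.apache.hadoop.hbase.util.Bytes). On trunk both calls below arrive at the 
 server as an Increment, which is why HRegion#incrementColumnValue is never 
 reached:
 {code}
 // Both paths serialize to an Increment on trunk, so the server only ever
 // runs HRegion#increment.
 long v = table.incrementColumnValue(Bytes.toBytes("row1"),
     Bytes.toBytes("f"), Bytes.toBytes("q"), 1L);

 Increment incr = new Increment(Bytes.toBytes("row1"));
 incr.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), 1L);
 Result r = table.increment(incr);
 {code}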

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5845) Single Put should use RetriesExhaustedWithDetailsException in case any exception

2012-04-20 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5845:
--

Description: In HBASE-5824, an attempt was made to handle single Put execution 
separately. Put has two exception paths thereafter. It's better to keep one 
exception path for easy exception handling.  (was: Due to change in HBASE-5824.  Put 
has two exception paths now.  It's better to stay the same for easy exception 
handling.)

 Single Put should use RetriesExhaustedWithDetailsException in case any 
 exception
 

 Key: HBASE-5845
 URL: https://issues.apache.org/jira/browse/HBASE-5845
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor

 In HBASE-5824, an attempt was made to handle single Put execution separately. 
 Put has two exception paths thereafter. It's better to keep one exception path 
 for easy exception handling.
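 As a sketch of the intended client experience (assuming the single-Put path is 
 made to throw the same exception type as the batch path; this is not the 
 actual patch), error handling could then be uniform:
 {code}
 // RetriesExhaustedWithDetailsException reports the per-row causes, so single
 // and batched Puts can share one catch block.
 try {
   table.put(put);    // single Put
   table.put(puts);   // List<Put>
 } catch (RetriesExhaustedWithDetailsException e) {
   for (int i = 0; i < e.getNumExceptions(); i++) {
     LOG.warn("Put failed for row " +
         Bytes.toStringBinary(e.getRow(i).getRow()), e.getCause(i));
   }
 }
 {code}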

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5635) If getTaskList() returns null, splitlogWorker would go down and it won't serve any requests

2012-04-20 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5635:
--

Summary: If getTaskList() returns null, splitlogWorker would go down and it 
won't serve any requests  (was: If getTaskList() returns null splitlogWorker is 
down. It wont serve any requests. )

 If getTaskList() returns null, splitlogWorker would go down and it won't 
 serve any requests
 ---

 Key: HBASE-5635
 URL: https://issues.apache.org/jira/browse/HBASE-5635
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.92.1
Reporter: Kristam Subba Swathi
 Attachments: HBASE-5635.1.patch, HBASE-5635.2.patch, 
 HBASE-5635._trunk.patch, HBASE-5635.patch, HBASE-5635_0.94.patch


 During the hlog split operation, if all the ZooKeepers are down, the paths 
 will be returned as null and the splitworker thread will exit.
 This regionserver will then not be able to acquire any other tasks, since the 
 splitworker thread has exited.
 Please find the attached code for more details:
 {code}
 private List<String> getTaskList() {
   for (int i = 0; i < zkretries; i++) {
     try {
       return (ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
           this.watcher.splitLogZNode));
     } catch (KeeperException e) {
       LOG.warn("Could not get children of znode " +
           this.watcher.splitLogZNode, e);
       try {
         Thread.sleep(1000);
       } catch (InterruptedException e1) {
         LOG.warn("Interrupted while trying to get task list ...", e1);
         Thread.currentThread().interrupt();
         return null;
       }
     }
   }
 {code}
 in the org.apache.hadoop.hbase.regionserver.SplitLogWorker 
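 One possible caller-side guard (a sketch only; the attached patches define the 
 actual fix) is to treat a null task list as a transient condition instead of 
 letting the worker thread die:
 {code}
 // Hypothetical handling, not the committed patch: log and retry later when
 // getTaskList() returns null rather than falling out of the worker loop.
 List<String> paths = getTaskList();
 if (paths == null) {
   LOG.warn("Could not get task list from " + this.watcher.splitLogZNode +
       ", will retry");
   return;   // the caller loops and calls again instead of exiting the thread
 }
 for (String path : paths) {
   // ... grab and process each split log task ...
 }
 {code}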
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5809) Avoid move api to take the destination server same as the source server.

2012-04-20 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5809:
--

Comment: was deleted

(was: I think the failure was due to testRegionTransitionOperations not 
checking whether the source and destination servers are the same:
{code}
master.move(firstGoodPair.getKey().getEncodedNameAsBytes(),
  Bytes.toBytes(destName));
{code}
To make the test valid, destName should be chosen to be different from source 
server.)

 Avoid move api to take the destination server same as the source server.
 

 Key: HBASE-5809
 URL: https://issues.apache.org/jira/browse/HBASE-5809
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.1
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
Priority: Minor
  Labels: client
 Fix For: 0.96.0

 Attachments: HBASE-5809.patch, HBASE-5809.patch


 Currently the move API accepts any destination specified, and if the destination 
 is the same as the source we still do an unassign and assign.  This can cause 
 problems due to RegionAlreadyInTransitionException, which can leave the 
 region hanging in RIT for a long time.  We can avoid this by not allowing the 
 move to happen in this scenario.
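 A minimal sketch of the kind of guard being proposed (illustrative only; the 
 lookup helper is assumed, not part of the actual patch):
 {code}
 // Hypothetical guard in the master's move handling: skip the unassign/assign
 // cycle when the requested destination already hosts the region.
 ServerName current = getCurrentServerOfRegion(hri);   // assumed lookup helper
 if (current != null && current.equals(dest)) {
   LOG.info("Region " + hri.getEncodedName() + " is already on " + dest +
       "; ignoring move request");
   return;
 }
 // ... otherwise proceed with unassign/assign as before ...
 {code}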

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5809) Avoid move api to take the destination server same as the source server.

2012-04-20 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5809:
--

Attachment: 5809.addendum

The addendum fixes the incorrect comparison between server names.

 Avoid move api to take the destination server same as the source server.
 

 Key: HBASE-5809
 URL: https://issues.apache.org/jira/browse/HBASE-5809
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.1
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
Priority: Minor
  Labels: client
 Fix For: 0.96.0

 Attachments: 5809.addendum, HBASE-5809.patch, HBASE-5809.patch


 Currently the move API accepts any destination specified, and if the destination 
 is the same as the source we still do an unassign and assign.  This can cause 
 problems due to RegionAlreadyInTransitionException, which can leave the 
 region hanging in RIT for a long time.  We can avoid this by not allowing the 
 move to happen in this scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5794) Jenkins builds timing out

2012-04-19 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5794:
--

Attachment: 5794-v2.txt

 Jenkins builds timing out
 -

 Key: HBASE-5794
 URL: https://issues.apache.org/jira/browse/HBASE-5794
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Attachments: 5794-v2.txt, 5794.txt, 5794.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5787) Table owner can't disable/delete its own table

2012-04-18 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5787:
--

Fix Version/s: 0.96.0
   0.94.0
   0.92.2
 Hadoop Flags: Reviewed

 Table owner can't disable/delete its own table
 --

 Key: HBASE-5787
 URL: https://issues.apache.org/jira/browse/HBASE-5787
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: acl, security
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5787-tests-wrong-names.patch, HBASE-5787-v0.patch, 
 HBASE-5787-v1.patch


 A user with CREATE privileges can create a table but cannot disable it, 
 because the disable operation requires ADMIN privileges. Also, if a table is 
 already disabled, anyone can remove it.
 {code}
 public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c,
     byte[] tableName) throws IOException {
   requirePermission(Permission.Action.CREATE);
 }

 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
     byte[] tableName) throws IOException {
   /* TODO: Allow for users with global CREATE permission and the table owner */
   requirePermission(Permission.Action.ADMIN);
 }
 {code}
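 One possible shape of the fix (a sketch under assumed helper names; the 
 attached patches define the real behaviour) is to let the table owner through 
 before falling back to the ADMIN check:
 {code}
 // Hypothetical sketch, not the committed patch: getActiveUser() and
 // isTableOwner() are assumed helpers here.
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
     byte[] tableName) throws IOException {
   User user = getActiveUser();
   if (isTableOwner(user, tableName)) {
     return;   // owners may disable their own tables
   }
   requirePermission(Permission.Action.ADMIN);
 }
 {code}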

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5732) Remove the SecureRPCEngine and merge the security-related logic in the core engine

2012-04-18 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5732:
--

Comment: was deleted

(was: We haven't put zookeeper 3.4.x as requirement for 0.96 yet.
In testing this work, please make sure zookeeper 3.3.x ensemble can be used for 
the insecure RPC.)

 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine
 --

 Key: HBASE-5732
 URL: https://issues.apache.org/jira/browse/HBASE-5732
 Project: HBase
  Issue Type: Improvement
Reporter: Devaraj Das
 Attachments: rpcengine-merge.patch


 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine. Follow up to HBASE-5727.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5821) Incorrect handling of null value in Coprocessor aggregation function min()

2012-04-18 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5821:
--

Fix Version/s: 0.94.1
   0.96.0
   0.92.2
 Hadoop Flags: Reviewed

 Incorrect handling of null value in Coprocessor aggregation function min()
 --

 Key: HBASE-5821
 URL: https://issues.apache.org/jira/browse/HBASE-5821
 Project: HBase
  Issue Type: Bug
  Components: coprocessors
Affects Versions: 0.92.1
Reporter: Maryann Xue
Assignee: Maryann Xue
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5821.patch


 Both in AggregateImplementation and AggregationClient, the evaluation of the 
 current minimum value is like:
 min = (min == null || ci.compare(result, min) < 0) ? result : min;
 LongColumnInterpreter treats a null value as the least value, 
 while the above expression treats min as the greater value when it is null. 
 Thus, the real minimum value gets discarded if a null value comes later.
 max() could also be wrong if a ColumnInterpreter other than 
 LongColumnInterpreter treats null values differently (as the greatest).
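 One way to make the null handling explicit (a sketch, not the attached patch) 
 is to only let a non-null result replace the running minimum:
 {code}
 // A null result can then never displace a real minimum, regardless of how
 // the ColumnInterpreter orders nulls.
 if (result != null && (min == null || ci.compare(result, min) < 0)) {
   min = result;
 }
 {code}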

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5790) ZKUtil deleteRecursively should be a recoverable operation

2012-04-17 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5790:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 ZKUtil deleteRecursively should be a recoverable operation
 --

 Key: HBASE-5790
 URL: https://issues.apache.org/jira/browse/HBASE-5790
 Project: HBase
  Issue Type: Improvement
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: zookeeper
 Fix For: 0.96.0, 0.94.1

 Attachments: java_HBASE-5790-v1.patch, java_HBASE-5790.patch


 As of 3.4.3, ZooKeeper has full multi-operation transactions. This means 
 we can wholesale delete chunks of the zk tree and ensure that we don't have 
 any pesky recursive delete issues where we delete the children of a node, but 
 then a child joins before deletion of the parent. Even without transactions, 
 this should be the behavior, but it is possible to make it much cleaner now 
 that we have this new feature in zk.
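 A minimal sketch of the idea (assuming a 3.4.x client and server, with 'zk' an 
 org.apache.zookeeper.ZooKeeper handle and 'node' the path to delete; this 
 covers a single level of children and is not the attached patch):
 {code}
 // Delete a node and its direct children in one atomic multi() transaction;
 // if any op fails, nothing is deleted.
 List<Op> ops = new ArrayList<Op>();
 for (String child : zk.getChildren(node, false)) {
   ops.add(Op.delete(node + "/" + child, -1));   // -1 = any version
 }
 ops.add(Op.delete(node, -1));
 zk.multi(ops);
 {code}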

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5733) AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5733:
--

Status: Patch Available  (was: Open)

testProcessDeadServersAndRegionsInTransitionShouldNotFailWithNPE failed without 
the patch and passes with the patch.

 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.
 -

 Key: HBASE-5733
 URL: https://issues.apache.org/jira/browse/HBASE-5733
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5733.patch


 Found while going through the code...
 AssignmentManager#processDeadServersAndRegionsInTransition can fail with an 
 NPE as it directly iterates the nodes returned from 
 listChildrenAndWatchForNewChildren without checking for null.
 Here too we need to add a null check, as is done in other places.
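 The guard being asked for amounts to something like the following sketch 
 (illustrative; the attached patch is the actual change):
 {code}
 // listChildrenAndWatchForNewChildren can return null (e.g. if the znode is
 // missing), so check before iterating.
 List<String> nodes = ZKUtil.listChildrenAndWatchForNewChildren(
     watcher, watcher.assignmentZNode);
 if (nodes == null) {
   LOG.warn("No children of " + watcher.assignmentZNode + " to process");
   return;
 }
 for (String node : nodes) {
   // ... process the region in transition ...
 }
 {code}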

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Attachment: 5795-v3.txt

Patch combining v2 and Stack's test.

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795-v3.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) HServerLoad$RegionLoad breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Assignee: Zhihong Yu  (was: stack)
 Summary: HServerLoad$RegionLoad breaks 0.92-0.94 compatibility  (was: 
hbase-3927 breaks 0.92-0.94 compatibility)

 HServerLoad$RegionLoad breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Zhihong Yu
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795-v3.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5780:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12522856/TestReplicationPeer-output.log
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1544//console

This message is automatically generated.)

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, 
 TestReplicationPeer-Security-output.log, TestReplicationPeer-output.log, 
 testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5782:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12522884/5782-sketch.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause mvn compile goal to fail.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1547//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1547//console

This message is automatically generated.)

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-sketch.txt, 5782.txt, 5782.unfinished-stack.txt, 
 HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-16 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5782:
--

Comment: was deleted

(was: {code}
+  synchronized (flushLock) {
+    List<Entry> pending;

-  // write out all accumulated Entries to hdfs.
-  for (Entry e : pending) {
-    writer.append(e);
+    synchronized (this) {
{code}
Is the second synchronized needed ? )

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-sketch.txt, 5782.txt, 5782.unfinished-stack.txt, 
 HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 A few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-15 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Attachment: 5795-v1.txt

Since only deserialization needs special handling, the attached patch adds a 
private method to read 0.92 RegionLoad.

Please comment.

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 5794.txt, 5795-v1.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-15 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Attachment: (was: 5795-v1.txt)

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-15 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Attachment: 5795-v2.txt

Patch v1 didn't make testHServerLoadVersioning pass.

Patch v2 does.
I found that the version of RegionLoad was actually serialized twice in 0.92: 
first by VersionedWritable.write(), followed by RegionLoad.write().
In patch v2, I removed the redundant write. readFields92() consumes the second 
copy of version.
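
In outline the compatibility handling looks something like the sketch below 
(assumed shape only; 5795-v2.txt is the authoritative change): peek at the 
serialized version and fall back to a 0.92-style reader when an old RegionLoad 
arrives.
{code}
// Sketch only: VERSION_92 and readFields92() stand in for whatever the patch
// actually names them.
public void readFields(DataInput in) throws IOException {
  int version = in.readByte();
  if (version == VERSION_92) {
    readFields92(in);   // also consumes the redundant second copy of the version
    return;
  }
  // ... read the current (v2) fields ...
}
{code}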

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 5795-v2.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-15 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5795:
--

Status: Patch Available  (was: Open)

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 5795-v2.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-13 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5677:
--

Attachment: (was: 5677-proposal.txt)

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2

 Attachments: 5677-proposal.txt, 5677-proposal.txt, 
 HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, 
 surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover runs), the region open will be handled twice, 
 because the unassigned node in ZooKeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-13 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5780:
--

Comment: was deleted

(was: I don't see attachment, for now.)

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-13 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5780:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12522644/testoutput.tar.gz
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1520//console

This message is automatically generated.)

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-13 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5780:
--

Fix Version/s: 0.94.1
   0.96.0
 Hadoop Flags: Reviewed

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-13 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5780:
--

Fix Version/s: 0.92.2

0.92 builds have been failing 7 times straight.
Trunk builds have been failing 4 times consecutively.

Will integrate to 0.94 first.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5666) RegionServer doesn't retry to check if base node is available

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5666:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12521401/HBASE-5666-v5.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 3 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper
  org.apache.hadoop.hbase.client.TestInstantSchemaChangeSplit
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
  org.apache.hadoop.hbase.mapreduce.TestTableMapReduce

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1395//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1395//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1395//console

This message is automatically generated.)

 RegionServer doesn't retry to check if base node is available
 -

 Key: HBASE-5666
 URL: https://issues.apache.org/jira/browse/HBASE-5666
 Project: HBase
  Issue Type: Bug
  Components: regionserver, zookeeper
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: HBASE-5666-0.92.patch, HBASE-5666-v1.patch, 
 HBASE-5666-v2.patch, HBASE-5666-v3.patch, HBASE-5666-v4.patch, 
 HBASE-5666-v5.patch, HBASE-5666-v6.patch, HBASE-5666-v7.patch, 
 HBASE-5666-v8.patch, hbase-1-regionserver.log, hbase-2-regionserver.log, 
 hbase-3-regionserver.log, hbase-master.log, hbase-regionserver.log, 
 hbase-zookeeper.log


 I've a script that starts hbase and a couple of region servers in distributed 
 mode (hbase.cluster.distributed = true)
 {code}
 $HBASE_HOME/bin/start-hbase.sh
 $HBASE_HOME/bin/local-regionservers.sh start 1 2 3
 {code}
 but the region servers are not able to start...
 It seems that during the RS start the znode is still not available, and 
 HRegionServer.initializeZooKeeper() checks just once whether the base node is 
 available.
 {code}
 2012-03-28 21:54:05,013 INFO 
 org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Check the value 
 configured in 'zookeeper.znode.parent'. There could be a mismatch with the 
 one configured in the master.
 2012-03-28 21:54:08,598 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 localhost,60202,133296824: Initialization of RS failed.  Hence aborting 
 RS.
 java.io.IOException: Received the shutdown message while waiting.
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:626)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:596)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:558)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:672)
   at java.lang.Thread.run(Thread.java:662)
 {code}
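 What the reporter is asking for is a bounded retry instead of a single check. A 
 minimal sketch of that idea (illustrative only; it assumes a ZooKeeperWatcher 
 zkw and a commons-logging LOG in scope, made-up retry/sleep values, and 
 ZKUtil.checkExists() returning -1 for a missing znode):
 {code}
 boolean baseZNodeAvailable(ZooKeeperWatcher zkw, String baseZNode)
     throws InterruptedException {
   for (int i = 0; i < 10; i++) {              // bounded number of attempts
     try {
       if (ZKUtil.checkExists(zkw, baseZNode) != -1) {
         return true;                          // parent znode has been created
       }
     } catch (KeeperException e) {
       LOG.warn("Error checking base znode " + baseZNode, e);
     }
     Thread.sleep(1000);                       // master may simply not have started yet
   }
   return false;                               // caller decides whether to abort
 }
 {code}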

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5767) Add the hbase shell table_att for any attribute

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5767:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 Add the hbase shell table_att for any attribute
 ---

 Key: HBASE-5767
 URL: https://issues.apache.org/jira/browse/HBASE-5767
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Xing Shi
Priority: Minor
 Attachments: HBASE-5767-V2.patch, HBASE-5767.patch


 Now HTableDescriptor supports the setValue(String key, String value) method, 
 but the hbase shell does not support it.
 It could look like this:
 {quote}
 hbase(main):003:0> alter 'test', METHOD => 'table_att', 'key1' => 'value1'
 Updating all regions with the new schema...
 1/1 regions updated.
 Done.
 0 row(s) in 1.0820 seconds
 hbase(main):005:0> describe 'test'
 DESCRIPTION                                                          ENABLED
  {NAME => 'test', key1 => 'value1', FAMILIES => [{NAME => 'f1',      true
  BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3',
  COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647',
  BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
 1 row(s) in 0.0300 seconds
 hbase(main):007:0> alter 'test', METHOD => 'table_att_unset', NAME => 'key1'
 Updating all regions with the new schema...
 1/1 regions updated.
 Done.
 0 row(s) in 1.0860 seconds
 hbase(main):008:0> describe 'test'
 DESCRIPTION                                                          ENABLED
  {NAME => 'test', FAMILIES => [{NAME => 'f1', BLOOMFILTER => 'NONE', false
  REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE',
  MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
  IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
 1 row(s) in 0.0280 seconds
 {quote}
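 For reference, a rough Java equivalent of what this shell syntax would do under 
 the covers (a sketch only; the table name and key are examples, and it assumes 
 the HBaseAdmin/HTableDescriptor API of this era plus the usual imports):
 {code}
 Configuration conf = HBaseConfiguration.create();
 HBaseAdmin admin = new HBaseAdmin(conf);
 byte[] tableName = Bytes.toBytes("test");

 // table_att: set an arbitrary table-level attribute
 HTableDescriptor htd = admin.getTableDescriptor(tableName);
 htd.setValue("key1", "value1");
 admin.modifyTable(tableName, htd);

 // table_att_unset: remove the attribute again
 htd = admin.getTableDescriptor(tableName);
 htd.remove(Bytes.toBytes("key1"));
 admin.modifyTable(tableName, htd);
 {code}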

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5741:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Attachments: HBase-5741-v2.patch, HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...
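 A minimal sketch of the kind of guard being proposed for createSubmittableJob 
 (illustrative only; whether to fail fast or to auto-create the table is the 
 open question here, and 'family' below is just a placeholder):
 {code}
 HBaseAdmin admin = new HBaseAdmin(conf);
 if (!admin.tableExists(tableName)) {
   // Option 1: create it, the way completebulkload does ...
   HTableDescriptor htd = new HTableDescriptor(tableName);
   htd.addFamily(new HColumnDescriptor(family));
   admin.createTable(htd);
   // Option 2: ... or fail with a clear message instead of the confusing
   // META-scan warning quoted above.
 }
 HTable table = new HTable(conf, tableName);
 {code}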

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5677:
--

Attachment: 5677-proposal.txt

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, HBASE-5677-90-v1.patch, 
 surefire-report_no_patched_v1.html, surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover), the open of the region will be handled twice,
 because the unassigned node in zookeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5677:
--

Status: Patch Available  (was: Open)

Run Lars' proposal through Hadoop QA

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, HBASE-5677-90-v1.patch, 
 surefire-report_no_patched_v1.html, surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover), the open of the region will be handled twice,
 because the unassigned node in zookeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5741:
--

Attachment: 5741-v3.txt

Patch v3 addresses latest review comments.

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Attachments: 5741-v3.txt, HBase-5741-v2.patch, HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5741:
--

Fix Version/s: 0.96.0
   0.94.0

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.94.0, 0.96.0

 Attachments: 5741-v3.txt, HBase-5741-v2.patch, HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5741:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12522338/5741-94.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1486//console

This message is automatically generated.)

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.94.0, 0.96.0

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5741:
--

Attachment: 5741-v3.txt

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.94.0, 0.96.0

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5741:
--

Attachment: (was: 5741-v3.txt)

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.94.0, 0.96.0

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5736:
--

Attachment: 5736-94.txt

Patch for 0.94

 ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
 -

 Key: HBASE-5736
 URL: https://issues.apache.org/jira/browse/HBASE-5736
 Project: HBase
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.96.0

 Attachments: 5736-94.txt, HBASE-5736.D2649.1.patch, 
 HBASE-5736.D2649.2.patch, HBASE-5736.D2649.3.patch


 We have fixed a similar bug in
 https://issues.apache.org/jira/browse/HBASE-5507
 It uses ByteBuffer.array() to read the ByteBuffer.
 This ignores the offset and returns the whole underlying byte array.
 The bug can be triggered by using framed-transport Thrift servers.
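 For context, a small generic java.nio illustration of why ByteBuffer.array() is 
 unsafe here (this is not the actual patch):
 {code}
 // Copies exactly the bytes the buffer currently exposes, honouring
 // arrayOffset() and position(), instead of returning the whole backing array.
 static byte[] toBytes(ByteBuffer bb) {
   byte[] result = new byte[bb.remaining()];
   bb.duplicate().get(result);   // duplicate() keeps the caller's position intact
   return result;
 }

 // Buggy pattern: with a framed transport the ByteBuffer is a slice of a larger
 // frame buffer, so bb.array() also returns bytes before/after the value.
 // byte[] wrong = bb.array();
 {code}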

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5736:
--

Status: Open  (was: Patch Available)

 ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
 -

 Key: HBASE-5736
 URL: https://issues.apache.org/jira/browse/HBASE-5736
 Project: HBase
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.96.0

 Attachments: 5736-94.txt, HBASE-5736.D2649.1.patch, 
 HBASE-5736.D2649.2.patch, HBASE-5736.D2649.3.patch


 We have fixed a similar bug in
 https://issues.apache.org/jira/browse/HBASE-5507
 It uses ByteBuffer.array() to read the ByteBuffer.
 This ignores the offset and returns the whole underlying byte array.
 The bug can be triggered by using framed-transport Thrift servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5736:
--

Fix Version/s: 0.94.0

Integrated patch to 0.94 branch

 ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
 -

 Key: HBASE-5736
 URL: https://issues.apache.org/jira/browse/HBASE-5736
 Project: HBase
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0, 0.96.0

 Attachments: 5736-94.txt, HBASE-5736.D2649.1.patch, 
 HBASE-5736.D2649.2.patch, HBASE-5736.D2649.3.patch


 We have fixed a similar bug in
 https://issues.apache.org/jira/browse/HBASE-5507
 It uses ByteBuffer.array() to read the ByteBuffer.
 This ignores the offset and returns the whole underlying byte array.
 The bug can be triggered by using framed-transport Thrift servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5717) Scanner metrics are only reported if you get to the end of a scanner

2012-04-11 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5717:
--

Fix Version/s: 0.96.0
   0.94.0
 Hadoop Flags: Reviewed

@Lars:
Do you want to integrate the patch ?

 Scanner metrics are only reported if you get to the end of a scanner
 

 Key: HBASE-5717
 URL: https://issues.apache.org/jira/browse/HBASE-5717
 Project: HBase
  Issue Type: Bug
  Components: client, metrics
Reporter: Ian Varley
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: ClientScanner_HBASE_5717-v2.patch, 
 ClientScanner_HBASE_5717-v3.patch, ClientScanner_HBASE_5717.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 When you turn on Scanner Metrics, the metrics are currently only made 
 available if you run over all records available in the scanner. If you stop 
 iterating before the end, the values are never flushed into the metrics 
 object (in the Scan attribute).
 Will supply a patch with fix and test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5748) Enable lib directory in jar file for coprocessor

2012-04-09 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5748:
--

Fix Version/s: 0.96.0
   0.94.0

 Enable lib directory in jar file for coprocessor
 

 Key: HBASE-5748
 URL: https://issues.apache.org/jira/browse/HBASE-5748
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors
Affects Versions: 0.92.1, 0.94.0
Reporter: Takuya Ueshin
Assignee: Takuya Ueshin
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5748.patch


 A Hadoop MapReduce job can use external libraries placed in the 'lib' directory 
 of the job jar file.
 It would be useful if coprocessor jar files could use external libraries in the 
 same way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5727) secure hbase build broke because of 'HBASE-5451 Switch RPC call envelope/headers to PBs'

2012-04-09 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5727:
--

Comment: was deleted

(was: Compilation failed because of an unresolved conflict in 
security/src/main/java/org/apache/hadoop/hbase/ipc/SecureClient.java:
{code}
<<<<<<< .mine
  final DataOutputBuffer d = new DataOutputBuffer();
{code})

 secure hbase build broke because of 'HBASE-5451 Switch RPC call 
 envelope/headers to PBs'
 

 Key: HBASE-5727
 URL: https://issues.apache.org/jira/browse/HBASE-5727
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Devaraj Das
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5727.1.patch, 5727.2.patch, 5727.patch


 If you build with the security profile -- i.e. add '-P security' on the 
 command line -- you'll see that the secure build is broke since we messed in 
 rpc.
 Assigning Deveraj to take a look.   If you can't work on this now DD, just 
 give it back to me and I'll have a go at it.  Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5727) secure hbase build broke because of 'HBASE-5451 Switch RPC call envelope/headers to PBs'

2012-04-09 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5727:
--

Attachment: AccessController.diff

The attached patch allows TestRowProcessorEndpoint to pass.

 secure hbase build broke because of 'HBASE-5451 Switch RPC call 
 envelope/headers to PBs'
 

 Key: HBASE-5727
 URL: https://issues.apache.org/jira/browse/HBASE-5727
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Devaraj Das
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5727.1.patch, 5727.2.patch, 5727.patch, 
 AccessController.diff


 If you build with the security profile -- i.e. add '-P security' on the 
 command line -- you'll see that the secure build is broke since we messed in 
 rpc.
 Assigning Deveraj to take a look.   If you can't work on this now DD, just 
 give it back to me and I'll have a go at it.  Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-09 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5677:
--

Fix Version/s: 0.96.0
   0.94.0
   0.92.2
   0.90.7

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5677-90-v1.patch, 
 surefire-report_no_patched_v1.html, surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover), the open of the region will be handled twice,
 because the unassigned node in zookeeper will be handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never does balance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-04-07 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-testcase.patch, 5689-v4.txt, HBASE-5689.patch, 
 HBASE-5689.patch, HBASE-5689v2.patch, HBASE-5689v3.patch, HBASE-5689v3.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyze the above scenario from the code:
 1. the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2. when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3. when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4. however, RecoveredEdits file f2 will be skipped when initializing the region in
 HRegion#replayRecoveredEditsIfAny:
 {code}
 for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
     LOG.warn("Null or non-existent edits file: " + edits);
     continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
     Path higher = files.higher(edits);
     long maxSeqId = Long.MAX_VALUE;
     if (higher != null) {
       // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
       String fileName = higher.getName();
       maxSeqId = Math.abs(Long.parseLong(fileName));
     }
     if (maxSeqId <= minSeqId) {
       String msg = "Maximum possible sequenceid for this log is " + maxSeqId
           + ", skipped the whole file, path=" + edits;
       LOG.debug(msg);
       continue;
     } else {
       checkSafeToSkip = false;
     }
   }
 {code}
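 To make the skip concrete with made-up sequence ids for the scenario above: say 
 f2 is named 5 (it holds r1-v1 at seqid 5 and r3-v3 at seqid 15) while f1 is 
 named 10 (it holds r2-v2 at seqid 10). f2 sorts first, files.higher(f2) is f1, 
 so maxSeqId is taken to be 10. If the region's stores were already flushed past 
 seqid 10 by the earlier region moves, the maxSeqId <= minSeqId check skips the 
 whole of f2, and r3-v3 (seqid 15) is never replayed even though it was never 
 flushed.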
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5739) Upgrade guava to 11.0.2

2012-04-06 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5739:
--

Status: Patch Available  (was: Open)

 Upgrade guava to 11.0.2
 ---

 Key: HBASE-5739
 URL: https://issues.apache.org/jira/browse/HBASE-5739
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.96.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.96.0

 Attachments: hbase-5739.txt


 Hadoop has upgraded to this new version of Guava. We should, too, so we don't 
 have compatibility issues running on Hadoop 2.0+

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5213) hbase master stop does not bring down backup masters

2012-04-06 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5213:
--

Attachment: 5213.jstack

jstack for the hanging test.

 hbase master stop does not bring down backup masters
 --

 Key: HBASE-5213
 URL: https://issues.apache.org/jira/browse/HBASE-5213
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5, 0.92.0, 0.94.0, 0.96.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5213.jstack, HBASE-5213-v0-trunk.patch, 
 HBASE-5213-v1-trunk.patch, HBASE-5213-v2-90.patch, HBASE-5213-v2-92.patch, 
 HBASE-5213-v2-trunk.patch


 Typing hbase master stop produces the following message:
 stop   Start cluster shutdown; Master signals RegionServer shutdown
 It seems like backup masters should be considered part of the cluster, but 
 they are not brought down by hbase master stop.
 stop-hbase.sh does correctly bring down the backup masters.
 The same behavior is observed when a client app makes use of the client API 
 HBaseAdmin.shutdown() 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#shutdown()
  -- this isn't too surprising since I think hbase master stop just calls 
 this API.
 It seems like HBASE-1448 addressed this; perhaps there was a regression?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5635) If getTaskList() returns null splitlogWorker is down. It wont serve any requests.

2012-04-06 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5635:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 If getTaskList() returns null splitlogWorker is down. It wont serve any 
 requests. 
 --

 Key: HBASE-5635
 URL: https://issues.apache.org/jira/browse/HBASE-5635
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.92.1
Reporter: Kristam Subba Swathi
 Attachments: HBASE-5635.1.patch, HBASE-5635.2.patch, HBASE-5635.patch


 During the hlog split operation, if all the zookeepers are down, the 
 paths will be returned as null and the splitworker thread will exit.
 This regionserver will then not be able to acquire any other tasks, since the 
 splitworker thread has exited.
 Please find the attached code for more details.
 {code}
 private List<String> getTaskList() {
   for (int i = 0; i < zkretries; i++) {
     try {
       return (ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
           this.watcher.splitLogZNode));
     } catch (KeeperException e) {
       LOG.warn("Could not get children of znode " +
           this.watcher.splitLogZNode, e);
       try {
         Thread.sleep(1000);
       } catch (InterruptedException e1) {
         LOG.warn("Interrupted while trying to get task list ...", e1);
         Thread.currentThread().interrupt();
         return null;
       }
     }
   }
 {code}
 in the org.apache.hadoop.hbase.regionserver.SplitLogWorker 
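 A rough sketch of the direction a fix could take (illustrative only, not the 
 attached patch; exitWorker stands for whatever shutdown flag the worker 
 checks): keep retrying instead of letting the worker thread die when the task 
 list cannot be fetched.
 {code}
 List<String> paths = null;
 while (paths == null && !exitWorker) {
   paths = getTaskList();
   if (paths == null) {
     LOG.warn("Could not get task list from ZK, retrying...");
     try {
       Thread.sleep(1000);
     } catch (InterruptedException e) {
       Thread.currentThread().interrupt();
       return;                     // shutting down
     }
   }
 }
 {code}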
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5720) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums

2012-04-06 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5720:
--

Attachment: 5720-trunk.txt

I was experimenting along the same direction as Lars outlined.
Patch is for reference only.

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums
 --

 Key: HBASE-5720
 URL: https://issues.apache.org/jira/browse/HBASE-5720
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.94.0
Reporter: Matt Corgan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5720-trunk.txt, 5720v4.txt, 5720v4.txt, 5720v4.txt, 
 HBASE-5720-v1.patch, HBASE-5720-v2.patch, HBASE-5720-v3.patch


 When reading a .92 HFile without checksums, encoding it, and storing in the 
 block cache, the HFileDataBlockEncoderImpl always allocates a dummy header 
 appropriate for checksums even though there are none.  This corrupts the 
 byte[].
 Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case 
 which I think is the desired behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5720) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums

2012-04-06 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5720:
--

Attachment: 5720-trunk-v2.txt

Patch for trunk v2 removes white spaces.

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums
 --

 Key: HBASE-5720
 URL: https://issues.apache.org/jira/browse/HBASE-5720
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.94.0
Reporter: Matt Corgan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5720-trunk-v2.txt, 5720-trunk.txt, 5720v4.txt, 
 5720v4.txt, 5720v4.txt, HBASE-5720-v1.patch, HBASE-5720-v2.patch, 
 HBASE-5720-v3.patch


 When reading a .92 HFile without checksums, encoding it, and storing in the 
 block cache, the HFileDataBlockEncoderImpl always allocates a dummy header 
 appropriate for checksums even though there are none.  This corrupts the 
 byte[].
 Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case 
 which I think is the desired behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly

2012-04-06 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5736:
--

Fix Version/s: 0.96.0

 ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
 -

 Key: HBASE-5736
 URL: https://issues.apache.org/jira/browse/HBASE-5736
 Project: HBase
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.96.0

 Attachments: HBASE-5736.D2649.1.patch, HBASE-5736.D2649.2.patch, 
 HBASE-5736.D2649.3.patch


 We have fixed a similar bug in
 https://issues.apache.org/jira/browse/HBASE-5507
 It uses ByteBuffer.array() to read the ByteBuffer.
 This ignores the offset and returns the whole underlying byte array.
 The bug can be triggered by using framed-transport Thrift servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5721) Update bundled hadoop to be 1.0.2 (it was just released)

2012-04-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5721:
--

Attachment: 5721.txt

 Update bundled hadoop to be 1.0.2 (it was just released)
 

 Key: HBASE-5721
 URL: https://issues.apache.org/jira/browse/HBASE-5721
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Attachments: 1.0.2.txt, 5721.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5724) Row cache of KeyValue should be cleared in readFields().

2012-04-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5724:
--

Comment: was deleted

(was: How about making the following comment in test clearer ?
{code}
+   * make sure a row cache is cleared after a new value is read.
{code}
The row cache is cleared and re-read for the new value.)

 Row cache of KeyValue should be cleared in readFields().
 

 Key: HBASE-5724
 URL: https://issues.apache.org/jira/browse/HBASE-5724
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1
Reporter: Teruyoshi Zenmyo
 Attachments: HBASE-5724.txt


 KeyValue does not clear its row cache when reading new values (readFields()).
 Therefore, if a KeyValue (kv) that caches its row bytes reads another 
 KeyValue instance, kv.getRow() returns a wrong value. 
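 The gist of the proposed behaviour, as a reconstructed sketch (not copied from 
 the attached patch; it assumes KeyValue's private rowCache field and the usual 
 Writable readFields layout):
 {code}
 public void readFields(final DataInput in) throws IOException {
   this.rowCache = null;            // drop the row cached from the previous value
   int length = in.readInt();
   this.bytes = new byte[length];
   in.readFully(this.bytes);
   this.offset = 0;
   this.length = length;
 }
 {code}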

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5724) Row cache of KeyValue should be cleared in readFields().

2012-04-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5724:
--

Fix Version/s: 0.96.0
   0.94.0
 Hadoop Flags: Reviewed

 Row cache of KeyValue should be cleared in readFields().
 

 Key: HBASE-5724
 URL: https://issues.apache.org/jira/browse/HBASE-5724
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1
Reporter: Teruyoshi Zenmyo
Assignee: Teruyoshi Zenmyo
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5724.txt


 KeyValue does not clear its row cache when reading new values (readFields()).
 Therefore, if a KeyValue (kv) that caches its row bytes reads another 
 KeyValue instance, kv.getRow() returns a wrong value. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly

2012-04-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5736:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
 -

 Key: HBASE-5736
 URL: https://issues.apache.org/jira/browse/HBASE-5736
 Project: HBase
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5736.D2649.1.patch, HBASE-5736.D2649.2.patch


 We have fixed a similar bug in
 https://issues.apache.org/jira/browse/HBASE-5507
 It uses ByteBuffer.array() to read the ByteBuffer.
 This ignores the offset and returns the whole underlying byte array.
 The bug can be triggered by using framed-transport Thrift servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-04-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

Attachment: 5689-v4.txt

Patch v4 removes Math.abs() call.

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-testcase.patch, 5689-v4.txt, HBASE-5689.patch, 
 HBASE-5689.patch, HBASE-5689v2.patch, HBASE-5689v3.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyze the above scenario from the code:
 1. the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2. when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3. when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4. however, RecoveredEdits file f2 will be skipped when initializing the region in
 HRegion#replayRecoveredEditsIfAny:
 {code}
 for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
     LOG.warn("Null or non-existent edits file: " + edits);
     continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
     Path higher = files.higher(edits);
     long maxSeqId = Long.MAX_VALUE;
     if (higher != null) {
       // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
       String fileName = higher.getName();
       maxSeqId = Math.abs(Long.parseLong(fileName));
     }
     if (maxSeqId <= minSeqId) {
       String msg = "Maximum possible sequenceid for this log is " + maxSeqId
           + ", skipped the whole file, path=" + edits;
       LOG.debug(msg);
       continue;
     } else {
       checkSafeToSkip = false;
     }
   }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3909) Add dynamic config

2012-04-04 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-3909:
--

Attachment: 3909.v1

I could create review request based on this patch.

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 0.96.0

 Attachments: 3909-v1.patch, 3909.v1


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but no harm in this having its own issue.  
 Ted started a conversation on this topic up on dev and Todd suggested we 
 look at how Hadoop did it over in HADOOP-7001.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5666) RegionServer doesn't retry to check if base node is available

2012-04-03 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5666:
--

Comment: was deleted

(was: The problem that I see with the patch v4 and the if (timeout == 0) 
special case is that exists() is different for ZooKeeper and 
RecoverableZooKeeper.

RecoverableZooKeeper has some internal retry logic for CONNECTIONLOSS, 
SESSIONEXPIRED, and OPERATIONTIMEOUT. To keep the code simple we can add this 
logic in ZKUtil.checkExists(); that way we can remove the special case and 
remove the code in RecoverableZooKeeper.)

 RegionServer doesn't retry to check if base node is available
 -

 Key: HBASE-5666
 URL: https://issues.apache.org/jira/browse/HBASE-5666
 Project: HBase
  Issue Type: Bug
  Components: regionserver, zookeeper
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: HBASE-5666-v1.patch, HBASE-5666-v2.patch, 
 HBASE-5666-v3.patch, HBASE-5666-v4.patch, hbase-1-regionserver.log, 
 hbase-2-regionserver.log, hbase-3-regionserver.log, hbase-master.log, 
 hbase-regionserver.log, hbase-zookeeper.log


 I've a script that starts hbase and a couple of region servers in distributed 
 mode (hbase.cluster.distributed = true)
 {code}
 $HBASE_HOME/bin/start-hbase.sh
 $HBASE_HOME/bin/local-regionservers.sh start 1 2 3
 {code}
 but the region servers are not able to start...
 It seems that during the RS start the znode is still not available, and 
 HRegionServer.initializeZooKeeper() checks just once whether the base node is 
 available.
 {code}
 2012-03-28 21:54:05,013 INFO 
 org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Check the value 
 configured in 'zookeeper.znode.parent'. There could be a mismatch with the 
 one configured in the master.
 2012-03-28 21:54:08,598 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 localhost,60202,133296824: Initialization of RS failed.  Hence aborting 
 RS.
 java.io.IOException: Received the shutdown message while waiting.
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:626)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:596)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:558)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:672)
   at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5606) SplitLogManger async delete node hangs log splitting when ZK connection is lost

2012-04-03 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5606:
--

Fix Version/s: 0.96.0
   0.94.0
 Hadoop Flags: Reviewed

 SplitLogManger async delete node hangs log splitting when ZK connection is 
 lost 
 

 Key: HBASE-5606
 URL: https://issues.apache.org/jira/browse/HBASE-5606
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.92.0
Reporter: Gopinathan A
Assignee: Prakash Khemani
Priority: Critical
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: 
 0001-HBASE-5606-SplitLogManger-async-delete-node-hangs-lo.patch, 
 0001-HBASE-5606-SplitLogManger-async-delete-node-hangs-lo.patch


 1. One RS died; the ServerShutdownHandler found it out and started the 
 distributed log splitting;
 2. All tasks failed due to ZK connection loss, so all the tasks were 
 deleted asynchronously;
 3. ServerShutdownHandler retried the log splitting;
 4. The asynchronous deletion from step 2 finally happened for the new task;
 5. This left the SplitLogManager in a hanging state (see the sketch after 
 the log excerpts below).
 This leads to the .META. region not being assigned for a long time.
 {noformat}
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(55413,79):2012-03-14 
 19:28:47,932 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: put up 
 splitlog task at znode 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(89303,79):2012-03-14 
 19:34:32,387 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: put up 
 splitlog task at znode 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 {noformat}
 {noformat}
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(80417,99):2012-03-14 
 19:34:31,196 DEBUG 
 org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(89456,99):2012-03-14 
 19:34:32,497 DEBUG 
 org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 {noformat}
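
One way to reason about the race described above is that the async delete has 
no guard against the task znode being re-created in the meantime. A minimal 
sketch of a version-checked delete, using the plain ZooKeeper client with 
illustrative names (this is not the SplitLogManager code):
{code}
import org.apache.zookeeper.AsyncCallback;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

/** Sketch: delete a split-log task only at the version last observed, so a late
 *  callback cannot remove a task that was re-created after the delete was issued. */
public final class VersionedTaskDelete {
  public static void deleteTaskAsync(ZooKeeper zk, String taskZNode, Stat lastSeenStat) {
    final int expectedVersion = lastSeenStat.getVersion();
    zk.delete(taskZNode, expectedVersion, new AsyncCallback.VoidCallback() {
      @Override
      public void processResult(int rc, String path, Object ctx) {
        if (rc == KeeperException.Code.BADVERSION.intValue()) {
          // The znode changed since we looked at it (for example a retried
          // split re-created the task): leave the new task alone.
          return;
        }
        // rc == OK means the stale task znode was removed as intended.
      }
    }, null);
  }
}
{code}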

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5606) SplitLogManger async delete node hangs log splitting when ZK connection is lost

2012-04-03 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5606:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 SplitLogManger async delete node hangs log splitting when ZK connection is 
 lost 
 

 Key: HBASE-5606
 URL: https://issues.apache.org/jira/browse/HBASE-5606
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.92.0
Reporter: Gopinathan A
Assignee: Prakash Khemani
Priority: Critical
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: 
 0001-HBASE-5606-SplitLogManger-async-delete-node-hangs-lo.patch, 
 0001-HBASE-5606-SplitLogManger-async-delete-node-hangs-lo.patch


 1. One RS died; the ServerShutdownHandler found it out and started the 
 distributed log splitting;
 2. All tasks failed due to ZK connection loss, so all the tasks were 
 deleted asynchronously;
 3. ServerShutdownHandler retried the log splitting;
 4. The asynchronous deletion from step 2 finally happened for the new task;
 5. This left the SplitLogManager in a hanging state.
 This leads to the .META. region not being assigned for a long time.
 {noformat}
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(55413,79):2012-03-14 
 19:28:47,932 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: put up 
 splitlog task at znode 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(89303,79):2012-03-14 
 19:34:32,387 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: put up 
 splitlog task at znode 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 {noformat}
 {noformat}
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(80417,99):2012-03-14 
 19:34:31,196 DEBUG 
 org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 hbase-root-master-HOST-192-168-47-204.log.2012-03-14(89456,99):2012-03-14 
 19:34:32,497 DEBUG 
 org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted 
 /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5636) TestTableMapReduce doesn't work properly.

2012-04-02 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5636:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Integrated to trunk and 0.94.

Thanks for the patch, Takuya.

 TestTableMapReduce doesn't work properly.
 -

 Key: HBASE-5636
 URL: https://issues.apache.org/jira/browse/HBASE-5636
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.92.1, 0.94.0
Reporter: Takuya Ueshin
Assignee: Takuya Ueshin
 Attachments: HBASE-5636-v2.patch, HBASE-5636.patch


 No map function is called because no test data is put before the test 
 starts.
 The following three tests are in the same situation:
 - org.apache.hadoop.hbase.mapred.TestTableMapReduce
 - org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
 - org.apache.hadoop.hbase.mapreduce.TestMulitthreadedTableMapper
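
For context, a minimal sketch of the kind of data seeding these tests are 
missing before the job runs; the table, family, and values below are invented 
for the example, not taken from the test code:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

/** Sketch: seed a few rows so the mapper actually receives input records. */
public class SeedTestRows {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "test_table");   // hypothetical table name
    try {
      for (int i = 0; i < 10; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.add(Bytes.toBytes("contents"), Bytes.toBytes("col"), Bytes.toBytes("value-" + i));
        table.put(put);
      }
    } finally {
      table.close();
    }
  }
}
{code}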

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5636) TestTableMapReduce doesn't work properly.

2012-04-02 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5636:
--

Fix Version/s: 0.96.0
   0.94.0

 TestTableMapReduce doesn't work properly.
 -

 Key: HBASE-5636
 URL: https://issues.apache.org/jira/browse/HBASE-5636
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.92.1, 0.94.0
Reporter: Takuya Ueshin
Assignee: Takuya Ueshin
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5636-v2.patch, HBASE-5636.patch


 No map function is called because no test data is put before the test 
 starts.
 The following three tests are in the same situation:
 - org.apache.hadoop.hbase.mapred.TestTableMapReduce
 - org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
 - org.apache.hadoop.hbase.mapreduce.TestMulitthreadedTableMapper

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-04-02 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

Attachment: (was: HBASE-5689.patch)

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-simplified.txt, 5689-testcase.patch, 
 HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1.the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2.when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3.when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4.however, RecoveredEdits file f2 will be skipped when initializing the region in 
 HRegion#replayRecoveredEditsIfAny
 {code}
  for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
 LOG.warn("Null or non-existent edits file: " + edits);
 continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
 Path higher = files.higher(edits);
 long maxSeqId = Long.MAX_VALUE;
 if (higher != null) {
   // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
   String fileName = higher.getName();
   maxSeqId = Math.abs(Long.parseLong(fileName));
 }
 if (maxSeqId <= minSeqId) {
   String msg = "Maximum possible sequenceid for this log is " + maxSeqId
   + ", skipped the whole file, path=" + edits;
   LOG.debug(msg);
   continue;
 } else {
   checkSafeToSkip = false;
 }
   }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5625) Avoid byte buffer allocations when reading a value from a Result object

2012-04-02 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5625:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

 Avoid byte buffer allocations when reading a value from a Result object
 ---

 Key: HBASE-5625
 URL: https://issues.apache.org/jira/browse/HBASE-5625
 Project: HBase
  Issue Type: Improvement
  Components: client
Affects Versions: 0.92.1
Reporter: Tudor Scurtu
Assignee: Tudor Scurtu
  Labels: patch
 Fix For: 0.96.0

 Attachments: 5625.txt, 5625v2.txt, 5625v3.txt, 5625v4.txt, 5625v5.txt


 When calling Result.getValue(), an extra dummy KeyValue and its associated 
 underlying byte array are allocated, as well as a persistent buffer that will 
 contain the returned value.
 These can be avoided by reusing a static array for the dummy object and by 
 passing a ByteBuffer object as a value destination buffer to the read method.
 The current functionality is maintained, and we have added a separate method 
 call stack that employs the described changes. I will provide more details 
 with the patch.
 Running tests with a profiler, the reduction in read time seems to be up 
 to 40%.
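
As an illustration of the caller-supplied-buffer idea described above (the 
method below is hypothetical and not the API added by the patch; it only shows 
how a value can be copied out of the Result's backing KeyValue without 
allocating a new byte[] per read):
{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

/** Sketch: copy a value into a caller-owned ByteBuffer with no per-read byte[]. */
public final class BufferedValueRead {
  /** Returns true and fills dst if the column is present; dst must be large enough. */
  public static boolean copyValueTo(Result result, byte[] family, byte[] qualifier, ByteBuffer dst) {
    for (KeyValue kv : result.raw()) {
      byte[] buf = kv.getBuffer();
      boolean sameFamily = Bytes.compareTo(buf, kv.getFamilyOffset(), kv.getFamilyLength(),
          family, 0, family.length) == 0;
      boolean sameQualifier = Bytes.compareTo(buf, kv.getQualifierOffset(), kv.getQualifierLength(),
          qualifier, 0, qualifier.length) == 0;
      if (sameFamily && sameQualifier) {
        dst.clear();
        dst.put(buf, kv.getValueOffset(), kv.getValueLength());  // no intermediate byte[]
        dst.flip();
        return true;
      }
    }
    return false;
  }
}
{code}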

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5663) MultithreadedTableMapper doesn't work.

2012-04-02 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5663:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 MultithreadedTableMapper doesn't work.
 --

 Key: HBASE-5663
 URL: https://issues.apache.org/jira/browse/HBASE-5663
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.0
Reporter: Takuya Ueshin
Assignee: Takuya Ueshin
 Fix For: 0.94.0, 0.96.0

 Attachments: 5663+5636.txt, HBASE-5663.patch


 MapReduce job using MultithreadedTableMapper goes down throwing the following 
 Exception:
 {noformat}
 java.io.IOException: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:260)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper.run(MultithreadedTableMapper.java:133)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at java.lang.Class.getConstructor0(Class.java:2706)
   at java.lang.Class.getConstructor(Class.java:1657)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:241)
   ... 8 more
 {noformat}
 This occurred when the tasks are creating MapRunner threads.
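
The failure above boils down to a reflective constructor lookup whose expected 
parameter list does not match what the running Hadoop version declares. A 
self-contained toy example of that failure shape (stand-in classes, not the 
mapreduce API):
{code}
import java.lang.reflect.Constructor;

/** Toy reproduction of the failure shape: asking getConstructor for an exact
 *  signature that the class does not declare throws NoSuchMethodException. */
public final class ReflectiveCtorLookup {
  static class Context {                       // stand-in for Mapper.Context
    public Context(String conf) { }
  }

  public static void main(String[] args) {
    try {
      // Only (String) exists, so asking for (String, Integer) fails, just as the
      // MapRunner constructor fails when Mapper$Context's signature differs.
      Constructor<Context> c = Context.class.getConstructor(String.class, Integer.class);
      System.out.println(c);
    } catch (NoSuchMethodException e) {
      System.out.println("constructor signature not found: " + e);
    }
  }
}
{code}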

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-04-02 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

Attachment: (was: 5689-simplified.txt)

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5689-testcase.patch, HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1.the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2.when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3.when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4.however, RecoveredEdits file f2 will be skipped when initializing the region in 
 HRegion#replayRecoveredEditsIfAny
 {code}
  for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
 LOG.warn("Null or non-existent edits file: " + edits);
 continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
 Path higher = files.higher(edits);
 long maxSeqId = Long.MAX_VALUE;
 if (higher != null) {
   // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
   String fileName = higher.getName();
   maxSeqId = Math.abs(Long.parseLong(fileName));
 }
 if (maxSeqId <= minSeqId) {
   String msg = "Maximum possible sequenceid for this log is " + maxSeqId
   + ", skipped the whole file, path=" + edits;
   LOG.debug(msg);
   continue;
 } else {
   checkSafeToSkip = false;
 }
   }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5690) compression does not work in Store.java of 0.94

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5690:
--

Hadoop Flags: Reviewed
 Summary: compression does not work in Store.java of 0.94  (was: 
compression unavailable)

 compression does not work in Store.java of 0.94
 ---

 Key: HBASE-5690
 URL: https://issues.apache.org/jira/browse/HBASE-5690
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
 Environment: all
Reporter: honghua zhu
Priority: Critical
 Fix For: 0.94.1

 Attachments: Store.patch


 HBASE-5442: the store.createWriterInTmp method is missing compression

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5663) MultithreadedTableMapper doesn't work.

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5663:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 MultithreadedTableMapper doesn't work.
 --

 Key: HBASE-5663
 URL: https://issues.apache.org/jira/browse/HBASE-5663
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.0
Reporter: Takuya Ueshin
 Attachments: HBASE-5663.patch


 MapReduce job using MultithreadedTableMapper goes down throwing the following 
 Exception:
 {noformat}
 java.io.IOException: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:260)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper.run(MultithreadedTableMapper.java:133)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at java.lang.Class.getConstructor0(Class.java:2706)
   at java.lang.Class.getConstructor(Class.java:1657)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:241)
   ... 8 more
 {noformat}
 This occurred when the tasks are creating MapRunner threads.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

Summary: Skipping RecoveredEdits may cause data loss  (was: Skip 
RecoveredEdits may cause data loss)

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: 5689-testcase.patch, HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1.the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2.when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3.when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4.however, RecoveredEdits file f2 will be skipped when initializing the region in 
 HRegion#replayRecoveredEditsIfAny
 {code}
  for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
 LOG.warn("Null or non-existent edits file: " + edits);
 continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
 Path higher = files.higher(edits);
 long maxSeqId = Long.MAX_VALUE;
 if (higher != null) {
   // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
   String fileName = higher.getName();
   maxSeqId = Math.abs(Long.parseLong(fileName));
 }
 if (maxSeqId <= minSeqId) {
   String msg = "Maximum possible sequenceid for this log is " + maxSeqId
   + ", skipped the whole file, path=" + edits;
   LOG.debug(msg);
   continue;
 } else {
   checkSafeToSkip = false;
 }
   }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skip RecoveredEdits may cause data loss

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

Affects Version/s: 0.94.0

 Skip RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: 5689-testcase.patch, HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1.the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2.when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3.when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4.however, RecoveredEdits file f2 will be skipped when initializing the region in 
 HRegion#replayRecoveredEditsIfAny
 {code}
  for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
 LOG.warn("Null or non-existent edits file: " + edits);
 continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
 Path higher = files.higher(edits);
 long maxSeqId = Long.MAX_VALUE;
 if (higher != null) {
   // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
   String fileName = higher.getName();
   maxSeqId = Math.abs(Long.parseLong(fileName));
 }
 if (maxSeqId <= minSeqId) {
   String msg = "Maximum possible sequenceid for this log is " + maxSeqId
   + ", skipped the whole file, path=" + edits;
   LOG.debug(msg);
   continue;
 } else {
   checkSafeToSkip = false;
 }
   }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5663) MultithreadedTableMapper doesn't work.

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5663:
--

Attachment: 5663+5636.txt

Combined patch for HBASE-5663 and HBASE-5636

 MultithreadedTableMapper doesn't work.
 --

 Key: HBASE-5663
 URL: https://issues.apache.org/jira/browse/HBASE-5663
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.0
Reporter: Takuya Ueshin
 Attachments: 5663+5636.txt, HBASE-5663.patch


 MapReduce job using MultithreadedTableMapper goes down throwing the following 
 Exception:
 {noformat}
 java.io.IOException: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:260)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper.run(MultithreadedTableMapper.java:133)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at java.lang.Class.getConstructor0(Class.java:2706)
   at java.lang.Class.getConstructor(Class.java:1657)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:241)
   ... 8 more
 {noformat}
 This occurred when the tasks are creating MapRunner threads.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5663) MultithreadedTableMapper doesn't work.

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5663:
--

Fix Version/s: 0.96.0
   0.94.0

 MultithreadedTableMapper doesn't work.
 --

 Key: HBASE-5663
 URL: https://issues.apache.org/jira/browse/HBASE-5663
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.0
Reporter: Takuya Ueshin
 Fix For: 0.94.0, 0.96.0

 Attachments: 5663+5636.txt, HBASE-5663.patch


 MapReduce job using MultithreadedTableMapper goes down throwing the following 
 Exception:
 {noformat}
 java.io.IOException: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:260)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper.run(MultithreadedTableMapper.java:133)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: java.lang.NoSuchMethodException: 
 org.apache.hadoop.mapreduce.Mapper$Context.<init>(org.apache.hadoop.conf.Configuration,
  org.apache.hadoop.mapred.TaskAttemptID, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordReader,
  
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapRecordWriter,
  org.apache.hadoop.hbase.mapreduce.TableOutputCommitter, 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$SubMapStatusReporter,
  org.apache.hadoop.hbase.mapreduce.TableSplit)
   at java.lang.Class.getConstructor0(Class.java:2706)
   at java.lang.Class.getConstructor(Class.java:1657)
   at 
 org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper$MapRunner.<init>(MultithreadedTableMapper.java:241)
   ... 8 more
 {noformat}
 This occurred when the tasks are creating MapRunner threads.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5689) Skipping RecoveredEdits may cause data loss

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5689:
--

Attachment: 5689-simplified.txt

Removing the check should be enough.

With the attached patch, TestHRegion#testDataCorrectnessReplayingRecoveredEdits 
passes.
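
To make the proposal concrete, a simplified sketch of the resulting behaviour 
(placeholder types, not the committed patch): every non-empty recovered-edits 
file is replayed, and stale entries are filtered per edit rather than per file.
{code}
import java.util.List;

/** Sketch: never skip a whole recovered-edits file based on a neighbouring
 *  file name; filter individual entries instead. replayOneFile stands in for
 *  the real per-file replay logic. */
public final class ReplayAllEdits {
  interface EditFileReplayer {
    /** Replays one file, ignoring entries with sequence id at or below minSeqId. */
    long replayOneFile(String path, long minSeqId);
  }

  public static long replayAll(List<String> editFiles, long minSeqId, EditFileReplayer replayer) {
    long maxSeqIdSeen = minSeqId;
    for (String path : editFiles) {
      // Replaying an "old looking" file is safe, merely slower, because stale
      // entries are dropped inside replayOneFile.
      maxSeqIdSeen = Math.max(maxSeqIdSeen, replayer.replayOneFile(path, minSeqId));
    }
    return maxSeqIdSeen;
  }
}
{code}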

 Skipping RecoveredEdits may cause data loss
 ---

 Key: HBASE-5689
 URL: https://issues.apache.org/jira/browse/HBASE-5689
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Attachments: 5689-simplified.txt, 5689-testcase.patch, 
 HBASE-5689.patch


 Let's see the following scenario:
 1.Region is on the server A
 2.put KV(r1-v1) to the region
 3.move region from server A to server B
 4.put KV(r2-v2) to the region
 5.move region from server B to server A
 6.put KV(r3-v3) to the region
 7.kill -9 server B and start it
 8.kill -9 server A and start it 
 9.scan the region, we could only get two KV(r1-v1,r2-v2), the third 
 KV(r3-v3) is lost.
 Let's analyse the above scenario from the code:
 1.the edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same 
 hlog file on server A.
 2.when we split server B's hlog file in the process of ServerShutdownHandler, 
 we create one RecoveredEdits file f1 for the region.
 3.when we split server A's hlog file in the process of ServerShutdownHandler, 
 we create another RecoveredEdits file f2 for the region.
 4.however, RecoveredEdits file f2 will be skipped when initializing the region in 
 HRegion#replayRecoveredEditsIfAny
 {code}
  for (Path edits: files) {
   if (edits == null || !this.fs.exists(edits)) {
 LOG.warn("Null or non-existent edits file: " + edits);
 continue;
   }
   if (isZeroLengthThenDelete(this.fs, edits)) continue;
   if (checkSafeToSkip) {
 Path higher = files.higher(edits);
 long maxSeqId = Long.MAX_VALUE;
 if (higher != null) {
   // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: -?[0-9]+
   String fileName = higher.getName();
   maxSeqId = Math.abs(Long.parseLong(fileName));
 }
 if (maxSeqId <= minSeqId) {
   String msg = "Maximum possible sequenceid for this log is " + maxSeqId
   + ", skipped the whole file, path=" + edits;
   LOG.debug(msg);
   continue;
 } else {
   checkSafeToSkip = false;
 }
   }
 {code}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4348) Add metrics for regions in transition

2012-03-31 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-4348:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

 Add metrics for regions in transition
 -

 Key: HBASE-4348
 URL: https://issues.apache.org/jira/browse/HBASE-4348
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Himanshu Vashishtha
Priority: Minor
  Labels: noob
 Fix For: 0.96.0

 Attachments: 4348-metrics-v3.patch, 4348-v1.patch, 4348-v2.patch, 
 RITs.png, RegionInTransitions2.png, metrics-v2.patch


 The following metrics would be useful for monitoring the master:
 - the number of regions in transition
 - the number of regions in transition that have been in transition for more 
 than a minute
 - how many seconds has the oldest region-in-transition been in transition

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5655) Cap space usage of default log4j rolling policy

2012-03-30 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5655:
--

Release Note: 
This changes the default log rolling scheme from DRFA to RFA. The former rolls 
over the log on a date change trigger, while the latter rolls over when the log 
file size reaches a predefined limit. The issue with DRFA is that it doesn't 
have the ability to cap the space usage, so users who are not using host-level 
log rotation might fill up their log partitions. This results in a cluster 
crash. RFA puts a size limit on the log size and therefore is a safer option in 
such scenarios. The default file size is 256MB with 20 files (total of 5GB 
logs). In case one needs to revert to the original DRFA (for some legacy tools 
etc), one can set environment variable HBASE_ROOT_LOGGER to 
ROOT_LOGGER_LEVEL,DRFA. Please refer to the hbase-env.sh for more details.


  was:
This changes the default log rolling scheme from DRFA to RFA. The former rolls 
over the log on a date change trigger, while the latter rolls over when the log 
file size reaches a predefined limit. The issue with DRFA is that it doesn't 
have the ability to cap the space usage, so users who are not using host-level 
log rotation might fill up their log partitions. This results in a cluster 
crash. RFA puts a size limit on the log size and therefore is a safer option in 
such scenarios. The default file size is 256MB with 20 files (total of 5GB 
logs). In case one needs to revert to the original DRFA (for some legacy tools 
etc), one can set a env variable HBASE_ROOT_LOGGER to ROOT_LOGGER_LEVEL,DRFA. 
Please refer to the hbase-env.sh for more details.
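
For reference, a sketch of what the RFA side of the default log4j configuration 
looks like with the limits mentioned in the release note (property names follow 
the stock log4j RollingFileAppender; the exact keys and defaults in hbase's 
shipped log4j.properties may differ slightly):
{code}
# Rolling File Appender capped at 20 files of 256MB each (~5GB total)
hbase.log.maxfilesize=256MB
hbase.log.maxbackupindex=20

log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
{code}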



 Cap space usage of default log4j rolling policy
 ---

 Key: HBASE-5655
 URL: https://issues.apache.org/jira/browse/HBASE-5655
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.1
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: 5655-v1.patch, HBase-5655-v2.patch, HBase-5655-v3.patch


 The current default log4j policy is to use Daily Rolling File Appender 
 (DRFA). At times, it's good to have a cap on the maximum size of the logs in 
 order to limit their disk usage. Here is a proposal to set a new file appender 
 (RFA) as the default appender. It can be configured via env so that existing 
 tools can use the current behavior of using DRFA instead. 
 This is in parallel with jira HADOOP-8149.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5667) RegexStringComparator supports java.util.regex.Pattern flags

2012-03-30 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5667:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

 RegexStringComparator supports java.util.regex.Pattern flags
 

 Key: HBASE-5667
 URL: https://issues.apache.org/jira/browse/HBASE-5667
 Project: HBase
  Issue Type: Improvement
  Components: filters
Reporter: David Arthur
Assignee: David Arthur
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-5667.diff, HBASE-5667.diff, HBASE-5667.diff


 * Add constructor that takes in a Pattern
 * Add Pattern's flags to Writable fields, and actually use them when 
 recomposing the Filter (a usage sketch follows below)
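
A hedged usage sketch for orientation: today a case-insensitive match is 
typically expressed with an inline (?i) flag in the expression string; the 
change proposed here would let callers hand over a precompiled 
java.util.regex.Pattern (and therefore its flag bits) directly. The exact new 
constructor is whatever the attached diff defines.
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;

/** Sketch: case-insensitive row matching with the existing String constructor. */
public final class RegexFilterExample {
  public static Scan caseInsensitiveRowScan() {
    RegexStringComparator comparator = new RegexStringComparator("(?i)^user-.*");
    RowFilter filter = new RowFilter(CompareFilter.CompareOp.EQUAL, comparator);
    Scan scan = new Scan();
    scan.setFilter(filter);
    return scan;
  }
}
{code}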

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5667) RegexStringComparator supports java.util.regex.Pattern flags

2012-03-30 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5667:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520612/HBASE-5667.diff
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRestartCluster
  org.apache.hadoop.hbase.client.TestMetaMigrationRemovingHTD
  
org.apache.hadoop.hbase.io.encoding.TestUpgradeFromHFileV1ToEncoding
  org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildHole
  org.apache.hadoop.hbase.catalog.TestCatalogTrackerOnCluster
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.TestMultiVersions
  
org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildOverlap
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1352//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1352//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1352//console

This message is automatically generated.)

 RegexStringComparator supports java.util.regex.Pattern flags
 

 Key: HBASE-5667
 URL: https://issues.apache.org/jira/browse/HBASE-5667
 Project: HBase
  Issue Type: Improvement
  Components: filters
Reporter: David Arthur
Assignee: David Arthur
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-5667.diff, HBASE-5667.diff, HBASE-5667.diff


 * Add constructor that takes in a Pattern
 * Add Pattern's flags to Writable fields, and actually use them when 
 recomposing the Filter

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5655) Cap space usage of default log4j rolling policy

2012-03-30 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5655:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Cap space usage of default log4j rolling policy
 ---

 Key: HBASE-5655
 URL: https://issues.apache.org/jira/browse/HBASE-5655
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.1
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: 5655-v1.patch, HBase-5655-v2.patch, HBase-5655-v3.patch


 The current default log4j policy is to use Daily Rolling File Appender 
 (DRFA). At times, it's good to have a cap on the maximum size of the logs in 
 order to limit their disk usage. Here is a proposal to set a new file appender 
 (RFA) as the default appender. It can be configured via env so that existing 
 tools can use the current behavior of using DRFA instead. 
 This is in parallel with jira HADOOP-8149.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5655) Cap space usage of default log4j rolling policy

2012-03-29 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5655:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

Will integrate tomorrow if there is no objection.

 Cap space usage of default log4j rolling policy
 ---

 Key: HBASE-5655
 URL: https://issues.apache.org/jira/browse/HBASE-5655
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.1
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: 5655-v1.patch, HBase-5655-v2.patch, HBase-5655-v3.patch


 The current default log4j policy is to use Daily Rolling File Appender 
 (DRFA). At times, it's good to have a cap on the maximum size of the logs in 
 order to limit their disk usage. Here is a proposal to set a new file appender 
 (RFA) as the default appender. It can be configured via env so that existing 
 tools can use the current behavior of using DRFA instead. 
 This is in parallel with jira HADOOP-8149.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5636) TestTableMapReduce doesn't work properly.

2012-03-28 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5636:
--

Status: Patch Available  (was: Open)

 TestTableMapReduce doesn't work properly.
 -

 Key: HBASE-5636
 URL: https://issues.apache.org/jira/browse/HBASE-5636
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.92.1, 0.94.0
Reporter: Takuya Ueshin
 Attachments: HBASE-5636.patch


 No map function is called because there are no test data put before test 
 starts.
 The following three tests are in the same situation:
 - org.apache.hadoop.hbase.mapred.TestTableMapReduce
 - org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
 - org.apache.hadoop.hbase.mapreduce.TestMulitthreadedTableMapper

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5544) Add metrics to HRegion.processRow()

2012-03-28 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5544:
--

Attachment: HBASE-5544.D2457.2.patch

Re-attaching patch v2

 Add metrics to HRegion.processRow()
 ---

 Key: HBASE-5544
 URL: https://issues.apache.org/jira/browse/HBASE-5544
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.96.0

 Attachments: HBASE-5544.D2457.1.patch, HBASE-5544.D2457.2.patch, 
 HBASE-5544.D2457.2.patch


 Add metrics of
 1. time for waiting for the lock
 2. processing time (scan time)
 3. time spent while holding the lock
 4. total call time
 5. number of failures / calls

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2012-03-28 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-3996:
--

Attachment: 3996-v6.txt

Patch v6 is the same as Eran's patch v5, formatted to be accepted by the review board.

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Eran Kutner
 Fix For: 0.96.0

 Attachments: 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 
 3996-v6.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2012-03-28 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-3996:
--

Attachment: 3996-v7.txt

Patch v7 introduces versioning for TableSplit, using the same tactic used for 
HLogKey.

Since most of the enum Version code is copied, we may want to factor the base enum 
out into its own class. Would org.apache.hadoop.hbase.util be a good namespace for 
the enum class?
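
For readers unfamiliar with the HLogKey tactic referred to above, a rough 
sketch of the versioning-enum pattern (names and codes here are illustrative, 
not the actual patch):
{code}
/** Sketch of the version-enum pattern: a signed code is written first so a
 *  reader can tell a legacy record from a versioned one and branch on it. */
public enum SplitVersion {
  UNVERSIONED(0),   // splits written before versioning existed
  INITIAL(-1);      // first explicitly versioned format

  private final byte code;

  SplitVersion(int code) {
    this.code = (byte) code;
  }

  public byte getCode() {
    return code;
  }

  public static SplitVersion fromCode(int code) {
    for (SplitVersion v : values()) {
      if (v.code == code) {
        return v;
      }
    }
    throw new IllegalArgumentException("Unknown version code: " + code);
  }
}
{code}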

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Eran Kutner
 Fix For: 0.96.0

 Attachments: 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 
 3996-v6.txt, 3996-v7.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5635) If getTaskList() returns null splitlogWorker is down. It wont serve any requests.

2012-03-27 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5635:
--

Description: 
During the hlog split operation, if all the zookeepers are down, then the paths 
will be returned as null and the splitworker thread will exit.
Now this regionserver will not be able to acquire any other tasks, since the 
splitworker thread has exited.
Please find the attached code for more details
{code}
private List<String> getTaskList() {
for (int i = 0; i < zkretries; i++) {
  try {
return (ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
this.watcher.splitLogZNode));
  } catch (KeeperException e) {
LOG.warn("Could not get children of znode " +
this.watcher.splitLogZNode, e);
try {
  Thread.sleep(1000);
} catch (InterruptedException e1) {
  LOG.warn("Interrupted while trying to get task list ...", e1);
  Thread.currentThread().interrupt();
  return null;
}
  }
}
{code}

in the org.apache.hadoop.hbase.regionserver.SplitLogWorker 


 



  was:
During the hlog split operation, if all the zookeepers are down, then the paths 
will be returned as null and the splitworker thread will exit.
Now this regionserver will not be able to acquire any other tasks, since the 
splitworker thread has exited.
Please find the attached code for more details
--
private List<String> getTaskList() {
for (int i = 0; i < zkretries; i++) {
  try {
return (ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
this.watcher.splitLogZNode));
  } catch (KeeperException e) {
LOG.warn("Could not get children of znode " +
this.watcher.splitLogZNode, e);
try {
  Thread.sleep(1000);
} catch (InterruptedException e1) {
  LOG.warn("Interrupted while trying to get task list ...", e1);
  Thread.currentThread().interrupt();
  return null;
}
  }
}

in the org.apache.hadoop.hbase.regionserver.SplitLogWorker 


 




 If getTaskList() returns null splitlogWorker is down. It wont serve any 
 requests. 
 --

 Key: HBASE-5635
 URL: https://issues.apache.org/jira/browse/HBASE-5635
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.92.1
Reporter: Kristam Subba Swathi
 Attachments: HBASE-5635.patch


 During the hlog split operation, if all the zookeepers are down, then the 
 paths will be returned as null and the splitworker thread will exit.
 Now this regionserver will not be able to acquire any other tasks, since the 
 splitworker thread has exited.
 Please find the attached code for more details
 {code}
 private List<String> getTaskList() {
 for (int i = 0; i < zkretries; i++) {
   try {
 return (ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
 this.watcher.splitLogZNode));
   } catch (KeeperException e) {
 LOG.warn("Could not get children of znode " +
 this.watcher.splitLogZNode, e);
 try {
   Thread.sleep(1000);
 } catch (InterruptedException e1) {
   LOG.warn("Interrupted while trying to get task list ...", e1);
   Thread.currentThread().interrupt();
   return null;
 }
   }
 }
 {code}
 in the org.apache.hadoop.hbase.regionserver.SplitLogWorker 
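
A minimal sketch of the resilience being asked for, with placeholder names 
(TaskSource, fetchTaskList) rather than the real SplitLogWorker API: the worker 
keeps retrying instead of letting its thread die when ZooKeeper is temporarily 
unreachable.
{code}
import java.util.List;

/** Sketch: a worker loop that survives a temporarily unreachable ZooKeeper. */
public final class ResilientWorkerLoop {
  interface TaskSource {
    /** Returns the current task list, or null when ZooKeeper is unreachable. */
    List<String> fetchTaskList();
  }

  public static void run(TaskSource source, long retrySleepMs) throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      List<String> tasks = source.fetchTaskList();
      if (tasks == null) {
        Thread.sleep(retrySleepMs);   // back off and retry later instead of exiting
        continue;
      }
      for (String task : tasks) {
        // grab and process each split-log task here
      }
    }
  }
}
{code}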
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-2600) Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid

2012-03-27 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-2600:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12520024/hbase-2600-root.dir.tgz
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1315//console

This message is automatically generated.)

 Change how we do meta tables; from tablename+STARTROW+randomid to instead, 
 tablename+ENDROW+randomid
 

 Key: HBASE-2600
 URL: https://issues.apache.org/jira/browse/HBASE-2600
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Alex Newman
 Attachments: 
 0001-Changed-regioninfo-format-to-use-endKey-instead-of-s.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v2.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v4.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v6.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v7.2.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8.1, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v9.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen.patch, 
 2600-trunk-01-17.txt, HBASE-2600+5217-Sun-Mar-25-2012-v3.patch, 
 HBASE-2600+5217-Sun-Mar-25-2012-v4.patch, hbase-2600-root.dir.tgz, jenkins.pdf


 This is an idea that Ryan and I have been kicking around on and off for a 
 while now.
 If regionnames were made of tablename+endrow instead of tablename+startrow, 
 then when searching the meta tables for the region that contains a 
 wanted row, we'd just have to open a scanner using the passed row, and the first 
 row found by the scan would be that of the region we need (if it is an offlined 
 parent, we'd have to scan to the next row).
 If we redid the meta tables in this format, we'd be using an access that is 
 natural to hbase, a scan as opposed to the perverse, expensive 
 getClosestRowBefore we currently have that has to walk backward in meta 
 finding a containing region.
 This issue is about changing the way we name regions.
 If we were using scans, prewarming client cache would be near costless (as 
 opposed to what we'll currently have to do which is first a 
 getClosestRowBefore and then a scan from the closestrowbefore forward).
 Converting to the new method, we'd have to run a migration on startup 
 changing the content in meta.
 Up to this, the randomid component of a region name has been the timestamp of 
 region creation.   HBASE-2531 "32-bit encoding of regionnames waaay 
 too susceptible to hash clashes" proposes changing the randomid so that it 
 contains the actual name of the directory in the filesystem that hosts the 
 region.  If we had this in place, I think it would help with the migration to 
 this new way of doing the meta because as is, the region name in fs is a hash 
 of regionname... changing the format of the regionname would mean we generate 
 a different hash... so we'd need hbase-2531 to be in place before we could do 
 this change.
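 
 To make the intended access pattern concrete, a hedged sketch of the lookup this 
 change would enable (buildMetaKey and metaTable are hypothetical stand-ins, not 
 part of any attached patch):
 {code}
 // With meta rows keyed by tablename+ENDROW, the first row a forward scan finds
 // at or after "tablename,wantedRow" belongs to the region containing wantedRow.
 byte[] startKey = buildMetaKey(tableName, wantedRow);   // hypothetical helper
 Scan scan = new Scan(startKey);
 scan.addFamily(HConstants.CATALOG_FAMILY);
 ResultScanner scanner = metaTable.getScanner(scan);
 try {
   Result first = scanner.next();   // one forward read, no getClosestRowBefore
   if (first != null) {
     HRegionInfo hri = Writables.getHRegionInfo(
         first.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER));
     // if hri is an offlined parent, advance with scanner.next() as noted above
   }
 } finally {
   scanner.close();
 }
 {code}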

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly

2012-03-27 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-2214:
--

Fix Version/s: 0.96.0
 Assignee: (was: Daniel Ploeg)

 Do HBASE-1996 -- setting size to return in scan rather than count of rows -- 
 properly
 -

 Key: HBASE-2214
 URL: https://issues.apache.org/jira/browse/HBASE-2214
 Project: HBase
  Issue Type: New Feature
Reporter: stack
  Labels: noob
 Fix For: 0.96.0

 Attachments: HBASE-2214_with_broken_TestShell.txt


 The notion that you set a size, rather than a row count, to specify how much a 
 scanner should return in each cycle was raised over in HBASE-1996.  It's a 
 good one, making HBase's behavior regular even though the data under it may vary.  
 HBASE-1996 was committed, but the patch was constrained by the fact that it 
 needed to not change the RPC interface.  This issue is about doing HBASE-1996 for 
 0.21 in a clean, unconstrained way.
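 
 A hedged illustration of the kind of client call this issue asks for; the 
 setMaxResultSize name below is hypothetical here and not implied by any 
 attached patch:
 {code}
 Scan scan = new Scan();
 scan.setCaching(100);                     // today: a fixed number of rows per fetch
 scan.setMaxResultSize(2L * 1024 * 1024);  // desired: cap each fetch by size in bytes
 {code}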

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5625) Avoid byte buffer allocations when reading a value from a Result object

2012-03-23 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5625:
--

Status: Patch Available  (was: Open)

 Avoid byte buffer allocations when reading a value from a Result object
 ---

 Key: HBASE-5625
 URL: https://issues.apache.org/jira/browse/HBASE-5625
 Project: HBase
  Issue Type: Improvement
  Components: client
Affects Versions: 0.92.1
Reporter: Tudor Scurtu
  Labels: patch
 Attachments: 5625.txt


 When calling Result.getValue(), an extra dummy KeyValue and its associated 
 underlying byte array are allocated, as well as a persistent buffer that will 
 contain the returned value.
 These can be avoided by reusing a static array for the dummy object and by 
 passing a ByteBuffer object as a value destination buffer to the read method.
 The current functionality is maintained, and we have added a separate method 
 call stack that employs the described changes. I will provide more details 
 with the patch.
 Running tests with a profiler, the reduction of read time appears to be up 
 to 40%.
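 
 A sketch of the intended usage pattern (the loadValue name and exact signature 
 are illustrative; see the attached patch for the actual API): reuse a single 
 ByteBuffer across reads instead of allocating a new byte[] per getValue() call.
 {code}
 ByteBuffer valueBuf = ByteBuffer.allocate(4096);        // reused for every row
 for (Result result : scanner) {
   valueBuf.clear();
   // Hypothetical allocation-free read: fills valueBuf in place instead of
   // returning a freshly allocated byte[].
   if (result.loadValue(FAMILY, QUALIFIER, valueBuf)) {
     process(valueBuf);                                   // hypothetical consumer
   }
 }
 {code}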

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5613) ThriftServer getTableRegions does not return serverName and port

2012-03-23 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5613:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12519564/HBASE-5613.D2403.4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1270//console

This message is automatically generated.)

 ThriftServer getTableRegions does not return serverName and port
 

 Key: HBASE-5613
 URL: https://issues.apache.org/jira/browse/HBASE-5613
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5613.0.94.txt, HBASE-5613.D2403.1.patch, 
 HBASE-5613.D2403.2.patch, HBASE-5613.D2403.3.patch, HBASE-5613.D2403.4.patch, 
 HBASE-5613.D2403.5.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5466) Opening a table also opens the metatable and never closes it.

2012-03-23 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5466:
--

Fix Version/s: 0.90.7

Integrated to 0.90.7 as well.

 Opening a table also opens the metatable and never closes it.
 -

 Key: HBASE-5466
 URL: https://issues.apache.org/jira/browse/HBASE-5466
 Project: HBase
  Issue Type: Bug
  Components: client
Affects Versions: 0.90.5, 0.92.0
Reporter: Ashley Taylor
Assignee: Ashley Taylor
 Fix For: 0.90.7, 0.92.1

 Attachments: MetaScanner_HBASE_5466(2).patch, 
 MetaScanner_HBASE_5466(3).patch, MetaScanner_HBASE_5466.patch


 Having upgraded to the CDH3U3 version of HBase, we found we had a ZooKeeper 
 connection leak. Tracking it down, we found that closing the connection will 
 only close the ZooKeeper connection once all callers that obtained the 
 connection have closed it; the HConnection class keeps incCount and decCount 
 for this purpose.
 When a table is opened it makes a call to the MetaScanner class, which opens a 
 connection to the meta table; this table never gets closed.
 This causes the count in the HConnection class to never return to zero, 
 meaning that the ZooKeeper connection will not close when we close all the 
 tables or call
 HConnectionManager.deleteConnection(config, true);
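 
 For reference, a minimal sketch of the symmetric usage the leak defeats: every 
 open is paired with a close so the HConnection reference count can reach zero 
 and the ZooKeeper connection is actually released ("mytable" is just a 
 placeholder):
 {code}
 Configuration conf = HBaseConfiguration.create();
 HTable table = new HTable(conf, "mytable");      // increments the shared connection count
 try {
   // ... reads and writes ...
 } finally {
   table.close();                                 // decrements the count
 }
 HConnectionManager.deleteConnection(conf, true); // only closes ZK once the count is zero
 {code}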

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush

2012-03-23 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5623:
--

Comment: was deleted

(was: For patch v4:
{code}
+if (this.writerRef.get() != null) {
+  this.writerRef.get().close();
{code}
Shall we save the first writerRef.get() in a variable and use it to call 
close() ?

Stack's test isn't in patch v4.)

 Race condition when rolling the HLog and hlogFlush
 --

 Key: HBASE-5623
 URL: https://issues.apache.org/jira/browse/HBASE-5623
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5623.txt, 5623v2.txt, HBASE-5623_v0.patch, 
 HBASE-5623_v4.patch


 When doing a ycsb test with a large number of handlers 
 (regionserver.handler.count=60), I get the following exceptions:
 {code}
 Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
 java.lang.NullPointerException
   at 
 org.apache.hadoop.io.SequenceFile$Writer.getLength(SequenceFile.java:1099)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.getLength(SequenceFileLogWriter.java:314)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1291)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1388)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
   at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:920)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:152)
   at $Proxy1.multi(Unknown Source)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1691)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1689)
   at 
 org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:214)
 {code}
 and 
 {code}
   java.lang.NullPointerException
   at 
 org.apache.hadoop.io.SequenceFile$Writer.checkAndWriteSync(SequenceFile.java:1026)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1068)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1035)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.append(SequenceFileLogWriter.java:279)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.hlogFlush(HLog.java:1237)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1271)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1391)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
 {code}
 It seems the root cause of the issue is that we open a new log writer and 
 close the old one at HLog#rollWriter() holding the updateLock, but the other 
 threads doing syncer() calls
 {code} 
 logSyncerThread.hlogFlush(this.writer);
 {code}
 without holding the updateLock. LogSyncer only synchronizes against 
 concurrent appends and flush(), but not on the passed writer, which can be 
 closed already by rollWriter(). In this case, since 
 SequenceFile#Writer.close() sets its out field to null, we get the NPE. 
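 
 For illustration, the local-variable pattern suggested in the review comment 
 above (a sketch only; writerRef is the AtomicReference shown in the quoted 
 patch hunk): read the reference once so the null check and the close() act on 
 the same writer instance.
 {code}
 // Avoids the check-then-act race on writerRef between the roll and sync paths.
 HLog.Writer localWriter = this.writerRef.get();
 if (localWriter != null) {
   localWriter.close();
 }
 {code}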

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 

[jira] [Updated] (HBASE-5128) [uber hbck] Online automated repair of table integrity and region consistency problems

2012-03-23 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5128:
--

Attachment: 5128-trunk.addendum

Addendum for trunk.
Hadoop QA couldn't work when compilation is broken.

 [uber hbck] Online automated repair of table integrity and region consistency 
 problems
 --

 Key: HBASE-5128
 URL: https://issues.apache.org/jira/browse/HBASE-5128
 Project: HBase
  Issue Type: New Feature
  Components: hbck
Affects Versions: 0.90.5, 0.92.0, 0.94.0, 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5128-trunk.addendum, hbase-5128-0.90-v2.patch, 
 hbase-5128-0.90-v2b.patch, hbase-5128-0.90-v4.patch, 
 hbase-5128-0.92-v2.patch, hbase-5128-0.92-v4.patch, hbase-5128-0.94-v2.patch, 
 hbase-5128-0.94-v4.patch, hbase-5128-trunk-v2.patch, hbase-5128-trunk.patch, 
 hbase-5128-v3.patch, hbase-5128-v4.patch


 The current (0.90.5, 0.92.0rc2) versions of hbck detect most region 
 consistency and table integrity invariant violations.  However, with '-fix' it 
 can only automatically repair region consistency cases having to do with 
 deployment problems.  This updated version should be able to handle all cases 
 (including a new orphan regiondir case).  When complete, it will likely deprecate 
 the OfflineMetaRepair tool and subsume several open META-hole related issues.
 Here's the approach (from the comment at the top of the new version of the 
 file).
 {code}
 /**
  * HBaseFsck (hbck) is a tool for checking and repairing region consistency 
 and
  * table integrity.  
  * 
  * Region consistency checks verify that META, region deployment on
  * region servers and the state of data in HDFS (.regioninfo files) all are in
  * accordance. 
  * 
  * Table integrity checks verify that all possible row keys can resolve 
 to
  * exactly one region of a table.  This means there are no individual 
 degenerate
  * or backwards regions; no holes between regions; and that there are no 
 overlapping
  * regions. 
  * 
  * The general repair strategy works in these steps.
  * 1) Repair Table Integrity on HDFS. (merge or fabricate regions)
  * 2) Repair Region Consistency with META and assignments
  * 
  * For table integrity repairs, the tables' region directories are 
 scanned
  * for .regioninfo files.  Each table's integrity is then verified.  If there 
  * are any orphan regions (regions with no .regioninfo files), or holes, new 
  * regions are fabricated.  Backwards regions are sidelined as well as empty
  * degenerate (endkey==startkey) regions.  If there are any overlapping 
 regions,
  * a new region is created and all data is merged into the new region.  
  * 
  * Table integrity repairs deal solely with HDFS and can be done offline -- 
 the
 hbase region servers or master do not need to be running.  These phases can 
 be
  * used to completely reconstruct the META table in an offline fashion. 
  * 
  * Region consistency requires three conditions -- 1) valid .regioninfo file 
  * present in an hdfs region dir,  2) valid row with .regioninfo data in META,
  * and 3) a region is deployed only at the regionserver that it was assigned 
 to.
  * 
  * Region consistency requires hbck to contact the HBase master and region
  * servers, so the connect() must first be called successfully.  Much of the
  * region consistency information is transient and less risky to repair.
  */
 {code}
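 
 To make the table-integrity invariant concrete, a self-contained sketch (not 
 taken from the attached patches; assumes org.apache.hadoop.hbase.util.Bytes, 
 HConstants, and java.util imports) that walks regions sorted by start key and 
 reports holes, overlaps, and degenerate or backwards ranges:
 {code}
 // regions[i] = {startKey, endKey}; an empty byte[] means first start / last end.
 static List<String> checkIntegrity(byte[][][] regions) {
   List<String> problems = new ArrayList<String>();
   byte[] expectedStart = HConstants.EMPTY_BYTE_ARRAY;
   for (byte[][] r : regions) {
     byte[] start = r[0], end = r[1];
     if (end.length > 0 && Bytes.compareTo(start, end) >= 0) {
       problems.add("degenerate or backwards region at " + Bytes.toStringBinary(start));
     }
     int cmp = Bytes.compareTo(expectedStart, start);
     if (cmp < 0) {
       problems.add("hole before " + Bytes.toStringBinary(start));
     } else if (cmp > 0) {
       problems.add("overlap at " + Bytes.toStringBinary(start));
     }
     expectedStart = end;   // the next region must start exactly where this one ends
   }
   if (expectedStart.length != 0) {
     problems.add("table does not end with an empty end key");
   }
   return problems;
 }
 {code}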

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush

2012-03-23 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5623:
--

Attachment: HBASE-5623_v6-alt.patch

Re-attaching Enis' patch for Hadoop QA

 Race condition when rolling the HLog and hlogFlush
 --

 Key: HBASE-5623
 URL: https://issues.apache.org/jira/browse/HBASE-5623
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.94.0

 Attachments: 5623-suggestion.txt, 5623.txt, 5623v2.txt, 
 HBASE-5623_v0.patch, HBASE-5623_v4.patch, HBASE-5623_v5.patch, 
 HBASE-5623_v6-alt.patch, HBASE-5623_v6-alt.patch


 When doing a ycsb test with a large number of handlers 
 (regionserver.handler.count=60), I get the following exceptions:
 {code}
 Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
 java.lang.NullPointerException
   at 
 org.apache.hadoop.io.SequenceFile$Writer.getLength(SequenceFile.java:1099)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.getLength(SequenceFileLogWriter.java:314)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1291)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1388)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
   at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:920)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:152)
   at $Proxy1.multi(Unknown Source)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1691)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1689)
   at 
 org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:214)
 {code}
 and 
 {code}
   java.lang.NullPointerException
   at 
 org.apache.hadoop.io.SequenceFile$Writer.checkAndWriteSync(SequenceFile.java:1026)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1068)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1035)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.append(SequenceFileLogWriter.java:279)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.hlogFlush(HLog.java:1237)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1271)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1391)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
 {code}
 It seems the root cause of the issue is that we open a new log writer and 
 close the old one at HLog#rollWriter() holding the updateLock, but the other 
 threads doing syncer() calls
 {code} 
 logSyncerThread.hlogFlush(this.writer);
 {code}
 without holding the updateLock. LogSyncer only synchronizes against 
 concurrent appends and flush(), but not on the passed writer, which can be 
 closed already by rollWriter(). In this case, since 
 SequenceFile#Writer.close() sets its out field to null, we get the NPE. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more 

[jira] [Updated] (HBASE-4607) Split log worker should terminate properly when waiting for znode

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-4607:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 Split log worker should terminate properly when waiting for znode
 -

 Key: HBASE-4607
 URL: https://issues.apache.org/jira/browse/HBASE-4607
 Project: HBase
  Issue Type: Bug
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin
Priority: Minor
 Fix For: 0.94.0

 Attachments: 
 HBASE-4607_SplitLogWorker_should_correct-20111017231456-47a82ef3.patch


 This is an attempt to fix the fact that SplitLogWorker threads are not being 
 terminated properly in some unit tests. This probably does not happen in 
 production because the master always creates the log-splitting ZK node, but 
 it does happen in 89-fb. Thanks to Prakash Khemani for help on this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5613) ThriftServer getTableRegions does not return serverName and port

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5613:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

 ThriftServer getTableRegions does not return serverName and port
 

 Key: HBASE-5613
 URL: https://issues.apache.org/jira/browse/HBASE-5613
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-5613.D2403.1.patch, HBASE-5613.D2403.2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5591) ThiftServerRunner.HBaseHandler.toBytes() is identical to Bytes.getBytes()

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5591:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

Integrated to trunk.

 ThiftServerRunner.HBaseHandler.toBytes() is identical to Bytes.getBytes()
 -

 Key: HBASE-5591
 URL: https://issues.apache.org/jira/browse/HBASE-5591
 Project: HBase
  Issue Type: Improvement
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-5591.D2355.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5613) ThriftServer getTableRegions does not return serverName and port

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5613:
--

Fix Version/s: 0.94.0

Scott is preparing a patch for 0.94

 ThriftServer getTableRegions does not return serverName and port
 

 Key: HBASE-5613
 URL: https://issues.apache.org/jira/browse/HBASE-5613
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: HBASE-5613.D2403.1.patch, HBASE-5613.D2403.2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5128) [uber hbck] Enable hbck to automatically repair table integrity problems as well as region consistency problems while online.

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5128:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12519382/hbase-5128-0.92-v2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 21 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1252//console

This message is automatically generated.)

 [uber hbck] Enable hbck to automatically repair table integrity problems as 
 well as region consistency problems while online.
 -

 Key: HBASE-5128
 URL: https://issues.apache.org/jira/browse/HBASE-5128
 Project: HBase
  Issue Type: New Feature
  Components: hbck
Affects Versions: 0.90.5, 0.92.0, 0.94.0, 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: hbase-5128-0.90-v2.patch, hbase-5128-0.90-v2b.patch, 
 hbase-5128-0.92-v2.patch, hbase-5128-0.94-v2.patch, 
 hbase-5128-trunk-v2.patch, hbase-5128-trunk.patch


 The current (0.90.5, 0.92.0rc2) versions of hbck detect most region 
 consistency and table integrity invariant violations.  However, with '-fix' it 
 can only automatically repair region consistency cases having to do with 
 deployment problems.  This updated version should be able to handle all cases 
 (including a new orphan regiondir case).  When complete, it will likely deprecate 
 the OfflineMetaRepair tool and subsume several open META-hole related issues.
 Here's the approach (from the comment at the top of the new version of the 
 file).
 {code}
 /**
  * HBaseFsck (hbck) is a tool for checking and repairing region consistency 
 and
  * table integrity.  
  * 
  * Region consistency checks verify that META, region deployment on
  * region servers and the state of data in HDFS (.regioninfo files) all are in
  * accordance. 
  * 
  * Table integrity checks verify that all possible row keys can resolve 
 to
  * exactly one region of a table.  This means there are no individual 
 degenerate
  * or backwards regions; no holes between regions; and that there are no 
 overlapping
  * regions. 
  * 
  * The general repair strategy works in these steps.
  * 1) Repair Table Integrity on HDFS. (merge or fabricate regions)
  * 2) Repair Region Consistency with META and assignments
  * 
  * For table integrity repairs, the tables' region directories are 
 scanned
  * for .regioninfo files.  Each table's integrity is then verified.  If there 
  * are any orphan regions (regions with no .regioninfo files), or holes, new 
  * regions are fabricated.  Backwards regions are sidelined as well as empty
  * degenerate (endkey==startkey) regions.  If there are any overlapping 
 regions,
  * a new region is created and all data is merged into the new region.  
  * 
  * Table integrity repairs deal solely with HDFS and can be done offline -- 
 the
 hbase region servers or master do not need to be running.  These phases can 
 be
  * used to completely reconstruct the META table in an offline fashion. 
  * 
  * Region consistency requires three conditions -- 1) valid .regioninfo file 
  * present in an hdfs region dir,  2) valid row with .regioninfo data in META,
  * and 3) a region is deployed only at the regionserver that it was assigned 
 to.
  * 
  * Region consistency requires hbck to contact the HBase master and region
  * servers, so the connect() must first be called successfully.  Much of the
  * region consistency information is transient and less risky to repair.
  */
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5128) [uber hbck] Enable hbck to automatically repair table integrity problems as well as region consistency problems while online.

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5128:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12519405/hbase-5128-0.90-v2b.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 15 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1254//console

This message is automatically generated.)

 [uber hbck] Enable hbck to automatically repair table integrity problems as 
 well as region consistency problems while online.
 -

 Key: HBASE-5128
 URL: https://issues.apache.org/jira/browse/HBASE-5128
 Project: HBase
  Issue Type: New Feature
  Components: hbck
Affects Versions: 0.90.5, 0.92.0, 0.94.0, 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: hbase-5128-0.90-v2.patch, hbase-5128-0.90-v2b.patch, 
 hbase-5128-0.92-v2.patch, hbase-5128-0.94-v2.patch, 
 hbase-5128-trunk-v2.patch, hbase-5128-trunk.patch


 The current (0.90.5, 0.92.0rc2) versions of hbck detect most region 
 consistency and table integrity invariant violations.  However, with '-fix' it 
 can only automatically repair region consistency cases having to do with 
 deployment problems.  This updated version should be able to handle all cases 
 (including a new orphan regiondir case).  When complete, it will likely deprecate 
 the OfflineMetaRepair tool and subsume several open META-hole related issues.
 Here's the approach (from the comment at the top of the new version of the 
 file).
 {code}
 /**
  * HBaseFsck (hbck) is a tool for checking and repairing region consistency 
 and
  * table integrity.  
  * 
  * Region consistency checks verify that META, region deployment on
  * region servers and the state of data in HDFS (.regioninfo files) all are in
  * accordance. 
  * 
  * Table integrity checks verify that all possible row keys can resolve 
 to
  * exactly one region of a table.  This means there are no individual 
 degenerate
  * or backwards regions; no holes between regions; and that there are no 
 overlapping
  * regions. 
  * 
  * The general repair strategy works in these steps.
  * 1) Repair Table Integrity on HDFS. (merge or fabricate regions)
  * 2) Repair Region Consistency with META and assignments
  * 
  * For table integrity repairs, the tables' region directories are 
 scanned
  * for .regioninfo files.  Each table's integrity is then verified.  If there 
  * are any orphan regions (regions with no .regioninfo files), or holes, new 
  * regions are fabricated.  Backwards regions are sidelined as well as empty
  * degenerate (endkey==startkey) regions.  If there are any overlapping 
 regions,
  * a new region is created and all data is merged into the new region.  
  * 
  * Table integrity repairs deal solely with HDFS and can be done offline -- 
 the
 hbase region servers or master do not need to be running.  These phases can 
 be
  * used to completely reconstruct the META table in an offline fashion. 
  * 
  * Region consistency requires three conditions -- 1) valid .regioninfo file 
  * present in an hdfs region dir,  2) valid row with .regioninfo data in META,
  * and 3) a region is deployed only at the regionserver that it was assigned 
 to.
  * 
  * Region consistency requires hbck to contact the HBase master and region
  * servers, so the connect() must first be called successfully.  Much of the
  * region consistency information is transient and less risky to repair.
  */
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5128) [uber hbck] Enable hbck to automatically repair table integrity problems as well as region consistency problems while online.

2012-03-22 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5128:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12519401/hbase-5128-0.90-v2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 15 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1253//console

This message is automatically generated.)

 [uber hbck] Enable hbck to automatically repair table integrity problems as 
 well as region consistency problems while online.
 -

 Key: HBASE-5128
 URL: https://issues.apache.org/jira/browse/HBASE-5128
 Project: HBase
  Issue Type: New Feature
  Components: hbck
Affects Versions: 0.90.5, 0.92.0, 0.94.0, 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: hbase-5128-0.90-v2.patch, hbase-5128-0.90-v2b.patch, 
 hbase-5128-0.92-v2.patch, hbase-5128-0.94-v2.patch, 
 hbase-5128-trunk-v2.patch, hbase-5128-trunk.patch


 The current (0.90.5, 0.92.0rc2) versions of hbck detect most region 
 consistency and table integrity invariant violations.  However, with '-fix' it 
 can only automatically repair region consistency cases having to do with 
 deployment problems.  This updated version should be able to handle all cases 
 (including a new orphan regiondir case).  When complete, it will likely deprecate 
 the OfflineMetaRepair tool and subsume several open META-hole related issues.
 Here's the approach (from the comment at the top of the new version of the 
 file).
 {code}
 /**
  * HBaseFsck (hbck) is a tool for checking and repairing region consistency 
 and
  * table integrity.  
  * 
  * Region consistency checks verify that META, region deployment on
  * region servers and the state of data in HDFS (.regioninfo files) all are in
  * accordance. 
  * 
  * Table integrity checks verify that all possible row keys can resolve 
 to
  * exactly one region of a table.  This means there are no individual 
 degenerate
  * or backwards regions; no holes between regions; and that there are no 
 overlapping
  * regions. 
  * 
  * The general repair strategy works in these steps.
  * 1) Repair Table Integrity on HDFS. (merge or fabricate regions)
  * 2) Repair Region Consistency with META and assignments
  * 
  * For table integrity repairs, the tables' region directories are 
 scanned
  * for .regioninfo files.  Each table's integrity is then verified.  If there 
  * are any orphan regions (regions with no .regioninfo files), or holes, new 
  * regions are fabricated.  Backwards regions are sidelined as well as empty
  * degenerate (endkey==startkey) regions.  If there are any overlapping 
 regions,
  * a new region is created and all data is merged into the new region.  
  * 
  * Table integrity repairs deal solely with HDFS and can be done offline -- 
 the
 hbase region servers or master do not need to be running.  These phases can 
 be
  * used to completely reconstruct the META table in an offline fashion. 
  * 
  * Region consistency requires three conditions -- 1) valid .regioninfo file 
  * present in an hdfs region dir,  2) valid row with .regioninfo data in META,
  * and 3) a region is deployed only at the regionserver that it was assigned 
 to.
  * 
  * Region consistency requires hbck to contact the HBase master and region
  * servers, so the connect() must first be called successfully.  Much of the
  * region consistency information is transient and less risky to repair.
  */
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



