[jira] [Commented] (HBASE-5902) Some scripts are not executable

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268909#comment-13268909
 ] 

Hudson commented on HBASE-5902:
---

Integrated in HBase-TRUNK-security #192 (See 
[https://builds.apache.org/job/HBase-TRUNK-security/192/])
HBASE-5902 Some scripts are not executable (Revision 1334019)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/bin/graceful_stop.sh
* /hbase/trunk/bin/local-master-backup.sh
* /hbase/trunk/bin/local-regionservers.sh


 Some scripts are not executable
 ---

 Key: HBASE-5902
 URL: https://issues.apache.org/jira/browse/HBASE-5902
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Fix For: 0.96.0, 0.94.1

 Attachments: 5902.v1.patch, 5902v2.txt


 -rw-rw-r--  graceful_stop.sh
 -rw-rw-r--  hbase-config.sh
 -rw-rw-r--  local-master-backup.sh
 -rw-rw-r--  local-regionservers.sh

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5887) Make TestAcidGuarantees usable for system testing.

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268910#comment-13268910
 ] 

Hudson commented on HBASE-5887:
---

Integrated in HBase-TRUNK-security #192 (See 
[https://builds.apache.org/job/HBase-TRUNK-security/192/])
HBASE-5887 Make TestAcidGuarantees usable for system testing (Revision 
1333785)

 Result = SUCCESS
jmhsieh : 
Files : 
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java


 Make TestAcidGuarantees usable for system testing.
 --

 Key: HBASE-5887
 URL: https://issues.apache.org/jira/browse/HBASE-5887
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0, 0.92.1, 0.94.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: hbase-5887-92.patch, hbase-5887.patch


 Currently, the TestAcidGuarantees run via main() will always abort with an 
 NPE because it digs into a non-existent HBaseTestingUtility for a flusher 
 thread.  We should tool this up so that it works properly from the command 
 line.  This would be a very useful long running test when used in conjunction 
 with fault injections to verify row acid properties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5876) TestImportExport has been failing against hadoop 0.23 profile

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268908#comment-13268908
 ] 

Hudson commented on HBASE-5876:
---

Integrated in HBase-TRUNK-security #192 (See 
[https://builds.apache.org/job/HBase-TRUNK-security/192/])
HBASE-5876 TestImportExport has been failing against hadoop 0.23 profile 
(Revision 1333778)

 Result = SUCCESS
jmhsieh : 
Files : 
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapreduce/MapreduceTestingShim.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java


 TestImportExport has been failing against hadoop 0.23 profile
 -

 Key: HBASE-5876
 URL: https://issues.apache.org/jira/browse/HBASE-5876
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0, 0.96.0
Reporter: Zhihong Yu
Assignee: Jonathan Hsieh
 Fix For: 0.96.0, 0.94.1

 Attachments: hbase-5876-94.patch, hbase-5876-v2.patch, 
 hbase-5876.patch


 TestImportExport has been failing against hadoop 0.23 profile

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5889) Remove HRegionInterface

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268907#comment-13268907
 ] 

Hudson commented on HBASE-5889:
---

Integrated in HBase-TRUNK-security #192 (See 
[https://builds.apache.org/job/HBase-TRUNK-security/192/])
HBASE-5889 Remove HRegionInterface (Revision 1334314)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/conf/hbase-policy.xml
* 
/hbase/trunk/security/src/main/java/org/apache/hadoop/hbase/security/HBasePolicyProvider.java
* 
/hbase/trunk/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HConstants.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServer.java
* /hbase/trunk/src/main/protobuf/Admin.proto
* /hbase/trunk/src/main/protobuf/RPC.proto
* /hbase/trunk/src/main/resources/hbase-default.xml
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestMasterRestartAfterDisablingTable.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSStatusServlet.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 Remove HRegionInterface
 ---

 Key: HBASE-5889
 URL: https://issues.apache.org/jira/browse/HBASE-5889
 Project: HBase
  Issue Type: Improvement
  Components: client, ipc, regionserver
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5889_v3.patch, hbase_5889.patch, 
 hbase_5889_v2.patch, hbase_5889_v4.patch


 As a step to move internals to PB, and to avoid the conversion overhead for 
 performance reasons, we should remove HRegionInterface. 
 The region server would then support only ClientProtocol and AdminProtocol.  
 Later on, HRegion can work with PB messages directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, 

[jira] [Commented] (HBASE-5844) Delete the region servers znode after a regions server crash

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268906#comment-13268906
 ] 

Hudson commented on HBASE-5844:
---

Integrated in HBase-TRUNK-security #192 (See 
[https://builds.apache.org/job/HBase-TRUNK-security/192/])
HBASE-5844 Delete the region servers znode after a regions server crash 
(Revision 1334028)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/bin/hbase-daemon.sh
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


 Delete the region servers znode after a regions server crash
 

 Key: HBASE-5844
 URL: https://issues.apache.org/jira/browse/HBASE-5844
 Project: HBase
  Issue Type: Improvement
  Components: regionserver, scripts
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: 5844.v1.patch, 5844.v2.patch, 5844.v3.patch, 
 5844.v3.patch, 5844.v4.patch


 Today, if the region server crashes, its znode is not deleted in ZooKeeper. 
 So the recovery process will not start until a timeout expires, usually 30s.
 By deleting the znode in the start script, we remove this delay and the recovery 
 starts immediately.
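
For illustration, a minimal sketch of what deleting such a stale region-server znode looks like against the plain ZooKeeper client API; the quorum address and znode path are made-up examples, and the actual fix lives in bin/hbase-daemon.sh (see the commit above), not in Java code.
{code}
// Hedged sketch, not the HBASE-5844 patch: remove a crashed region server's
// znode so the master notices the failure immediately instead of after the
// ~30s session timeout.  Quorum address and znode path are example values.
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class DeleteStaleRsZnode {
  public static void main(String[] args) throws Exception {
    // Connect to the ZooKeeper quorum used by the cluster (example address).
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
      public void process(WatchedEvent event) { /* no-op watcher */ }
    });
    try {
      // Example znode of the crashed region server under /hbase/rs.
      zk.delete("/hbase/rs/myhost.example.com,60020,1336200000000", -1);  // -1 = any version
    } finally {
      zk.close();
    }
  }
}
{code}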

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5889) Remove HRegionInterface

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268911#comment-13268911
 ] 

Hudson commented on HBASE-5889:
---

Integrated in HBase-TRUNK #2850 (See 
[https://builds.apache.org/job/HBase-TRUNK/2850/])
HBASE-5889 Remove HRegionInterface (Revision 1334314)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/conf/hbase-policy.xml
* 
/hbase/trunk/security/src/main/java/org/apache/hadoop/hbase/security/HBasePolicyProvider.java
* 
/hbase/trunk/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HConstants.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServer.java
* /hbase/trunk/src/main/protobuf/Admin.proto
* /hbase/trunk/src/main/protobuf/RPC.proto
* /hbase/trunk/src/main/resources/hbase-default.xml
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestMasterRestartAfterDisablingTable.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSStatusServlet.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 Remove HRegionInterface
 ---

 Key: HBASE-5889
 URL: https://issues.apache.org/jira/browse/HBASE-5889
 Project: HBase
  Issue Type: Improvement
  Components: client, ipc, regionserver
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5889_v3.patch, hbase_5889.patch, 
 hbase_5889_v2.patch, hbase_5889_v4.patch


 As a step to move internals to PB, and to avoid the conversion overhead for 
 performance reasons, we should remove HRegionInterface. 
 The region server would then support only ClientProtocol and AdminProtocol.  
 Later on, HRegion can work with PB messages directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your 

[jira] [Commented] (HBASE-5867) Improve Compaction Throttle Default

2012-05-05 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268935#comment-13268935
 ] 

Phabricator commented on HBASE-5867:


mbautin has committed the revision "[jira] [HBASE-5867] [89-fb] Improve 
Compaction Throttle Default".

REVISION DETAIL
  https://reviews.facebook.net/D2943

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1334388


 Improve Compaction Throttle Default
 ---

 Key: HBASE-5867
 URL: https://issues.apache.org/jira/browse/HBASE-5867
 Project: HBase
  Issue Type: Improvement
Reporter: Nicolas Spiegelberg
Assignee: Nicolas Spiegelberg
Priority: Minor
 Attachments: D2943.1.patch, HBASE-5867-trunk.patch


 We recently had a production issue where our compactions fell behind because 
 our compaction throttle was improperly tuned and accidentally upgraded all 
 compactions to the large pool.  The default from HBASE-3877 makes 1 bad 
 assumption: the default number of flushed files in a compaction.  Currently 
 the algorithm is:
 throttleSize ~= flushSize * 2
 This assumes that the basic compaction utilizes 3 files and that all 3 files 
 are compressed.  In this case, hbase.hstore.compaction.min == 6 and the 
 values were not very compressible.  Both conditions should be taken into 
 consideration.  As a default, it is less damaging for the large thread to be 
 slightly higher than it needs to be versus having everything accidentally 
 promoted.
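
To make the arithmetic above concrete, here is a hedged sketch of a throttle default that accounts for both factors; the method name and formula are assumptions for illustration, not the attached patch.
{code}
// Illustrative arithmetic only (not the attached D2943.1.patch): a throttle
// default that scales with hbase.hstore.compaction.min and an assumed
// compression ratio, instead of hard-coding "3 compressed files".
public class ThrottleDefaultSketch {
  static long throttleSize(long flushSize, int compactionMin, double compressionRatio) {
    // Expected on-disk size of the smallest normal compaction:
    // compactionMin flushed files, scaled by how well the data compresses.
    return (long) (flushSize * compactionMin * compressionRatio);
  }

  public static void main(String[] args) {
    long flushSize = 128L * 1024 * 1024;  // example 128 MB memstore flush size
    // Current default assumption: 3 files at ~2/3 compression ratio, i.e. ~2 x flushSize.
    System.out.println(throttleSize(flushSize, 3, 2.0 / 3.0));
    // The incident described above: compaction.min == 6 and barely compressible data.
    System.out.println(throttleSize(flushSize, 6, 1.0));
  }
}
{code}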

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5946) Thrift Filter Language documentation is inconsistent

2012-05-05 Thread Alexander (JIRA)
Alexander created HBASE-5946:


 Summary: Thrift Filter Language documentation is inconsistent
 Key: HBASE-5946
 URL: https://issues.apache.org/jira/browse/HBASE-5946
 Project: HBase
  Issue Type: Bug
  Components: filters, thrift
Affects Versions: 0.92.1
Reporter: Alexander
Priority: Minor


The syntax SingleColumnValueFilter(compare operator, 'comparator', 'family', 
'qualifier'), as described here: http://hbase.apache.org/book/thrift.html, is 
not correct.
The correct syntax is: SingleColumnValueFilter('family', 'qualifier', 
compare operator, 'comparator')
Also, the comparator parameter must always contain a comparator type, e.g. binary: 
or binaryprefix: etc. Without it (except for PrefixFilter and maybe some other 
filters), the TSocket class throws TTransportException: TSocket read 0 bytes. 
All examples in section 9.3.1.9. Individual Filter Syntax are written without 
a comparator.

There is also a typo: 
in section 9.3.1.9.12 - Family Filter, the syntax and example are described for 
QualifierFilter.
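
For reference, a minimal sketch of the corrected filter string being parsed with the server-side filter-language parser (org.apache.hadoop.hbase.filter.ParseFilter); the family, qualifier and value below are made-up examples.
{code}
// Hedged sketch: the corrected SingleColumnValueFilter syntax, i.e. family and
// qualifier first, then the compare operator, then an explicitly typed
// comparator.  Family/qualifier/value are example names only.
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.ParseFilter;

public class FilterSyntaxExample {
  public static void main(String[] args) throws Exception {
    String filterString =
        "SingleColumnValueFilter('family', 'qualifier', =, 'binary:somevalue')";
    Filter filter = new ParseFilter().parseFilterString(filterString);
    System.out.println("Parsed: " + filter.getClass().getSimpleName());
  }
}
{code}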

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-05 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5916:
--

Status: Open  (was: Patch Available)

 RS restart just before master initialization makes the cluster non-operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch


 Consider a case where the master is getting restarted.  An RS that was alive when 
 the master restart started gets restarted before the master initializes the 
 ServerShutDownHandler:
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will 
 try to expire the server, but the server cannot be expired because the 
 serverShutdownHandler is not yet enabled.
 This case may happen when only one RS gets restarted, or when all the RSs 
 get restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
   if (existingServer.getStartcode() < serverName.getStartcode()) {
 LOG.info("Triggering server recovery; existingServer " +
   existingServer + " looks stale, new server:" + serverName);
 expireServer(existingServer);
   }
 {code}
 If another RS is brought up, the cluster comes back to normalcy.
 Maybe a very rare corner case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-05 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5916:
--

Status: Patch Available  (was: Open)

 RS restart just before master initialization makes the cluster non-operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch, 
 HBASE-5916_trunk_1.patch


 Consider a case where the master is getting restarted.  An RS that was alive when 
 the master restart started gets restarted before the master initializes the 
 ServerShutDownHandler:
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will 
 try to expire the server, but the server cannot be expired because the 
 serverShutdownHandler is not yet enabled.
 This case may happen when only one RS gets restarted, or when all the RSs 
 get restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
   if (existingServer.getStartcode() < serverName.getStartcode()) {
 LOG.info("Triggering server recovery; existingServer " +
   existingServer + " looks stale, new server:" + serverName);
 expireServer(existingServer);
   }
 {code}
 If another RS is brought up, the cluster comes back to normalcy.
 Maybe a very rare corner case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-05 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5916:
--

Attachment: HBASE-5916_trunk_1.patch

Reattaching for hadoopqa.

 RS restart just before master initialization makes the cluster non-operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch, 
 HBASE-5916_trunk_1.patch


 Consider a case where the master is getting restarted.  An RS that was alive when 
 the master restart started gets restarted before the master initializes the 
 ServerShutDownHandler:
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will 
 try to expire the server, but the server cannot be expired because the 
 serverShutdownHandler is not yet enabled.
 This case may happen when only one RS gets restarted, or when all the RSs 
 get restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
   if (existingServer.getStartcode() < serverName.getStartcode()) {
 LOG.info("Triggering server recovery; existingServer " +
   existingServer + " looks stale, new server:" + serverName);
 expireServer(existingServer);
   }
 {code}
 If another RS is brought up, the cluster comes back to normalcy.
 Maybe a very rare corner case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268973#comment-13268973
 ] 

Hadoop QA commented on HBASE-5916:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12525729/HBASE-5916_trunk_1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1779//console

This message is automatically generated.

 RS restart just before master initialization makes the cluster non-operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch, 
 HBASE-5916_trunk_1.patch


 Consider a case where the master is getting restarted.  An RS that was alive when 
 the master restart started gets restarted before the master initializes the 
 ServerShutDownHandler:
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will 
 try to expire the server, but the server cannot be expired because the 
 serverShutdownHandler is not yet enabled.
 This case may happen when only one RS gets restarted, or when all the RSs 
 get restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
   if (existingServer.getStartcode() < serverName.getStartcode()) {
 LOG.info("Triggering server recovery; existingServer " +
   existingServer + " looks stale, new server:" + serverName);
 expireServer(existingServer);
   }
 {code}
 If another RS is brought up, the cluster comes back to normalcy.
 Maybe a very rare corner case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Delete table failed but HBaseAdmin#deletetable report it as success

2012-05-05 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268975#comment-13268975
 ] 

Zhihong Yu commented on HBASE-5894:
---

I got the following when trying to apply the patch:
{code}
Patching file src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java 
using Plan A...
Hunk #1 failed at 511.
Hunk #2 failed at 543.
Hunk #3 failed at 578.
3 out of 3 hunks failed--saving rejects to 
src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java.rej
{code}
Maybe due to the ^M's in the patch ?
{code}
}^M
  }^M
{code}
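
A hypothetical little helper for stripping those carriage returns before applying the patch (not part of any attachment here; dos2unix or a patch regenerated with LF line endings works just as well):
{code}
// Hypothetical helper, not part of the HBASE-5894 patches: drop DOS carriage
// returns ("^M", i.e. '\r') so the hunks match the LF-only files in the repo.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StripCarriageReturns {
  public static void main(String[] args) throws Exception {
    Path patch = Paths.get(args[0]);                       // path to the .patch file
    String text = new String(Files.readAllBytes(patch), StandardCharsets.UTF_8);
    // Rewrite the file in place without any '\r' characters.
    Files.write(patch, text.replace("\r", "").getBytes(StandardCharsets.UTF_8));
  }
}
{code}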

 Delete table failed but HBaseAdmin#deletetable report it as success
 ---

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_92_patch_v1.patch, 
 HBASE-5894_92_patch_v1_surefire-report.html, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Delete table failed but HBaseAdmin#deletetable report it as success

2012-05-05 Thread xufeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268987#comment-13268987
 ] 

xufeng commented on HBASE-5894:
---

@Ted
Which patch has this problem?

 Delete table failed but HBaseAdmin#deletetable report it as success
 ---

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_92_patch_v1.patch, 
 HBASE-5894_92_patch_v1_surefire-report.html, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Delete table failed but HBaseAdmin#deletetable report it as success

2012-05-05 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268988#comment-13268988
 ] 

Zhihong Yu commented on HBASE-5894:
---

HBASE-5894_trunk_patch_v2.patch

 Delete table failed but HBaseAdmin#deletetable report it as success
 ---

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_92_patch_v1.patch, 
 HBASE-5894_92_patch_v1_surefire-report.html, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2012-05-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268995#comment-13268995
 ] 

Andrew Purtell commented on HBASE-5930:
---

+1 We basically do the same thing as proposed but on the client side with a 
shared DAO layer.
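
A minimal sketch of that kind of client-side periodic flush, assuming a plain HBaseAdmin rather than the shared DAO layer mentioned above; the table name and interval are example values.
{code}
// Hedged sketch (not the DAO-layer code referred to above): ask the cluster to
// flush a table's memstores on a fixed schedule from the client side.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class PeriodicFlusher {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    final HBaseAdmin admin = new HBaseAdmin(conf);
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(new Runnable() {
      public void run() {
        try {
          admin.flush("mytable");   // example table written with the WAL disabled
        } catch (Exception e) {
          e.printStackTrace();      // keep the schedule alive on failure
        }
      }
    }, 1, 1, TimeUnit.HOURS);       // example interval
  }
}
{code}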

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Priority: Minor

 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the log, 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5584) Coprocessor hooks can be called in the respective handlers

2012-05-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268998#comment-13268998
 ] 

Andrew Purtell commented on HBASE-5584:
---

+1 Patch looks good Ram.

 Coprocessor hooks can be called in the respective handlers
 --

 Key: HBASE-5584
 URL: https://issues.apache.org/jira/browse/HBASE-5584
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.96.0

 Attachments: HBASE-5584-1.patch, HBASE-5584-2.patch, 
 HBASE-5584-3.patch, HBASE-5584.patch


 The following points can be changed w.r.t. coprocessors:
 - Call preCreate, postCreate, preEnable, postEnable, etc. in their 
 respective handlers.
 - Currently they are called in the HMaster, thus making the post APIs async 
 w.r.t. the handlers.
 - The case is similar with the balancer.
 With the current behaviour, once we are in postEnable (for example), we anyway 
 need to wait for the main enable handler to be completed.
 We should ensure that we don't wait in the main thread, so again we need to 
 spawn a thread and wait on that.
 On the other hand, if the pre and post APIs are called in the handlers, then 
 only that handler thread will be used in the pre/post APIs.
 If the above plan is OK, I can prepare a patch for all such related 
 changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5385) Delete table/column should delete stored permissions on -acl- table

2012-05-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268999#comment-13268999
 ] 

Andrew Purtell commented on HBASE-5385:
---

+1 looks good. 

bq. Maybe we can open another jira for this, to implement the exists check on 
grant and verify in all pre* if there's nothing left.

This is a good idea since it's a different problem scope than this jira.

 Delete table/column should delete stored permissions on -acl- table  
 -

 Key: HBASE-5385
 URL: https://issues.apache.org/jira/browse/HBASE-5385
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Matteo Bertozzi
 Attachments: HBASE-5385-v0.patch, HBASE-5385-v1.patch


 Deleting the table or a column does not cascade to the stored permissions at 
 the -acl- table. We should also remove those permissions, otherwise, it can 
 be a security leak, where freshly created tables contain permissions from 
 previous same-named tables. We might also want to ensure, upon table 
 creation, that no entries are already stored at the -acl- table. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5342) Grant/Revoke global permissions

2012-05-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269005#comment-13269005
 ] 

Andrew Purtell commented on HBASE-5342:
---

The AccessControllerProtocol change is not backwards compatible. You should 
deprecate

{code}
public void grant(byte[] user, TablePermission permission)
{code}

and 

{code}
public void revoke(byte[] user, TablePermission permission)
{code}

in 0.92 (and 0.94 if it's released already) and take them out in the next major 
rev after.

The new 'whoami' command for the shell is nice.

I also see some noise/whitespace refactoring around debug logging. That kind of 
change is a little annoying; it distracts from the logic changes. Just a 
suggestion for future changes.
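
A rough sketch of the deprecate-then-remove pattern being suggested; the class, the stand-in permission type and the new method name are all placeholders, not the actual AccessControllerProtocol change.
{code}
// Hedged sketch of the backwards-compatible path: keep the old signature,
// mark it deprecated, delegate to a new one.  TablePermission here is a
// stand-in type and grantWithScope(...) is an invented name, not the real API.
public class GrantCompatSketch {
  public static class TablePermission { }           // stand-in permission type

  /** @deprecated kept so existing 0.92/0.94 clients keep working; remove in the next major release. */
  @Deprecated
  public void grant(byte[] user, TablePermission permission) {
    grantWithScope(user, permission, false);         // delegate to the new form
  }

  /** New-style grant that can also carry global scope. */
  public void grantWithScope(byte[] user, TablePermission permission, boolean global) {
    // implementation would write to the acl table here
  }
}
{code}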

 Grant/Revoke global permissions
 ---

 Key: HBASE-5342
 URL: https://issues.apache.org/jira/browse/HBASE-5342
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Matteo Bertozzi
 Attachments: HBASE-5342-draft.patch, HBASE-5342-v0.patch


 HBASE-3025 introduced simple ACLs based on coprocessors. It defines 
 global/table/cf/cq level permissions. However, there is no way to 
 grant/revoke global level permissions, other than the hbase.superuser conf 
 setting. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5894) Delete table failed but HBaseAdmin#deletetable report it as success

2012-05-05 Thread xufeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xufeng updated HBASE-5894:
--

Attachment: HBASE-5894_trunk_patch_v3.patch

 Delete table failed but HBaseAdmin#deletetable report it as success
 ---

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_92_patch_v1.patch, 
 HBASE-5894_92_patch_v1_surefire-report.html, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by following steps:
 For reproduce it I add this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug(Deleting region  + region.getRegionNameAsString() +
  from META and FS);
 +if (true) {
 +  throw new IOException(ERROR);
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1:create a table and disable it.
 step2:delete it by HBaseAdmin#deleteTable() API.
 result:after lone time, The log say the Table has been deleted, but in fact 
 if we do list in shell,the table also exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5876) TestImportExport has been failing against hadoop 0.23 profile

2012-05-05 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269013#comment-13269013
 ] 

Jonathan Hsieh commented on HBASE-5876:
---

Interesting: the old version of TestImportExport (before my previous attempt) 
used the LocalMRRunner instead of the RPC to the MiniMRCluster.  TestImportTsv 
always uses the RPC/MiniMRCluster. 

Use of the LocalMRRunner when the MiniMRCluster is spun up seems wrong. I'm going 
to force usage of the MiniMRCluster/RPC runner in TestImportExport.

 TestImportExport has been failing against hadoop 0.23 profile
 -

 Key: HBASE-5876
 URL: https://issues.apache.org/jira/browse/HBASE-5876
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0, 0.96.0
Reporter: Zhihong Yu
Assignee: Jonathan Hsieh
 Fix For: 0.96.0, 0.94.1

 Attachments: hbase-5876-94.patch, hbase-5876-v2.patch, 
 hbase-5876.patch


 TestImportExport has been failing against hadoop 0.23 profile

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5894) Delete table failed but HBaseAdmin#deletetable report it as success

2012-05-05 Thread xufeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xufeng updated HBASE-5894:
--

Attachment: HBASE-5894_94_patch_v2.patch
HBASE-5894_90_patch_v2.patch
HBASE-5894_92_patch_v2.patch

@Ted
I also updated the others.
Maybe the patches created by Eclipse had this problem.
Now I create the patches with TortoiseSVN.

 Delete table failed but HBaseAdmin#deletetable report it as success
 ---

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Delete table failed but HBaseAdmin#deletetable report it as success

2012-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269017#comment-13269017
 ] 

Hadoop QA commented on HBASE-5894:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12525741/HBASE-5894_94_patch_v2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1781//console

This message is automatically generated.

 Delete table failed but HBaseAdmin#deletetable report it as success
 ---

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2012-05-05 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269025#comment-13269025
 ] 

Matt Corgan commented on HBASE-5930:


Periodically flushing the memstore seems like a good feature to me.  Could also 
help clear out cold data from memory to make more room for bigger memstores on 
regions that are actually being used.

A different solution to the underlying data loss issue might be to have a third 
client setting for WAL writing: NONE, SYNC, and ASYNC.  ASYNC would write the 
data to a memory buffer, return success to the client, and another thread would 
flush the buffer to the WAL.  The WAL would ideally only lag a few seconds 
behind the memstores, but some form of throttling would probably be needed.
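
A sketch of what that third client setting could look like; the enum and its name are purely illustrative and not an existing HBase API.
{code}
// Illustrative only: the NONE / SYNC / ASYNC durability choice described above,
// expressed as a client-side enum.  Not an actual HBase type.
public enum WalWritePolicy {
  NONE,   // skip the WAL entirely (today's setWriteToWAL(false))
  SYNC,   // current default: append and sync the WAL before acking the client
  ASYNC   // proposed: ack from a memory buffer, flush to the WAL shortly after
}
{code}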

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Priority: Minor

 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the log, 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5894:
--

Summary: Table deletion failed but HBaseAdmin#deletetable reports it as 
success  (was: Delete table failed but HBaseAdmin#deletetable report it as 
success)

 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5891) Change Compression Based on Type of Compaction

2012-05-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269029#comment-13269029
 ] 

Andrew Purtell commented on HBASE-5891:
---

It used to be possible (circa 0.90) to vary the compression algorithm used for 
flushes and minor compactions from that used for major compactions. I added this 
because we had a case under consideration where data would grow colder 
proportionally to the delta between current time and write time. It was simple and 
low impact to set flush/minor-compaction compression to LZO and major-compaction 
compression to BZIP2 (we flirted with LZMA, but that is simply too bandwidth 
constrained), and a script would trigger region-by-region major compaction daily. 
I don't know if this is maintained in the current code base. Compaction was 
significantly reworked between 0.90 and 0.92, and we didn't pick up the majority 
of these changes in our internal version. 
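
For what it's worth, a hedged sketch of that split setup against the 0.92-era per-family attributes, assuming the separate compaction-compression setting (COMPRESSION_COMPACT) is still honored; the family name is an example and GZ stands in for the heavier codec.
{code}
// Hedged sketch: one codec for flushes/minor compactions, a heavier one for
// compacted files.  Assumes HColumnDescriptor's COMPRESSION_COMPACT attribute
// still behaves as described above; "cf" is an example family name.
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class SplitCompressionSketch {
  public static void main(String[] args) {
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setCompressionType(Compression.Algorithm.LZO);          // flushes / minors
    family.setCompactionCompressionType(Compression.Algorithm.GZ); // compacted files
    System.out.println(family);
  }
}
{code}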

 Change Compression Based on Type of Compaction
 --

 Key: HBASE-5891
 URL: https://issues.apache.org/jira/browse/HBASE-5891
 Project: HBase
  Issue Type: New Feature
Reporter: Nicolas Spiegelberg
Priority: Minor

 We currently use LZO on our production systems because the on-demand 
 decompression speed of GZ is too slow.  That said, many of our 
 major-compacted StoreFiles are infrequently read because of lazy seek 
 optimizations, but they occupy the majority of our disk space.  One idea is 
 to change the type of compression depending upon compaction characteristics 
 (input size or major compaction flag).  This would allow us to have our 
 largest and least-read files be GZ compressed and save space.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269043#comment-13269043
 ] 

Hudson commented on HBASE-5894:
---

Integrated in HBase-TRUNK #2852 (See 
[https://builds.apache.org/job/HBase-TRUNK/2852/])
HBASE-5894 Table deletion failed but HBaseAdmin#deletetable reports it as 
success (Xufeng) (Revision 1334464)

 Result = SUCCESS
tedyu : 
Files : 
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
  " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step1: create a table and disable it.
 step2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269047#comment-13269047
 ] 

Zhihong Yu commented on HBASE-5894:
---

Integrated to 0.90, 0.92 and 0.94 branches.

Thanks for the patch Xufeng.

Thanks for the review Stack.

 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
     " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step 1: create a table and disable it.
 step 2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5947) Check for valid user/table/family/qualifier and acl state

2012-05-05 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-5947:
--

 Summary: Check for valid user/table/family/qualifier and acl state
 Key: HBASE-5947
 URL: https://issues.apache.org/jira/browse/HBASE-5947
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi


HBase Shell grant/revoke doesn't check for valid user or table/family/qualifier 
so you can end up having rights for something that doesn't exist.

We might also want to ensure, upon table/column creation, that no entries are 
already stored in the acl table. We might still have residual acl entries if 
something goes wrong in postDeleteTable() or postDeleteColumn().
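For illustration only, a sketch of the kind of validation the shell could run before 
issuing a grant, assuming a 0.92/0.94-era HBaseAdmin; the actual grant/revoke 
plumbing (the AccessController security API) is not shown here.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class GrantPreChecks {
  /** Fails fast if the grant target does not exist, instead of storing a dangling ACL. */
  static void checkGrantTarget(String tableName, String family) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.tableExists(tableName)) {
      throw new IllegalArgumentException("No such table: " + tableName);
    }
    HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes(tableName));
    if (family != null && !desc.hasFamily(Bytes.toBytes(family))) {
      throw new IllegalArgumentException("No such family: " + family);
    }
  }
}
{code}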

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269061#comment-13269061
 ] 

Hudson commented on HBASE-5894:
---

Integrated in HBase-0.94 #183 (See 
[https://builds.apache.org/job/HBase-0.94/183/])
HBASE-5894 Table deletion failed but HBaseAdmin#deletetable reports it as 
success (Xufeng) (Revision 1334475)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
     " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step 1: create a table and disable it.
 step 2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2012-05-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269070#comment-13269070
 ] 

stack commented on HBASE-5930:
--

Is our deferred flush == ASYNC described above?

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Priority: Minor

 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.
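Until something like this exists server-side, a client-side stopgap is possible: 
schedule an HBaseAdmin.flush() of the table at a fixed interval so WAL-less writes do 
not sit in the memstore indefinitely. A minimal sketch, assuming a 0.90+ 
HBaseAdmin.flush(String) API; the table name and interval are illustrative.

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class PeriodicFlusherSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    final HBaseAdmin admin = new HBaseAdmin(conf);
    final String table = "my_walless_table";   // hypothetical table name
    ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
    ses.scheduleAtFixedRate(new Runnable() {
      public void run() {
        try {
          admin.flush(table);        // forces memstores out to StoreFiles
        } catch (Exception e) {
          e.printStackTrace();       // log and try again next period
        }
      }
    }, 1, 1, TimeUnit.HOURS);
  }
}
{code}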

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269072#comment-13269072
 ] 

Hudson commented on HBASE-5894:
---

Integrated in HBase-0.92 #399 (See 
[https://builds.apache.org/job/HBase-0.92/399/])
HBASE-5894  Table deletion failed but HBaseAdmin#deletetable reports it as 
success (Xufeng) (Revision 1334476)

 Result = FAILURE
tedyu : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
     " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step 1: create a table and disable it.
 step 2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2012-05-05 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269074#comment-13269074
 ] 

Lars Hofhansl commented on HBASE-5930:
--

That (deferred flush) is what I told my colleague to use last week.
Would be nice if the client could control this (in addition to writeToWal, we 
could have writeToWalAsynchronously - or something).

A periodic memstore flush still makes sense. If I get some time next week I'll 
come up with a patch (unless somebody else wants to take this :) ).
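For reference, a short sketch of the two existing knobs being contrasted here, 
assuming a 0.90+/0.94-era client API: per-table deferred log flush versus disabling 
the WAL per Put. The per-operation asynchronous-WAL flag is only an idea in this 
thread and does not exist; the table, family and row names below are illustrative.

{code}
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WalOptionsSketch {
  /** Deferred flush: WAL edits are written but synced to HDFS asynchronously. */
  static void tableLevelDeferredFlush(HTableDescriptor desc) {
    desc.setDeferredLogFlush(true);
  }

  /** No WAL at all: the edit lives only in the memstore until it is flushed. */
  static void perPutNoWal(HTable table) throws Exception {
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    put.setWriteToWAL(false);
    table.put(put);
  }
}
{code}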

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Priority: Minor

 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5894:
--

Fix Version/s: 0.94.1
   0.96.0
   0.92.2
   0.90.7

 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
     " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step 1: create a table and disable it.
 step 2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5894:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
     " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step 1: create a table and disable it.
 step 2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5945) Reduce buffer copies in IPC server response path

2012-05-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-5945:
---

Attachment: even-fewer-copies.txt

New rev gets rid of some more. This seems to make a noticeable difference in my 
oprofile output and YCSB results. I would appreciate it if other folks could verify.

(yes, patch still needs more work, please don't review for style/licenses/etc)

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.96.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: buffer-copies.txt, even-fewer-copies.txt


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5894) Table deletion failed but HBaseAdmin#deletetable reports it as success

2012-05-05 Thread xufeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269108#comment-13269108
 ] 

xufeng commented on HBASE-5894:
---

Ted and Stack
Thanks for your help.

 Table deletion failed but HBaseAdmin#deletetable reports it as success
 --

 Key: HBASE-5894
 URL: https://issues.apache.org/jira/browse/HBASE-5894
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.7, 0.92.2, 0.94.0
 Environment: all versions
Reporter: xufeng
Assignee: xufeng
Priority: Minor
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5894_90_patch_v1.patch, 
 HBASE-5894_90_patch_v1_surefire-report.html, HBASE-5894_90_patch_v2.patch, 
 HBASE-5894_92_patch_v1.patch, HBASE-5894_92_patch_v1_surefire-report.html, 
 HBASE-5894_92_patch_v2.patch, HBASE-5894_94_patch_v1.patch, 
 HBASE-5894_94_patch_v1_surefire-report.html, HBASE-5894_94_patch_v2.patch, 
 HBASE-5894_trunk_patch_v1.patch, 
 HBASE-5894_trunk_patch_v1_surefire-report.html, 
 HBASE-5894_trunk_patch_v2.patch, 
 HBASE-5894_trunk_patch_v2_surefire-report.html, 
 HBASE-5894_trunk_patch_v3.patch


 Reproduce this issue by the following steps:
 To reproduce it, I added this code in DeleteTableHandler#handleTableOperation():
 {noformat}
   LOG.debug("Deleting region " + region.getRegionNameAsString() +
     " from META and FS");
 +if (true) {
 +  throw new IOException("ERROR");
 +}
   // Remove region from META
   MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
 {noformat}
 step 1: create a table and disable it.
 step 2: delete it via the HBaseAdmin#deleteTable() API.
 result: after a long time, the log says the table has been deleted, but in fact 
 if we do list in the shell, the table still exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5494) Introduce a zk hosted table-wide read/write lock so only one table operation at a time

2012-05-05 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269110#comment-13269110
 ] 

Phabricator commented on HBASE-5494:


khemani has commented on the revision "[jira] [HBASE-5494] [89-fb] Table-level 
locks for schema changing operations.".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/zookeeper/DistributedLock.java:294 
Should you check the return value of this method to ensure that the node still 
exists?

  Say checkExistsAndCreate set the watch because the znode existed.

  Now the watch fires because the lock-owner changes the znode data.

  By the time you reset the watch here, the owner may already have released the lock; 
watchAndCheckExists() will return false and you should then trigger the latch (see 
the sketch after these comments).


  src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWrapper.java:1336 
The zookeeper disconnect error should be specially handled - because the znode 
might have already been created.
  src/main/java/org/apache/hadoop/hbase/zookeeper/DistributedLock.java:112 Will 
lockZNodeVersion always be 0 because this znode has just been created?

  Why should release() need lockZNodeVersion? Shouldn't it be OK to blindly delete 
the znode?
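A minimal sketch of the race discussed in the first inline comment, written against 
the raw ZooKeeper API as an assumption (the patch's DistributedLock/ZooKeeperWrapper 
helpers are not reproduced here): after re-arming the watch, the caller must also 
act on a "znode already gone" answer, otherwise it can wait forever on a lock that 
was released between the watch firing and the re-check.

{code}
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class LockWatchSketch {
  static void waitForLockRelease(ZooKeeper zk, String lockZnode,
      final CountDownLatch released) throws Exception {
    Watcher watcher = new Watcher() {
      public void process(WatchedEvent event) {
        if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
          released.countDown();   // owner deleted the znode: lock is free
        }
      }
    };
    // exists() both checks and (re)arms the watch; a null Stat means the znode
    // is already gone, so trigger the latch instead of waiting on a watch that
    // may never fire for us.
    if (zk.exists(lockZnode, watcher) == null) {
      released.countDown();
    }
  }
}
{code}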

REVISION DETAIL
  https://reviews.facebook.net/D2997

BRANCH
  table_level_ddl_locks


 Introduce a zk hosted table-wide read/write lock so only one table operation 
 at a time
 --

 Key: HBASE-5494
 URL: https://issues.apache.org/jira/browse/HBASE-5494
 Project: HBase
  Issue Type: Improvement
Reporter: stack
 Attachments: D2997.3.patch, D2997.4.patch, D2997.5.patch, 
 D2997.6.patch


 I saw this facility over in the accumulo code base.
 Currently we just try to sort out the mess when splits come in during an 
 online schema edit; somehow we figure we can work out all possible region 
 transition combinations and make the right call.
 We could try and narrow the number of combinations by taking out a zk table 
 lock when doing table operations.
 For example, on split or merge, we could take a read-only lock meaning the 
 table can't be disabled while these are running.
 We could then take a write lock if we want to ensure the table doesn't 
 change while the disabling or enabling process is happening.
 Shouldn't be too hard to add.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5494) Introduce a zk hosted table-wide read/write lock so only one table operation at a time

2012-05-05 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269113#comment-13269113
 ] 

Phabricator commented on HBASE-5494:


avf has commented on the revision "[jira] [HBASE-5494] [89-fb] Table-level 
locks for schema changing operations.".

  Hi Prakash.

  Thanks for the comments! I'll chat with you on Monday about the potential 
issue around handling connection loss (I was under the impression that 
RecoverableZookeeper handles that).

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWrapper.java:1336 
Isn't this handled by RecoverableZooKeeper?
  src/main/java/org/apache/hadoop/hbase/zookeeper/DistributedLock.java:112 This 
is to verify that we are only releasing a lock that we ourselves acquired, and 
that the code doesn't (accidentally) release a lock acquired by other 
threads/processes.
  src/main/java/org/apache/hadoop/hbase/zookeeper/DistributedLock.java:294 Good 
catch, will handle this.

REVISION DETAIL
  https://reviews.facebook.net/D2997

BRANCH
  table_level_ddl_locks


 Introduce a zk hosted table-wide read/write lock so only one table operation 
 at a time
 --

 Key: HBASE-5494
 URL: https://issues.apache.org/jira/browse/HBASE-5494
 Project: HBase
  Issue Type: Improvement
Reporter: stack
 Attachments: D2997.3.patch, D2997.4.patch, D2997.5.patch, 
 D2997.6.patch


 I saw this facility over in the accumulo code base.
 Currently we just try to sort out the mess when splits come in during an 
 online schema edit; somehow we figure we can work out all possible region 
 transition combinations and make the right call.
 We could try and narrow the number of combinations by taking out a zk table 
 lock when doing table operations.
 For example, on split or merge, we could take a read-only lock meaning the 
 table can't be disabled while these are running.
 We could then take a write lock if we want to ensure the table doesn't 
 change while the disabling or enabling process is happening.
 Shouldn't be too hard to add.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5104) Provide a reliable intra-row pagination mechanism

2012-05-05 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5104:
---

Attachment: D2799.5.patch

mbautin updated the revision "[jira] [HBASE-5104] Provide a reliable intra-row 
pagination mechanism".
Reviewers: madhuvaidya, lhofhansl, Kannan, tedyu, stack, todd, JIRA, jxcn01

  Rebasing and addressing review comments.

REVISION DETAIL
  https://reviews.facebook.net/D2799

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/client/Get.java
  src/main/java/org/apache/hadoop/hbase/client/Scan.java
  src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
  src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
  src/main/protobuf/Client.proto
  src/test/java/org/apache/hadoop/hbase/HTestConst.java
  src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java
  src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java


 Provide a reliable intra-row pagination mechanism
 -

 Key: HBASE-5104
 URL: https://issues.apache.org/jira/browse/HBASE-5104
 Project: HBase
  Issue Type: Bug
Reporter: Kannan Muthukkaruppan
Assignee: Madhuwanti Vaidya
 Attachments: D2799.1.patch, D2799.2.patch, D2799.3.patch, 
 D2799.4.patch, D2799.5.patch, 
 jira-HBASE-5104-Provide-a-reliable-intra-row-paginat-2012-04-16_12_39_42.patch,
  testFilterList.rb


 Addendum:
 Doing pagination (retrieving at most limit number of KVs at a particular 
 offset) is currently supported via the ColumnPaginationFilter. However, it 
 is not a very clean way of supporting pagination.  Some of the problems with 
 it are:
 * Normally, one would expect a query with (Filter(A) AND Filter(B)) to have 
 same results as (query with Filter(A)) INTERSECT (query with Filter(B)). This 
 is not the case for ColumnPaginationFilter as its internal state gets updated 
 depending on whether or not Filter(A) returns TRUE/FALSE for a particular 
 cell.
 * When this Filter is used in combination with other filters (e.g., doing AND 
 with another filter using FilterList), the behavior of the query depends on 
 the order of filters in the FilterList. This is not ideal.
 * ColumnPaginationFilter is a stateful filter which ends up counting multiple 
 versions of the cell as separate values even if another filter upstream or 
 the ScanQueryMatcher is going to reject the value for other reasons.
 Seems like we need a reliable way to do pagination. The particular use case 
 that prompted this JIRA is pagination within the same rowKey. For example, 
 for a given row key R, get columns with prefix P, starting at offset X (among 
 columns which have prefix P) and limit Y. Some possible fixes might be:
 1) enhance ColumnPrefixFilter to support another constructor which supports 
 limit/offset.
 2) Support pagination (limit/offset) at the Scan/Get API level (rather than 
 as a filter) [Like SQL]; a sketch of this approach follows this description.
 Original Post:
 Thanks Jiakai Liu for reporting this issue and doing the initial 
 investigation. Email from Jiakai below:
 Assuming that we have an index column family with the following entries:
 tag0:001:thread1
 ...
 tag1:001:thread1
 tag1:002:thread2
 ...
 tag1:010:thread10
 ...
 tag2:001:thread1
 tag2:005:thread5
 ...
 To get threads with tag1 in range [5, 10), I tried the following code:
 ColumnPrefixFilter filter1 = new 
 ColumnPrefixFilter(Bytes.toBytes("tag1"));
 ColumnPaginationFilter filter2 = new ColumnPaginationFilter(5 /* limit 
 */, 5 /* offset */);
 FilterList filters = new FilterList(Operator.MUST_PASS_ALL);
 filters.addFilter(filter1);
 filters.addFilter(filter2);
 Get get = new Get(USER);
 get.addFamily(COLUMN_FAMILY);
 get.setMaxVersions(1);
 get.setFilter(filters);
 Somehow it didn't work as expected. It returned the entries as if the filter1 
 were not set.
 Turns out the ColumnPrefixFilter returns SEEK_NEXT_USING_HINT in some cases. 
 The FilterList filter does not handle this return code properly (treating it as 
 INCLUDE).
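A hedged sketch of approach (2) above: pagination expressed at the Get level instead 
of through a FilterList. The two setter names mirror the patch under review (D2799) 
and should be read as proposed API, not a released one; the rest is standard client 
API.

{code}
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;

public class IntraRowPaginationSketch {
  static Result pageOfColumns(HTable table, byte[] row, byte[] family) throws Exception {
    Get get = new Get(row);
    get.addFamily(family);
    get.setMaxVersions(1);
    // Skip the first 5 columns of this (row, family) and return at most 5 more.
    get.setRowOffsetPerColumnFamily(5);    // offset per row/CF -- proposed API
    get.setMaxResultsPerColumnFamily(5);   // limit per row/CF  -- proposed API
    return table.get(get);
  }
}
{code}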

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5104) Provide a reliable intra-row pagination mechanism

2012-05-05 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269115#comment-13269115
 ] 

Phabricator commented on HBASE-5104:


mbautin has commented on the revision "[jira] [HBASE-5104] Provide a reliable 
intra-row pagination mechanism".

  Michael, Jimmy: thanks for reviewing! See my responses inline.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java:386 Done.

  src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java:387 Done.
  src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java:931 Done.
  src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java:932 Done.
  src/main/java/org/apache/hadoop/hbase/client/Scan.java:638 Done.
  src/main/java/org/apache/hadoop/hbase/client/Get.java:471 Done.
  src/main/protobuf/Client.proto:49 Done.
  src/main/protobuf/Client.proto:50 Done.
  src/main/protobuf/Client.proto:199 Done.
  src/main/protobuf/Client.proto:200 Done.
  src/test/java/org/apache/hadoop/hbase/HTestConst.java:18 This is not a test, 
this is a collection of constants used in tests.

  I tried to save some typing, because the intended usage pattern is 
HTestConst.DEFAULT_{TABLE,CF,ROW,etc}... However, if you feel strongly about 
it, I can rename it to HTestConstants.

  src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java:60 
Added region.close(). I am assuming that takes care of closing the HLog 
(correct me if I'm wrong).
  src/main/java/org/apache/hadoop/hbase/client/Get.java:212 Yes, this offset is 
only within a particular (row, CF) combination. It gets reset back to zero when 
we move to the next row/CF. Added this to javadoc.
  src/main/java/org/apache/hadoop/hbase/client/Result.java:177 Got rid of this 
method.

REVISION DETAIL
  https://reviews.facebook.net/D2799


 Provide a reliable intra-row pagination mechanism
 -

 Key: HBASE-5104
 URL: https://issues.apache.org/jira/browse/HBASE-5104
 Project: HBase
  Issue Type: Bug
Reporter: Kannan Muthukkaruppan
Assignee: Madhuwanti Vaidya
 Attachments: D2799.1.patch, D2799.2.patch, D2799.3.patch, 
 D2799.4.patch, D2799.5.patch, 
 jira-HBASE-5104-Provide-a-reliable-intra-row-paginat-2012-04-16_12_39_42.patch,
  testFilterList.rb


 Addendum:
 Doing pagination (retrieving at most limit number of KVs at a particular 
 offset) is currently supported via the ColumnPaginationFilter. However, it 
 is not a very clean way of supporting pagination.  Some of the problems with 
 it are:
 * Normally, one would expect a query with (Filter(A) AND Filter(B)) to have 
 same results as (query with Filter(A)) INTERSECT (query with Filter(B)). This 
 is not the case for ColumnPaginationFilter as its internal state gets updated 
 depending on whether or not Filter(A) returns TRUE/FALSE for a particular 
 cell.
 * When this Filter is used in combination with other filters (e.g., doing AND 
 with another filter using FilterList), the behavior of the query depends on 
 the order of filters in the FilterList. This is not ideal.
 * ColumnPaginationFilter is a stateful filter which ends up counting multiple 
 versions of the cell as separate values even if another filter upstream or 
 the ScanQueryMatcher is going to reject the value for other reasons.
 Seems like we need a reliable way to do pagination. The particular use case 
 that prompted this JIRA is pagination within the same rowKey. For example, 
 for a given row key R, get columns with prefix P, starting at offset X (among 
 columns which have prefix P) and limit Y. Some possible fixes might be:
 1) enhance ColumnPrefixFilter to support another constructor which supports 
 limit/offset.
 2) Support pagination (limit/offset) at the Scan/Get API level (rather than 
 as a filter) [Like SQL].
 Original Post:
 Thanks Jiakai Liu for reporting this issue and doing the initial 
 investigation. Email from Jiakai below:
 Assuming that we have an index column family with the following entries:
 tag0:001:thread1
 ...
 tag1:001:thread1
 tag1:002:thread2
 ...
 tag1:010:thread10
 ...
 tag2:001:thread1
 tag2:005:thread5
 ...
 To get threads with tag1 in range [5, 10), I tried the following code:
 ColumnPrefixFilter filter1 = new 
 ColumnPrefixFilter(Bytes.toBytes("tag1"));
 ColumnPaginationFilter filter2 = new ColumnPaginationFilter(5 /* limit 
 */, 5 /* offset */);
 FilterList filters = new FilterList(Operator.MUST_PASS_ALL);
 filters.addFilter(filter1);
 filters.addFilter(filter2);
 Get get = new Get(USER);
 get.addFamily(COLUMN_FAMILY);
 get.setMaxVersions(1);
 get.setFilter(filters);
 Somehow it didn't work as expected. It returned the entries as if the filter1 
 were not set.
 Turns out the ColumnPrefixFilter returns SEEK_NEXT_USING_HINT in some cases. 
 The FilterList filter does not handle this return code properly (treating it as 
 INCLUDE).

[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2012-05-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269116#comment-13269116
 ] 

stack commented on HBASE-5930:
--

I like the idea of the client saying whether to put it on the deferred flush queue or 
whether it's to be flushed immediately.

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Priority: Minor

 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5494) Introduce a zk hosted table-wide read/write lock so only one table operation at a time

2012-05-05 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269119#comment-13269119
 ] 

Phabricator commented on HBASE-5494:


tedyu has commented on the revision "[jira] [HBASE-5494] [89-fb] Table-level 
locks for schema changing operations.".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/zookeeper/DistributedLock.java:294 What 
action of the lock-owner is associated with a znode data change?
  I see znode creation and deletion, not data change.

REVISION DETAIL
  https://reviews.facebook.net/D2997

BRANCH
  table_level_ddl_locks


 Introduce a zk hosted table-wide read/write lock so only one table operation 
 at a time
 --

 Key: HBASE-5494
 URL: https://issues.apache.org/jira/browse/HBASE-5494
 Project: HBase
  Issue Type: Improvement
Reporter: stack
 Attachments: D2997.3.patch, D2997.4.patch, D2997.5.patch, 
 D2997.6.patch


 I saw this facility over in the accumulo code base.
 Currently we just try to sort out the mess when splits come in during an 
 online schema edit; somehow we figure we can work out all possible region 
 transition combinations and make the right call.
 We could try and narrow the number of combinations by taking out a zk table 
 lock when doing table operations.
 For example, on split or merge, we could take a read-only lock meaning the 
 table can't be disabled while these are running.
 We could then take a write lock if we want to ensure the table doesn't 
 change while the disabling or enabling process is happening.
 Shouldn't be too hard to add.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5104) Provide a reliable intra-row pagination mechanism

2012-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269122#comment-13269122
 ] 

Hadoop QA commented on HBASE-5104:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525758/D2799.5.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 7 new or modified tests.

+1 hadoop23.  The patch compiles against the hadoop 0.23.x profile.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.coprocessor.TestMasterObserver
  org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint
  
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1782//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1782//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1782//console

This message is automatically generated.

 Provide a reliable intra-row pagination mechanism
 -

 Key: HBASE-5104
 URL: https://issues.apache.org/jira/browse/HBASE-5104
 Project: HBase
  Issue Type: Bug
Reporter: Kannan Muthukkaruppan
Assignee: Madhuwanti Vaidya
 Attachments: D2799.1.patch, D2799.2.patch, D2799.3.patch, 
 D2799.4.patch, D2799.5.patch, 
 jira-HBASE-5104-Provide-a-reliable-intra-row-paginat-2012-04-16_12_39_42.patch,
  testFilterList.rb


 Addendum:
 Doing pagination (retrieving at most limit number of KVs at a particular 
 offset) is currently supported via the ColumnPaginationFilter. However, it 
 is not a very clean way of supporting pagination.  Some of the problems with 
 it are:
 * Normally, one would expect a query with (Filter(A) AND Filter(B)) to have 
 same results as (query with Filter(A)) INTERSECT (query with Filter(B)). This 
 is not the case for ColumnPaginationFilter as its internal state gets updated 
 depending on whether or not Filter(A) returns TRUE/FALSE for a particular 
 cell.
 * When this Filter is used in combination with other filters (e.g., doing AND 
 with another filter using FilterList), the behavior of the query depends on 
 the order of filters in the FilterList. This is not ideal.
 * ColumnPaginationFilter is a stateful filter which ends up counting multiple 
 versions of the cell as separate values even if another filter upstream or 
 the ScanQueryMatcher is going to reject the value for other reasons.
 Seems like we need a reliable way to do pagination. The particular use case 
 that prompted this JIRA is pagination within the same rowKey. For example, 
 for a given row key R, get columns with prefix P, starting at offset X (among 
 columns which have prefix P) and limit Y. Some possible fixes might be:
 1) enhance ColumnPrefixFilter to support another constructor which supports 
 limit/offset.
 2) Support pagination (limit/offset) at the Scan/Get API level (rather than 
 as a filter) [Like SQL].
 Original Post:
 Thanks Jiakai Liu for reporting this issue and doing the initial 
 investigation. Email from Jiakai below:
 Assuming that we have an index column family with the following entries:
 tag0:001:thread1
 ...
 tag1:001:thread1
 tag1:002:thread2
 ...
 tag1:010:thread10
 ...
 tag2:001:thread1
 tag2:005:thread5
 ...
 To get threads with tag1 in range [5, 10), I tried the following code:
 ColumnPrefixFilter filter1 = new 
 ColumnPrefixFilter(Bytes.toBytes("tag1"));
 ColumnPaginationFilter filter2 = new ColumnPaginationFilter(5 /* limit 
 */, 5 /* offset */);
 FilterList filters = new FilterList(Operator.MUST_PASS_ALL);
 filters.addFilter(filter1);
 filters.addFilter(filter2);
 Get get = new Get(USER);
 get.addFamily(COLUMN_FAMILY);
 get.setMaxVersions(1);
 get.setFilter(filters);
 Somehow it didn't work as expected. It returned the entries as if the filter1 
 were not set.
 Turns out the ColumnPrefixFilter returns SEEK_NEXT_USING_HINT in some cases. 
 The FilterList filter does not handle this return code properly (treating it as 
 INCLUDE).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 

[jira] [Updated] (HBASE-5945) Reduce buffer copies in IPC server response path

2012-05-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5945:
-

Priority: Critical  (was: Minor)

Making critical so we don't overlook this work.

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.96.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: buffer-copies.txt, even-fewer-copies.txt


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5945) Reduce buffer copies in IPC server response path

2012-05-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13269128#comment-13269128
 ] 

stack commented on HBASE-5945:
--

Looks great Todd.

Could we keep this DataOutputBuffer for reuse?

{code}
+  DataOutputBuffer buf = new DataOutputBuffer(size);
{code}
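As an assumption about what that reuse could look like (not the committed change), 
one option is to keep one DataOutputBuffer per handler thread and reset it per 
response instead of allocating a new one each time:

{code}
import org.apache.hadoop.io.DataOutputBuffer;

public class ResponseBufferPool {
  private static final ThreadLocal<DataOutputBuffer> BUF =
      new ThreadLocal<DataOutputBuffer>() {
        @Override protected DataOutputBuffer initialValue() {
          return new DataOutputBuffer(64 * 1024);   // illustrative initial size
        }
      };

  /** Returns a reusable, reset buffer for the current handler thread. */
  static DataOutputBuffer get() {
    DataOutputBuffer buf = BUF.get();
    buf.reset();   // clears the count but keeps the backing byte[] for reuse
    return buf;
  }
}
{code}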

 Reduce buffer copies in IPC server response path
 

 Key: HBASE-5945
 URL: https://issues.apache.org/jira/browse/HBASE-5945
 Project: HBase
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.96.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: buffer-copies.txt, even-fewer-copies.txt


 The new PB code is sloppy with buffers and makes several needless copies. 
 This increases GC time a lot. A few simple changes can cut this back down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira