[jira] [Commented] (HBASE-5261) Update HBase for Java 7

2012-07-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408592#comment-13408592
 ] 

Lars Hofhansl commented on HBASE-5261:
--

I found that in order to build HBase with OpenJDK7 I need to make this change:
{code}
--- pom.xml (revision 1358499)
+++ pom.xml (working copy)
@@ -395,6 +395,9 @@
 <target>${compileSource}</target>
 <showWarnings>true</showWarnings>
 <showDeprecation>false</showDeprecation>
+<compilerArguments>
+  <Xlint:-options/>
+</compilerArguments>
   </configuration>
 </plugin>
{code}

That suppressed the following warning (which maven fails to parse and hence 
errors out):
bq. warning: [options] bootstrap class path not set in conjunction with -source 
1.6

G1 is supposed to be viable from OpenJDK7u4 onwards.
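
For experimenting with G1 on 7u4 and later, the collector can be enabled through the usual HBase JVM-options mechanism. A minimal sketch, assuming the standard `HBASE_OPTS` variable from hbase-env.sh (any further tuning flags are left out as they would be guesses):

```shell
# hbase-env.sh style: enable G1 for HBase daemons (flag shown is the
# standard HotSpot switch; no extra tuning flags are assumed here).
export HBASE_OPTS="$HBASE_OPTS -XX:+UseG1GC"
echo "$HBASE_OPTS"
```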


 Update HBase for Java 7
 ---

 Key: HBASE-5261
 URL: https://issues.apache.org/jira/browse/HBASE-5261
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We need to make sure that HBase compiles and works with JDK 7. Once we verify 
 it is reasonably stable, we can explore utilizing the G1 garbage collector. 
 When all deployments are ready to move to JDK 7, we can start using new 
 language features, but in the transition period we will need to maintain a 
 codebase that compiles both with JDK 6 and JDK 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6325) [replication] Race in ReplicationSourceManager.init can initiate a failover even if the node is alive

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408598#comment-13408598
 ] 

stack commented on HBASE-6325:
--

+1

 [replication] Race in ReplicationSourceManager.init can initiate a failover 
 even if the node is alive
 -

 Key: HBASE-6325
 URL: https://issues.apache.org/jira/browse/HBASE-6325
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6, 0.92.1, 0.94.0
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.2

 Attachments: HBASE-6325-0.92-v2.patch, HBASE-6325-0.92.patch


 Yet another bug found during the leap second madness: it's possible to miss 
 the registration of new region servers, so that in 
 ReplicationSourceManager.init we start the failover of a live and replicating 
 region server. I don't think there's data loss, but the RS that's being 
 failed over will die on:
 {noformat}
 2012-07-01 06:25:15,604 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 sv4r23s48,10304,1341112194623: Writing replication status
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for 
 /hbase/replication/rs/sv4r23s48,10304,1341112194623/4/sv4r23s48%2C10304%2C1341112194623.1341112195369
 at 
 org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at 
 org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1246)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:372)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:655)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:697)
 at 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper.writeReplicationStatus(ReplicationZookeeper.java:470)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:154)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:607)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:368)
 {noformat}
 It seems to me that just refreshing {{otherRegionServers}} after getting the 
 list of {{currentReplicators}} would be enough to fix this.
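
 The suggested fix, refreshing the live-server list after reading the current 
 replicators, can be sketched with plain collections (class and method names 
 here are illustrative, not the actual HBase code):

```java
import java.util.HashSet;
import java.util.Set;

public class FailoverCandidates {
    // A replicator should only be failed over if it is NOT in the live set.
    // Refreshing liveServers *after* reading currentReplicators closes the
    // race window in which a freshly registered RS looks dead.
    static Set<String> deadReplicators(Set<String> currentReplicators,
                                       Set<String> liveServers) {
        Set<String> dead = new HashSet<>(currentReplicators);
        dead.removeAll(liveServers);
        return dead;
    }

    public static void main(String[] args) {
        Set<String> replicators = new HashSet<>();
        replicators.add("rs-a");
        replicators.add("rs-b");

        // Stale snapshot taken before rs-b registered: live rs-b would
        // wrongly be chosen for failover.
        Set<String> staleLive = new HashSet<>();
        staleLive.add("rs-a");
        System.out.println(deadReplicators(replicators, staleLive)); // [rs-b]

        // Refreshed list includes rs-b, so nothing is failed over.
        Set<String> freshLive = new HashSet<>(staleLive);
        freshLive.add("rs-b");
        System.out.println(deadReplicators(replicators, freshLive)); // []
    }
}
```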





[jira] [Commented] (HBASE-6060) Regions's in OPENING state from failed regionservers takes a long time to recover

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408601#comment-13408601
 ] 

stack commented on HBASE-6060:
--

v3 should be good.  I posted it up on RB.  Addresses Ted and Ram's issues.

 Regions's in OPENING state from failed regionservers takes a long time to 
 recover
 -

 Key: HBASE-6060
 URL: https://issues.apache.org/jira/browse/HBASE-6060
 Project: HBase
  Issue Type: Bug
  Components: master, regionserver
Reporter: Enis Soztutar
Assignee: rajeshbabu
 Fix For: 0.96.0, 0.94.1, 0.92.3

 Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
 6060-trunk_3.patch, 6060_alternative_suggestion.txt, 
 6060_suggestion2_based_off_v3.patch, 6060_suggestion_based_off_v3.patch, 
 6060_suggestion_toassign_rs_wentdown_beforerequest.patch, 
 HBASE-6060-92.patch, HBASE-6060-94.patch, HBASE-6060-trunk_4.patch, 
 HBASE-6060_trunk_5.patch


 We have seen a pattern in tests where regions are stuck in the OPENING state 
 for a very long time when the region server that is opening the region fails. 
 My understanding of the process: 
  
  - The master calls the RS to open the region. If the RS is offline, a new 
 plan is generated (a new RS is chosen). RegionState is set to PENDING_OPEN 
 (only in master memory; zk still shows OFFLINE). See 
 HRegionServer.openRegion(), HMaster.assign() 
  - The RegionServer starts opening the region and changes the state in the 
 znode. But that znode is not ephemeral. (see ZkAssign) 
  - The RS transitions the zk node from OFFLINE to OPENING. See 
 OpenRegionHandler.process() 
  - The RS then opens the region and changes the znode from OPENING to OPENED 
  - When the RS is killed between the OPENING and OPENED states, zk shows the 
 OPENING state, and the master just waits for the RS to change the region 
 state; but since the RS is down, that won't happen. 
  - There is an AssignmentManager.TimeoutMonitor, which guards exactly against 
 these kinds of conditions. It periodically checks (every 10 sec by default) 
 the regions in transition to see whether they timed out 
 (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 30 
 min, which explains what you and I are seeing. 
  - ServerShutdownHandler in the Master does not reassign regions in the 
 OPENING state, although it handles other states. 
 Lowering that threshold via the configuration is one option, but I still 
 think we can do better. 
 Will investigate more. 
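
 As an interim mitigation along the lines of "lowering that threshold", the 
 timeout could be reduced in hbase-site.xml. A sketch, assuming the property 
 name quoted above and that the value is in milliseconds (the chosen value is 
 purely illustrative):

{code}
<!-- hbase-site.xml: shrink the assignment timeout so the TimeoutMonitor
     retries stuck OPENING regions sooner (default 30 min = 1800000 ms). -->
<property>
  <name>hbase.master.assignment.timeoutmonitor.timeout</name>
  <value>180000</value> <!-- 3 minutes; illustrative, tune for your cluster -->
</property>
{code}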





[jira] [Updated] (HBASE-6339) Bulkload call to RS should begin holding write lock only after the file has been transferred

2012-07-07 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HBASE-6339:
---

Description: 
I noticed that right now, under a bulkLoadHFiles call to an RS, we grab the 
HRegion write lock as soon as we determine that it is a multi-family bulk load 
we'll be attempting. The file copy from the caller's source FS is done after 
holding the lock.

This doesn't seem right. For instance, we had a recent use-case where the 
cluster running the bulk load was a separate HDFS instance/cluster from the 
one that runs HBase, and transfers between these FSes can be slower than an 
intra-cluster transfer. Hence I think we should begin to hold the write lock 
only after we've got a successful destination-FS copy of the requested file, 
and thereby allow more write throughput.

Does this sound reasonable to do?

  was:
I noticed that right now, under a bulkLoadHFiles call to an RS, we grab the 
write lock as soon as we determine that it is a multi-family bulk load we'll be 
attempting. The file copy from the caller's source FS is done after holding the 
lock.

This doesn't seem right. For instance, we had a recent use-case where the 
cluster running the bulk load was a separate HDFS instance/cluster from the 
one that runs HBase, and transfers between these FSes can be slower than an 
intra-cluster transfer. Hence I think we should begin to hold the write lock 
only after we've got a successful destination-FS copy of the requested file, 
and thereby allow more write throughput.

Does this sound reasonable to do?


 Bulkload call to RS should begin holding write lock only after the file has 
 been transferred
 

 Key: HBASE-6339
 URL: https://issues.apache.org/jira/browse/HBASE-6339
 Project: HBase
  Issue Type: Improvement
  Components: client, regionserver
Affects Versions: 0.90.0
Reporter: Harsh J
Assignee: Harsh J

 I noticed that right now, under a bulkLoadHFiles call to an RS, we grab the 
 HRegion write lock as soon as we determine that it is a multi-family bulk 
 load we'll be attempting. The file copy from the caller's source FS is done 
 after holding the lock.
 This doesn't seem right. For instance, we had a recent use-case where the 
 cluster running the bulk load was a separate HDFS instance/cluster from the 
 one that runs HBase, and transfers between these FSes can be slower than an 
 intra-cluster transfer. Hence I think we should begin to hold the write lock 
 only after we've got a successful destination-FS copy of the requested file, 
 and thereby allow more write throughput. 
 Does this sound reasonable to do?
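
 The proposed reordering — copy first, lock second — can be sketched 
 abstractly. This is a simplified model of the ordering, not the actual 
 HRegion code (the lock and the recorded event names are stand-ins):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BulkLoadOrdering {
    static final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();

    // Records the order of operations so the ordering is observable.
    static List<String> load(Runnable slowCrossFsCopy) {
        List<String> events = new ArrayList<>();
        // 1. Do the potentially slow cross-FS copy WITHOUT the write lock,
        //    so concurrent writers are not blocked during the transfer.
        events.add("copy-start");
        slowCrossFsCopy.run();
        events.add("copy-done");
        // 2. Only now take the region write lock, for the (fast) atomic
        //    move of the already-local file into the store.
        regionLock.writeLock().lock();
        try {
            events.add("locked-move");
        } finally {
            regionLock.writeLock().unlock();
        }
        return events;
    }

    public static void main(String[] args) {
        // The slow copy happens strictly before the locked section.
        System.out.println(load(() -> { /* simulate a cross-FS copy */ }));
    }
}
```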





[jira] [Created] (HBASE-6350) Some logging improvements for RegionServer bulk load

2012-07-07 Thread Harsh J (JIRA)
Harsh J created HBASE-6350:
--

 Summary: Some logging improvements for RegionServer bulk load
 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
Reporter: Harsh J
Priority: Minor


The current logging in the bulk loading RPC call to a RegionServer lacks some 
info in certain cases. For instance, I recently noticed that an IOException 
may be thrown during the bulk load file transfer (copy) off of another FS 
while, at the same time, the client has already timed the socket out and 
thereby never receives the thrown exception remotely (HBase prints a 
ClosedChannelException for the IPC when it attempts to send the real message, 
and hence the real cause is lost).

Improvements around this kind of issue, wherein we first log the IOException 
at the RS before sending it, along with a few other wording improvements, are 
present in my patch.
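
The gist of the idea — log the IOException server-side before attempting to 
send it back, so the cause survives even if the client has already timed out — 
might look roughly like this (hypothetical names and a stand-in log; this is 
not the actual patch):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BulkLoadRpc {
    // Stand-in for the RS log; a real implementation would use the
    // regionserver's commons-logging Log instance.
    static final List<String> LOG = new ArrayList<>();

    static void bulkLoadHFile(boolean copyFails) throws IOException {
        try {
            if (copyFails) {
                throw new IOException("copy from source FS failed");
            }
        } catch (IOException e) {
            // Log locally FIRST: if the client socket is already closed,
            // the send fails with a ClosedChannelException and the real
            // cause would otherwise be lost.
            LOG.add("Bulk load failed: " + e.getMessage());
            throw e; // then propagate to the IPC layer
        }
    }

    public static void main(String[] args) {
        try {
            bulkLoadHFile(true);
        } catch (IOException expected) {
            // The client may never see this, but the RS log has the cause.
        }
        System.out.println(LOG);
    }
}
```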





[jira] [Updated] (HBASE-6350) Some logging improvements for RegionServer bulk loading

2012-07-07 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HBASE-6350:
---

Component/s: regionserver
Summary: Some logging improvements for RegionServer bulk loading  (was: 
Some logging improvements for RegionServer bulk load)

 Some logging improvements for RegionServer bulk loading
 ---

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that an IOException 
 may be thrown during the bulk load file transfer (copy) off of another FS 
 while, at the same time, the client has already timed the socket out and 
 thereby never receives the thrown exception remotely (HBase prints a 
 ClosedChannelException for the IPC when it attempts to send the real 
 message, and hence the real cause is lost).
 Improvements around this kind of issue, wherein we first log the IOException 
 at the RS before sending it, along with a few other wording improvements, 
 are present in my patch.





[jira] [Updated] (HBASE-6350) Some logging improvements for RegionServer bulk load

2012-07-07 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HBASE-6350:
---

 Assignee: Harsh J
Affects Version/s: 0.94.0
   Status: Patch Available  (was: Open)

 Some logging improvements for RegionServer bulk load
 

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that an IOException 
 may be thrown during the bulk load file transfer (copy) off of another FS 
 while, at the same time, the client has already timed the socket out and 
 thereby never receives the thrown exception remotely (HBase prints a 
 ClosedChannelException for the IPC when it attempts to send the real 
 message, and hence the real cause is lost).
 Improvements around this kind of issue, wherein we first log the IOException 
 at the RS before sending it, along with a few other wording improvements, 
 are present in my patch.





[jira] [Updated] (HBASE-6350) Some logging improvements for RegionServer bulk load

2012-07-07 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HBASE-6350:
---

Attachment: HBASE-6350.patch

 Some logging improvements for RegionServer bulk load
 

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that an IOException 
 may be thrown during the bulk load file transfer (copy) off of another FS 
 while, at the same time, the client has already timed the socket out and 
 thereby never receives the thrown exception remotely (HBase prints a 
 ClosedChannelException for the IPC when it attempts to send the real 
 message, and hence the real cause is lost).
 Improvements around this kind of issue, wherein we first log the IOException 
 at the RS before sending it, along with a few other wording improvements, 
 are present in my patch.





[jira] [Comment Edited] (HBASE-6349) HBase checkout not getting compiled

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408613#comment-13408613
 ] 

Zhihong Ted Yu edited comment on HBASE-6349 at 7/7/12 10:11 AM:


Correction to my suggestion above, the command should be:
{code}
mvn clean install -DskipTests
{code}

  was (Author: zhi...@ebaysf.com):
Correction to my suggestion above, the command should be:
{code}
mvn clean package -DskipTests
{code}
  
 HBase checkout not getting compiled
 ---

 Key: HBASE-6349
 URL: https://issues.apache.org/jira/browse/HBASE-6349
 Project: HBase
  Issue Type: Bug
Reporter: Varunkumar Manohar

 I am trying to compile the latest svn checkout of HBase source 
 code using Maven.
 This is the error I am facing 
 [ERROR] Failed to execute goal on project hbase-server: Could not resolve 
 dependencies for project org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT: 
 Could not find artifact org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT in 
 cloudbees netty (http://repository-netty.forge.cloudbees.com/snapshot/)





[jira] [Commented] (HBASE-6349) HBase checkout not getting compiled

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408613#comment-13408613
 ] 

Zhihong Ted Yu commented on HBASE-6349:
---

Correction to my suggestion above, the command should be:
{code}
mvn clean package -DskipTests
{code}

 HBase checkout not getting compiled
 ---

 Key: HBASE-6349
 URL: https://issues.apache.org/jira/browse/HBASE-6349
 Project: HBase
  Issue Type: Bug
Reporter: Varunkumar Manohar

 I am trying to compile the latest svn checkout of HBase source 
 code using Maven.
 This is the error I am facing 
 [ERROR] Failed to execute goal on project hbase-server: Could not resolve 
 dependencies for project org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT: 
 Could not find artifact org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT in 
 cloudbees netty (http://repository-netty.forge.cloudbees.com/snapshot/)





[jira] [Created] (HBASE-6351) IO impact reduction for compaction

2012-07-07 Thread Zhihong Ted Yu (JIRA)
Zhihong Ted Yu created HBASE-6351:
-

 Summary: IO impact reduction for compaction
 Key: HBASE-6351
 URL: https://issues.apache.org/jira/browse/HBASE-6351
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Ted Yu


Lucene 4.0.0-Alpha was recently released.  Mike McCandless, one of the Lucene 
developers, wrote a really nice post about new things in this version of 
Lucene.  The part that I think is interesting for HBase, and that HBase devs 
may want to look at (and borrow to use with compactions), is this:

Reducing merge IO impact 

Merging (consolidating many small segments into a single big one) is a very IO 
and CPU intensive operation which can easily interfere with ongoing searches. 
In 4.0.0 we now have two ways to reduce this impact:
* Rate-limit the IO caused by ongoing merging, by calling 
FSDirectory.setMaxMergeWriteMBPerSec. 
* Use the new NativeUnixDirectory which bypasses the OS's IO cache for 
all merge IO, by using direct IO. This ensures that a merge won't evict hot 
pages used by searches. (Note that there is also a native WindowsDirectory, but 
it does not yet use direct IO during merging... patches welcome!) 

Remember to also set swappiness to 0 on Linux if you want to maximize search 
responsiveness. 

More generally, the APIs that open an input or output file (Directory.openInput 
and Directory.createOutput) now take an IOContext describing what's being done 
(e.g., flush vs merge), so you can create a custom Directory that changes its 
behavior depending on the context. 
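
The rate-limiting idea is independent of Lucene and could apply to compaction 
IO in HBase too. A deterministic sketch of the throttle arithmetic (this is an 
illustration of the technique, not any real Lucene or HBase API):

```java
public class MergeIoThrottle {
    private final double maxBytesPerSec;

    MergeIoThrottle(double maxMBPerSec) {
        this.maxBytesPerSec = maxMBPerSec * 1024 * 1024;
    }

    // How long a writer must pause after writing `bytes` in `elapsedSec`
    // to keep the average rate at or below the cap (0 if already under).
    double requiredPauseSec(long bytes, double elapsedSec) {
        double minDuration = bytes / maxBytesPerSec;
        return Math.max(0.0, minDuration - elapsedSec);
    }

    public static void main(String[] args) {
        MergeIoThrottle t = new MergeIoThrottle(10); // cap at 10 MB/s
        // Wrote 20 MB in 1 s -> pause 1 more second to average 10 MB/s.
        System.out.println(t.requiredPauseSec(20L * 1024 * 1024, 1.0)); // 1.0
        // Wrote 5 MB in 1 s -> already under the cap, no pause needed.
        System.out.println(t.requiredPauseSec(5L * 1024 * 1024, 1.0)); // 0.0
    }
}
```

A real throttle would call this between write chunks and sleep for the 
returned duration; searches (or in HBase, reads and flushes) stay unthrottled.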





[jira] [Updated] (HBASE-6351) IO impact reduction for compaction

2012-07-07 Thread Zhihong Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Ted Yu updated HBASE-6351:
--

Description: 
The following came from Otis:

Lucene 4.0.0-Alpha was recently released.  Mike McCandless, one of the Lucene 
developers, wrote a really nice post about new things in this version of 
Lucene.  The part that I think is interesting for HBase, and that HBase devs 
may want to look at (and borrow to use with compactions), is this:

Reducing merge IO impact 

Merging (consolidating many small segments into a single big one) is a very IO 
and CPU intensive operation which can easily interfere with ongoing searches. 
In 4.0.0 we now have two ways to reduce this impact:
* Rate-limit the IO caused by ongoing merging, by calling 
FSDirectory.setMaxMergeWriteMBPerSec. 
* Use the new NativeUnixDirectory which bypasses the OS's IO cache for 
all merge IO, by using direct IO. This ensures that a merge won't evict hot 
pages used by searches. (Note that there is also a native WindowsDirectory, but 
it does not yet use direct IO during merging... patches welcome!) 

Remember to also set swappiness to 0 on Linux if you want to maximize search 
responsiveness. 

More generally, the APIs that open an input or output file (Directory.openInput 
and Directory.createOutput) now take an IOContext describing what's being done 
(e.g., flush vs merge), so you can create a custom Directory that changes its 
behavior depending on the context. 

  was:
Lucene 4.0.0-Alpha was recently released.  Mike McCandless, one of the Lucene 
developers, wrote a really nice post about new things in this version of 
Lucene.  The part that I think is interesting for HBase, and that HBase devs 
may want to look at (and borrow to use with compactions), is this:

Reducing merge IO impact 

Merging (consolidating many small segments into a single big one) is a very IO 
and CPU intensive operation which can easily interfere with ongoing searches. 
In 4.0.0 we now have two ways to reduce this impact:
* Rate-limit the IO caused by ongoing merging, by calling 
FSDirectory.setMaxMergeWriteMBPerSec. 
* Use the new NativeUnixDirectory which bypasses the OS's IO cache for 
all merge IO, by using direct IO. This ensures that a merge won't evict hot 
pages used by searches. (Note that there is also a native WindowsDirectory, but 
it does not yet use direct IO during merging... patches welcome!) 

Remember to also set swappiness to 0 on Linux if you want to maximize search 
responsiveness. 

More generally, the APIs that open an input or output file (Directory.openInput 
and Directory.createOutput) now take an IOContext describing what's being done 
(e.g., flush vs merge), so you can create a custom Directory that changes its 
behavior depending on the context. 


 IO impact reduction for compaction
 --

 Key: HBASE-6351
 URL: https://issues.apache.org/jira/browse/HBASE-6351
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Ted Yu

 The following came from Otis:
 Lucene 4.0.0-Alpha was recently released.  Mike McCandless, one of the Lucene 
 developers, wrote a really nice post about new things in this version of 
 Lucene.  The part that I think is interesting for HBase, and that HBase devs 
 may want to look at (and borrow to use with compactions), is this:
 Reducing merge IO impact 
 Merging (consolidating many small segments into a single big one) is a very 
 IO and CPU intensive operation which can easily interfere with ongoing 
 searches. In 4.0.0 we now have two ways to reduce this impact:
 * Rate-limit the IO caused by ongoing merging, by calling 
 FSDirectory.setMaxMergeWriteMBPerSec. 
 * Use the new NativeUnixDirectory which bypasses the OS's IO cache 
 for all merge IO, by using direct IO. This ensures that a merge won't evict 
 hot pages used by searches. (Note that there is also a native 
 WindowsDirectory, but it does not yet use direct IO during merging... patches 
 welcome!) 
 Remember to also set swappiness to 0 on Linux if you want to maximize search 
 responsiveness. 
 More generally, the APIs that open an input or output file 
 (Directory.openInput and Directory.createOutput) now take an IOContext 
 describing what's being done (e.g., flush vs merge), so you can create a 
 custom Directory that changes its behavior depending on the context. 





[jira] [Commented] (HBASE-4050) Update HBase metrics framework to metrics2 framework

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408615#comment-13408615
 ] 

Zhihong Ted Yu commented on HBASE-4050:
---

The reason Hadoop QA didn't get back is the OutOfMemoryError hit while 
compiling against hadoop 2.0:
{code}
[INFO] Compiling 765 source files to 
/Users/zhihyu/trunk-hbase/hbase-server/target/classes
[INFO] 
[INFO] 
[INFO] Skipping HBase - Server
[INFO] This project has been banned from the build due to previous failures.
[INFO] 
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 12:10.921s
[INFO] Finished at: Sat Jul 07 03:25:44 PDT 2012
[INFO] Final Memory: 35M/123M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) 
on project hbase-server: Fatal error compiling: Error while executing the 
compiler. InvocationTargetException: Java heap space - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) 
on project hbase-server: Fatal error compiling
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: Fatal error compiling
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:498)
at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: org.codehaus.plexus.compiler.CompilerException: Error while 
executing the compiler.
at 
org.codehaus.plexus.compiler.javac.JavacCompiler.compileInProcess(JavacCompiler.java:434)
at 
org.codehaus.plexus.compiler.javac.JavacCompiler.compile(JavacCompiler.java:141)
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:493)
... 22 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.codehaus.plexus.compiler.javac.JavacCompiler.compileInProcess(JavacCompiler.java:420)
... 24 more
Caused by: java.lang.OutOfMemoryError: Java heap space
{code}
Here is the command I used:
{code}
mvn clean test help:active-profiles -X -DskipTests -Dhadoop.profile=2.0
{code}
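
A common remedy for this in-process javac heap error is to give Maven's JVM 
more memory before rerunning; the heap size below is illustrative, not a 
recommendation from this thread:

```shell
# Raise the heap for the Maven JVM (the compiler runs in-process here,
# so MAVEN_OPTS governs the memory available to javac).
export MAVEN_OPTS="-Xmx1024m"
echo "$MAVEN_OPTS"
# then rerun:
#   mvn clean test help:active-profiles -X -DskipTests -Dhadoop.profile=2.0
```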

 Update HBase metrics framework to 

[jira] [Commented] (HBASE-6350) Some logging improvements for RegionServer bulk loading

2012-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408616#comment-13408616
 ] 

Hadoop QA commented on HBASE-6350:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12535512/HBASE-6350.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The applied patch generated 5 javac compiler warnings (more than 
the trunk's current 4 warnings).

-1 findbugs.  The patch appears to introduce 7 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestZooKeeper

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2342//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2342//console

This message is automatically generated.

 Some logging improvements for RegionServer bulk loading
 ---

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that an IOException 
 may be thrown during a bulk load file transfer (copy) off of another FS while 
 the client has already timed the socket out, and thereby never receives the 
 thrown Exception remotely (HBase prints a ClosedChannelException for the IPC 
 when it attempts to send the real message, and hence the real cause is lost).
 My patch first logs the IOException at the RS before sending it, and also 
 includes a few other wording improvements.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6307) Fix hbase hadoop 2.0/0.23 tests

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408618#comment-13408618
 ] 

stack commented on HBASE-6307:
--

We should raise it on the dev list unless there is a good reason for keeping up 
0.22, 0.23, 0.20, 0.21...

 Fix hbase hadoop 2.0/0.23 tests
 ---

 Key: HBASE-6307
 URL: https://issues.apache.org/jira/browse/HBASE-6307
 Project: HBase
  Issue Type: Umbrella
Reporter: Jonathan Hsieh
Priority: Critical
 Fix For: 0.96.0, 0.94.1


 This is an umbrella issue for fixing unit tests and hbase builds form 0.92+ 
 on top of hadoop 0.23 (currently 0.92/0.94) and hadoop 2.0.x (trunk/0.96).  
 Once these are up and passing properly, we'll close out the umbrella issue by 
 adding hbase-trunk-on-hadoop-2 build to the hadoopqa bot.





[jira] [Commented] (HBASE-6220) PersistentMetricsTimeVaryingRate gets used for non-time-based metrics

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408619#comment-13408619
 ] 

stack commented on HBASE-6220:
--

@Paul An easy test is to browse to /jmx on a regionserver and look for your new 
metrics there.  Thanks for the contribution.

 PersistentMetricsTimeVaryingRate gets used for non-time-based metrics
 -

 Key: HBASE-6220
 URL: https://issues.apache.org/jira/browse/HBASE-6220
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.96.0
Reporter: David S. Wang
Assignee: Paul Cavallaro
Priority: Minor
  Labels: noob
 Attachments: ServerMetrics_HBASE_6220.patch


 PersistentMetricsTimeVaryingRate gets used for metrics that are not 
 time-based, leading to confusing names such as avg_time for compaction 
 size, etc.  You have to read the code in order to understand that this is 
 actually referring to bytes, not seconds.





[jira] [Commented] (HBASE-6205) Support an option to keep data of dropped table for some time

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408620#comment-13408620
 ] 

stack commented on HBASE-6205:
--

bq. Unfortunately, it happened in our environment because one user made a 
mistake between the production cluster and the testing cluster.

Would a better approach be to give the production and test environments 
different access permissions?

If not, and the tester has access to both, IMO more safeguards will never fully 
protect against this kind of error.

 Support an option to keep data of dropped table for some time
 -

 Key: HBASE-6205
 URL: https://issues.apache.org/jira/browse/HBASE-6205
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.94.0, 0.96.0
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-6205.patch, HBASE-6205v2.patch, 
 HBASE-6205v3.patch, HBASE-6205v4.patch, HBASE-6205v5.patch


 A user may drop a table accidentally because of erroneous code or other 
 reasons.
 Unfortunately, it happened in our environment because one user made a mistake 
 between the production cluster and the testing cluster.
 So I just give a suggestion: do we need to support an option to keep the data 
 of a dropped table for some time, e.g. 1 day?
 In the patch:
 We make a new dir named .trashtables in the root dir.
 In the DeleteTableHandler, we move the files in a dropped table's dir to the 
 trash table dir instead of deleting them directly.
 A new class, TrashCleaner, periodically cleans up dropped tables once their 
 keep time has expired.
 The default keep time for dropped tables is 1 day, and the check period is 1 hour.
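The retention check described above could look roughly like this (a minimal sketch under the stated defaults of a 1-day keep time; the class and method names `TrashCleanerSketch` and `isExpired` are illustrative, not the patch's actual code):

```java
// Illustrative sketch of the retention check a TrashCleaner might run on each
// periodic pass over .trashtables; not the actual HBASE-6205 patch code.
public class TrashCleanerSketch {
    // Default keep time: 1 day, per the proposal above.
    static final long KEEP_MILLIS = 24L * 60 * 60 * 1000;

    /** A trashed table becomes eligible for deletion once the time since it
     *  was moved to trash exceeds the retention window. */
    static boolean isExpired(long trashedAtMillis, long nowMillis) {
        return nowMillis - trashedAtMillis > KEEP_MILLIS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Trashed two days ago: past the 1-day window, so eligible for cleanup.
        System.out.println(isExpired(now - 2 * KEEP_MILLIS, now));
        // Just trashed: still within the window, so kept.
        System.out.println(isExpired(now, now));
    }
}
```

The cleaner would run this predicate on each trashed table's move timestamp every check period (1 hour by default) and delete only the expired ones.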





[jira] [Created] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-6352:
--

 Summary: Add copy method in Bytes
 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Reporter: Jean-Marc Spaggiari
Priority: Minor


Having a copy method in Bytes might be nice to reduce client code size and 
improve readability.
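A minimal sketch of what such a helper could look like (hypothetical; the signature eventually added to org.apache.hadoop.hbase.util.Bytes may well differ):

```java
// Hypothetical sketch of a Bytes.copy helper; not the actual HBase API.
public class BytesCopySketch {
    /** Returns a new array containing a copy of the given bytes. */
    public static byte[] copy(byte[] src) {
        if (src == null) {
            return null;
        }
        byte[] result = new byte[src.length];
        System.arraycopy(src, 0, result, 0, src.length);
        return result;
    }

    /** Copies the sub-range [offset, offset + length) into a new array. */
    public static byte[] copy(byte[] src, int offset, int length) {
        byte[] result = new byte[length];
        System.arraycopy(src, offset, result, 0, length);
        return result;
    }

    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = copy(a);
        b[0] = 9;                  // mutating the copy...
        System.out.println(a[0]);  // ...leaves the original intact: prints 1
    }
}
```

Client code then gets the intent ("give me an independent copy") in one call instead of an explicit array allocation plus System.arraycopy at every call site.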





[jira] [Updated] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-6352:
---

Attachment: HBASE_JIRA_6352.patch

Patch to add a copy method into the Bytes class

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Updated] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-6352:
---

Affects Version/s: 0.94.0
   Status: Patch Available  (was: Open)

Patch to add a copy method into the Bytes class

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408629#comment-13408629
 ] 

Hadoop QA commented on HBASE-6352:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12535518/HBASE_JIRA_6352.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2343//console


 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408631#comment-13408631
 ] 

stack commented on HBASE-6352:
--

Patch looks good.  The formatting is off.  No tabs in hbase codebase.  We use 
two spaces instead.  Follow also the spacing you see in the rest of the code 
base: i.e. a space between ')' and the opening '{'.  Good stuff Jean-Marc.

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Updated] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-6352:
---

Attachment: HBASE_JIRA_6352_v2.patch

Updated version based on Stack's comments.

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408641#comment-13408641
 ] 

Hadoop QA commented on HBASE-6352:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12535522/HBASE_JIRA_6352_v2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2344//console


 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Created] (HBASE-6353) Snapshots shell

2012-07-07 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-6353:
--

 Summary: Snapshots shell
 Key: HBASE-6353
 URL: https://issues.apache.org/jira/browse/HBASE-6353
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Matteo Bertozzi


h6. hbase shell with snapshot commands

* snapshot <snapshot name> <table name>
** Take a snapshot of the specified table with the specified name
* restore_snapshot <snapshot name>
** Restore the specified snapshot on the original table
* mount_snapshot <snapshot name> <table name> [readonly]
** Load the snapshot data as the specified table (optional readonly flag)
* list_snapshots [filter]
** Show a list of snapshots
* delete_snapshot <snapshot name>
** Remove the specified snapshot
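A session using the proposed commands might look like the following (a hypothetical transcript; these commands exist only as this issue's proposal, and names/output are illustrative):

{code}
hbase> snapshot 'snap1', 'mytable'
hbase> list_snapshots
snap1
hbase> mount_snapshot 'snap1', 'mytable_backup', 'readonly'
hbase> restore_snapshot 'snap1'
hbase> delete_snapshot 'snap1'
{code}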

h6. Restore Table
Given a snapshot name, restore overwrites the original table with the snapshot 
content.
Before restoring, a new snapshot of the table is taken, just to avoid bad 
situations.
(If the table is not disabled we can keep serving reads.)

This allows a full and quick rollback to a previous snapshot.

h6. Mount Table (Aka Clone Table)
Given a snapshot name, a new table is created with the content of the 
specified snapshot.

This operation allows:
 * Keeping an old version of the table in parallel with the current one.
 ** Look at the snapshot side-by-side with the current table before deciding 
whether to roll back or not.
 * Restoring only individual items (when only some small range of data was 
lost from the current table).
 ** An MR job that scans the cloned table and updates the data in the original 
one (partial restore of the data).
 * If the table is not marked as read-only:
 ** Adding/removing data in this table without affecting the original one or 
the snapshot.

h6. Open points
 * Add a snapshot type option to the take-snapshot command (global, timestamp)?
 * Keep restore separate from mount?





[jira] [Updated] (HBASE-6353) Snapshots shell

2012-07-07 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-6353:
---

Attachment: HBASE-6353-v0.patch

 Snapshots shell
 ---

 Key: HBASE-6353
 URL: https://issues.apache.org/jira/browse/HBASE-6353
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Matteo Bertozzi
 Attachments: HBASE-6353-v0.patch


 h6. hbase shell with snapshot commands
 * snapshot <snapshot name> <table name>
 ** Take a snapshot of the specified table with the specified name
 * restore_snapshot <snapshot name>
 ** Restore the specified snapshot on the original table
 * mount_snapshot <snapshot name> <table name> [readonly]
 ** Load the snapshot data as the specified table (optional readonly flag)
 * list_snapshots [filter]
 ** Show a list of snapshots
 * delete_snapshot <snapshot name>
 ** Remove the specified snapshot
 h6. Restore Table
 Given a snapshot name, restore overwrites the original table with the snapshot 
 content.
 Before restoring, a new snapshot of the table is taken, just to avoid bad 
 situations.
 (If the table is not disabled we can keep serving reads.)
 This allows a full and quick rollback to a previous snapshot.
 h6. Mount Table (Aka Clone Table)
 Given a snapshot name, a new table is created with the content of the 
 specified snapshot.
 This operation allows:
  * Keeping an old version of the table in parallel with the current one.
  ** Look at the snapshot side-by-side with the current table before deciding 
 whether to roll back or not.
  * Restoring only individual items (when only some small range of data was 
 lost from the current table).
  ** An MR job that scans the cloned table and updates the data in the 
 original one (partial restore of the data).
  * If the table is not marked as read-only:
  ** Adding/removing data in this table without affecting the original one or 
 the snapshot.
 h6. Open points
  * Add a snapshot type option to the take-snapshot command (global, timestamp)?
  * Keep restore separate from mount?





[jira] [Assigned] (HBASE-6353) Snapshots shell

2012-07-07 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi reassigned HBASE-6353:
--

Assignee: Matteo Bertozzi

 Snapshots shell
 ---

 Key: HBASE-6353
 URL: https://issues.apache.org/jira/browse/HBASE-6353
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: HBASE-6353-v0.patch


 h6. hbase shell with snapshot commands
 * snapshot <snapshot name> <table name>
 ** Take a snapshot of the specified table with the specified name
 * restore_snapshot <snapshot name>
 ** Restore the specified snapshot on the original table
 * mount_snapshot <snapshot name> <table name> [readonly]
 ** Load the snapshot data as the specified table (optional readonly flag)
 * list_snapshots [filter]
 ** Show a list of snapshots
 * delete_snapshot <snapshot name>
 ** Remove the specified snapshot
 h6. Restore Table
 Given a snapshot name, restore overwrites the original table with the snapshot 
 content.
 Before restoring, a new snapshot of the table is taken, just to avoid bad 
 situations.
 (If the table is not disabled we can keep serving reads.)
 This allows a full and quick rollback to a previous snapshot.
 h6. Mount Table (Aka Clone Table)
 Given a snapshot name, a new table is created with the content of the 
 specified snapshot.
 This operation allows:
  * Keeping an old version of the table in parallel with the current one.
  ** Look at the snapshot side-by-side with the current table before deciding 
 whether to roll back or not.
  * Restoring only individual items (when only some small range of data was 
 lost from the current table).
  ** An MR job that scans the cloned table and updates the data in the 
 original one (partial restore of the data).
  * If the table is not marked as read-only:
  ** Adding/removing data in this table without affecting the original one or 
 the snapshot.
 h6. Open points
  * Add a snapshot type option to the take-snapshot command (global, timestamp)?
  * Keep restore separate from mount?





[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-07-07 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408646#comment-13408646
 ] 

Matteo Bertozzi commented on HBASE-6055:


@Jesse While working on HBASE-6353, I've also switched to using protobuf 
(HMasterInterface was removed in HBASE-6039).
Maybe you can use it when rebasing on trunk (HBaseAdmin, MasterAdminProtocol, ...).

 Snapshots in HBase 0.96
 ---

 Key: HBASE-6055
 URL: https://issues.apache.org/jira/browse/HBASE-6055
 Project: HBase
  Issue Type: New Feature
  Components: client, master, regionserver, zookeeper
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.96.0

 Attachments: Snapshots in HBase.docx


 Continuation of HBASE-50 for the current trunk. Since the implementation has 
 drastically changed, opening as a new ticket.





[jira] [Commented] (HBASE-6350) Some logging improvements for RegionServer bulk loading

2012-07-07 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408650#comment-13408650
 ] 

Harsh J commented on HBASE-6350:


{quote}-1 tests included. The patch doesn't appear to include any new or 
modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.{quote}

I started an RS and did a remote FS bulk load call to verify some of the 
logging changes.

bq. -1 javac. The applied patch generated 5 javac compiler warnings (more than 
the trunk's current 4 warnings).

I don't see how this came to be. I've merely added a few already existing 
objects here and there and changed a few string messages. I am not sure this 
warning is caused by my patch.

bq. -1 findbugs. The patch appears to introduce 7 new Findbugs (version 1.3.9) 
warnings.

No, it does not. At least I don't see how the warnings apply to my changes.

bq. -1 core tests. The patch failed these unit tests: 
org.apache.hadoop.hbase.TestZooKeeper

This doesn't seem to be caused by my trivial patch either.

 Some logging improvements for RegionServer bulk loading
 ---

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that an IOException 
 may be thrown during a bulk load file transfer (copy) off of another FS while 
 the client has already timed the socket out, and thereby never receives the 
 thrown Exception remotely (HBase prints a ClosedChannelException for the IPC 
 when it attempts to send the real message, and hence the real cause is lost).
 My patch first logs the IOException at the RS before sending it, and also 
 includes a few other wording improvements.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408661#comment-13408661
 ] 

Zhihong Ted Yu commented on HBASE-6352:
---

@Jean-Marc:
Can you include a small example of how the copy method reduces client code size?

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408664#comment-13408664
 ] 

Jean-Marc Spaggiari commented on HBASE-6352:


[~zhi...@ebaysf.com] Here is a simple example:
{code}
byte[] endKey = Bytes.copy(startKey);
endKey[endKey.length - 1]++;
Scan scan = new Scan(startKey, endKey);
{code}
I know that if the last byte is 255 this will cause an issue, but it's just for 
the example. Also, I will soon provide another suggestion for a method that 
increases the last byte of the array safely. It's useful when working with 
scans (I think).
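The "increase the last byte safely" idea mentioned above could be sketched as follows, handling the 255 (0xFF) carry case (illustrative only; this is not an existing Bytes method, and the class name is hypothetical):

```java
// Sketch of computing the "next" byte array for use as a scan stop key,
// handling the 0xFF carry case; illustrative, not an actual HBase API.
public class IncrementSketch {
    /** Returns a copy of key incremented by one, treating it as an unsigned
     *  big-endian number: 0xFF bytes roll over to 0x00 and carry left.
     *  Returns null on overflow (every byte was 0xFF). */
    static byte[] increment(byte[] key) {
        byte[] out = java.util.Arrays.copyOf(key, key.length);
        for (int i = out.length - 1; i >= 0; i--) {
            if (out[i] != (byte) 0xFF) {
                out[i]++;       // no carry needed: done
                return out;
            }
            out[i] = 0;         // 0xFF rolls over; carry into the byte to the left
        }
        return null;            // all bytes were 0xFF: no same-length successor
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(increment(new byte[]{1, 2, 3})));
        System.out.println(java.util.Arrays.toString(increment(new byte[]{1, (byte) 0xFF})));
    }
}
```

With such a helper, the endKey in the scan example above could be derived without the 255 edge-case bug.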

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408672#comment-13408672
 ] 

Zhihong Ted Yu commented on HBASE-6352:
---

I wasn't able to find the above code snippet under src/main or src/test.
Can you point me to the file?

It would be nice if you used the copy method to simplify that code.

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-4050) Update HBase metrics framework to metrics2 framework

2012-07-07 Thread Alex Baranau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408675#comment-13408675
 ] 

Alex Baranau commented on HBASE-4050:
-

The problem with building against hadoop 2.0 (and the same goes for hadoop 3.0, 
I believe) is that the classes in the metrics2 package were renamed. So I got 
not an OOME but compilation errors like these (not all shown):

{noformat}
[ERROR] 
/Users/alex/shared/hbase-trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsV2.java:[25,37]
 cannot find symbol
[ERROR] symbol  : class MetricMutableCounterInt
[ERROR] location: package org.apache.hadoop.metrics2.lib
[ERROR] 
[ERROR] 
/Users/alex/shared/hbase-trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsV2.java:[27,40]
 cannot find symbol
[ERROR] symbol  : class JvmMetricsSource
[ERROR] location: package org.apache.hadoop.metrics2.source
{noformat}

In hadoop 2.0+ these classes were renamed (and not only these two):
MetricMutableCounterInt -> MutableCounterInt
JvmMetricsSource -> JvmMetrics

How soon do you think HBase can drop the dependency on hadoop 1.0?
Any suggestions about this situation?
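One common way to bridge such renames while supporting several Hadoop lines is a reflection-based lookup that tries each candidate class name in turn (a generic sketch, not anything HBase actually does here; the two Hadoop metrics2 class names are taken from the comment above):

```java
// Generic sketch: resolve the first loadable class from a list of candidate
// names, e.g. a metrics2 class renamed between Hadoop versions.
public class ClassShimSketch {
    /** Returns the first candidate class that can be loaded, or null if none can. */
    static Class<?> firstAvailable(String... candidates) {
        for (String name : candidates) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException e) {
                // not on this classpath: try the next candidate name
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // With Hadoop on the classpath one might probe, e.g.:
        //   firstAvailable("org.apache.hadoop.metrics2.lib.MutableCounterInt",
        //                  "org.apache.hadoop.metrics2.lib.MetricMutableCounterInt");
        // Demonstrated here with JDK classes so the sketch is self-contained:
        System.out.println(firstAvailable("no.such.Class", "java.util.ArrayList"));
    }
}
```

The returned Class would then be instantiated reflectively, keeping one codebase compiling against both Hadoop 1.x and 2.x at the cost of losing compile-time checking.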

 Update HBase metrics framework to metrics2 framework
 

 Key: HBASE-4050
 URL: https://issues.apache.org/jira/browse/HBASE-4050
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.90.4
 Environment: Java 6
Reporter: Eric Yang
Assignee: Alex Baranau
Priority: Critical
 Fix For: 0.96.0

 Attachments: 4050-metrics-v2.patch, HBASE-4050.patch


 Metrics Framework has been marked deprecated in Hadoop 0.20.203+ and 0.22+, 
 and it might get removed in future Hadoop release.  Hence, HBase needs to 
 revise the dependency of MetricsContext to use Metrics2 framework.





[jira] [Commented] (HBASE-4050) Update HBase metrics framework to metrics2 framework

2012-07-07 Thread Alex Baranau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408679#comment-13408679
 ] 

Alex Baranau commented on HBASE-4050:
-

bq. We can introduce metrics2 package and put MasterMetrics.java there, what do 
you think ?

I had the same thought, but I didn't want HMaster to use two different classes with 
the same name; the code would look a bit ugly.
What I think is better than a V2 suffix is to call those metrics classes 
HMasterMetrics and HRegionServerMetrics, while the old ones stay without the H 
prefix. Those names don't seem that ugly to me.

bq. I assume, once complete, the Metrics2 metrics would substitute old metrics

I believe so. Though removing the older metrics *very soon* might not be an easy 
decision: I think there are home-grown systems/scripts for cluster monitoring out 
there which rely on them. So we may want to be polite, keep both in the first 
release that includes metrics2, and say that the old ones will be removed in the 
future. I think this needs broader discussion.





[jira] [Commented] (HBASE-4050) Update HBase metrics framework to metrics2 framework

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408680#comment-13408680
 ] 

Zhihong Ted Yu commented on HBASE-4050:
---

bq. How soon do you think HBase can leave off dependency on hadoop 1.0?
hadoop 1.0 is considered the stable release for the near future. We have to 
accommodate both hadoop 1.0 and hadoop 2.0.
Some form of shim would be needed.
Take a look at 
src/main/java/org/apache/hadoop/hbase/util/ShutdownHookManager.java, which I 
introduced in HBASE-5963.

I will send out a poll on dev@ list for when old metrics should be removed.
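A shim of the kind Ted mentions could be sketched with plain reflection; this is only an illustration, not the HBASE-5963 code. The two hadoop class names are the ones from this thread, and the `java.lang.Integer` fallback is an assumption added just so the sketch runs without hadoop on the classpath:

```java
// Hypothetical sketch of a class-name shim for the renamed metrics2 classes:
// try the hadoop 1.0 name first, then the hadoop 2.0+ rename.
public final class MetricsShimSketch {
    private MetricsShimSketch() {}

    /** Returns the first loadable class among the candidate names. */
    static Class<?> resolve(String... candidates) {
        for (String name : candidates) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException e) {
                // not on this hadoop version's classpath; try the next name
            }
        }
        throw new IllegalStateException("no candidate class found");
    }

    public static void main(String[] args) {
        Class<?> counter = resolve(
            "org.apache.hadoop.metrics2.lib.MetricMutableCounterInt", // hadoop 1.0
            "org.apache.hadoop.metrics2.lib.MutableCounterInt",       // hadoop 2.0+
            "java.lang.Integer"); // fallback so the sketch runs without hadoop
        System.out.println("resolved: " + counter.getName());
    }
}
```

The real shim would then wrap the resolved class behind a small HBase-side interface so the rest of the code never names the hadoop class directly.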





[jira] [Commented] (HBASE-3855) Performance degradation of memstore because reseek is linear

2012-07-07 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408687#comment-13408687
 ] 

Jonathan Hsieh commented on HBASE-3855:
---

Ouch! 

Yes, I think we'd like to get HBASE-4195 into 0.90.7 as well -- not for 
performance but for correctness reasons.  

 Performance degradation of memstore because reseek is linear
 

 Key: HBASE-3855
 URL: https://issues.apache.org/jira/browse/HBASE-3855
 Project: HBase
  Issue Type: Improvement
Reporter: dhruba borthakur
Priority: Critical
 Fix For: 0.90.4

 Attachments: memstoreReseek.txt, memstoreReseek2.txt


 The scanner use reseek to find the next row (or next column) as part of a 
 scan. The reseek code iterates over a Set to position itself at the right 
 place. If there are many thousands of kvs that need to be skipped over, then 
 the time-cost is very high. In this case, a seek would be far lesser in cost 
 than a reseek.
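The cost difference can be illustrated on a skip-list-backed set similar to the one underlying the memstore (a standalone sketch, not HBase code): advancing an iterator past many entries is linear in the number skipped, while a tailSet()-style seek repositions in logarithmic time.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentSkipListSet;

// Illustrative comparison of "reseek" (linear walk) vs "seek" (direct
// repositioning) over a sorted skip-list set.
public class ReseekSketch {
    public static void main(String[] args) {
        ConcurrentSkipListSet<Integer> kvs = new ConcurrentSkipListSet<Integer>();
        for (int i = 0; i < 100000; i++) {
            kvs.add(i);
        }
        int target = 90000;

        // "reseek"-style: walk forward entry by entry until reaching target
        int linearSteps = 0;
        Iterator<Integer> it = kvs.iterator();
        while (it.hasNext() && it.next() < target) {
            linearSteps++;
        }

        // "seek"-style: jump straight to the target; O(log n) on a skip list
        int seeked = kvs.tailSet(target).first();

        System.out.println(linearSteps + " entries skipped; seek found " + seeked);
    }
}
```

With 90,000 keys to skip, the iterator does 90,000 comparisons where the seek does a handful, which is the degradation the issue describes.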





[jira] [Resolved] (HBASE-6339) Bulkload call to RS should begin holding write lock only after the file has been transferred

2012-07-07 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-6339.


Resolution: Invalid

I noticed this shouldn't be done; otherwise, given the current logic, a split may 
occur during the bulk load file pull _after_ the load has been verified. It is 
fine as-is for the moment.

 Bulkload call to RS should begin holding write lock only after the file has 
 been transferred
 

 Key: HBASE-6339
 URL: https://issues.apache.org/jira/browse/HBASE-6339
 Project: HBase
  Issue Type: Improvement
  Components: client, regionserver
Affects Versions: 0.90.0
Reporter: Harsh J
Assignee: Harsh J

 I noticed that right now, under a bulkLoadHFiles call to an RS, we grab the 
 HRegion write lock as soon as we determine that it is a multi-family bulk 
 load we'll be attempting. The file copy from the caller's source FS is done 
 after holding the lock.
 This doesn't seem right. For instance, we had a recent use-case where the 
 bulk load running cluster is a separate HDFS instance/cluster than the one 
 that runs HBase and the transfers between these FSes can get slower than an 
 intra-cluster transfer. Hence I think we should begin to hold the write lock 
 only after we've got a successful destinationFS copy of the requested file, 
 and thereby allow more write throughput to pass.
 Does this sound reasonable to do?





[jira] [Commented] (HBASE-6339) Bulkload call to RS should begin holding write lock only after the file has been transferred

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408695#comment-13408695
 ] 

Zhihong Ted Yu commented on HBASE-6339:
---

In many production systems, region splitting is effectively disabled.
Looks like there is some gem in your initial proposal if we can reliably 
detect that there is no region splitting.





[jira] [Commented] (HBASE-6339) Bulkload call to RS should begin holding write lock only after the file has been transferred

2012-07-07 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408699#comment-13408699
 ] 

Harsh J commented on HBASE-6339:


Thanks for the comments Ted.

Region splitting being disabled isn't a simple toggle value, so it's kind of 
tricky to determine whether it is indeed disabled. Besides that, there's still a 
chance of a manual split operation.

Granted, we could duplicate the checks: once before the file pull (lock before 
this, then release), and once again right after (lock here and return only at the 
end, as normal), but I think that adds unnecessary complications. For the moment, 
if Ops had HBASE-6350, I think it should be satisfactory enough. It isn't often 
that I notice separated FS clusters loading between them.

Thoughts? Is it worth the extra check and the added complexity?
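The ordering under discussion can be sketched as follows. This is a hypothetical standalone illustration, not the HRegion code: copyToDestFs, regionStillMatches, and moveIntoRegion are made-up stubs standing in for the slow cross-FS copy, the post-copy re-verification, and the final rename.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the proposed ordering: do the slow cross-FS copy before taking
// the region write lock, then re-verify under the lock in case a split
// happened while the copy was running.
public class BulkLoadOrderingSketch {
    private final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();

    boolean bulkLoad(String srcPath) {
        String staged = copyToDestFs(srcPath);   // slow part: no lock held
        regionLock.writeLock().lock();           // lock only the cheap part
        try {
            if (!regionStillMatches(staged)) {
                return false;                    // region split meanwhile; retry
            }
            moveIntoRegion(staged);
            return true;
        } finally {
            regionLock.writeLock().unlock();
        }
    }

    // Illustrative stubs for the sketch
    String copyToDestFs(String src) { return src + ".staged"; }
    boolean regionStillMatches(String path) { return true; }
    void moveIntoRegion(String path) { /* rename into the region dir */ }

    public static void main(String[] args) {
        System.out.println(new BulkLoadOrderingSketch().bulkLoad("/tmp/hfile"));
    }
}
```

The duplicated check Harsh mentions is the regionStillMatches call: it has to run again under the lock, which is exactly the added complexity being weighed.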





[jira] [Updated] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-6233:
---

Attachment: Restore-Snapshot-Hardlink-alternatives.pdf

I've attached a document that tries to describe the hardlink alternatives 
(Reference Files, .META. Ref-count, Move & SymLink) in relation to the restore 
and mount operations.

 [brainstorm] snapshots: hardlink alternatives
 -

 Key: HBASE-6233
 URL: https://issues.apache.org/jira/browse/HBASE-6233
 Project: HBase
  Issue Type: Brainstorming
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: Restore-Snapshot-Hardlink-alternatives.pdf


 Discussion ticket around snapshots and hardlink alternatives.
 (See the HDFS-3370 discussion about hardlink and implementation problems)
 (taking for a moment WAL out of the discussion and focusing on hfiles)
 With hardlinks available taking snapshot will be fairly easy:
 * (hfiles are immutable)
 * hardlink to .snapshot/name to take snapshot
 * hardlink from .snapshot/name to restore the snapshot
 * No code change needed (on fs.delete() only one reference is deleted)
 but we don't have hardlinks, what are the alternatives?





[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408711#comment-13408711
 ] 

Zhihong Ted Yu commented on HBASE-6233:
---

Nice writeup.
Although we don't know when HDFS-3370 will be implemented, hdfs snapshot v1 
should be delivered later this year.
Do we want to incur extra complexity in our codebase for the hadoop versions 
that have no hdfs snapshot?





[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-07-07 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408713#comment-13408713
 ] 

Jesse Yates commented on HBASE-6055:


@Matteo I've done that already, looks like my diff-ing got messed up :/ Working 
on pushing up a new patch...

 Snapshots in HBase 0.96
 ---

 Key: HBASE-6055
 URL: https://issues.apache.org/jira/browse/HBASE-6055
 Project: HBase
  Issue Type: New Feature
  Components: client, master, regionserver, zookeeper
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.96.0

 Attachments: Snapshots in HBase.docx


 Continuation of HBASE-50 for the current trunk. Since the implementation has 
 drastically changed, opening as a new ticket.





[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408716#comment-13408716
 ] 

Matteo Bertozzi commented on HBASE-6233:


{quote}Do we want to incur extra complexity in our codebase for the hadoop 
versions where there is no hdfs snapshot?{quote}
Are you talking about hdfs snapshot or hdfs hardlink?

I don't think that hbase can rely on hdfs snapshot (e.g. memstore and region info 
need to be handled in a special way).

For the missing hdfs hardlink support, I think that what I'm trying to propose 
simplifies the snapshot work a lot, since we don't need to change the current 
code to handle hfile deletions.

But I'd like some feedback on this; does anyone have other suggestions/ideas?





[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408718#comment-13408718
 ] 

Zhihong Ted Yu commented on HBASE-6233:
---

bq. Are you talking about hdfs snapshot or hdfs hardlink?
hdfs snapshot. I have a sense that hdfs hardlink wouldn't make it into open 
source.





[jira] [Comment Edited] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408718#comment-13408718
 ] 

Zhihong Ted Yu edited comment on HBASE-6233 at 7/7/12 5:03 PM:
---

bq. Are you talking about hdfs snapshot or hdfs hardlink?
hdfs snapshot. I have a sense that hdfs hardlink wouldn't make it into open 
source.

One other aspect is the timing of releases for hdfs snapshot and HBase snapshot 
(0.96 presumably). If the two are close enough (or hdfs snapshot being earlier 
a little), does it make sense to recommend customers upgrade both hdfs and 
HBase at the same time ?

  was (Author: zhi...@ebaysf.com):
bq. Are you talking about hdfs snapshot or hdfs hardlink?
hdfs snapshot. I have a sense that hdfs hardlink wouldn't make it into open 
source.
  




[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408722#comment-13408722
 ] 

stack commented on HBASE-6352:
--

Patch looks good Jean-Marc.

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Updated] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-6352:
---

Attachment: HBASE_JIRA_6352_v3.patch

@Zhihong: I don't have a specific place to point to. Bytes is used a lot on the 
client side to access HBase, so the idea was to simplify that client side, not 
really the server side.

Attached is an updated version with the spaces added before the [] and a test 
for null.

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch, 
 HBASE_JIRA_6352_v3.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408728#comment-13408728
 ] 

Hadoop QA commented on HBASE-6352:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12535538/HBASE_JIRA_6352_v3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2345//console

This message is automatically generated.





[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408745#comment-13408745
 ] 

Zhihong Ted Yu commented on HBASE-6352:
---

{code}
+   * @param source the byte array do duplicate
+   */
{code}
Please add @return for the return value.
{code}
+System.arraycopy(source, source.length, result, 0, source.length);   
+ return tail (source, source.length);
{code}
Indentation for return statement is off.

Since this JIRA is marked as improvement, a patch for trunk should be generated.

Thanks
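Note that the quoted snippet also passes source.length as the srcPos argument to System.arraycopy, which would throw an IndexOutOfBoundsException for any non-empty array. A corrected sketch of such a copy helper (a hypothetical illustration, not the patch itself) would be:

```java
import java.util.Arrays;

// Hypothetical corrected sketch of a Bytes.copy-style helper; srcPos must be
// 0, not source.length, or arraycopy throws for non-empty input.
public class BytesCopySketch {
    /** @return a new array holding a copy of source, or null for null input */
    public static byte[] copy(byte[] source) {
        if (source == null) {
            return null;
        }
        byte[] result = new byte[source.length];
        System.arraycopy(source, 0, result, 0, source.length);
        return result;
    }

    public static void main(String[] args) {
        byte[] src = {1, 2, 3};
        byte[] dst = copy(src);
        // the copy is a distinct array with equal contents
        System.out.println(dst != src && Arrays.equals(src, dst)); // true
    }
}
```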





[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408746#comment-13408746
 ] 

Zhihong Ted Yu commented on HBASE-6233:
---

bq. But we can have a cleanup "tool" as the other alternatives (Reference 
Files, .META. refcount).
So structural changes are needed for symlink approach to work. We should 
carefully evaluate the pros and cons of maintaining this new logic.





[jira] [Commented] (HBASE-6339) Bulkload call to RS should begin holding write lock only after the file has been transferred

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408747#comment-13408747
 ] 

Zhihong Ted Yu commented on HBASE-6339:
---

@Harsh:
Your argument makes sense.





[jira] [Updated] (HBASE-4050) Update HBase metrics framework to metrics2 framework

2012-07-07 Thread Alex Baranau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Baranau updated HBASE-4050:


Attachment: 4050-metrics-v3.patch

Added metrics v2 to HRegionServer. Renamed the v2 classes from the initial 
patch as per my previous comment.

HRegionServerMetrics for now has only one gauge: the region count.

Added the simplest unit test. Also checked (locally) that the new metrics are 
exposed via JMX.

I did the integration with metrics2 by analogy with the DataNode metrics; some 
things (esp. naming) should be reviewed.

Next things:
* make it work with hadoop 2.0+ (the shim)
* test at least one sink, e.g. FileSink, in addition to checking what is 
exposed via JMX
* (?) test the perf effect. Not sure this is needed...
* (?) add more metrics in the new classes. I think we decided not to do it as 
part of this issue, as metrics in general should be reworked.
* review the patch + fix found issues
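For the FileSink item, wiring a sink in the hadoop-metrics2 properties file could look roughly like the fragment below. The hbase prefix, the "file" sink instance name, and the output file name are assumptions for illustration, not taken from the patch:

```
# sketch of a hadoop-metrics2 properties fragment (names are assumptions)
hbase.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
hbase.sink.file.filename=hbase-metrics.out
hbase.period=10
```

With something like this in place, the gauge values that show up in JMX should also appear in the sink's output file every period.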

 Update HBase metrics framework to metrics2 framework
 

 Key: HBASE-4050
 URL: https://issues.apache.org/jira/browse/HBASE-4050
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 0.90.4
 Environment: Java 6
Reporter: Eric Yang
Assignee: Alex Baranau
Priority: Critical
 Fix For: 0.96.0

 Attachments: 4050-metrics-v2.patch, 4050-metrics-v3.patch, 
 HBASE-4050.patch


 Metrics Framework has been marked deprecated in Hadoop 0.20.203+ and 0.22+, 
 and it might get removed in future Hadoop release.  Hence, HBase needs to 
 revise the dependency of MetricsContext to use Metrics2 framework.





[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408749#comment-13408749
 ] 

Matteo Bertozzi commented on HBASE-6233:


{quote}So structural changes are needed for symlink approach to work. We should 
carefully evaluate the pros and cons of maintaining this new logic.{quote}
The cleanup is needed only to remove archived hfiles used by the snapshots,
and can be an external tool or an internal thread that scans the snapshots.
It is not only for the symlink approach but for all three, with the exception 
of the .META. refcount, which can run a fs.delete() automatically when the 
refcount reaches zero.

(In Jesse's implementation, HBASE-6055, there is already a cleanup tool 
implemented)

 [brainstorm] snapshots: hardlink alternatives
 -

 Key: HBASE-6233
 URL: https://issues.apache.org/jira/browse/HBASE-6233
 Project: HBase
  Issue Type: Brainstorming
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: Restore-Snapshot-Hardlink-alternatives.pdf


 Discussion ticket around snapshots and hardlink alternatives.
 (See the HDFS-3370 discussion about hardlink and implementation problems)
 (taking for a moment WAL out of the discussion and focusing on hfiles)
 With hardlinks available taking snapshot will be fairly easy:
 * (hfiles are immutable)
 * hardlink to .snapshot/name to take snapshot
 * hardlink from .snapshot/name to restore the snapshot
 * No code change needed (on fs.delete() only one reference is deleted)
 but we don't have hardlinks, what are the alternatives?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-6352:
---

Attachment: HBASE_JIRA_6352_v4.patch

Updated patch with one typo corrected in the comments, @return added and test 
procedure.

Also applied it to the trunk.
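For reference, a helper like this is essentially a wrapper over System.arraycopy; a minimal sketch follows (the class name and exact signatures here are illustrative, not necessarily those of the attached patch):

```java
/** Sketch of a Bytes.copy-style helper: returns a fresh copy of a byte range. */
public class BytesCopy {
    /** Copy length bytes starting at offset into a new array. */
    public static byte[] copy(byte[] bytes, int offset, int length) {
        if (bytes == null) {
            return null;
        }
        byte[] result = new byte[length];
        System.arraycopy(bytes, offset, result, 0, length);
        return result;
    }

    /** Copy the whole array. */
    public static byte[] copy(byte[] bytes) {
        return bytes == null ? null : copy(bytes, 0, bytes.length);
    }
}
```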

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch, 
 HBASE_JIRA_6352_v3.patch, HBASE_JIRA_6352_v4.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408762#comment-13408762
 ] 

stack commented on HBASE-6233:
--

@Matteo Thanks for taking the time to do the writeup.  Helpful.  I like how 
your symlink work would mean no extra work when moving up onto hdfs hard links.

I was wondering if you have any concern around creation of all the symlinks on 
a table of some decent size taking a good bit of time Matteo?  The window 
during which the snapshot is being made could be pretty wide.  Would that be a 
problem?

You ask for ideas and the only one I have is the hackneyed one copied from 
bdbje where on compaction, we do not delete files; rather we just rename them 
w/ a '.del' ending and leave them in place.  On snapshot, we make a manifest of 
all files in the table.  On restore, we'd read the manifest and look for files 
first w/o the .del and then if not found, with the .del.  I've not thought it 
all through to the extent of your attached pdf -- I can see how it could get 
tangled pretty quickly -- but throwing it up there since you were asking.

bq. ...and can be an external tool or an internal thread that scan the snapshot.

Could hitch a ride on the current meta scanner, the one that cleans the parent 
regions from .META.

Adding list of files to .META. might make for our being able to do other 
fancyness such as the Accumulo fast table copy, etc.

Let me read your doc. some more (and Jesse's work).
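The restore-time lookup in the bdbje-style idea above could be sketched as follows, against a manifest of existing file names (the resolver name and the manifest-as-set representation are illustrative):

```java
import java.util.Set;

/**
 * Sketch of restore-time lookup for the ".del" rename idea:
 * prefer the live file name, fall back to name + ".del".
 */
public class DelFallbackResolver {
    /** Returns the name actually present, or null if neither form exists. */
    public static String resolve(Set<String> existingFiles, String name) {
        if (existingFiles.contains(name)) {
            return name;
        }
        String renamed = name + ".del";
        return existingFiles.contains(renamed) ? renamed : null;
    }
}
```

The tangle comes later: readers everywhere else must be taught to skip `.del` files, which the symlink/refcount approaches avoid.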



 [brainstorm] snapshots: hardlink alternatives
 -

 Key: HBASE-6233
 URL: https://issues.apache.org/jira/browse/HBASE-6233
 Project: HBase
  Issue Type: Brainstorming
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: Restore-Snapshot-Hardlink-alternatives.pdf


 Discussion ticket around snapshots and hardlink alternatives.
 (See the HDFS-3370 discussion about hardlink and implementation problems)
 (taking for a moment WAL out of the discussion and focusing on hfiles)
 With hardlinks available taking snapshot will be fairly easy:
 * (hfiles are immutable)
 * hardlink to .snapshot/name to take snapshot
 * hardlink from .snapshot/name to restore the snapshot
 * No code change needed (on fs.delete() only one reference is deleted)
 but we don't have hardlinks, what are the alternatives?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408763#comment-13408763
 ] 

Matteo Bertozzi commented on HBASE-6233:



 [brainstorm] snapshots: hardlink alternatives
 -

 Key: HBASE-6233
 URL: https://issues.apache.org/jira/browse/HBASE-6233
 Project: HBase
  Issue Type: Brainstorming
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: Restore-Snapshot-Hardlink-alternatives.pdf


 Discussion ticket around snapshots and hardlink alternatives.
 (See the HDFS-3370 discussion about hardlink and implementation problems)
 (taking for a moment WAL out of the discussion and focusing on hfiles)
 With hardlinks available taking snapshot will be fairly easy:
 * (hfiles are immutable)
 * hardlink to .snapshot/name to take snapshot
 * hardlink from .snapshot/name to restore the snapshot
 * No code change needed (on fs.delete() only one reference is deleted)
 but we don't have hardlinks, what are the alternatives?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408764#comment-13408764
 ] 

Matteo Bertozzi commented on HBASE-6233:


{quote}I was wondering if you have any concern around creation of all the 
symlinks on a table of some decent size taking a good bit of time Matteo? The 
window during which the snapshot is being made could be pretty wide. Would that 
be a problem?
{quote}
The time is (fs.rename() * nfiles + fs.symlink() * nfiles), but it is just a 
metadata operation on HDFS. I don't have numbers for how long it takes, but I 
can come up with some benchmark, maybe with hdfs under heavy load.

Anyway, you need to keep track of the files in some way: create one reference 
file for each file, or add a reference in .META., and both seem much heavier 
since they require interaction with both namenode + datanode.

{quote}we do not delete files; rather we just rename them w/ a '.del' ending 
and leave them in place.{quote}
But if you want to remove the table these files have to be moved.
And by doing this you need to add some logic to the current code to avoid 
reading the .del files.

{quote}
Adding list of files to .META. might make for our being able to do other 
fancyness such as the Accumulo fast table copy, etc.
{quote}
The accumulo clone table is one of the features that we can easily get with 
snapshots.
I've called it mount snapshot, and it essentially is the accumulo clone table. 
(Take a look at HBASE-6353 for a description of the snapshot operations).

Again, if you think of restore with hardlink support you can easily have 
everything. So we just need to come up with an alternative to hardlinks.
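The refcount variant mentioned above (run fs.delete() only when the count reaches zero) can be sketched like this; the class and method names are illustrative, not the proposed .META. schema:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of reference-counted hfile cleanup: delete only on the last release. */
public class RefCountedFiles {
    private final Map<String, Integer> refs = new HashMap<String, Integer>();

    /** A snapshot (or the live table) takes a reference on an hfile. */
    public void addRef(String file) {
        Integer n = refs.get(file);
        refs.put(file, n == null ? 1 : n + 1);
    }

    /** Drop a reference; returns true when the file should actually be deleted. */
    public boolean release(String file) {
        Integer n = refs.get(file);
        if (n == null || n <= 1) {
            refs.remove(file);
            return true;  // last reference gone: safe to fs.delete()
        }
        refs.put(file, n - 1);
        return false;     // another snapshot still points at the file
    }
}
```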

 [brainstorm] snapshots: hardlink alternatives
 -

 Key: HBASE-6233
 URL: https://issues.apache.org/jira/browse/HBASE-6233
 Project: HBase
  Issue Type: Brainstorming
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: Restore-Snapshot-Hardlink-alternatives.pdf


 Discussion ticket around snapshots and hardlink alternatives.
 (See the HDFS-3370 discussion about hardlink and implementation problems)
 (taking for a moment WAL out of the discussion and focusing on hfiles)
 With hardlinks available taking snapshot will be fairly easy:
 * (hfiles are immutable)
 * hardlink to .snapshot/name to take snapshot
 * hardlink from .snapshot/name to restore the snapshot
 * No code change needed (on fs.delete() only one reference is deleted)
 but we don't have hardlinks, what are the alternatives?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6352) Add copy method in Bytes

2012-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408768#comment-13408768
 ] 

Hadoop QA commented on HBASE-6352:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12535546/HBASE_JIRA_6352_v4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The applied patch generated 5 javac compiler warnings (more than 
the trunk's current 4 warnings).

-1 findbugs.  The patch appears to introduce 7 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.replication.TestReplication
  org.apache.hadoop.hbase.regionserver.TestSplitLogWorker

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2347//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2347//console

This message is automatically generated.

 Add copy method in Bytes
 

 Key: HBASE-6352
 URL: https://issues.apache.org/jira/browse/HBASE-6352
 Project: HBase
  Issue Type: Improvement
  Components: util
Affects Versions: 0.94.0
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: Bytes, Util
 Attachments: HBASE_JIRA_6352.patch, HBASE_JIRA_6352_v2.patch, 
 HBASE_JIRA_6352_v3.patch, HBASE_JIRA_6352_v4.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Having a copy method into Bytes might be nice to reduce client code size 
 and improve readability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5549) Master can fail if ZooKeeper session expires

2012-07-07 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408770#comment-13408770
 ] 

Himanshu Vashishtha commented on HBASE-5549:


From the code (and javadoc), it seems we are not 100% sure of the zookeeper 
close-session event. 
And we avoid test failures based on that (either by re-creating zkw as in 
TestZookeeper#testClientSessionExpired, or by removing asserts altogether, as 
in TestReplicationPeer).
Is it ok to impose a hard wait on the session timeout (the basic idea is to 
have this in HBaseTestingUtility#expireSession):
{code}
+final boolean[] isClosed = new boolean[]{false};
+ZooKeeper monitorWatcher = new ZooKeeper(quorumServers, sessionTimeout,
+    new Watcher() {
+      @Override
+      public void process(WatchedEvent event) {
+        LOG.info("Closed in the monitor.");
+        isClosed[0] = true;
+      }
+    }, sessionID, password);
 monitorWatcher.close();
+while (!isClosed[0]) {
+  // sleep until the close event has been delivered
+  Thread.sleep(sessionTimeout);
+}
{code}

And, remove the two handler approach.

This way, we are sure that the session has indeed expired, and we can clean up 
the tests. The downside is that we sleep until the session has actually 
expired (or we can have some increasing sleep duration and then fail the 
process after a hard limit). 
Good to know what others think.

 Master can fail if ZooKeeper session expires
 

 Key: HBASE-5549
 URL: https://issues.apache.org/jira/browse/HBASE-5549
 Project: HBase
  Issue Type: Bug
  Components: master, zookeeper
Affects Versions: 0.96.0
 Environment: all
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5549.v10.patch, 5549.v11.patch, 5549.v6.patch, 
 5549.v7.patch, 5549.v8.patch, 5549.v9.patch, nochange.patch


 There is a retry mechanism in RecoverableZooKeeper, but when the session 
 expires, the whole ZooKeeperWatcher is recreated, hence the retry mechanism 
 does not work in this case. This is why a sleep is needed in 
 TestZooKeeper#testMasterSessionExpired: we need to wait for ZooKeeperWatcher 
 to be recreated before using the connection.
 This can happen in real life, it can happen when:
 - master & zookeeper start
 - zookeeper connection is cut
 - master enters the retry loop
 - in the meantime the session expires
 - the network comes back, the session is recreated
 - the retries continue, but on the wrong object, hence fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6233) [brainstorm] snapshots: hardlink alternatives

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408771#comment-13408771
 ] 

stack commented on HBASE-6233:
--

bq. The time is (fs.rename() * nfiles + fs.symlink() * nfiles), but it is just 
a metadata operation on HDFS.

Can take a while I've found.  Something to be aware of.

bq. And by doing this you need to add some logic to the current code to avoid 
reading the .del files

Yes.  It'd be ugly especially compared to symlinking w/ refcounting.

bq. So we just need to come up with an alternative to hardlink

Smile.  Yes.

 [brainstorm] snapshots: hardlink alternatives
 -

 Key: HBASE-6233
 URL: https://issues.apache.org/jira/browse/HBASE-6233
 Project: HBase
  Issue Type: Brainstorming
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Attachments: Restore-Snapshot-Hardlink-alternatives.pdf


 Discussion ticket around snapshots and hardlink alternatives.
 (See the HDFS-3370 discussion about hardlink and implementation problems)
 (taking for a moment WAL out of the discussion and focusing on hfiles)
 With hardlinks available taking snapshot will be fairly easy:
 * (hfiles are immutable)
 * hardlink to .snapshot/name to take snapshot
 * hardlink from .snapshot/name to restore the snapshot
 * No code change needed (on fs.delete() only one reference is deleted)
 but we don't have hardlinks, what are the alternatives?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5549) Master can fail if ZooKeeper session expires

2012-07-07 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408772#comment-13408772
 ] 

nkeywal commented on HBASE-5549:


It seems to be a good idea. For the implementation, you should replace the 
sleep by a notify, plus 'isClosed' should also be a simple boolean instead 
of an array, tagged as volatile as it's used between threads.
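The notify-based variant could look like this sketch, using the final one-element array both as the flag and as the monitor (a common workaround for mutating state from an anonymous inner class; the class and method names are illustrative):

```java
/** Sketch: wait/notify on a final one-element array instead of a sleep loop. */
public class SessionExpiryWaiter {
    final boolean[] isClosed = new boolean[]{false};

    /** Called from the watcher callback when the session-closed event arrives. */
    public void onClosed() {
        synchronized (isClosed) {
            isClosed[0] = true;
            isClosed.notifyAll();
        }
    }

    /** Block until onClosed() has fired; no fixed-length sleeps. */
    public void awaitClosed() {
        synchronized (isClosed) {
            while (!isClosed[0]) {
                try {
                    isClosed.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}
```

Because both sides synchronize on the array, the boolean write is safely published without volatile; a real test would still want a timed wait() plus a hard deadline so a lost event cannot hang the build.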

 Master can fail if ZooKeeper session expires
 

 Key: HBASE-5549
 URL: https://issues.apache.org/jira/browse/HBASE-5549
 Project: HBase
  Issue Type: Bug
  Components: master, zookeeper
Affects Versions: 0.96.0
 Environment: all
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5549.v10.patch, 5549.v11.patch, 5549.v6.patch, 
 5549.v7.patch, 5549.v8.patch, 5549.v9.patch, nochange.patch


 There is a retry mechanism in RecoverableZooKeeper, but when the session 
 expires, the whole ZooKeeperWatcher is recreated, hence the retry mechanism 
 does not work in this case. This is why a sleep is needed in 
 TestZooKeeper#testMasterSessionExpired: we need to wait for ZooKeeperWatcher 
 to be recreated before using the connection.
 This can happen in real life, it can happen when:
 - master & zookeeper start
 - zookeeper connection is cut
 - master enters the retry loop
 - in the meantime the session expires
 - the network comes back, the session is recreated
 - the retries continue, but on the wrong object, hence fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6349) HBase checkout not getting compiled

2012-07-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408775#comment-13408775
 ] 

Lars Hofhansl commented on HBASE-6349:
--

I also verified that the trunk build works. The autobuilds are not failing 
either, so this must be due to the special environment.

Varunkumar: Make sure you have Maven 3 and that you run the mvn command in the 
root directory of a clean HBase checkout (note that you cannot build from the 
sources included in the HBase tarball).

For more information check here: http://hbase.apache.org/source-repository.html
Let us know via the dev list if that did not work.

 HBase checkout not getting compiled
 ---

 Key: HBASE-6349
 URL: https://issues.apache.org/jira/browse/HBASE-6349
 Project: HBase
  Issue Type: Bug
Reporter: Varunkumar Manohar

 I am trying to compile the latest svn checkout of HBase source 
 code using Maven.
 This is the error I am facing 
 [ERROR] Failed to execute goal on project hbase-server: Could not resolve 
 dependencies for project org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT: 
 Could not find artifact org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT in 
 cloudbees netty (http://repository-netty.forge.cloudbees.com/snapshot/)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408778#comment-13408778
 ] 

stack commented on HBASE-5547:
--

Added some review up on rb.

 Don't delete HFiles when in backup mode
 -

 Key: HBASE-5547
 URL: https://issues.apache.org/jira/browse/HBASE-5547
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Jesse Yates
 Attachments: hbase-5447-v8.patch, hbase-5447-v8.patch, 
 hbase-5547-v9.patch, java_HBASE-5547_v4.patch, java_HBASE-5547_v5.patch, 
 java_HBASE-5547_v6.patch, java_HBASE-5547_v7.patch


 This came up in a discussion I had with Stack.
 It would be nice if HBase could be notified that a backup is in progress (via 
 a znode for example) and in that case either:
 1. rename HFiles to be delete to file.bck
 2. rename the HFiles into a special directory
 3. rename them to a general trash directory (which would not need to be tied 
 to backup mode).
 That way it should be able to get a consistent backup based on HFiles (HDFS 
 snapshots or hard links would be better options here, but we do not have 
 those).
 #1 makes cleanup a bit harder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5261) Update HBase for Java 7

2012-07-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408779#comment-13408779
 ] 

Lars Hofhansl commented on HBASE-5261:
--

Also note that this message means that the result will not necessarily work on 
a JDK6 runtime. But I think this is OK.
Should we make the above change in 0.96 and 0.94, so HBase can be built without 
error with JDK7?

 Update HBase for Java 7
 ---

 Key: HBASE-5261
 URL: https://issues.apache.org/jira/browse/HBASE-5261
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We need to make sure that HBase compiles and works with JDK 7. Once we verify 
 it is reasonably stable, we can explore utilizing the G1 garbage collector. 
 When all deployments are ready to move to JDK 7, we can start using new 
 language features, but in the transition period we will need to maintain a 
 codebase that compiles both with JDK 6 and JDK 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5261) Update HBase for Java 7

2012-07-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408780#comment-13408780
 ] 

stack commented on HBASE-5261:
--

Do we need this addition if we are running on jdk7?  I'm reading this: 
https://blogs.oracle.com/darcy/entry/bootclasspath_older_source  What you 
reading?

 Update HBase for Java 7
 ---

 Key: HBASE-5261
 URL: https://issues.apache.org/jira/browse/HBASE-5261
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We need to make sure that HBase compiles and works with JDK 7. Once we verify 
 it is reasonably stable, we can explore utilizing the G1 garbage collector. 
 When all deployments are ready to move to JDK 7, we can start using new 
 language features, but in the transition period we will need to maintain a 
 codebase that compiles both with JDK 6 and JDK 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5549) Master can fail if ZooKeeper session expires

2012-07-07 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408782#comment-13408782
 ] 

Himanshu Vashishtha commented on HBASE-5549:


Thanks for looking into this. 
Since the update is happening inside an inner class, the variable has to be 
final (volatile is not possible). I will wait/notify on the isClosed array 
object, and upload the patch in a new jira.

 Master can fail if ZooKeeper session expires
 

 Key: HBASE-5549
 URL: https://issues.apache.org/jira/browse/HBASE-5549
 Project: HBase
  Issue Type: Bug
  Components: master, zookeeper
Affects Versions: 0.96.0
 Environment: all
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5549.v10.patch, 5549.v11.patch, 5549.v6.patch, 
 5549.v7.patch, 5549.v8.patch, 5549.v9.patch, nochange.patch


 There is a retry mechanism in RecoverableZooKeeper, but when the session 
 expires, the whole ZooKeeperWatcher is recreated, hence the retry mechanism 
 does not work in this case. This is why a sleep is needed in 
 TestZooKeeper#testMasterSessionExpired: we need to wait for ZooKeeperWatcher 
 to be recreated before using the connection.
 This can happen in real life, it can happen when:
 - master & zookeeper start
 - zookeeper connection is cut
 - master enters the retry loop
 - in the meantime the session expires
 - the network comes back, the session is recreated
 - the retries continue, but on the wrong object, hence fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5261) Update HBase for Java 7

2012-07-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408785#comment-13408785
 ] 

Lars Hofhansl commented on HBASE-5261:
--

Yeah, that blog summarizes it nicely.

We need this to *compile* HBase with JDK7. (As explained in the blog, even with 
-source 1.6 it will not necessarily run on a JDK6, because of missing/changed 
classes).

We do not need this when we compile HBase with JDK6 and then *run* with JDK7.

But since this option won't hurt anything, being able to compile HBase with 
JDK7 is nice.

 Update HBase for Java 7
 ---

 Key: HBASE-5261
 URL: https://issues.apache.org/jira/browse/HBASE-5261
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We need to make sure that HBase compiles and works with JDK 7. Once we verify 
 it is reasonably stable, we can explore utilizing the G1 garbage collector. 
 When all deployments are ready to move to JDK 7, we can start using new 
 language features, but in the transition period we will need to maintain a 
 codebase that compiles both with JDK 6 and JDK 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5974) Scanner retry behavior with RPC timeout on next() seems incorrect

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5974:
-

Fix Version/s: (was: 0.94.1)
   0.94.2

Let's move this to 0.94.2. We have had this behaviour since the beginning.

 Scanner retry behavior with RPC timeout on next() seems incorrect
 -

 Key: HBASE-5974
 URL: https://issues.apache.org/jira/browse/HBASE-5974
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.90.7, 0.92.1, 0.94.0, 0.96.0
Reporter: Todd Lipcon
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.96.0, 0.94.2

 Attachments: 5974_94-V4.patch, 5974_trunk-V2.patch, 5974_trunk.patch, 
 HBASE-5974_0.94.patch, HBASE-5974_94-V2.patch, HBASE-5974_94-V3.patch


 I'm seeing the following behavior:
 - set RPC timeout to a short value
 - call next() for some batch of rows, big enough so the client times out 
 before the result is returned
 - the HConnectionManager stuff will retry the next() call to the same server. 
 At this point, one of two things can happen: 1) the previous next() call will 
 still be processing, in which case you get a LeaseException, because it was 
 removed from the map during the processing, or 2) the next() call will 
 succeed but skip the prior batch of rows.
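As a toy model of case 1 above (purely illustrative; the real regionserver lease handling is more involved): the lease is pulled from the map while next() is processing, so a retry arriving mid-call finds no lease and fails.

```java
import java.util.HashSet;
import java.util.Set;

/** Toy model: a retry of next() during processing finds the lease missing. */
public class ScannerLeases {
    private final Set<Long> leases = new HashSet<Long>();

    public void openScanner(long id) {
        leases.add(id);
    }

    /** Returns "rows" on success; throws if the lease is absent (the retry case). */
    public String next(long id) {
        if (!leases.remove(id)) {       // lease removed from the map during processing
            throw new IllegalStateException("LeaseException: no lease for " + id);
        }
        String result = "rows";         // ... long-running scan happens here ...
        leases.add(id);                 // lease re-installed when the call finishes
        return result;
    }
}
```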

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6294) Detect leftover data in ZK after a user delete all its HBase data

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6294:
-

Fix Version/s: (was: 0.94.1)
   0.94.2

It's not entirely clear how to fix this quickly.
Let's move this to 0.94.2.

Please pull back if you disagree.

 Detect leftover data in ZK after a user delete all its HBase data
 -

 Key: HBASE-6294
 URL: https://issues.apache.org/jira/browse/HBASE-6294
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.0
Reporter: Jean-Daniel Cryans
Priority: Critical
 Fix For: 0.96.0, 0.94.2


 It seems we have a new failure mode when a user deletes the hbase root.dir 
 but doesn't delete the ZK data. For example a user on IRC came with this log:
 {noformat}
 2012-06-30 09:07:48,017 INFO 
 org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open 
 region: kw,,1340981821308.2e8a318837602c9c9961e9d690b7fd02.
 2012-06-30 09:07:48,017 WARN org.apache.hadoop.hbase.util.FSTableDescriptors: 
 The following folder is in HBase's root directory and doesn't contain a table 
 descriptor, do consider deleting it: kw
 2012-06-30 09:07:48,018 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 regionserver:34193-0x1383bfe01b70001 Attempting to transition node 
 2e8a318837602c9c9961e9d690b7fd02 from M_ZK_REGION_OFFLINE to 
 RS_ZK_REGION_OPENING
 2012-06-30 09:07:48,018 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=M_ZK_REGION_OFFLINE, server=localhost,50890,1341036299694, 
 region=2e8a318837602c9c9961e9d690b7fd02
 2012-06-30 09:07:48,020 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=RS_ZK_REGION_FAILED_OPEN, server=localhost,34193,1341036300138, 
 region=b254af24c9127b8bb22cb6d24e523dad
 2012-06-30 09:07:48,020 DEBUG 
 org.apache.hadoop.hbase.master.handler.ClosedRegionHandler: Handling CLOSED 
 event for b254af24c9127b8bb22cb6d24e523dad
 2012-06-30 09:07:48,020 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
 was=kw_r,,1340981822374.b254af24c9127b8bb22cb6d24e523dad. state=CLOSED, 
 ts=1341036467998, server=localhost,34193,1341036300138
 2012-06-30 09:07:48,020 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 master:50890-0x1383bfe01b7 Creating (or updating) unassigned node for 
 b254af24c9127b8bb22cb6d24e523dad with OFFLINE state
 2012-06-30 09:07:48,028 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 regionserver:34193-0x1383bfe01b70001 Successfully transitioned node 
 2e8a318837602c9c9961e9d690b7fd02 from M_ZK_REGION_OFFLINE to 
 RS_ZK_REGION_OPENING
 2012-06-30 09:07:48,028 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
 Opening region: {NAME = 
 'kw,,1340981821308.2e8a318837602c9c9961e9d690b7fd02.', STARTKEY = '', ENDKEY 
 = '', ENCODED = 2e8a318837602c9c9961e9d690b7fd02,}
 2012-06-30 09:07:48,029 ERROR 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open 
 of region=kw,,1340981821308.2e8a318837602c9c9961e9d690b7fd02., starting to 
 roll back the global memstore size.
 java.lang.IllegalStateException: Could not instantiate a region instance.
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3490)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3628)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:679)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
 Source)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3487)
   ... 7 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:133)
   at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.init(RegionCoprocessorHost.java:125)
   at org.apache.hadoop.hbase.regionserver.HRegion.init(HRegion.java:411)
   ... 11 more
 2012-06-30 09:07:48,031 INFO 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening 

[jira] [Commented] (HBASE-6329) Stopping META regionserver when splitting region could cause daughter region to be assigned twice

2012-07-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408789#comment-13408789
 ] 

Lars Hofhansl commented on HBASE-6329:
--

Change looks good. +1 as well (with the caveat that this is code that I 
find hard to follow - even though I refactored it recently :) )

 Stopping META regionserver when splitting region could cause daughter region 
 to be assigned twice
 -

 Key: HBASE-6329
 URL: https://issues.apache.org/jira/browse/HBASE-6329
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.0
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0, 0.94.1

 Attachments: 6329v3.txt, HBASE-6329v1.patch, HBASE-6329v2.patch


 We found this issue in 0.94, first let me describe the case:
 Stop META rs when split is in progress
 1.Stopping META rs(Server A).
 2.The main thread of rs close ZK and delete ephemeral node of the rs.
 3.SplitTransaction is retrying MetaEditor.addDaughter
 4.Master's ServerShutdownHandler process the above dead META server
 5.Master fixup daughter and assign the daughter
 6.The daughter is opened on another server(Server B)
 7.Server A's splitTransaction successfully add the daughter to .META. with 
 serverName=Server A
 8.Now, in the .META., daughter's region location is Server A but it is 
 onlined on Server B
 9.Restart Master, and master will assign the daughter again.
 Attaching the logs, daughter region 80f999ea84cb259e20e9a228546f6c8a
 Master log:
 2012-07-04 13:45:56,493 INFO 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Splitting logs 
 for dw93.kgb.sqa.cm4,60020,1341378224464
 2012-07-04 13:45:58,983 INFO 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Fixup; missing 
 daughter 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.
  
 2012-07-04 13:45:58,985 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
 Added daughter 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.,
  serverName=null 
 2012-07-04 13:45:58,988 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.
  to dw88.kgb.sqa.cm4,60020,1341379188777 
 2012-07-04 13:46:00,201 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the 
 region 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.
  that was online on dw88.kgb.sqa.cm4,60020,1341379188777 
 Master log after restart:
 2012-07-04 14:27:05,824 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 master:6-0x136187d60e34644 Creating (or updating) unassigned node for 
 80f999ea84cb259e20e9a228546f6c8a with OFFLINE state 
 2012-07-04 14:27:05,851 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Processing region 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.
  in state M_ZK_REGION_OFFLINE 
 2012-07-04 14:27:05,854 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.
  to dw93.kgb.sqa.cm4,60020,1341380812020 
 2012-07-04 14:27:06,051 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=RS_ZK_REGION_OPENED, server=dw93.kgb.sqa.cm4,60020,1341380812020, 
 region=80f999ea84cb259e20e9a228546f6c8a 
 Regionserver(META rs) log:
 2012-07-04 13:45:56,491 INFO 
 org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server 
 dw93.kgb.sqa.cm4,60020,1341378224464; zookeeper connection c
 losed.
 2012-07-04 13:46:11,951 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
 Added daughter 
 writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.,
  serverName=dw93.kgb.sqa.cm4,60020,1341378224464 
 2012-07-04 13:46:11,952 INFO 
 org.apache.hadoop.hbase.regionserver.HRegionServer: Done with post open 
 deploy task for 
 region=writetest,JC\xCA\xC8\xCFQ\xC49OH\xCEV\xCC\xC2\xB5\xC2@\xD4,1341380730558.80f999ea84cb259e20e9a228546f6c8a.,
  daughter=true 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5323) Need to handle assertion error while splitting log through ServerShutDownHandler by shutting down the master

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5323:
-

Fix Version/s: (was: 0.94.1)
   0.94.2

Thanks Ram... Let's try for the next release.
I'll leave it to Jon whether he wants this in 0.90.7.

 Need to handle assertion error while splitting log through 
 ServerShutDownHandler by shutting down the master
 

 Key: HBASE-5323
 URL: https://issues.apache.org/jira/browse/HBASE-5323
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.90.7, 0.94.2

 Attachments: HBASE-5323.patch, HBASE-5323.patch


 We know that while parsing the HLog we expect the proper length from HDFS.
 In WALReaderFSDataInputStream
 {code}
   assert(realLength >= this.length);
 {code}
 We are trying to bail out if the above condition is not satisfied.  But if 
 SSH.splitLog() gets this problem then it lands in the run method of 
 EventHandler.  This kills the SSH thread and so further assignment does not 
 happen.  If ROOT and META are to be assigned they cannot be.
 I think in this condition we abort the master by catching such exceptions.
 Please do suggest.
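The failure mode described here can be illustrated with a short sketch (Python standing in for Java, with `BaseException` playing the role of Java's `Throwable`; all names are illustrative, not the actual HBase code). An assertion failure that escapes the handler silently kills the thread, so the suggested fix is to catch everything and abort the master:

```python
import threading

class Master:
    """Minimal stand-in for the HBase master; only tracks abort state."""
    def __init__(self):
        self.aborted = False
        self.abort_reason = None

    def abort(self, reason):
        self.aborted = True
        self.abort_reason = reason

def run_handler(master, task):
    # Sketch of the proposed EventHandler.run fix: catch *everything*
    # (BaseException here, Throwable in Java), including assertion
    # failures, and abort the master instead of silently losing the
    # handler thread and leaving ROOT/META unassigned.
    try:
        task()
    except BaseException as e:
        master.abort("handler failed: %r" % e)

def failing_split_log():
    # Mimics SSH.splitLog() hitting the HLog length assertion.
    real_length, expected_length = 10, 20
    assert real_length >= expected_length, "bad HLog length"

master = Master()
t = threading.Thread(target=run_handler, args=(master, failing_split_log))
t.start()
t.join()
print(master.aborted)  # prints True: the failure is surfaced, not swallowed
```

Without the broad catch, the AssertionError would escape `run_handler` and die with the worker thread, which is exactly how further assignments stop happening.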

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6031) RegionServer does not go down while aborting

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6031:
-

Fix Version/s: (was: 0.94.1)
   0.92.2

It's not immediately clear to me what the issue is. I think this is not urgent 
enough to hold 0.94.1 for it. Agreed?

 RegionServer does not go down while aborting
 

 Key: HBASE-6031
 URL: https://issues.apache.org/jira/browse/HBASE-6031
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.92.2, 0.96.0

 Attachments: rsthread.txt


 Following is the thread dump.
 {code}
 1997531088@qtp-716941846-5 prio=10 tid=0x7f7c5820c800 nid=0xe1b in 
 Object.wait() [0x7f7c56ae8000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.blockWritable(SelectChannelEndPoint.java:279)
   - locked 0x7f7cfe0616d0 (a 
 org.mortbay.jetty.nio.SelectChannelConnector$ConnectorEndPoint)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.blockForOutput(AbstractGenerator.java:545)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:639)
   at 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:580)
   at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:109)
   - locked 0x7f7cfe74d758 (a 
 org.mortbay.util.ByteArrayOutputStream2)
   at 
 org.mortbay.jetty.AbstractGenerator$OutputWriter.write(AbstractGenerator.java:904)
   at java.io.Writer.write(Writer.java:96)
   - locked 0x7f7cfca02fc0 (a 
 org.mortbay.jetty.HttpConnection$OutputWriter)
   at java.io.PrintWriter.write(PrintWriter.java:361)
   - locked 0x7f7cfca02fc0 (a 
 org.mortbay.jetty.HttpConnection$OutputWriter)
   at org.jamon.escaping.HtmlEscaping.write(HtmlEscaping.java:43)
   at 
 org.jamon.escaping.AbstractCharacterEscaping.write(AbstractCharacterEscaping.java:35)
   at 
 org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(RSStatusTmplImpl.java:222)
   at 
 org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl.renderNoFlush(RSStatusTmpl.java:180)
   at 
 org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl.render(RSStatusTmpl.java:171)
   at 
 org.apache.hadoop.hbase.regionserver.RSStatusServlet.doGet(RSStatusServlet.java:48)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:932)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:326)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
   at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 1374615312@qtp-716941846-3 prio=10 tid=0x7f7c58214800 nid=0xc42 in 
 Object.wait() [0x7f7c55bd9000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.blockWritable(SelectChannelEndPoint.java:279)
   - locked 0x7f7cfdbb6cc8 (a 
 

[jira] [Updated] (HBASE-6171) Frequent test case failure in 0.94 from #245

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6171:
-

Fix Version/s: (was: 0.94.1)
   0.94.2

No takers. Moving to 0.94.2. I think these are test issues rather than 
production code issues.
Pull back if you think otherwise.

 Frequent test case failure in 0.94 from #245
 

 Key: HBASE-6171
 URL: https://issues.apache.org/jira/browse/HBASE-6171
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.94.2


 HBck related testcases are frequently failing from build #245.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6100) Fix the frequent testcase failures in 0.94 from build no #209

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6100:
-

Fix Version/s: (was: 0.94.1)
   0.94.2

Here again no takers :(
Moving to 0.94.2... unless somebody thinks these are production code issues 
(rather than test code issues).

 Fix the frequent testcase failures in 0.94 from build no #209
 -

 Key: HBASE-6100
 URL: https://issues.apache.org/jira/browse/HBASE-6100
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.94.2


 Fix the flaky tests in 0.94 branch after #209.  Many test cases like the 
 org.apache.hadoop.hbase.TestLocalHBaseCluster.testLocalHBaseCluster
 org.apache.hadoop.hbase.TestZooKeeper.testClientSessionExpired 
 org.apache.hadoop.hbase.regionserver.TestServerCustomProtocol.testSingleMethod
 are failing frequently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6330:
-

 Priority: Critical  (was: Major)
Fix Version/s: (was: 0.94.1)
   0.94.2

Discussed offline with Jon. Let's make this critical for 0.94.2.

 TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]
 -

 Key: HBASE-6330
 URL: https://issues.apache.org/jira/browse/HBASE-6330
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0, 0.94.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Critical
  Labels: hadoop-2.0
 Fix For: 0.96.0, 0.94.2

 Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch


 See HBASE-5876.  I'm going to commit the v3 patches under this name since it 
 has been two months (my bad) since the first half was committed and then 
 found to be incomplete.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6305) TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6305:
-

Fix Version/s: (was: 0.94.1)
   0.94.2

Discussed offline with Jon. Let's make this critical for 0.94.2.

 TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.
 

 Key: HBASE-6305
 URL: https://issues.apache.org/jira/browse/HBASE-6305
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.92.2, 0.94.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.92.2, 0.94.2

 Attachments: hbase-6305-94.patch


 trunk: mvn clean test -Dhadoop.profile=2.0 -Dtest=TestLocalHBaseCluster
 0.94: mvn clean test -Dhadoop.profile=23 -Dtest=TestLocalHBaseCluster
 {code}
 testLocalHBaseCluster(org.apache.hadoop.hbase.TestLocalHBaseCluster)  Time 
 elapsed: 0.022 sec   ERROR!
 java.lang.RuntimeException: Master not initialized after 200 seconds
 at 
 org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:208)
 at 
 org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:424)
 at 
 org.apache.hadoop.hbase.TestLocalHBaseCluster.testLocalHBaseCluster(TestLocalHBaseCluster.java:66)
 ...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6305) TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.

2012-07-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6305:
-

Priority: Critical  (was: Major)

 TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.
 

 Key: HBASE-6305
 URL: https://issues.apache.org/jira/browse/HBASE-6305
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.92.2, 0.94.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Critical
 Fix For: 0.92.2, 0.94.2

 Attachments: hbase-6305-94.patch


 trunk: mvn clean test -Dhadoop.profile=2.0 -Dtest=TestLocalHBaseCluster
 0.94: mvn clean test -Dhadoop.profile=23 -Dtest=TestLocalHBaseCluster
 {code}
 testLocalHBaseCluster(org.apache.hadoop.hbase.TestLocalHBaseCluster)  Time 
 elapsed: 0.022 sec   ERROR!
 java.lang.RuntimeException: Master not initialized after 200 seconds
 at 
 org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:208)
 at 
 org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:424)
 at 
 org.apache.hadoop.hbase.TestLocalHBaseCluster.testLocalHBaseCluster(TestLocalHBaseCluster.java:66)
 ...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HBASE-5991) Introduce sequential ZNode based read/write locks

2012-07-07 Thread Zhihong Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Ted Yu reassigned HBASE-5991:
-

Assignee: (was: Alex Feinberg)

It seems Alex is not working on this.

 Introduce sequential ZNode based read/write locks 
 --

 Key: HBASE-5991
 URL: https://issues.apache.org/jira/browse/HBASE-5991
 Project: HBase
  Issue Type: Improvement
Reporter: Alex Feinberg

 This is a continuation of HBASE-5494:
 Currently table-level write locks have been implemented using non-sequential 
 ZNodes as part of HBASE-5494 and committed to 89-fb branch. This issue is to 
 track converting the table-level locks to sequential ZNodes and supporting 
 read-write locks, so as to solve the issue of preventing schema changes during 
 region splits or merges.
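The grant rules of a sequential-ZNode read/write lock can be sketched as follows. This is a simplified illustration of the standard ZooKeeper shared-lock recipe, not the HBASE-5991 implementation, and no live ZooKeeper is involved; the decision logic operates on the lock znode's children:

```python
def can_acquire(requests, seq, mode):
    # `requests` lists all lock requests under the lock znode as
    # (sequence, mode) pairs; ZooKeeper sequential nodes guarantee the
    # sequence numbers are unique and monotonically increasing.
    # Writer: acquires only with the lowest sequence overall.
    # Reader: acquires when no writer has a lower sequence.
    if mode == 'write':
        return seq == min(s for s, _ in requests)
    return all(s > seq for s, m in requests if m == 'write')

requests = [(1, 'read'), (2, 'read'), (3, 'write'), (4, 'read')]
grants = [can_acquire(requests, s, m) for s, m in requests]
print(grants)  # prints [True, True, False, False]
```

In the real recipe each waiter watches only the node immediately blocking it (the previous write node for readers, the previous node of any kind for writers), which avoids a herd effect when a lock is released.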

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6350) Some logging improvements for RegionServer bulk loading

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408808#comment-13408808
 ] 

Zhihong Ted Yu commented on HBASE-6350:
---

The changes look fine.
However, I couldn't see where the difference is for the first hunk.

 Some logging improvements for RegionServer bulk loading
 ---

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that it is possible 
 that IOException may be caused during bulk load file transfer (copy) off of 
 another FS and that during the same time the client already times the socket 
 out and thereby does not receive a thrown Exception back remotely (HBase 
 prints a ClosedChannelException for the IPC when it attempts to send the real 
 message, and hence the real cause is lost).
 Improvements around this kind of issue, wherein we could first log the 
 IOException at the RS before sending, and a few other wording improvements 
 are present in my patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6350) Some logging improvements for RegionServer bulk loading

2012-07-07 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408816#comment-13408816
 ] 

Harsh J commented on HBASE-6350:


In the first hunk, I pre-create the exception, log it, and only then throw it. 
Previously it was thrown directly without being logged first, leaving it open 
to the edge case where a client timeout eats the message.
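The log-then-throw pattern described here can be sketched like this (illustrative names only, not the actual HBase RPC code):

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("regionserver.bulkload")

def bulk_load(path, copy_ok):
    # Pre-create the exception, log it server-side, and only then
    # throw it: even if the client has already timed the socket out
    # and never receives the response, the real cause still lands in
    # the RegionServer log.
    if not copy_ok:
        err = IOError("bulk load copy failed for %s" % path)
        log.error("Rejecting bulk load: %s", err)
        raise err
    return "loaded %s" % path

print(bulk_load("/staging/hfile", copy_ok=True))  # prints loaded /staging/hfile
```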

 Some logging improvements for RegionServer bulk loading
 ---

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that it is possible 
 that IOException may be caused during bulk load file transfer (copy) off of 
 another FS and that during the same time the client already times the socket 
 out and thereby does not receive a thrown Exception back remotely (HBase 
 prints a ClosedChannelException for the IPC when it attempts to send the real 
 message, and hence the real cause is lost).
 Improvements around this kind of issue, wherein we could first log the 
 IOException at the RS before sending, and a few other wording improvements 
 are present in my patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6350) Some logging improvements for RegionServer bulk loading

2012-07-07 Thread Zhihong Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408817#comment-13408817
 ] 

Zhihong Ted Yu commented on HBASE-6350:
---

+1 on patch.

Will integrate on Monday if there is no objection.

 Some logging improvements for RegionServer bulk loading
 ---

 Key: HBASE-6350
 URL: https://issues.apache.org/jira/browse/HBASE-6350
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HBASE-6350.patch


 The current logging in the bulk loading RPC call to a RegionServer lacks some 
 info in certain cases. For instance, I recently noticed that it is possible 
 that IOException may be caused during bulk load file transfer (copy) off of 
 another FS and that during the same time the client already times the socket 
 out and thereby does not receive a thrown Exception back remotely (HBase 
 prints a ClosedChannelException for the IPC when it attempts to send the real 
 message, and hence the real cause is lost).
 Improvements around this kind of issue, wherein we could first log the 
 IOException at the RS before sending, and a few other wording improvements 
 are present in my patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-6354) Wait till hard fail in case of erratic zookeeper session expiry

2012-07-07 Thread Himanshu Vashishtha (JIRA)
Himanshu Vashishtha created HBASE-6354:
--

 Summary: Wait till hard fail in case of erratic zookeeper session 
expiry
 Key: HBASE-6354
 URL: https://issues.apache.org/jira/browse/HBASE-6354
 Project: HBase
  Issue Type: Improvement
Reporter: Himanshu Vashishtha


There are a number of tests that depend on zookeeper session expiry 
(HBaseTestingUtility#expireSession). The current approach is to create handles 
on the existing session and call close() on one of the handles, which closes 
all the handles associated with the session. This should work in theory, but 
sometimes it just doesn't (root cause unknown yet).
We need to do some hacks (such as in TestZookeeper#testClientSessionExpired).
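The "wait till hard fail" approach suggested by the summary can be sketched as a poll loop. The callables below are illustrative stand-ins for the real ZooKeeper handle operations, not HBase API:

```python
import time

def expire_session(is_expired, trigger_expiry, timeout=5.0, interval=0.05):
    # Ask ZooKeeper to close the session, then poll until the session
    # is actually reported expired; only after a hard timeout do we
    # give up, instead of assuming the close() worked.
    trigger_expiry()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_expired():
            return True
        time.sleep(interval)
    raise RuntimeError("session did not expire within %.1fs" % timeout)

# Fake handle: the session shows up as expired on the third poll.
state = {"polls": 0, "closed": False}
def trigger():
    state["closed"] = True
def expired():
    state["polls"] += 1
    return state["closed"] and state["polls"] >= 3

print(expire_session(expired, trigger, timeout=2.0))  # prints True
```

This turns the flaky "close and hope" pattern into a deterministic wait with an explicit hard failure, which is easier to debug when the expiry genuinely does not propagate.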

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6354) Wait till hard fail in case of erratic zookeeper session expiry

2012-07-07 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408828#comment-13408828
 ] 

Himanshu Vashishtha commented on HBASE-6354:


There is some more context in the last few comments of HBASE-5549.

 Wait till hard fail in case of erratic zookeeper session expiry
 ---

 Key: HBASE-6354
 URL: https://issues.apache.org/jira/browse/HBASE-6354
 Project: HBase
  Issue Type: Improvement
Reporter: Himanshu Vashishtha

 There are a number of tests that depend on zookeeper session expiry 
 (HBaseTestingUtility#expireSession). The current approach is to create 
 handles on the existing session and call close() on one of the handles, 
 which closes all the handles associated with the session. This should work 
 in theory, but sometimes it just doesn't (root cause unknown yet).
 We need to do some hacks (such as in TestZookeeper#testClientSessionExpired).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6354) Wait till hard fail in case of erratic zookeeper session expiry

2012-07-07 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-6354:
---

Attachment: HBase6354-v1.patch

Attached is a patch with that approach. I tested it by running the 
TestReplicationPeer class, which uses the changed method.

 Wait till hard fail in case of erratic zookeeper session expiry
 ---

 Key: HBASE-6354
 URL: https://issues.apache.org/jira/browse/HBASE-6354
 Project: HBase
  Issue Type: Improvement
Reporter: Himanshu Vashishtha
 Attachments: HBase6354-v1.patch


 There are a number of tests that depend on zookeeper session expiry 
 (HBaseTestingUtility#expireSession). The current approach is to create 
 handles on the existing session and call close() on one of the handles, 
 which closes all the handles associated with the session. This should work 
 in theory, but sometimes it just doesn't (root cause unknown yet).
 We need to do some hacks (such as in TestZookeeper#testClientSessionExpired).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5991) Introduce sequential ZNode based read/write locks

2012-07-07 Thread Alex Feinberg (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408832#comment-13408832
 ] 

Alex Feinberg commented on HBASE-5991:
--

Hi Ted,

Sorry, I haven't followed up on this -- I have been busy.

Yes, I still intend to work on this. Unless you've started working on it, I 
can finish it: I've already started and have a design in mind.

 Introduce sequential ZNode based read/write locks 
 --

 Key: HBASE-5991
 URL: https://issues.apache.org/jira/browse/HBASE-5991
 Project: HBase
  Issue Type: Improvement
Reporter: Alex Feinberg

 This is a continuation of HBASE-5494:
 Currently, table-level write locks have been implemented using non-sequential 
 ZNodes as part of HBASE-5494 and committed to the 89-fb branch. This issue 
 tracks converting the table-level locks to sequential ZNodes and supporting 
 read/write locks, in order to prevent schema changes during region splits 
 or merges.
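For illustration, the acquire-ordering rule of the classic ZooKeeper sequential-znode read/write lock recipe can be sketched as pure decision logic, with no live ZooKeeper involved. Node names such as "read-0000000003" / "write-0000000002" and the prefix scheme are hypothetical; the actual znode naming in the 89-fb patch may differ:

```java
import java.util.List;

// Sketch of the shared-lock recipe's ordering rule: each contender creates a
// sequential znode; whether it may acquire depends only on the sequence
// numbers of the other children.
final class ZkRwLockOrder {
    private ZkRwLockOrder() {}

    /** Extracts the sequence number ZooKeeper appends after the last '-'. */
    static long seq(String node) {
        return Long.parseLong(node.substring(node.lastIndexOf('-') + 1));
    }

    /** A reader proceeds when no write node has a smaller sequence number. */
    static boolean readerMayAcquire(String self, List<String> children) {
        long mySeq = seq(self);
        return children.stream()
                .filter(n -> n.startsWith("write-"))
                .noneMatch(n -> seq(n) < mySeq);
    }

    /** A writer proceeds only when it holds the smallest sequence of all. */
    static boolean writerMayAcquire(String self, List<String> children) {
        long mySeq = seq(self);
        return children.stream().noneMatch(n -> seq(n) < mySeq);
    }
}
```

Readers can thus share the lock among themselves, while a schema-changing writer waits for every earlier contender; a blocked contender would watch the znode immediately ahead of it to avoid herd effects.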





[jira] [Assigned] (HBASE-5991) Introduce sequential ZNode based read/write locks

2012-07-07 Thread Alex Feinberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Feinberg reassigned HBASE-5991:


Assignee: Alex Feinberg

 Introduce sequential ZNode based read/write locks 
 --

 Key: HBASE-5991
 URL: https://issues.apache.org/jira/browse/HBASE-5991
 Project: HBase
  Issue Type: Improvement
Reporter: Alex Feinberg
Assignee: Alex Feinberg

 This is a continuation of HBASE-5494:
 Currently, table-level write locks have been implemented using non-sequential 
 ZNodes as part of HBASE-5494 and committed to the 89-fb branch. This issue 
 tracks converting the table-level locks to sequential ZNodes and supporting 
 read/write locks, in order to prevent schema changes during region splits 
 or merges.





[jira] [Commented] (HBASE-6312) Make BlockCache eviction thresholds configurable

2012-07-07 Thread Jie Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13408844#comment-13408844
 ] 

Jie Huang commented on HBASE-6312:
--

I agree with modifying those two configuration items. Meanwhile, another point 
Jason raises is how to help users understand the relationship between 
hfile.block.cache.size and the actual cached size. Making these items 
configurable at least makes users aware that the thresholds exist and what the 
expected behavior is; otherwise, they have to read the source code or ask on 
the mailing list.

 Make BlockCache eviction thresholds configurable
 

 Key: HBASE-6312
 URL: https://issues.apache.org/jira/browse/HBASE-6312
 Project: HBase
  Issue Type: Improvement
  Components: io
Affects Versions: 0.94.0
Reporter: Jie Huang
Priority: Minor
 Attachments: hbase-6312.patch


 Some of our customers found that tuning the BlockCache eviction thresholds 
 changed their test results. However, those thresholds are not configurable 
 in the current implementation; the only way to change them is to recompile 
 the HBase source code. We wonder if it is possible to make them configurable.
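A minimal sketch of what configurable eviction thresholds might look like, using plain java.util.Properties in place of Hadoop's Configuration for self-containment. The key names and the 0.85/0.75 defaults are illustrative (they mirror the constants hard-coded in LruBlockCache) and are not necessarily what the attached patch uses:

```java
import java.util.Properties;

// Hypothetical sketch: read eviction thresholds from configuration with
// sane defaults, instead of hard-coding them in LruBlockCache.
final class EvictionThresholds {
    static final float DEFAULT_ACCEPTABLE_FACTOR = 0.85f; // start evicting above this fraction
    static final float DEFAULT_MIN_FACTOR = 0.75f;        // evict down to this fraction

    final float acceptableFactor;
    final float minFactor;

    EvictionThresholds(Properties conf) {
        this.acceptableFactor = getFloat(conf,
                "hbase.lru.blockcache.acceptable.factor", DEFAULT_ACCEPTABLE_FACTOR);
        this.minFactor = getFloat(conf,
                "hbase.lru.blockcache.min.factor", DEFAULT_MIN_FACTOR);
        if (minFactor <= 0f || acceptableFactor > 1f || minFactor > acceptableFactor) {
            throw new IllegalArgumentException("invalid eviction thresholds");
        }
    }

    static float getFloat(Properties conf, String key, float dflt) {
        String v = conf.getProperty(key);
        return v == null ? dflt : Float.parseFloat(v);
    }

    /** Eviction starts once usage exceeds acceptableFactor * capacity. */
    boolean shouldEvict(long used, long capacity) {
        return used > (long) (capacity * acceptableFactor);
    }

    /** Eviction frees enough bytes to get back down to minFactor * capacity. */
    long bytesToFree(long used, long capacity) {
        return Math.max(0, used - (long) (capacity * minFactor));
    }
}
```

This also makes the relationship visible in configuration: hfile.block.cache.size sets the capacity, while the two factors bound how much of that capacity is actually kept filled.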
