[jira] [Commented] (HBASE-7507) Make memstore flush be able to retry after exception

2013-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558722#comment-13558722
 ] 

Hudson commented on HBASE-7507:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #364 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/])
HBASE-7507 Make memstore flush be able to retry after exception (Chunhui) 
(Revision 1436111)

 Result = FAILURE
zjushch : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Make memstore flush be able to retry after exception
 

 Key: HBASE-7507
 URL: https://issues.apache.org/jira/browse/HBASE-7507
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0, 0.94.5

 Attachments: 7507-94.patch, 7507-trunk v1.patch, 7507-trunk v2.patch, 
 7507-trunkv3.patch


 We will abort the regionserver if a memstore flush throws an exception.
 I think we could retry the flush to make the regionserver more stable, because 
 the file system may only be unavailable for a transient period, e.g. while 
 switching namenodes in a NameNode HA environment:
 {code}
 HRegion#internalFlushcache() {
   ...
   try {
     ...
   } catch (Throwable t) {
     DroppedSnapshotException dse = new DroppedSnapshotException("region: " +
         Bytes.toStringBinary(getRegionName()));
     dse.initCause(t);
     throw dse;
   }
   ...
 }

 MemStoreFlusher#flushRegion() {
   ...
   try {
     region.flushcache();
     ...
   } catch (DroppedSnapshotException ex) {
     server.abort("Replay of HLog required. Forcing server shutdown", ex);
   }
   ...
 }
 {code}
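For illustration, here is a minimal sketch of the retry idea in Java. The helper class, the constant names, and the Callable-based shape are assumptions made for this example; they are not taken from the HBASE-7507 patch itself.
{code}
// Hypothetical sketch: wrap the HDFS-facing part of the flush in a bounded retry
// loop so that a transient NameNode failover does not escalate to a server abort.
import java.io.IOException;
import java.util.concurrent.Callable;

final class FlushRetryHelper {
  private static final int MAX_FLUSH_RETRIES = 3;   // illustrative constant, not an HConstants key
  private static final long RETRY_SLEEP_MS = 1000L; // illustrative constant

  static <T> T callWithRetries(Callable<T> flushOp) throws IOException {
    IOException lastFailure = null;
    for (int attempt = 1; attempt <= MAX_FLUSH_RETRIES; attempt++) {
      try {
        return flushOp.call();                      // e.g. write the snapshot out to a new HFile
      } catch (IOException ioe) {
        lastFailure = ioe;                          // possibly a transient NN failover
        try {
          Thread.sleep(RETRY_SLEEP_MS * attempt);   // simple linear backoff between attempts
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw ioe;
        }
      } catch (Exception e) {
        throw new IOException(e);
      }
    }
    throw lastFailure;                              // still failing: caller aborts as before
  }
}
{code}
The point is only that a bounded number of retries with a short sleep lets a brief file-system outage pass without aborting the regionserver.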

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-21 Thread Joey Echeverria (JIRA)
Joey Echeverria created HBASE-7633:
--

 Summary: Add a metric that tracks the current number of used RPC 
threads on the regionservers
 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
Reporter: Joey Echeverria


One way to detect that you're hitting a John Wayne disk[1] would be if we 
could see when region servers exhausted their RPC handlers. This would also be 
useful when tuning the cluster for your workload to make sure that reads or 
writes were not starving the other operations out.

[1] http://hbase.apache.org/book.html#bad.disk
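For illustration, a minimal, hypothetical sketch of the bookkeeping such a metric needs; the class and method names below are invented for this example and are not part of the HBase metrics API.
{code}
// Hypothetical sketch: count how many RPC handler threads are busy right now.
// A metrics reporter could export activeHandlers() as a gauge.
import java.util.concurrent.atomic.AtomicInteger;

final class HandlerUsageTracker {
  private final AtomicInteger active = new AtomicInteger(0);
  private final int totalHandlers;

  HandlerUsageTracker(int totalHandlers) {
    this.totalHandlers = totalHandlers;
  }

  // Called by each handler thread around the call it is processing.
  void callStarted()  { active.incrementAndGet(); }
  void callFinished() { active.decrementAndGet(); }

  int activeHandlers() { return active.get(); }

  // True when every handler is busy, i.e. the condition the metric should surface.
  boolean exhausted() { return active.get() >= totalHandlers; }
}
{code}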

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6907) KeyValue equals and compareTo methods should match

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558861#comment-13558861
 ] 

Ted Yu commented on HBASE-6907:
---

Integrated to trunk.

Thanks for the review, Stack and Matt.

 KeyValue equals and compareTo methods should match
 --

 Key: HBASE-6907
 URL: https://issues.apache.org/jira/browse/HBASE-6907
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Matt Corgan
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 6907-v1.txt, 6907-v2.txt, 6907-v3.txt, 6907-v4.txt, 
 6907-v5.txt


 KeyValue.KVComparator includes the memstoreTS when comparing; however, the 
 KeyValue.equals() method ignores the memstoreTS.
 The Comparator interface has always specified that a comparator return 0 when 
 equals would return true and vice versa.  Obeying that rule has been somewhat 
 optional in the past, but Java 7 introduces a new default collection sorting 
 algorithm, TimSort, which relies on that behavior.  
 http://bugs.sun.com/view_bug.do?bug_id=6804124
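As a toy illustration of that rule (using an invented class, not KeyValue itself), compareTo and equals should agree on which objects are the same:
{code}
// Toy illustration (not HBase code): both compareTo and equals look at both fields,
// so compareTo returns 0 exactly when equals returns true.
import java.util.Objects;

final class VersionedKey implements Comparable<VersionedKey> {
  final String key;
  final long memstoreTS;  // stands in for the field the comparator saw but equals ignored

  VersionedKey(String key, long memstoreTS) {
    this.key = key;
    this.memstoreTS = memstoreTS;
  }

  @Override
  public int compareTo(VersionedKey other) {
    int c = key.compareTo(other.key);
    if (c != 0) return c;
    return Long.compare(other.memstoreTS, this.memstoreTS);  // newer version sorts first
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof VersionedKey)) return false;
    VersionedKey other = (VersionedKey) o;
    return key.equals(other.key) && memstoreTS == other.memstoreTS;
  }

  @Override
  public int hashCode() {
    return Objects.hash(key, memstoreTS);
  }
}
{code}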
 Possible problem spots:
 * there's a Collections.sort(KeyValues) in 
 RedundantKVGenerator.generateTestKeyValues(..)
 * TestColumnSeeking compares two collections of KeyValues using the 
 containsAll method.  It is intentionally ignoring memstoreTS, so will need an 
 alternative method for comparing the two collections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7634) Replication handling of changes to peer clusters is inefficient

2013-01-21 Thread Gabriel Reid (JIRA)
Gabriel Reid created HBASE-7634:
---

 Summary: Replication handling of changes to peer clusters is 
inefficient
 Key: HBASE-7634
 URL: https://issues.apache.org/jira/browse/HBASE-7634
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
 Attachments: HBASE-7634.patch

The current handling of changes to the region servers in a replication peer 
cluster is quite inefficient. The list of region servers that are being 
replicated to is only updated if a large number of issues are encountered 
while replicating.

This can cause it to take quite a while to recognize that a number of the 
regionservers in a peer cluster are no longer available. A potentially bigger 
problem is that if a replication peer cluster is started with a small number of 
regionservers, and then more region servers are added after replication has 
started, the additional region servers will never be used for replication 
(unless there are failures on the in-use regionservers).



Part of the current issue is that the retry code in ReplicationSource#shipEdits 
checks a randomly-chosen replication peer regionserver (in 
ReplicationSource#isSlaveDown) to see if it is up after a replication write has 
failed on a different randomly-chosen replication peer. If the peer is seen as 
not down, another randomly-chosen peer is used for writing.



A second part of the issue is that changes to the list of region servers in a 
peer cluster are not detected at all, and are only picked up if a certain 
number of failures have occurred when trying to ship edits.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7634) Replication handling of changes to peer clusters is inefficient

2013-01-21 Thread Gabriel Reid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabriel Reid updated HBASE-7634:


Attachment: HBASE-7634.patch

Initial patch to resolve this issue. It adds a watcher on the replication peer's 
list of region servers so that changes to that list are picked up.

It also replaces the checking of randomly-chosen peer regionservers with the 
ability to report a bad peer regionserver. When a bad peer regionserver has been 
reported three times, it is no longer used for replication until the list of 
replication peer regionservers is refreshed.
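For illustration, a hypothetical sketch of the bad-peer bookkeeping described above; the class and method names are invented for this example and are not taken from the attached patch.
{code}
// Hypothetical sketch: count failed-replication reports per peer regionserver and
// skip a peer once it has been reported three times, until the peer list is refreshed.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

final class BadPeerTracker {
  private static final int BAD_REPORT_THRESHOLD = 3;
  private final ConcurrentMap<String, AtomicInteger> badReports =
      new ConcurrentHashMap<String, AtomicInteger>();

  // Called after a failed replication attempt against the given peer regionserver.
  void reportBadPeer(String peerServerName) {
    AtomicInteger count = badReports.get(peerServerName);
    if (count == null) {
      AtomicInteger fresh = new AtomicInteger();
      count = badReports.putIfAbsent(peerServerName, fresh);
      if (count == null) {
        count = fresh;
      }
    }
    count.incrementAndGet();
  }

  // A peer is skipped for replication once it has hit the threshold.
  boolean isUsable(String peerServerName) {
    AtomicInteger count = badReports.get(peerServerName);
    return count == null || count.get() < BAD_REPORT_THRESHOLD;
  }

  // Invoked when the watcher sees the peer's region server list change.
  void onPeerListRefreshed() {
    badReports.clear();  // start counting afresh against the new list
  }
}
{code}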


 Replication handling of changes to peer clusters is inefficient
 ---

 Key: HBASE-7634
 URL: https://issues.apache.org/jira/browse/HBASE-7634
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
 Attachments: HBASE-7634.patch


 The current handling of changes to the region servers in a replication peer 
 cluster is quite inefficient. The list of region servers that are being 
 replicated to is only updated if a large number of issues are encountered 
 while replicating.
 This can cause it to take quite a while to recognize that a number of the 
 regionservers in a peer cluster are no longer available. A potentially bigger 
 problem is that if a replication peer cluster is started with a small number 
 of regionservers, and then more region servers are added after replication 
 has started, the additional region servers will never be used for replication 
 (unless there are failures on the in-use regionservers).
 Part of the current issue is that the retry code in 
 ReplicationSource#shipEdits checks a randomly-chosen replication peer 
 regionserver (in ReplicationSource#isSlaveDown) to see if it is up after a 
 replication write has failed on a different randomly-chosen replication peer. 
 If the peer is seen as not down, another randomly-chosen peer is used for 
 writing.
 A second part of the issue is that changes to the list of region servers in a 
 peer cluster are not detected at all, and are only picked up if a certain 
 number of failures have occurred when trying to ship edits.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)
Ted Yu created HBASE-7635:
-

 Summary: HFileSystem should implement Closeable
 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0


From 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
 :
{code}
2013-01-21 11:49:26,141 ERROR [Shutdown of 
org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
Closeable or does not provide closeable invocation handler class $Proxy20
at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
at 
org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
at 
org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Attachment: 7635.txt

Straightforward patch.

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Status: Patch Available  (was: Open)

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2013-01-21 Thread Bryan Baugher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Baugher updated HBASE-3996:
-

Attachment: 3996-v12.txt

Updated to the latest trunk, which had a conflict.

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Bryan Baugher
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v2.txt, 
 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 
 3996-v9.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.
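For illustration, a hypothetical sketch of how a job could describe several scans, each tagged with the table it targets; the attribute key used below is an assumption made for this example, not something taken from the patch.
{code}
// Hypothetical sketch: build one Scan per (table, key range) and tag each Scan with
// its table name, so a MultiTableInputFormat-style input format could turn every
// (table, scan) pair into its own input split and feed them all to the same mapper.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiScanJobSketch {
  // Assumed attribute key for this illustration.
  static final String TABLE_NAME_ATTR = "scan.attributes.table.name";

  public static List<Scan> buildScans() {
    List<Scan> scans = new ArrayList<Scan>();

    Scan scanA = new Scan();
    scanA.setAttribute(TABLE_NAME_ATTR, Bytes.toBytes("tableA"));
    scanA.setStartRow(Bytes.toBytes("row-000"));  // each scan may cover a different range
    scans.add(scanA);

    Scan scanB = new Scan();
    scanB.setAttribute(TABLE_NAME_ATTR, Bytes.toBytes("tableB"));
    scans.add(scanB);

    return scans;
  }
}
{code}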

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558889#comment-13558889
 ] 

stack commented on HBASE-7635:
--

HFileSystem extends FilterFileSystem, which extends FileSystem, which 
implements Closeable (am I missing something?).  Given this, why do we need 
this patch?

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2013-01-21 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558890#comment-13558890
 ] 

Bryan Baugher commented on HBASE-3996:
--

I believe there are two questions left unanswered, as well as some +1's 
still needed:

* The changes to TableSplit would not allow a new version of it to be 
deserialized by an old server. Is that OK for a M/R job?
* It has been mentioned to scope this to scans (of a single table) rather than 
multiple tables.

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Bryan Baugher
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v2.txt, 
 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 
 3996-v9.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-3996:
--

Status: Patch Available  (was: Open)

@Bryan:
Can you upload the patch to the review board?

Thanks

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Bryan Baugher
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v2.txt, 
 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 
 3996-v9.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2013-01-21 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558899#comment-13558899
 ] 

Bryan Baugher commented on HBASE-3996:
--

Done, https://reviews.apache.org/r/9042/diff/

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Bryan Baugher
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v2.txt, 
 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 
 3996-v9.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Status: Open  (was: Patch Available)

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Attachment: (was: 7635.txt)

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row

2013-01-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-5664:
--

Attachment: HBASE-5664_Trunk.patch

Patch for Trunk. Let me try against HadoopQA

 CP hooks in Scan flow for fast forward when filter filters out a row
 

 Key: HBASE-5664
 URL: https://issues.apache.org/jira/browse/HBASE-5664
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-5664_94.patch, HBASE-5664_Trunk.patch


 In HRegion.nextInternal(int limit, String metric) we have a while(true) loop 
 so as to fetch the next result that satisfies the filter condition. When the 
 Filter filters out the currently fetched row, we call nextRow(byte[] currentRow) 
 before going on with the next row.
 {code}
 if (results.isEmpty() || filterRow()) {
 // this seems like a redundant step - we already consumed the row
 // there're no left overs.
 // the reasons for calling this method are:
 // 1. reset the filters.
 // 2. provide a hook to fast forward the row (used by subclasses)
 nextRow(currentRow);
 {code}
 // 2. provide a hook to fast forward the row (used by subclasses)
 We can provide the same fast-forward support for the CP (coprocessor) as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row

2013-01-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-5664:
--

Status: Patch Available  (was: Open)

 CP hooks in Scan flow for fast forward when filter filters out a row
 

 Key: HBASE-5664
 URL: https://issues.apache.org/jira/browse/HBASE-5664
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-5664_94.patch, HBASE-5664_Trunk.patch


 In HRegion.nextInternal(int limit, String metric) we have a while(true) loop 
 so as to fetch the next result that satisfies the filter condition. When the 
 Filter filters out the currently fetched row, we call nextRow(byte[] currentRow) 
 before going on with the next row.
 {code}
 if (results.isEmpty() || filterRow()) {
 // this seems like a redundant step - we already consumed the row
 // there're no left overs.
 // the reasons for calling this method are:
 // 1. reset the filters.
 // 2. provide a hook to fast forward the row (used by subclasses)
 nextRow(currentRow);
 {code}
 // 2. provide a hook to fast forward the row (used by subclasses)
 We can provide the same fast-forward support for the CP (coprocessor) as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row

2013-01-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-5664:
--

Component/s: Filters

 CP hooks in Scan flow for fast forward when filter filters out a row
 

 Key: HBASE-5664
 URL: https://issues.apache.org/jira/browse/HBASE-5664
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Filters
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-5664_94.patch, HBASE-5664_Trunk.patch


 In HRegion.nextInternal(int limit, String metric) we have a while(true) loop 
 so as to fetch the next result that satisfies the filter condition. When the 
 Filter filters out the currently fetched row, we call nextRow(byte[] currentRow) 
 before going on with the next row.
 {code}
 if (results.isEmpty() || filterRow()) {
 // this seems like a redundant step - we already consumed the row
 // there're no left overs.
 // the reasons for calling this method are:
 // 1. reset the filters.
 // 2. provide a hook to fast forward the row (used by subclasses)
 nextRow(currentRow);
 {code}
 // 2. provide a hook to fast forward the row (used by subclasses)
 We can provide the same fast-forward support for the CP (coprocessor) as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-7633:


Assignee: Elliott Clark

Do we have anything like this currently, Mr Metric?

 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3170:
-

Attachment: 3170v5.txt

v5 factors out a bit of common code into a method.  Patch looks good to me.   
We could probably just throw an exception if you try and pass a Get a null row 
too... put the query out of its misery earlier rather than later.  Was 
wondering about the test.  We spin up a mini cluster instance to do the null 
key test.  Should this test just be added to another test that has already put 
up a cluster?

Good stuff lads.
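For illustration, a hypothetical sketch of the null-row guard suggested above, written as a standalone helper rather than as the actual change to Get:
{code}
// Hypothetical sketch: reject a null row at construction time so the query fails
// fast on the client instead of reaching the regionserver.
public final class RowKeyChecks {
  private RowKeyChecks() {}

  public static byte[] checkRow(byte[] row) {
    if (row == null) {
      throw new IllegalArgumentException("Row key cannot be null");
    }
    if (row.length > Short.MAX_VALUE) {
      // Row keys are stored with a short length prefix, so anything longer cannot be written.
      throw new IllegalArgumentException("Row key is too long: " + row.length + " bytes");
    }
    // A zero-length row is deliberately left alone here; whether it should be legal
    // for a Get is exactly the question this issue is about.
    return row;
  }
}
{code}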

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-v2.patch, 
 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 
 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW   COLUMN+CELL 
  
  \x00 column=id:metrics, timestamp=1288375187699, 
 value=foo  
  \x00 column=id:tagk, timestamp=1287522021046, 
 value=bar 
  \x00 column=id:tagv, timestamp=1288111387685, 
 value=qux  
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', "\000"
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0550 seconds
 {code}
 This isn't a parsing problem with the command-line of the shell.  I can 
 reproduce this behavior both with plain Java code and with my asynchbase 
 client.
 Since I don't actually have a row with an empty row key, I expected that the 
 first {{get}} would return nothing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Status: Patch Available  (was: Open)

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Attachment: 7635.txt

Thanks for the review, Stack.

See if this patch is better.

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6907) KeyValue equals and compareTo methods should match

2013-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558927#comment-13558927
 ] 

Hudson commented on HBASE-6907:
---

Integrated in HBase-TRUNK #3773 (See 
[https://builds.apache.org/job/HBase-TRUNK/3773/])
HBASE-6907 KeyValue equals and compareTo methods should match (Ted Yu) 
(Revision 1436434)

 Result = FAILURE
tedyu : 
Files : 
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueTestUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java


 KeyValue equals and compareTo methods should match
 --

 Key: HBASE-6907
 URL: https://issues.apache.org/jira/browse/HBASE-6907
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Matt Corgan
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 6907-v1.txt, 6907-v2.txt, 6907-v3.txt, 6907-v4.txt, 
 6907-v5.txt


 KeyValue.KVComparator includes the memstoreTS when comparing; however, the 
 KeyValue.equals() method ignores the memstoreTS.
 The Comparator interface has always specified that a comparator return 0 when 
 equals would return true and vice versa.  Obeying that rule has been somewhat 
 optional in the past, but Java 7 introduces a new default collection sorting 
 algorithm, TimSort, which relies on that behavior.  
 http://bugs.sun.com/view_bug.do?bug_id=6804124
 Possible problem spots:
 * there's a Collections.sort(KeyValues) in 
 RedundantKVGenerator.generateTestKeyValues(..)
 * TestColumnSeeking compares two collections of KeyValues using the 
 containsAll method.  It is intentionally ignoring memstoreTS, so will need an 
 alternative method for comparing the two collections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558928#comment-13558928
 ] 

stack commented on HBASE-7635:
--

Does it fix the issue?

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7636) TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0

2013-01-21 Thread Ted Yu (JIRA)
Ted Yu created HBASE-7636:
-

 Summary: TestDistributedLogSplitting#testThreeRSAbort fails 
against hadoop 2.0
 Key: HBASE-7636
 URL: https://issues.apache.org/jira/browse/HBASE-7636
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
 Fix For: 0.96.0


From 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
 :
{code}
2013-01-21 11:49:34,276 DEBUG 
[MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
client.HConnectionManager$HConnectionImplementation(956): Looked up root region 
location, connection=hconnection 0x12f19fe; 
serverName=juno.apache.org,55531,1358768819479
2013-01-21 11:49:34,278 INFO  
[MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
catalog.CatalogTracker(576): Failed verification of .META.,,1 at 
address=juno.apache.org,57582,1358768819456; 
org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
in the failed servers list: juno.apache.org/67.195.138.61:57582
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558930#comment-13558930
 ] 

Ted Yu commented on HBASE-7635:
---

Yes.
I issued the following command:

mvn clean test -Dhadoop.profile=2.0 
-Dtest=TestDistributedLogSplitting#testThreeRSAbort

I kept monitoring / searching 
org.apache.hadoop.hbase.master.TestDistributedLogSplitting-output.txt for the 
above-mentioned exception message. There was none.

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558932#comment-13558932
 ] 

stack commented on HBASE-7635:
--

+1

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558934#comment-13558934
 ] 

Hadoop QA commented on HBASE-3996:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565806/3996-v12.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4108//console

This message is automatically generated.

 Support multiple tables and scanners as input to the mapper in map/reduce jobs
 --

 Key: HBASE-3996
 URL: https://issues.apache.org/jira/browse/HBASE-3996
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Eran Kutner
Assignee: Bryan Baugher
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v2.txt, 
 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 
 3996-v9.txt, HBase-3996.patch


 It seems that in many cases feeding data from multiple tables or multiple 
 scanners on a single table can save a lot of time when running map/reduce 
 jobs.
 I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558937#comment-13558937
 ] 

Ted Yu commented on HBASE-5664:
---

{code}
+   * @return Returns whether more rows are available for the scanner or not.
{code}
Remove 'Returns'
{code}
+RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region) throws IOException {
{code}
Wrap long line above.
{code}
+   * @param s
+   * @param currentRow
+   * @return
+   * @throws IOException
+   */
+  public boolean postScannerFilterRow(final InternalScanner s, final byte[] 
currentRow)
{code}
Please complete javadoc.

See if the test failures in TestFromClientSideWithCoprocessor and 
TestFromClientSide are related to this patch.
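For illustration, a toy sketch of how a hook shaped like the one quoted above might be used; the interface and observer below are invented for this example and are not the HBase coprocessor API.
{code}
// Toy illustration: after the filter rejects a row, the region calls back so an
// observer can decide whether the scan should continue, mirroring what nextRow()
// already allows subclasses to do.
interface RowFilteredObserver {
  /**
   * Invoked when the filter has filtered out {@code currentRow}.
   * @return true if the scanner should continue with the next row, false to stop.
   */
  boolean postScannerFilterRow(byte[] currentRow);
}

// A hypothetical observer that stops the scan once rows pass a configured end key.
class StopAtBoundaryObserver implements RowFilteredObserver {
  private final byte[] exclusiveEndRow;

  StopAtBoundaryObserver(byte[] exclusiveEndRow) {
    this.exclusiveEndRow = exclusiveEndRow;
  }

  @Override
  public boolean postScannerFilterRow(byte[] currentRow) {
    // Compare unsigned byte-wise, the same ordering HBase uses for row keys.
    return compareUnsigned(currentRow, exclusiveEndRow) < 0;
  }

  private static int compareUnsigned(byte[] a, byte[] b) {
    int len = Math.min(a.length, b.length);
    for (int i = 0; i < len; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }
}
{code}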

 CP hooks in Scan flow for fast forward when filter filters out a row
 

 Key: HBASE-5664
 URL: https://issues.apache.org/jira/browse/HBASE-5664
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Filters
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-5664_94.patch, HBASE-5664_Trunk.patch


 In HRegion.nextInternal(int limit, String metric) we have a while(true) loop 
 so as to fetch the next result that satisfies the filter condition. When the 
 Filter filters out the currently fetched row, we call nextRow(byte[] currentRow) 
 before going on with the next row.
 {code}
 if (results.isEmpty() || filterRow()) {
 // this seems like a redundant step - we already consumed the row
 // there're no left overs.
 // the reasons for calling this method are:
 // 1. reset the filters.
 // 2. provide a hook to fast forward the row (used by subclasses)
 nextRow(currentRow);
 {code}
 // 2. provide a hook to fast forward the row (used by subclasses)
 We can provide the same fast-forward support for the CP (coprocessor) as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-2611) Handle RS that fails while processing the failure of another one

2013-01-21 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558938#comment-13558938
 ] 

Himanshu Vashishtha commented on HBASE-2611:


[~lhofhansl]: Yes, I followed the same approach in the attached patch.

 Handle RS that fails while processing the failure of another one
 

 Key: HBASE-2611
 URL: https://issues.apache.org/jira/browse/HBASE-2611
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Jean-Daniel Cryans
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0, 0.94.5

 Attachments: HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch


 HBASE-2223 doesn't manage region servers that fail while doing the transfer 
 of HLogs queues from other region servers that failed. Devise a reliable way 
 to do it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558939#comment-13558939
 ] 

Devaraj Das commented on HBASE-3170:


bq. We could probably just throw an exception if you try and pass a Get a null 
row too.. Was wondering about the test.
Ok [~stack], will review the patch from that point of view..

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-v2.patch, 
 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 
 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW   COLUMN+CELL 
  
  \x00 column=id:metrics, timestamp=1288375187699, 
 value=foo  
  \x00 column=id:tagk, timestamp=1287522021046, 
 value=bar 
  \x00 column=id:tagv, timestamp=1288111387685, 
 value=qux  
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', \000
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0550 seconds
 {code}
 This isn't a parsing problem with the command-line of the shell.  I can 
 reproduce this behavior both with plain Java code and with my asynchbase 
 client.
 Since I don't actually have a row with an empty row key, I expected that the 
 first {{get}} would return nothing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread nkeywal (JIRA)
nkeywal created HBASE-7637:
--

 Summary: hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical


I'm unclear on the root cause / fix. Here is the scenario:
{noformat}
mvn clean package install -Dhadoop.profile=2.0 -DskipTests
bin/start-hbase.sh
{noformat}
fails with

{noformat}
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.metrics2.lib.MetricMutable
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
{noformat}

doing 
{noformat}
rm -rf hbase-hadoop1-compat/target/
{noformat}

makes it work. 

In the pom.xml, we never reference hadoop2-compat. But doing so does not help: 
hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558941#comment-13558941
 ] 

Hadoop QA commented on HBASE-3170:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565810/3170v5.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestLocalHBaseCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4110//console

This message is automatically generated.

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-v2.patch, 
 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 
 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW   COLUMN+CELL 
  
  \x00 column=id:metrics, timestamp=1288375187699, 
 value=foo  
  \x00 column=id:tagk, timestamp=1287522021046, 
 value=bar 
  \x00 column=id:tagv, timestamp=1288111387685, 
 value=qux  
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', \000
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  

[jira] [Commented] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558942#comment-13558942
 ] 

Hadoop QA commented on HBASE-5664:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12565809/HBASE-5664_Trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestAccessController
  
org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
  org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint
  org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
  org.apache.hadoop.hbase.TestLocalHBaseCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4109//console

This message is automatically generated.

 CP hooks in Scan flow for fast forward when filter filters out a row
 

 Key: HBASE-5664
 URL: https://issues.apache.org/jira/browse/HBASE-5664
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Filters
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-5664_94.patch, HBASE-5664_Trunk.patch


 In HRegion.nextInternal(int limit, String metric)
   We have while(true) loop so as to fetch a next result which satisfies 
 filter condition. When Filter filters out the current fetched row we call 
 nextRow(byte [] currentRow) before going with the next row.
 {code}
 if (results.isEmpty() || filterRow()) {
 // this seems like a redundant step - we already consumed the row
 // there're no left overs.
 // the reasons for calling this method are:
 // 1. reset the filters.
 // 2. provide a hook to fast forward the row (used by subclasses)
 nextRow(currentRow);
 {code}
 // 2. provide a hook to fast forward the row (used by subclasses)
 We can provide the same fast-forward support for the CP also.
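
For illustration only: the hook name and signature below are hypothetical (they are
not taken from the attached patches); this is roughly the shape such a coprocessor
extension point could take.
{code}
import java.io.IOException;

// Hypothetical observer hook, invoked when the filter has filtered out the whole
// row. Returning false lets the observer stop the scan early, mirroring what the
// nextRow() override already allows region subclasses to do.
public interface ScannerFastForwardObserver {
  boolean postScannerFilterRow(byte[] currentRow, boolean hasMore) throws IOException;
}
{code}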

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558964#comment-13558964
 ] 

Ted Yu commented on HBASE-3170:
---

How about adding the new test to:
{code}
public class TestFromClientSide3 {
{code}
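
For illustration only (this is not the attached patch; the table and class names are
made up): a test along these lines could go into TestFromClientSide3. It accepts either
outcome discussed above for a Get on the empty row key, an empty Result or a client-side
IllegalArgumentException, and only fails if the empty key silently resolves to the first
row.
{code}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Test;

public class TestEmptyRowKeyGet {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @Test
  public void testGetWithEmptyRowKey() throws Exception {
    TEST_UTIL.startMiniCluster();
    try {
      byte[] family = Bytes.toBytes("f");
      HTable table = TEST_UTIL.createTable(Bytes.toBytes("emptyRowKey"), family);
      Put put = new Put(Bytes.toBytes("firstRow"));
      put.add(family, Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);
      try {
        // A Get on the empty row key must not silently resolve to the first row.
        Result r = table.get(new Get(HConstants.EMPTY_BYTE_ARRAY));
        assertTrue("Empty row key should not return another row's data", r.isEmpty());
      } catch (IllegalArgumentException expected) {
        // Also acceptable: the client rejects the empty row key outright.
      }
    } finally {
      TEST_UTIL.shutdownMiniCluster();
    }
  }
}
{code}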

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-v2.patch, 
 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 
 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW   COLUMN+CELL 
  
  \x00 column=id:metrics, timestamp=1288375187699, 
 value=foo  
  \x00 column=id:tagk, timestamp=1287522021046, 
 value=bar 
  \x00 column=id:tagv, timestamp=1288111387685, 
 value=qux  
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', \000
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0550 seconds
 {code}
 This isn't a parsing problem with the command-line of the shell.  I can 
 reproduce this behavior both with plain Java code and with my asynchbase 
 client.
 Since I don't actually have a row with an empty row key, I expected that the 
 first {{get}} would return nothing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7637:
--

Fix Version/s: 0.96.0

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558968#comment-13558968
 ] 

Hadoop QA commented on HBASE-7635:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565811/7635.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.wal.TestHLog

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//console

This message is automatically generated.

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7516) Make compaction policy pluggable

2013-01-21 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7516:
---

Attachment: trunk-7516.patch

Attached trunk-7516.patch, which supports level compaction and other pluggable 
compaction policies.

 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Sergey Shelukhin
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558973#comment-13558973
 ] 

Sergey Shelukhin commented on HBASE-7329:
-

bq. Have you done much testing of this patch SS?
I've run mvn tests; I will run some integration tests.

bq. Nit: Below looks like it should be class comment rather than internal 
implementation comment or do you think otherwise?
It's actually an implementation comment since it describes the implementation 
:) Class and method javadoc describe the external stuff.

bq. Initially, the number of operations is 1.
bq. Is it right having it at 1 when we construct the class? Should we wait on 
first beginOp call?
Having 1 at start allows us to maintain a simple invariant that decrement to 0 
is always the last.
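
(A toy sketch of that invariant, not the code in the patch; the real change presumably
also has to stop accepting new operations explicitly. The owner holds the initial count
of 1, so its own decrement during shutdown is the one that can take the count to 0, and
reaching 0 is always the final event.)
{code}
import java.util.concurrent.atomic.AtomicLong;

// Toy illustration only: a counter that starts at 1 ("owned" by the component),
// so the shutdown path's own decrement is what finally lets the count reach 0.
class OpCounter {
  private final AtomicLong ops = new AtomicLong(1); // 1 = the owner's reference

  /** Returns false once the count has hit 0, i.e. the component is drained. */
  boolean beginOp() {
    long cur;
    do {
      cur = ops.get();
      if (cur == 0) return false;           // drained: refuse new operations
    } while (!ops.compareAndSet(cur, cur + 1));
    return true;
  }

  void endOp() {
    if (ops.decrementAndGet() == 0) {
      synchronized (this) { notifyAll(); }  // the final decrement wakes the drainer
    }
  }

  /** Owner drops its initial count, then waits for outstanding operations to end. */
  synchronized void stopAndDrain() throws InterruptedException {
    endOp();                                // release the owner's "1"
    while (ops.get() > 0) {
      wait();
    }
  }
}
{code}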

bq. It is not your fault but that is sure an ugly name on a method, 
getCompleteCacheFlushSequenceId. From its name you would not know what it is 
for.
bq. In fact, if you want to remove it it looks like you could since 
'TransactionalRegion' is a facility that no longer exists and going forward if 
you wanted to do this kinda thing, you'd do it via a coprocessor.
Removed.

bq. Is it right getting seqid before we advance memstore? We used to do it 
other way around.
Fixed.

bq. I wonder if the log of the roll should be outside of the lock...? Could be 
a bit sloppy but maybe it does not have to be too precise?
You mean rollWriter? The only reason it's inside the lock is to prevent two 
concurrent rolls. It could be done differently, e.g. we could lock around a 
boolean check-and-set and bail on duplicates. However, the current logic worked 
by having rollWriter calls queue up; I'm not sure if anything relies on that.

I will submit another patch, then run some integration tests on real cluster.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558975#comment-13558975
 ] 

Ted Yu commented on HBASE-7635:
---

Looks like a QA environment issue:
https://builds.apache.org/job/PreCommit-HBASE-Build/4111//testReport/org.apache.hadoop.hbase.regionserver.wal/TestHLog/testAppendClose/
{code}
java.net.BindException: Problem binding to localhost/127.0.0.1:57829 : Address 
already in use
at org.apache.hadoop.ipc.Server.bind(Server.java:228)
at org.apache.hadoop.ipc.Server$Listener.init(Server.java:302)
at org.apache.hadoop.ipc.Server.init(Server.java:1488)
at org.apache.hadoop.ipc.RPC$Server.init(RPC.java:560)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:521)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:295)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:529)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1403)
{code}
I ran the test against both hadoop 1.0 and 2.0:

 1528  ~/runtest.sh 4 TestHLog
 1529  mvn clean test -Dhadoop.profile=2.0 -Dtest=TestHLog

They all passed.

Will integrate later today if there is no further review comment.

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) HFileSystem should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Hadoop Flags: Reviewed

 HFileSystem should implement Closeable
 --

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7516) Make compaction policy pluggable

2013-01-21 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558976#comment-13558976
 ] 

Jimmy Xiang commented on HBASE-7516:


I put my patch on RB https://reviews.apache.org/r/9044/.  It is very different 
from Sergey's patch. Instead of just making the configuration pluggable, I made 
the compactor part of the policy. I assume the configuration is tightly related 
to the compactor for each compaction algorithm.
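
Purely as an illustration of that shape (hypothetical names and signatures, not code
from trunk-7516.patch): the policy both selects the files and performs the rewrite,
and the rewrite may emit more than one output file.
{code}
import java.io.IOException;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch; F stands in for HBase's store file type.
interface PluggableCompaction<F> {
  /** Selection: pick which of the candidate files should be compacted. */
  Collection<F> select(Collection<F> candidates, boolean forceMajor);

  /** Execution: rewrite the selected files; may produce several output files. */
  List<F> compact(Collection<F> selectedFiles) throws IOException;
}
{code}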

 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Sergey Shelukhin
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558984#comment-13558984
 ] 

stack commented on HBASE-7329:
--

bq. You mean rollWriter? 

I meant the fat log message we make under the lock.  Maybe this log could be 
done outside the lock?

I think running integration tests a good idea.  The spaghetti locking and state 
management that is in place around flush/close and log roll was hard won.  
Changing it around may bite in unexpected ways.

Regarding the internal comment, it talks about calling beginOp then you must 
call... I'm not sure; it looks like two possibilities.  I was thinking the 
class comment would include how you would use the class?  This is a nit.  Not 
important.

Thanks for persisting w/ this one, Sergey.  It's a nice patch.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7516) Make compaction policy pluggable

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558985#comment-13558985
 ] 

Hadoop QA commented on HBASE-7516:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565819/trunk-7516.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestCatalogJanitor
  org.apache.hadoop.hbase.regionserver.TestBlocksScanned
  org.apache.hadoop.hbase.regionserver.TestResettingCounters
  org.apache.hadoop.hbase.regionserver.TestScanWithBloomError
  org.apache.hadoop.hbase.regionserver.TestColumnSeeking
  org.apache.hadoop.hbase.regionserver.TestHBase7051
  org.apache.hadoop.hbase.regionserver.TestSplitTransaction
  org.apache.hadoop.hbase.filter.TestColumnPrefixFilter
  
org.apache.hadoop.hbase.regionserver.TestDefaultCompactSelection
  org.apache.hadoop.hbase.client.TestIntraRowPagination
  org.apache.hadoop.hbase.filter.TestDependentColumnFilter
  org.apache.hadoop.hbase.coprocessor.TestRegionObserverStacking
  org.apache.hadoop.hbase.regionserver.TestHRegionInfo
  org.apache.hadoop.hbase.filter.TestMultipleColumnPrefixFilter
  org.apache.hadoop.hbase.regionserver.TestKeepDeletes
  org.apache.hadoop.hbase.regionserver.TestMinVersions
  org.apache.hadoop.hbase.filter.TestFilter
  org.apache.hadoop.hbase.regionserver.TestScanner
  org.apache.hadoop.hbase.regionserver.TestWideScanner
  org.apache.hadoop.hbase.coprocessor.TestCoprocessorInterface

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4112//console

This message is automatically generated.

 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Sergey Shelukhin
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7516) Make compaction policy pluggable

2013-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558992#comment-13558992
 ] 

Sergey Shelukhin commented on HBASE-7516:
-

I took a cursory look; this patch makes compactor also pluggable and makes it 
return multiple files, and moves the default compaction policy stuff off the 
base class.
If compactionPolicy returns the selection, I wonder if (while Compactor is separate for 
reuse) it makes sense to make the compactionPolicy interface simpler, and just let 
it compact. E.g. if Store has no default selection stuff anymore, it doesn't 
make sense for it to get the selection and feed it into the compactor, right?
Then, does it make sense to change the CP interface to be called with all files post 
compaction instead of one file? I am not sure what use-cases it has, but otherwise 
there's no way to tell apart different compaction algorithms.

My main concern is that this patch does not allow us to implement level 
compaction as described (see the txt in the level compaction issue). We can implement 
different algos which will allow for gradual compaction at the cost of IO, but 
not the level algorithm, because that would remove the file ordering by seqNum, 
break the heuristic for determining the mid-point for a split, and other things.



 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Sergey Shelukhin
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7329:


Attachment: HBASE-7329-v6.patch

Changes from review by stack (minor).

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13558996#comment-13558996
 ] 

nkeywal commented on HBASE-7637:


I will need to fix it, as it blocks some MTTR tests. Maybe I'm actually the 
cause of it, with stuff in the pom and the cached path. But if someone already 
knows exactly what to do, I'm interested :-).

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7559) Add additional Snapshots Unit Test Coverage

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7559:
--

Status: Open  (was: Patch Available)

 Add additional Snapshots Unit Test Coverage
 ---

 Key: HBASE-7559
 URL: https://issues.apache.org/jira/browse/HBASE-7559
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Aleksandr Shulman
Assignee: Aleksandr Shulman
 Fix For: 0.96.0

 Attachments: 7559-v7.txt, aleks-snapshots.patch


 Add additional testing for Snapshots. In particular, we should add tests to 
 verify that operations on cloned tables do not affect the original (and vice 
 versa). Also, we should do testing on table describes before and after 
 snapshot/restore operations. Finally, we should add testing for the HBase 
 shell.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7559) Add additional Snapshots Unit Test Coverage

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7559:
--

Attachment: 7559-v7.txt

Patch v7 fixes the format of license header in the newly added files.

 Add additional Snapshots Unit Test Coverage
 ---

 Key: HBASE-7559
 URL: https://issues.apache.org/jira/browse/HBASE-7559
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Aleksandr Shulman
Assignee: Aleksandr Shulman
 Fix For: 0.96.0

 Attachments: 7559-v7.txt, aleks-snapshots.patch


 Add additional testing for Snapshots. In particular, we should add tests to 
 verify that operations on cloned tables do not affect the original (and vice 
 versa). Also, we should do testing on table describes before and after 
 snapshot/restore operations. Finally, we should add testing for the HBase 
 shell.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-3170:
---

Attachment: 3170-5.patch

This puts the test in TestFromClientSide3. On the handling of a null row key, 
the RPC handler will throw an NPE immediately, since ProtoBufUtil tries to 
convert the PB request to a regular request and that conversion throws the NPE. 
I think handling that should be outside the scope of this jira since there could 
be other such nulls in requests. [~stack], what do you think?

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-5.patch, 
 3170-v2.patch, 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 
 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW   COLUMN+CELL 
  
  \x00 column=id:metrics, timestamp=1288375187699, 
 value=foo  
  \x00 column=id:tagk, timestamp=1287522021046, 
 value=bar 
  \x00 column=id:tagv, timestamp=1288111387685, 
 value=qux  
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', \000
 COLUMNCELL
  
  id:metrics   timestamp=1288375187699, value=foo  

  id:tagk  timestamp=1287522021046, value=bar  

  id:tagv  timestamp=1288111387685, value=qux  
 
 3 row(s) in 0.0550 seconds
 {code}
 This isn't a parsing problem with the command-line of the shell.  I can 
 reproduce this behavior both with plain Java code and with my asynchbase 
 client.
 Since I don't actually have a row with an empty row key, I expected that the 
 first {{get}} would return nothing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7559) Add additional Snapshots Unit Test Coverage

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559009#comment-13559009
 ] 

Ted Yu commented on HBASE-7559:
---

New tests pass:

Running org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence
2013-01-21 11:19:49.812 java[98894:1203] Unable to load realm info from 
SCDynamicStore
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.802 sec
Running org.apache.hadoop.hbase.client.TestSnapshotMetadata
2013-01-21 11:20:59.189 java[98909:1203] Unable to load realm info from 
SCDynamicStore
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.716 sec

I think patch v7 should be good.

 Add additional Snapshots Unit Test Coverage
 ---

 Key: HBASE-7559
 URL: https://issues.apache.org/jira/browse/HBASE-7559
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Aleksandr Shulman
Assignee: Aleksandr Shulman
 Fix For: 0.96.0

 Attachments: 7559-v7.txt, aleks-snapshots.patch


 Add additional testing for Snapshots. In particular, we should add tests to 
 verify that operations on cloned tables do not affect the original (and vice 
 versa). Also, we should do testing on table describes before and after 
 snapshot/restore operations. Finally, we should add testing for the HBase 
 shell.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7329:


Attachment: HBASE-7329-v6.patch

A little more CR feedback... not technically related, but a good point. Moved 
the log message outside of updateLock and cleaned it up a bit.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559020#comment-13559020
 ] 

Ted Yu commented on HBASE-7637:
---

I think this should be related to line 906 in pom.xml:
<compat.module>hbase-hadoop1-compat</compat.module>

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559024#comment-13559024
 ] 

nkeywal commented on HBASE-7637:


Yes, I tried to change it (and I guess it must be done), but compat1 is still 
built and is found in the path.

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559034#comment-13559034
 ] 

nkeywal commented on HBASE-7637:


It's not in cached_classpath.txt.
But if you do bin/hbase classpath, it shows up.

Investigating...

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush

2013-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559036#comment-13559036
 ] 

Sergey Shelukhin commented on HBASE-6466:
-

ping?

 Enable multi-thread for memstore flush
 --

 Key: HBASE-6466
 URL: https://issues.apache.org/jira/browse/HBASE-6466
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.96.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-6466.patch, HBASE-6466v2.patch, 
 HBASE-6466v3.1.patch, HBASE-6466v3.patch, HBASE-6466-v4.patch, 
 HBASE-6466-v4.patch


 If the KV is large or the HLog is closed under high write pressure, we found the 
 memstore is often above the high water mark and blocks the puts.
 So should we enable multi-threaded memstore flush?
 Some performance test data for reference:
 1. Test environment:
 random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
 regionservers, 300 IPC handlers per regionserver; 5 clients, 50 writer threads 
 per client
 2. Test results:
 one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per 
 regionserver, with many aboveGlobalMemstoreLimit blocking events
 two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per 
 regionserver
 200 writer threads per client and two cacheFlush handlers: tps 16.1k/s per 
 regionserver, flush 18.6MB/s per regionserver
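
A minimal sketch of the idea (not the attached patch; the class name and structure are
illustrative): hand flush requests to a small pool of handler threads instead of a
single flusher thread.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: several regions can be flushed in parallel when more than
// one handler thread is configured.
class MemStoreFlusherPool {
  private final ExecutorService handlers;

  MemStoreFlusherPool(int handlerCount) {
    this.handlers = Executors.newFixedThreadPool(handlerCount);
  }

  /** Queue one region's flush; with N handlers, up to N flushes run concurrently. */
  void requestFlush(Runnable flushRegionTask) {
    handlers.execute(flushRegionTask);
  }

  void shutdown() {
    handlers.shutdown();
  }
}
{code}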

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559037#comment-13559037
 ] 

nkeywal commented on HBASE-7637:


Here is the guilty part, in bin/hbase:

add_maven_main_classes_to_classpath() {
  # assumes all modules are named hbase-* in the top level directory
  IFS=$ORIG_IFS
  for module in `ls $HBASE_HOME | grep 'hbase-*'`
  do
    add_to_cp_if_exists $HBASE_HOME/$module/target/classes
  done
}


 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush

2013-01-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559038#comment-13559038
 ] 

Elliott Clark commented on HBASE-6466:
--

Tested this on a small cluster and everything seemed to work pretty well.  
Nothing strange happened and there were no pauses.

Code in trunk has changed quite a lot since this patch started, so I can't 
really pinpoint what was happening. 

 Enable multi-thread for memstore flush
 --

 Key: HBASE-6466
 URL: https://issues.apache.org/jira/browse/HBASE-6466
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.96.0
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-6466.patch, HBASE-6466v2.patch, 
 HBASE-6466v3.1.patch, HBASE-6466v3.patch, HBASE-6466-v4.patch, 
 HBASE-6466-v4.patch


 If the KV is large or the HLog is closed under high write pressure, we found 
 the memstore is often above the high water mark and blocks the puts.
 So should we enable multi-thread for Memstore Flush?
 Some performance test data for reference:
 1. test environment: 
 random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
 regionservers, 300 ipc handlers per regionserver; 5 clients, 50 writer 
 threads per client
 2. test results:
 one cacheFlush handler: tps 7.8k/s per regionserver, Flush 10.1MB/s per 
 regionserver; many aboveGlobalMemstoreLimit blocks appear
 two cacheFlush handlers: tps 10.7k/s per regionserver, Flush 12.46MB/s per 
 regionserver
 200 writer threads per client and two cacheFlush handlers: tps 16.1k/s per 
 regionserver, Flush 18.6MB/s per regionserver

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559040#comment-13559040
 ] 

nkeywal commented on HBASE-7637:


The best option would be to build only the compat module we need. I don't know 
if it's possible with Maven, but I will try (tomorrow :-).

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Attachment: 7594-1.patch

Treat the symptom.
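
The symptom being treated: the comparator class name read from the HFile trailer is instantiated reflectively, and Class.newInstance() throws InstantiationException when that name resolves to an interface or abstract class such as RawComparator. A minimal sketch of the pattern (hypothetical class, not the FixedFileTrailer code):

{code}
import java.io.IOException;
import java.lang.reflect.Modifier;

import org.apache.hadoop.io.RawComparator;

// Hypothetical sketch: instantiate a comparator by its stored class name,
// failing with a clear message when the name is not a concrete class.
public class ComparatorFactory {
  @SuppressWarnings("unchecked")
  public static RawComparator<byte[]> createComparator(String className) throws IOException {
    try {
      Class<?> clazz = Class.forName(className);
      if (clazz.isInterface() || Modifier.isAbstract(clazz.getModifiers())) {
        throw new IOException("Comparator class " + className + " cannot be instantiated");
      }
      return (RawComparator<byte[]>) clazz.newInstance();
    } catch (ClassNotFoundException e) {
      throw new IOException(e);
    } catch (InstantiationException e) {
      throw new IOException(e);
    } catch (IllegalAccessException e) {
      throw new IOException(e);
    }
  }
}
{code}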

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559046#comment-13559046
 ] 

stack commented on HBASE-7594:
--

go for it

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Attachment: (was: 7594-1.patch)

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell

 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Status: Patch Available  (was: Open)

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Attachment: 7594-1.patch

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7268) correct local region location cache information can be overwritten w/stale information from an old server

2013-01-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-7268:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed v9. Thanks Sergey. 
To be clear, this bug also affects 0.94, but we won't backport this. The reason 
is that this is a relatively rare corner case. Correct me if I am wrong. 

 correct local region location cache information can be overwritten w/stale 
 information from an old server
 -

 Key: HBASE-7268
 URL: https://issues.apache.org/jira/browse/HBASE-7268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Fix For: 0.96.0

 Attachments: 7268-v6.patch, 7268-v8.patch, HBASE-7268-v0.patch, 
 HBASE-7268-v0.patch, HBASE-7268-v1.patch, HBASE-7268-v2.patch, 
 HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v2-plus-masterTs.patch, 
 HBASE-7268-v3.patch, HBASE-7268-v4.patch, HBASE-7268-v5.patch, 
 HBASE-7268-v6.patch, HBASE-7268-v7.patch, HBASE-7268-v8.patch, 
 HBASE-7268-v9.patch


 Discovered via HBASE-7250; related to HBASE-5877.
 Test is writing from multiple threads.
 Server A has region R; client knows that.
 R gets moved from A to server B.
 B gets killed.
 R gets moved by master to server C.
 ~15 seconds later, a client tries to write to it (on A?).
 Multiple client threads report, from the RegionMoved exception processing 
 logic, that R moved from C to B, even though such a transition never happened 
 (neither during nor before the sequence described above). Not quite sure how 
 the client learned of the transition to C; I assume it's from meta via some 
 other thread...
 Then, the put fails (it may fail due to accumulated errors that are not 
 logged, which I am investigating... but the bogus cache update is there 
 notwithstanding).
 I have a patch, but I am not sure if it works; the test still fails locally 
 for a yet unknown reason.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3170:
-

  Resolution: Fixed
Release Note: If no row is specified by a Get, we no longer return the first 
row in the table. Now we fail.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks lads.
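
The behavior change amounts to validating the row up front instead of letting an empty key silently match the first row in the table. A minimal sketch of such a check (hypothetical helper, not the committed patch):

{code}
// Hypothetical sketch: reject null or zero-length row keys when a Get is built,
// rather than treating the empty key as "first row of the region".
public final class RowValidator {
  private RowValidator() {}

  public static byte[] checkRow(byte[] row) {
    if (row == null || row.length == 0) {
      throw new IllegalArgumentException("Row key must not be null or empty for a Get");
    }
    return row;
  }
}
{code}

Scans are a different case: there an empty start row legitimately means "start of table", so the check only applies to point lookups.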

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-5.patch, 
 3170-v2.patch, 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW                COLUMN+CELL
  \x00              column=id:metrics, timestamp=1288375187699, value=foo
  \x00              column=id:tagk, timestamp=1287522021046, value=bar
  \x00              column=id:tagv, timestamp=1288111387685, value=qux
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMN             CELL
  id:metrics        timestamp=1288375187699, value=foo
  id:tagk           timestamp=1287522021046, value=bar
  id:tagv           timestamp=1288111387685, value=qux
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', "\000"
 COLUMN             CELL
  id:metrics        timestamp=1288375187699, value=foo
  id:tagk           timestamp=1287522021046, value=bar
  id:tagv           timestamp=1288111387685, value=qux
 3 row(s) in 0.0550 seconds
 {code}
 This isn't a parsing problem with the command-line of the shell.  I can 
 reproduce this behavior both with plain Java code and with my asynchbase 
 client.
 Since I don't actually have a row with an empty row key, I expected that the 
 first {{get}} would return nothing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559054#comment-13559054
 ] 

Hadoop QA commented on HBASE-7329:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565825/HBASE-7329-v6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestLocalHBaseCluster

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4113//console

This message is automatically generated.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559055#comment-13559055
 ] 

Sergey Shelukhin commented on HBASE-7329:
-

Integration tests have run without problems... I can see a lot of 
rolling/flushing in the logs, although of course this is not a rigorous test 
for this patch in particular.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3170) RegionServer confused about empty row keys

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559056#comment-13559056
 ] 

Hadoop QA commented on HBASE-3170:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565830/3170-5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4114//console

This message is automatically generated.

 RegionServer confused about empty row keys
 --

 Key: HBASE-3170
 URL: https://issues.apache.org/jira/browse/HBASE-3170
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89.20100621, 0.89.20100924, 0.90.0, 0.90.1, 0.90.2, 
 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
Reporter: Benoit Sigoure
Assignee: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 3170-1.patch, 3170-3.patch, 3170-4.patch, 3170-5.patch, 
 3170-v2.patch, 3170-v3.patch, 3170-v3.patch, 3170v5.txt


 I'm no longer sure about the expected behavior when using an empty row key 
 (e.g. a 0-byte long byte array).  I assumed that this was a legitimate row 
 key, just like having an empty column qualifier is allowed.  But it seems 
 that the RegionServer considers the empty row key to be whatever the first 
 row key is.
 {code}
 Version: 0.89.20100830, r0da2890b242584a8a5648d83532742ca7243346b, Sat Sep 18 15:30:09 PDT 2010
 hbase(main):001:0> scan 'tsdb-uid', {LIMIT => 1}
 ROW                COLUMN+CELL
  \x00              column=id:metrics, timestamp=1288375187699, value=foo
  \x00              column=id:tagk, timestamp=1287522021046, value=bar
  \x00              column=id:tagv, timestamp=1288111387685, value=qux
 1 row(s) in 0.4610 seconds
 hbase(main):002:0> get 'tsdb-uid', ''
 COLUMN             CELL
  id:metrics        timestamp=1288375187699, value=foo
  id:tagk           timestamp=1287522021046, value=bar
  id:tagv           timestamp=1288111387685, value=qux
 3 row(s) in 0.0910 seconds
 hbase(main):003:0> get 'tsdb-uid', "\000"
 COLUMN             CELL
  id:metrics        timestamp=1288375187699, value=foo
  id:tagk           timestamp=1287522021046, value=bar

[jira] [Commented] (HBASE-7268) correct local region location cache information can be overwritten w/stale information from an old server

2013-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559066#comment-13559066
 ] 

Sergey Shelukhin commented on HBASE-7268:
-

I think part of this bug would affect 0.94 (the removal from cache only), and it 
wouldn't require anything involved to fix; I will create a JIRA.
The main part is not needed because the redirection logic is not in 0.94.
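
A minimal sketch of the guard being described, i.e. only dropping the cached location when the error actually came from the server we have cached (hypothetical types, not the committed patch):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: evict a region's cached location only if the failing
// server is the one currently cached; a stale error from an older server
// must not wipe out a fresher entry.
public class RegionLocationCache {
  private final ConcurrentMap<String, String> regionToServer =
      new ConcurrentHashMap<String, String>();

  public void cacheLocation(String regionName, String serverName) {
    regionToServer.put(regionName, serverName);
  }

  public void clearCacheOnError(String regionName, String failedServer) {
    // remove(key, value) only evicts when the cached value still matches the
    // server that produced the error, and does so atomically.
    regionToServer.remove(regionName, failedServer);
  }
}
{code}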

 correct local region location cache information can be overwritten w/stale 
 information from an old server
 -

 Key: HBASE-7268
 URL: https://issues.apache.org/jira/browse/HBASE-7268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Fix For: 0.96.0

 Attachments: 7268-v6.patch, 7268-v8.patch, HBASE-7268-v0.patch, 
 HBASE-7268-v0.patch, HBASE-7268-v1.patch, HBASE-7268-v2.patch, 
 HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v2-plus-masterTs.patch, 
 HBASE-7268-v3.patch, HBASE-7268-v4.patch, HBASE-7268-v5.patch, 
 HBASE-7268-v6.patch, HBASE-7268-v7.patch, HBASE-7268-v8.patch, 
 HBASE-7268-v9.patch


 Discovered via HBASE-7250; related to HBASE-5877.
 Test is writing from multiple threads.
 Server A has region R; client knows that.
 R gets moved from A to server B.
 B gets killed.
 R gets moved by master to server C.
 ~15 seconds later, a client tries to write to it (on A?).
 Multiple client threads report, from the RegionMoved exception processing 
 logic, that R moved from C to B, even though such a transition never happened 
 (neither during nor before the sequence described above). Not quite sure how 
 the client learned of the transition to C; I assume it's from meta via some 
 other thread...
 Then, the put fails (it may fail due to accumulated errors that are not 
 logged, which I am investigating... but the bogus cache update is there 
 notwithstanding).
 I have a patch, but I am not sure if it works; the test still fails locally 
 for a yet unknown reason.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7638) [0.94] region cache entry should only be removed on error if the error is from the server currently in cache

2013-01-21 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-7638:
---

 Summary: [0.94] region cache entry should only be removed on error 
if the error is from the server currently in cache
 Key: HBASE-7638
 URL: https://issues.apache.org/jira/browse/HBASE-7638
 Project: HBase
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor


See HBASE-7268. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559067#comment-13559067
 ] 

Hadoop QA commented on HBASE-7329:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565832/HBASE-7329-v6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestLocalHBaseCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4115//console

This message is automatically generated.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7635) Proxy created by HFileSystem#createReorderingProxy() should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559071#comment-13559071
 ] 

Ted Yu commented on HBASE-7635:
---

Integrated to trunk.

Thanks for the review, Stack.
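
The fix named in the summary boils down to listing Closeable among the proxy's interfaces, so that RPC.stopProxy() can find a Closeable to invoke during shutdown. A rough sketch of that shape (a generic factory; the real method wraps HDFS's ClientProtocol and is not reproduced here):

{code}
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical sketch: build a dynamic proxy that also implements Closeable so
// shutdown code can detect it and call close() through the invocation handler.
public class ReorderingProxyFactory {
  @SuppressWarnings("unchecked")
  public static <T> T createProxy(Class<T> iface, InvocationHandler handler) {
    return (T) Proxy.newProxyInstance(
        iface.getClassLoader(),
        new Class<?>[] { iface, Closeable.class },
        handler);
  }
}
{code}

The invocation handler still has to route close() somewhere sensible (typically to the wrapped delegate), otherwise closing the proxy becomes a no-op.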

 Proxy created by HFileSystem#createReorderingProxy() should implement 
 Closeable
 ---

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7635) Proxy created by HFileSystem#createReorderingProxy() should implement Closeable

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7635:
--

Summary: Proxy created by HFileSystem#createReorderingProxy() should 
implement Closeable  (was: HFileSystem should implement Closeable)

 Proxy created by HFileSystem#createReorderingProxy() should implement 
 Closeable
 ---

 Key: HBASE-7635
 URL: https://issues.apache.org/jira/browse/HBASE-7635
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7635.txt


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:26,141 ERROR [Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081] 
 server.NIOServerCnxnFactory$1(44): Thread Thread[Shutdown of 
 org.apache.hadoop.hbase.fs.HFileSystem@1792081,5,main] died
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not 
 Closeable or does not provide closeable invocation handler class $Proxy20
   at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)
   at 
 org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:638)
   at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:696)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
   at 
 org.apache.hadoop.fs.FilterFileSystem.close(FilterFileSystem.java:404)
   at org.apache.hadoop.hbase.fs.HFileSystem.close(HFileSystem.java:148)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread.run(MiniHBaseCluster.java:187)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559072#comment-13559072
 ] 

Hadoop QA commented on HBASE-7594:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565838/7594-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestHFile

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4116//console

This message is automatically generated.

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at 

[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559078#comment-13559078
 ] 

Elliott Clark commented on HBASE-7637:
--

Do we even need that add_maven_main_classes_to_classpath anymore? It seems like 
cached_classpath contained all of the jars that I would expect.

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559080#comment-13559080
 ] 

Elliott Clark commented on HBASE-7633:
--

In 0.94 there's: 
* callQueueLen

In trunk there are a few more metrics:
* numCallsInGeneralQueue
* numCallsInPriorityQueue
* numCallsInReplicationQueue

While they don't tell you how many threads are currently running, they do hint at 
whether things are backing up.
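
For reference, a minimal sketch of polling one of these counters over JMX. The service URL, the port (10102), and the RPCStatistics ObjectName below are assumptions about how JMX is exposed on the regionserver, not anything stated in this issue; verify them (e.g. with jconsole) before relying on them.

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RpcQueuePoller {
  public static void main(String[] args) throws Exception {
    // Assumes the regionserver JVM was started with the standard
    // com.sun.management.jmxremote options on port 10102 (hypothetical).
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://rs-host:10102/jmxrmi");
    JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
    try {
      MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
      // 0.94-style RPC statistics bean; the exact ObjectName and attribute
      // names vary by version, so check them against your deployment first.
      ObjectName rpc =
          new ObjectName("hadoop:service=HBase,name=RPCStatistics-60020");
      System.out.println("callQueueLen = " + mbsc.getAttribute(rpc, "callQueueLen"));
    } finally {
      jmxc.close();
    }
  }
}
{code}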

 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Status: Open  (was: Patch Available)

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7637:
-

Attachment: HBASE-7637-0.patch

Something like this works for me.

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7637-0.patch


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559103#comment-13559103
 ] 

stack commented on HBASE-7633:
--

Given what Elliott says, can we close this [~fwiffo]?

 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559104#comment-13559104
 ] 

stack commented on HBASE-7329:
--

[~sershe] Thanks for running tests. +1 on commit.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-21 Thread Joey Echeverria (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559106#comment-13559106
 ] 

Joey Echeverria commented on HBASE-7633:


callQueueLen is close I think, but I'm not sure how that translates into tuning 
hbase.regionserver.handler.count. Do we have a good example of interpreting 
that value?

 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7637:
--

Status: Patch Available  (was: Open)

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7637-0.patch


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7622) Add table descriptor verification after snapshot restore

2013-01-21 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7622:
---

Attachment: HBASE-7622-v2.patch

v2 is equal to v1 but adds table.close() to TestRestoreFlush (similar to 
TestRestore).

 Add table descriptor verification after snapshot restore
 

 Key: HBASE-7622
 URL: https://issues.apache.org/jira/browse/HBASE-7622
 Project: HBase
  Issue Type: Sub-task
  Components: snapshots
Affects Versions: hbase-6055
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: hbase-6055

 Attachments: HBASE-7622-v0.patch, HBASE-7622-v1.patch, 
 HBASE-7622-v2.patch


 Add the schema verification not only based on disk data, but also on the 
 HTableDescriptor

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559114#comment-13559114
 ] 

stack commented on HBASE-7633:
--

bq. ...translates into tuning hbase.regionserver.handler.count.  Do we have a 
good example of interpreting that value?

Not really, other than: if they are backed up frequently and we're not blocked on 
cpu or io, then bump them up. (Is this for another issue [~fwiffo]?)  Thanks 
boss.



 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559122#comment-13559122
 ] 

Ted Yu commented on HBASE-7329:
---

HBASE-7268 has been checked in.

I checked in and then reverted patch v6 because of a compilation error.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7329:
--

Attachment: 7329-v7.txt

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, 7329-v7.txt, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559137#comment-13559137
 ] 

Ted Yu commented on HBASE-7329:
---

Patch v7 compiles against the latest trunk.

TestHLog passes. Let Hadoop QA tell us the result.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, 7329-v7.txt, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6907) KeyValue equals and compareTo methods should match

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6907:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 KeyValue equals and compareTo methods should match
 --

 Key: HBASE-6907
 URL: https://issues.apache.org/jira/browse/HBASE-6907
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Matt Corgan
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 6907-v1.txt, 6907-v2.txt, 6907-v3.txt, 6907-v4.txt, 
 6907-v5.txt


 KeyValue.KVComparator includes the memstoreTS when comparing, however the 
 KeyValue.equals() method ignores the memstoreTS.
 The Comparator interface has always specified that comparator return 0 when 
 equals would return true and vice versa.  Obeying that rule has been sort of 
 optional in the past, but Java 7 introduces a new default collection sorting 
 algorithm called Tim Sort which relies on that behavior.  
 http://bugs.sun.com/view_bug.do?bug_id=6804124
 Possible problem spots:
 * there's a Collections.sort(KeyValues) in 
 RedundantKVGenerator.generateTestKeyValues(..)
 * TestColumnSeeking compares two collections of KeyValues using the 
 containsAll method.  It is intentionally ignoring memstoreTS, so will need an 
 alternative method for comparing the two collections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559147#comment-13559147
 ] 

Andrew Purtell commented on HBASE-7594:
---

The instantiation check flagged TestHFile as doing something bogus. This is 
interesting. TestHFile#testComparator creates an anonymous class which does not 
have a nullary constructor and so cannot be instantiated reflectively. See this 
disassembly with javap:

{noformat}
Compiled from TestHFile.java
class org.apache.hadoop.hbase.io.hfile.TestHFile$2 extends 
org.apache.hadoop.hbase.KeyValue$KeyComparator{
final org.apache.hadoop.hbase.io.hfile.TestHFile this$0;
  Signature: Lorg/apache/hadoop/hbase/io/hfile/TestHFile;
org.apache.hadoop.hbase.io.hfile.TestHFile$2(org.apache.hadoop.hbase.io.hfile.TestHFile);
  Signature: (Lorg/apache/hadoop/hbase/io/hfile/TestHFile;)V
public int compare(byte[], int, int, byte[], int, int);
  Signature: ([BII[BII)I
public int compare(byte[], byte[]);
  Signature: ([B[B)I
public int compare(java.lang.Object, java.lang.Object);
  Signature: (Ljava/lang/Object;Ljava/lang/Object;)I
}
{noformat}

Note the constructor: it takes the enclosing TestHFile instance, so there is no nullary constructor.

This minor change to TestHFile fixes the problem locally:

{noformat}
--- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java  
(revision 1436569)
+++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java  
(working copy)
@@ -347,21 +347,25 @@
 assertTrue(Compression.Algorithm.LZ4.ordinal() == 4);
   }
 
+  // This can't be an anonymous class because the compiler will not generate
+  // a nullary constructor for it.
+  static class CustomKeyComparator extends KeyComparator {
+@Override
+public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2,
+int l2) {
+  return -Bytes.compareTo(b1, s1, l1, b2, s2, l2);
+}
+@Override
+public int compare(byte[] o1, byte[] o2) {
+  return compare(o1, 0, o1.length, o2, 0, o2.length);
+}
+  }
+
   public void testComparator() throws IOException {
 if (cacheConf == null) cacheConf = new CacheConfig(conf);
 Path mFile = new Path(ROOT_DIR, meta.tfile);
 FSDataOutputStream fout = createFSOutput(mFile);
-KeyComparator comparator = new KeyComparator() {
-  @Override
-  public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2,
-  int l2) {
-return -Bytes.compareTo(b1, s1, l1, b2, s2, l2);
-  }
-  @Override
-  public int compare(byte[] o1, byte[] o2) {
-return compare(o1, 0, o1.length, o2, 0, o2.length);
-  }
-};
+KeyComparator comparator = new CustomKeyComparator();
 Writer writer = HFile.getWriterFactory(conf, cacheConf)
 .withOutputStream(fout)
 .withBlockSize(minBlockSize)
{noformat}

Will update the patch to include this and try again.
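
As a side note, the failure mode is easy to reproduce with plain JDK classes. The sketch below is illustrative only, not HBase code: an anonymous class declared in an instance context captures the enclosing instance, so its only constructor takes that instance, and the Class.newInstance() call (the same reflective path seen in the stack trace above) fails with InstantiationException.

{code}
public class NullaryCtorDemo {
  private boolean reverse = true;  // read from the anonymous class to force capture of this$0

  interface ByteComparator {
    int compare(byte[] a, byte[] b);
  }

  // Like the comparator in testComparator(): declared in an instance method,
  // so the compiler adds a hidden this$0 field and a (NullaryCtorDemo) constructor.
  ByteComparator anonymous() {
    return new ByteComparator() {
      @Override
      public int compare(byte[] a, byte[] b) {
        int c = a.length - b.length;  // toy comparison
        return reverse ? -c : c;
      }
    };
  }

  // Static nested class: gets the usual no-arg constructor.
  static class StaticComparator implements ByteComparator {
    @Override
    public int compare(byte[] a, byte[] b) {
      return a.length - b.length;
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(StaticComparator.class.newInstance());       // works
    Class<?> anonClass = new NullaryCtorDemo().anonymous().getClass();
    anonClass.newInstance();  // throws java.lang.InstantiationException
  }
}
{code}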

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at 

[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Attachment: 7594-2.patch

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch, 7594-2.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7594:
--

Status: Patch Available  (was: Open)

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch, 7594-2.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7516) Make compaction policy pluggable

2013-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559151#comment-13559151
 ] 

Sergey Shelukhin commented on HBASE-7516:
-

This code will also have to change depending on the compaction algorithm, in 
particular for level compaction.
{code}
  // exclude all files older than the newest file we're currently
  // compacting. this allows us to preserve contiguity (HBASE-2856)
  StoreFile last = filesCompacting.get(filesCompacting.size() - 1);
  int idx = candidates.indexOf(last);
  Preconditions.checkArgument(idx != -1);
  candidates.subList(0, idx + 1).clear();
}{code}
I will try to get a prototype patch for store files into the refactor jira by 
tomorrow; then we can discuss the necessary changes.
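
To make the exclusion concrete, here is a toy sketch with strings standing in for store files; the real code quoted above operates on StoreFile objects and uses Guava's Preconditions.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ExclusionDemo {
  public static void main(String[] args) {
    // Candidates ordered oldest -> newest, as the selection code sees them.
    List<String> candidates =
        new ArrayList<String>(Arrays.asList("f1", "f2", "f3", "f4", "f5"));
    String last = "f3";  // newest file already being compacted
    int idx = candidates.indexOf(last);
    // Drop everything up to and including 'last' so that only files newer
    // than the running compaction remain selectable (preserves contiguity).
    candidates.subList(0, idx + 1).clear();
    System.out.println(candidates);  // prints [f4, f5]
  }
}
{code}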

 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Sergey Shelukhin
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2

2013-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559156#comment-13559156
 ] 

stack commented on HBASE-7594:
--

Nice find Andrew.  +1

 TestLocalHBaseCluster failing on ubuntu2
 

 Key: HBASE-7594
 URL: https://issues.apache.org/jira/browse/HBASE-7594
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 7594-1.patch, 7594-2.patch


 {noformat}
 java.io.IOException: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
   at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:215)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   ... 3 more
 Caused by: java.io.IOException: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:115)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1294)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
   at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
   ... 8 more
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.io.RawComparator
   at java.lang.Class.newInstance0(Class.java:340)
   at java.lang.Class.newInstance(Class.java:308)
   at 
 org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
   ... 17 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2013-01-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6669:
--

Attachment: 6669-0.94-v5.txt

Patch v5 addresses latest review comments.

Running org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
2013-01-21 14:06:27.730 java[99778:1203] Unable to load realm info from 
SCDynamicStore
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.557 sec

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: Client, Coprocessors
Affects Versions: 0.94.3
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Fix For: 0.94.5

 Attachments: 6669-0.94-v4.txt, 6669-0.94-v5.txt, 
 BigDecimalColumnInterpreter.java, BigDecimalColumnInterpreter.patch, 
 BigDecimalColumnInterpreter.patch, HBASE-6669.patch, HBASE-6669-v2.patch, 
 HBASE-6669-v3.patch, TestBDAggregateProtocol.patch, 
 TestBigDecimalColumnInterpreter.java


 I recently created a Class for doing aggregations(sum,min,max,std) on values 
 stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if its not appropriate to add 
 this class in HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559163#comment-13559163
 ] 

Hadoop QA commented on HBASE-7637:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565849/HBASE-7637-0.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestLocalHBaseCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4117//console

This message is automatically generated.

 hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
 

 Key: HBASE-7637
 URL: https://issues.apache.org/jira/browse/HBASE-7637
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7637-0.patch


 I'm unclear on the root cause / fix. Here is the scenario:
 {noformat}
 mvn clean package install -Dhadoop.profile=2.0 -DskipTests
 bin/start-hbase.sh
 {noformat}
 fails with
 {noformat}
 Caused by: java.lang.ClassNotFoundException: 
 org.apache.hadoop.metrics2.lib.MetricMutable
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 {noformat}
 doing 
 {noformat}
 rm -rf hbase-hadoop1-compat/target/
 {noformat}
 makes it work. 
 In the pom.xml, we never reference hadoop2-compat. But doing so does not 
 help: hadoop1-compat is compiled and takes precedence over hadoop2...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7639) Enable online schema update by default

2013-01-21 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-7639:


 Summary: Enable online schema update by default 
 Key: HBASE-7639
 URL: https://issues.apache.org/jira/browse/HBASE-7639
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar


After we get HBASE-7305 and HBASE-7546, things will become stable enough for 
online schema update to be enabled by default.

{code}
  <property>
    <name>hbase.online.schema.update.enable</name>
    <value>false</value>
    <description>
    Set true to enable online schema changes.  This is an experimental feature.
    There are known issues modifying table schemas at the same time a region
    split is happening so your table needs to be quiescent or else you have to
    be running with splits disabled.
    </description>
  </property>
{code}
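
For what it's worth, a minimal sketch of flipping the flag programmatically (e.g. in a test); in production the same key would normally go into hbase-site.xml instead.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EnableOnlineSchemaUpdate {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Overrides the default quoted above for this Configuration instance only.
    conf.setBoolean("hbase.online.schema.update.enable", true);
    System.out.println(
        conf.getBoolean("hbase.online.schema.update.enable", false));
  }
}
{code}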

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7268) correct local region location cache information can be overwritten w/stale information from an old server

2013-01-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7268:
-

Release Note: On region open, save the edit seqId and then write it into 
META. On region move, the region is opened again with a greater seqId somewhere 
else.  Pass the seqid to the client when it asks about locations.  The client 
can reason about cache invalidation with seqids (if the seqid for a reported 
location is < its currently cached seqid, it can recognize that location as stale).

Does this fix mean that hbase-it works again?
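
A toy illustration of the invalidation rule described in the release note above; the class, field, and method names here are made up for the example and are not the actual client API.

{code}
class CachedRegionLocation {
  private String hostname;
  private long seqId;  // open seqId recorded when this location was cached

  CachedRegionLocation(String hostname, long seqId) {
    this.hostname = hostname;
    this.seqId = seqId;
  }

  String hostname() {
    return hostname;
  }

  // Accept only updates whose seqId is newer than the cached one; a smaller
  // seqId means the report came from an older open of the region.
  boolean maybeUpdate(String reportedHostname, long reportedSeqId) {
    if (reportedSeqId <= this.seqId) {
      return false;  // stale information from an old server: ignore it
    }
    this.hostname = reportedHostname;
    this.seqId = reportedSeqId;
    return true;
  }
}
{code}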

 correct local region location cache information can be overwritten w/stale 
 information from an old server
 -

 Key: HBASE-7268
 URL: https://issues.apache.org/jira/browse/HBASE-7268
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Fix For: 0.96.0

 Attachments: 7268-v6.patch, 7268-v8.patch, HBASE-7268-v0.patch, 
 HBASE-7268-v0.patch, HBASE-7268-v1.patch, HBASE-7268-v2.patch, 
 HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v2-plus-masterTs.patch, 
 HBASE-7268-v3.patch, HBASE-7268-v4.patch, HBASE-7268-v5.patch, 
 HBASE-7268-v6.patch, HBASE-7268-v7.patch, HBASE-7268-v8.patch, 
 HBASE-7268-v9.patch


 Discovered via HBASE-7250; related to HBASE-5877.
 Test is writing from multiple threads.
 Server A has region R; client knows that.
 R gets moved from A to server B.
 B gets killed.
 R gets moved by master to server C.
 ~15 seconds later, client tries to write to it (on A?).
 Multiple client threads report, from the RegionMoved exception processing logic, 
 "R moved from C to B", even though such a transition never happened (neither in 
 nor before the sequence described below). Not quite sure how the client 
 learned of the transition to C, I assume it's from meta from some other 
 thread...
 Then, put fails (it may fail due to accumulated errors that are not logged, 
 which I am investigating... but the bogus cache update is there 
 nonwithstanding).
 I have a patch but not sure if it works, test still fails locally for yet 
 unknown reason.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7640) FSUtils#getTableStoreFilePathMap should ignore non-hfiles.

2013-01-21 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-7640:
--

 Summary: FSUtils#getTableStoreFilePathMap should ignore non-hfiles.
 Key: HBASE-7640
 URL: https://issues.apache.org/jira/browse/HBASE-7640
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Priority: Trivial


ERROR: Found lingering reference file
hdfs://node3:9000/hbase/entry_proposed/fbd1735591467005e53f48645278b006/recovered.edits/00091843039.temp

recovered.edits is not a column family.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559183#comment-13559183
 ] 

Hadoop QA commented on HBASE-7329:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565856/7329-v7.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4118//console

This message is automatically generated.

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, 7329-v7.txt, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

