[jira] [Commented] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463586#comment-13463586
 ] 

nkeywal commented on HBASE-6702:


[~saint@gmail.com] There is everything in this patch, except the 
documentation update, which I will do in a different jira. Instead of migrating 
to a newer surefire version, the local tests profile now uses the same version 
as the parallel tests (so it's our patched version).

 ResourceChecker refinement
 --

 Key: HBASE-6702
 URL: https://issues.apache.org/jira/browse/HBASE-6702
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: 6702.v1.patch, 6702.v4.patch


 This was based on some discussion from HBASE-6234.
 The ResourceChecker was added by N. Keywal to help resolve some hadoop qa 
 issues, but has since not been widely utilized. Further, with modularization we 
 have had to drop the ResourceChecker from the tests that were moved into the 
 hbase-common module, because bringing the ResourceChecker up to hbase-common 
 would involve bringing all its dependencies (which are quite far reaching).
 The question then is, what should we do with it? Get rid of it? Refactor and 
 reuse? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5844) Delete the region servers znode after a regions server crash

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463591#comment-13463591
 ] 

nkeywal commented on HBASE-5844:


Humm.
The feature is important imho: not waiting 30s (at best) before starting a 
recovery is really nice.
In an ideal world, ZooKeeper would make this less useful by detecting the dead 
process sooner, but even then it can't be faster than this. 

Note that the znode removal should occur when the process finishes, not before 
starting a new one. What JD describes seems like a bug to me.







 Delete the region servers znode after a regions server crash
 

 Key: HBASE-5844
 URL: https://issues.apache.org/jira/browse/HBASE-5844
 Project: HBase
  Issue Type: Improvement
  Components: regionserver, scripts
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: 5844.v1.patch, 5844.v2.patch, 5844.v3.patch, 
 5844.v3.patch, 5844.v4.patch


 Today, if the region server crashes, its znode is not deleted in ZooKeeper, 
 so the recovery process will start only after a timeout, usually 30s.
 By deleting the znode in the start script, we remove this delay and the 
 recovery starts immediately.



[jira] [Commented] (HBASE-6435) Reading WAL files after a recovery leads to time lost in HDFS timeouts when using dead datanodes

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463678#comment-13463678
 ] 

nkeywal commented on HBASE-6435:


As HDFS-3701 (data loss) is in branch 1.1, as is HDFS-3703 (which helps to 
minimize data read errors), I think it implies that we should target 1.1 as the 
recommended minimal version for 0.96. If that's the case, we can remove this 
fix, as it contains a dependency on hdfs internals. If we keep it, I need to 
fix the filename analysis and to add -splitting to the directories managed. In 
both cases, it should be done in separate jiras, but let's have the discussion 
here.

 Reading WAL files after a recovery leads to time lost in HDFS timeouts when 
 using dead datanodes
 

 Key: HBASE-6435
 URL: https://issues.apache.org/jira/browse/HBASE-6435
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: 6435.unfinished.patch, 6435.v10.patch, 6435.v10.patch, 
 6435.v12.patch, 6435.v12.patch, 6435.v12.patch, 6435-v12.txt, 6435.v13.patch, 
 6435.v14.patch, 6435.v2.patch, 6435.v7.patch, 6435.v8.patch, 6435.v9.patch, 
 6435.v9.patch, 6535.v11.patch


 HBase writes a Write-Ahead-Log to recover from hardware failure. This log is 
 written on hdfs.
 Through ZooKeeper, HBase gets informed, usually within 30s, that it should 
 start the recovery process. 
 This means reading the Write-Ahead-Log to replay the edits on the other 
 servers.
 In standard deployments, the HBase processes (regionservers) are deployed on 
 the same boxes as the datanodes.
 It means that when a box stops, we've actually lost one of the replicas of the 
 edits, as we lost both the regionserver and the datanode.
 As HDFS marks a node as dead only after ~10 minutes, the dead datanode still 
 appears as available when we try to read the blocks to recover. As such, we 
 delay the recovery process by 60 seconds, as the read will usually fail with a 
 socket timeout. If the file is still opened for writing, it adds an extra 20s, 
 plus a risk of losing edits if we connect with ipc to the dead DN.
 Possible solutions are:
 - shorter dead datanode detection by the NN. Requires a NN code change.
 - better dead datanode management in the DFSClient. Requires a DFS code change.
 - NN customisation to write the WAL files on another DN instead of the local 
 one.
 - reordering the blocks returned by the NN on the client side to put the 
 blocks on the same DN as the dead RS at the end of the priority queue. 
 Requires a DFS code change or a kind of workaround.
 The solution retained is the last one. Compared to what was discussed on the 
 mailing list, the proposed patch does not modify HDFS source code but adds a 
 proxy. This is for two reasons:
 - Some HDFS functions managing block order are static 
 (MD5MD5CRC32FileChecksum). Implementing the hook in the DFSClient would 
 require implementing the fix only partially, changing the DFS interface to 
 make this function non-static, or making the hook static. None of these 
 solutions is very clean. 
 - Adding a proxy allows putting all the code in HBase, simplifying dependency 
 management.
 Nevertheless, it would be better to have this in HDFS. But this solution 
 allows targeting only the latest version, and this could allow minimal 
 interface changes such as non-static methods.
 Moreover, writing the blocks to a non-local DN would be an even better 
 long-term solution.
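The retained solution, client-side reordering of the block locations, can be sketched as follows. This is a hedged illustration only, not the actual patch: the class and method names are made up, locations are modeled as plain host strings, and the real code hooks into the DFSClient's located-block handling through a proxy.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of the retained idea: move block replicas hosted on the
// same box as the dead region server to the end of the try order, so the
// client reads from live datanodes first instead of waiting for timeouts.
public class BlockReorderSketch {

    // 'locations' models the datanode hosts returned by the NN for a block.
    static List<String> reorder(List<String> locations, String deadRsHost) {
        List<String> ordered = new ArrayList<>();
        List<String> suspect = new ArrayList<>();
        for (String host : locations) {
            if (host.equals(deadRsHost)) {
                suspect.add(host);   // likely-dead DN: try it last
            } else {
                ordered.add(host);
            }
        }
        ordered.addAll(suspect);
        return ordered;
    }

    public static void main(String[] args) {
        // prints [dn1, dn3, deadrs]
        System.out.println(reorder(Arrays.asList("dn1", "deadrs", "dn3"), "deadrs"));
    }
}
```

The point of the reorder is only to change the order in which replicas are tried; no replica is dropped, so correctness is unchanged if the suspected datanode turns out to be alive.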



[jira] [Commented] (HBASE-3577) enables Thrift client to get the Region location

2012-09-26 Thread liang xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463683#comment-13463683
 ] 

liang xie commented on HBASE-3577:
--

The getRegionInfo function can be found in the 0.94 and 0.96 codebases; this 
issue seems to exist only on 0.92 and before.

 enables Thrift client to get the Region location
 

 Key: HBASE-3577
 URL: https://issues.apache.org/jira/browse/HBASE-3577
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Kazuki Ohta
 Fix For: 0.96.0

 Attachments: HBASE3577-1.patch, HBASE3577-2.patch


 The current thrift interface has the getTableRegions() interface like below.
 {code}
   list<TRegionInfo> getTableRegions(
     /** table name */
     1:Text tableName)
     throws (1:IOError io)
 {code}
 {code}
 struct TRegionInfo {
   1:Text startKey,
   2:Text endKey,
   3:i64 id,
   4:Text name,
   5:byte version
 }
 {code}
 But the method doesn't have the region location information (where the region 
 is located).
 I want to add Thrift interfaces like the ones below from HTable.java.
 {code}
 public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException
 {code}
 {code}
 public HRegionLocation getRegionLocation(final String row)
 {code}



[jira] [Commented] (HBASE-6878) DistributerLogSplit can fail to resubmit a task done if there is an exception during the log archiving

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463686#comment-13463686
 ] 

nkeywal commented on HBASE-6878:


Any feedback on this? It's in the "probability of occurrence: very low" 
category, but HBASE-6738 increases the delay, and so the probability, so I 
would prefer to commit the two together.

A few lines after the one mentioned in the description there is a similar 
pattern. I will fix both of them if this is confirmed.


 DistributerLogSplit can fail to resubmit a task done if there is an exception 
 during the log archiving
 --

 Key: HBASE-6878
 URL: https://issues.apache.org/jira/browse/HBASE-6878
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: nkeywal
Priority: Minor

 The code in SplitLogManager#getDataSetWatchSuccess is:
 {code}
 if (slt.isDone()) {
   LOG.info("task " + path + " entered state: " + slt.toString());
   if (taskFinisher != null && !ZKSplitLog.isRescanNode(watcher, path)) {
     if (taskFinisher.finish(slt.getServerName(),
         ZKSplitLog.getFileName(path)) == Status.DONE) {
       setDone(path, SUCCESS);
     } else {
       resubmitOrFail(path, CHECK);
     }
   } else {
     setDone(path, SUCCESS);
   }
 {code}
 The line
   resubmitOrFail(path, CHECK);
 should be 
   resubmitOrFail(path, FORCE);
 Without this, the task won't be resubmitted if the retry delay is not reached, 
 and the task will be marked as failed.
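A hedged toy model of the behavior described above (the names are simplified stand-ins, not the real SplitLogManager API): with CHECK, a resubmit is skipped when the retry delay has not yet elapsed, while FORCE resubmits regardless, which is why using CHECK here can leave a task marked as failed.

```java
// Toy model of the CHECK vs FORCE resubmit directives. This is an
// illustration of the failure mode, not the actual HBase implementation.
public class ResubmitSketch {
    enum Directive { CHECK, FORCE }

    // Returns true if the task would be resubmitted.
    static boolean resubmit(Directive d, long elapsedMs, long retryDelayMs) {
        if (d == Directive.CHECK && elapsedMs < retryDelayMs) {
            return false;  // delay not reached: CHECK gives up, task fails
        }
        return true;       // FORCE resubmits unconditionally
    }

    public static void main(String[] args) {
        // Shortly after the failure, CHECK skips the resubmit...
        System.out.println(resubmit(Directive.CHECK, 1000, 5000));  // false
        // ...while FORCE resubmits the task anyway.
        System.out.println(resubmit(Directive.FORCE, 1000, 5000));  // true
    }
}
```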



[jira] [Commented] (HBASE-2689) Implement common gateway service daemon for Avro and Thrift servers

2012-09-26 Thread liang xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463691#comment-13463691
 ] 

liang xie commented on HBASE-2689:
--

Seems it can be closed ...

 Implement common gateway service daemon for Avro and Thrift servers
 ---

 Key: HBASE-2689
 URL: https://issues.apache.org/jira/browse/HBASE-2689
 Project: HBase
  Issue Type: Improvement
  Components: avro, Thrift
Reporter: Jeff Hammerbacher





[jira] [Updated] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6702:
---

Status: Open  (was: Patch Available)

 ResourceChecker refinement
 --

 Key: HBASE-6702
 URL: https://issues.apache.org/jira/browse/HBASE-6702
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: 6702.v1.patch, 6702.v4.patch


 This was based on some discussion from HBASE-6234.
 The ResourceChecker was added by N. Keywal to help resolve some hadoop qa 
 issues, but has since not been widely utilized. Further, with modularization we 
 have had to drop the ResourceChecker from the tests that were moved into the 
 hbase-common module, because bringing the ResourceChecker up to hbase-common 
 would involve bringing all its dependencies (which are quite far reaching).
 The question then is, what should we do with it? Get rid of it? Refactor and 
 reuse? 



[jira] [Updated] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6702:
---

Attachment: 6702.v5.patch

 ResourceChecker refinement
 --

 Key: HBASE-6702
 URL: https://issues.apache.org/jira/browse/HBASE-6702
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: 6702.v1.patch, 6702.v4.patch, 6702.v5.patch


 This was based on some discussion from HBASE-6234.
 The ResourceChecker was added by N. Keywal to help resolve some hadoop qa 
 issues, but has since not been widely utilized. Further, with modularization we 
 have had to drop the ResourceChecker from the tests that were moved into the 
 hbase-common module, because bringing the ResourceChecker up to hbase-common 
 would involve bringing all its dependencies (which are quite far reaching).
 The question then is, what should we do with it? Get rid of it? Refactor and 
 reuse? 



[jira] [Updated] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6702:
---

Status: Patch Available  (was: Open)

 ResourceChecker refinement
 --

 Key: HBASE-6702
 URL: https://issues.apache.org/jira/browse/HBASE-6702
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: 6702.v1.patch, 6702.v4.patch, 6702.v5.patch


 This was based on some discussion from HBASE-6234.
 The ResourceChecker was added by N. Keywal to help resolve some hadoop qa 
 issues, but has since not been widely utilized. Further, with modularization we 
 have had to drop the ResourceChecker from the tests that were moved into the 
 hbase-common module, because bringing the ResourceChecker up to hbase-common 
 would involve bringing all its dependencies (which are quite far reaching).
 The question then is, what should we do with it? Get rid of it? Refactor and 
 reuse? 



[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-09-26 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463700#comment-13463700
 ] 

Matteo Bertozzi commented on HBASE-6055:


When you're talking about hfiles, you are referring to the log files, right? 
I'm a bit confused reading your comment, because the log files are sequence 
files. Anyway...

The logs in /hbase/.logs are split (new files are created in 
region/recover.edits) and, if you look at HRegion.replayRecoveredEditsIfAny(), 
the content of recover.edits is removed as soon as the edits are applied. 
Removed, not archived. This means that as soon as the table goes online, 
the snapshot doesn't have a way to read those files.

But as you've said, the original (full) log is still available during the 
split, and is moved to the archive (.oldlogs) as soon as the split is done. 

This means that if you see files in recover.edits, you should have the full 
logs in the /hbase/.logs folder, and you can keep a reference to them, as you 
do for the online snapshot.

Another semi-unrelated note... currently we keep the full log files, and the 
restore needs to split them (see the restore code SnapshotLogSplitter, 
https://github.com/matteobertozzi/hbase/blob/snapshot-dev/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/restore/RestoreSnapshotHelper.java#L398).
Can we move this logic to the end of the take-snapshot operation and split the 
logs into .snapshot/region/recover.edits?

 Snapshots in HBase 0.96
 ---

 Key: HBASE-6055
 URL: https://issues.apache.org/jira/browse/HBASE-6055
 Project: HBase
  Issue Type: New Feature
  Components: Client, master, regionserver, snapshots, Zookeeper
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: hbase-6055, 0.96.0

 Attachments: Snapshots in HBase.docx


 Continuation of HBASE-50 for the current trunk. Since the implementation has 
 drastically changed, opening as a new ticket.



[jira] [Created] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread nkeywal (JIRA)
nkeywal created HBASE-6884:
--

 Summary: Update documentation on unit tests
 Key: HBASE-6884
 URL: https://issues.apache.org/jira/browse/HBASE-6884
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.96.0
Reporter: nkeywal
Priority: Minor
 Fix For: 0.96.0


Points to address:
- we no longer have JUnit rules in the tests
- we should document how to run the tests faster.
- some stuff is not used (running only a category) and should be removed from 
the doc imho.

Below is the proposal:

--

15.6.2. Unit Tests

HBase unit tests are subdivided into three categories: small, medium and large, 
with corresponding JUnit categories: SmallTests, MediumTests, LargeTests. JUnit 
categories are denoted using Java annotations and look like this in your unit 
test code:

...
@Category(SmallTests.class)
public class TestHRegionInfo {

  @Test
  public void testCreateHRegionInfoName() throws Exception {
// ...
  }
}

The above example shows how to mark a test as belonging to the small category. 
HBase uses a patched maven surefire plugin and maven profiles to implement its 
unit test characterizations. 



15.6.2.4. Running tests

Below we describe how to run the HBase JUnit categories.
15.6.2.4.1. Default: small and medium category tests

Running

mvn test

will execute all small tests in a single JVM (no fork) and then medium tests in 
a separate JVM for each test instance. Medium tests are NOT executed if there 
is an error in a small test. Large tests are NOT executed. There is one report 
for small tests, and one report for medium tests if they are executed.
15.6.2.4.2. Running all tests

Running

mvn test -P runAllTests

will execute small tests in a single JVM, then medium and large tests in a 
separate JVM for each test. Medium and large tests are NOT executed if there is 
an error in a small test. Large tests are NOT executed if there is an error in 
a small or medium test. There is one report for small tests, and one report for 
medium and large tests if they are executed.

15.6.2.4.3. Running a single test or all tests in a package

To run an individual test, e.g. MyTest, do

mvn test -P localTests -Dtest=MyTest

You can also pass multiple, individual tests as a comma-delimited list:

mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3

You can also pass a package, which will run all tests under the package:

mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*

The -P localTests profile removes the JUnit category effect (without this 
profile, the categories are taken into account). Each JUnit test is executed 
in a separate JVM (a fork per test class). There is no parallelization when the 
localTests profile is set. You will see a new message at the end of the report: 
"[INFO] Tests are skipped". It's harmless.

15.6.2.4.4. Running tests faster
[replace previous chapter]

By default, mvn test -P runAllTests runs 5 tests in parallel. This can be 
increased on many developer machines. Consider that you can run 2 tests in 
parallel per core, and that you need about 2Gb of memory per test. Hence, on an 
8-core, 24Gb box, the cores would allow 16 tests in parallel, but the memory 
limits you to about 12.

The setting is:
mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
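Taking both constraints into account, the sizing rule can be written down as a tiny helper (a hedged sketch; the 2-tests-per-core and 2Gb-per-test figures are the rough estimates from this proposal, not hard limits):

```java
// Pick a surefire.secondPartThreadCount as min(2 * cores, memoryGb / 2):
// the CPU allows roughly 2 parallel tests per core, and each test needs
// roughly 2Gb of memory, so whichever resource runs out first wins.
public class ThreadCountSketch {
    static int threadCount(int cores, int memoryGb) {
        return Math.min(2 * cores, memoryGb / 2);
    }

    public static void main(String[] args) {
        // An 8-core, 24Gb box: cores allow 16, memory allows 12 -> 12.
        System.out.println(threadCount(8, 24));
    }
}
```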

To increase the speed, you can also use a ramdisk. You will need 2Gb of memory 
for it to run all the tests. You will also need to delete the files between 
two test runs.
The typical way to configure a ramdisk on Linux is:

sudo mkdir /ram2G
sudo mount -t tmpfs -o size=2048M tmpfs /ram2G

You can then use it to run all HBase tests with the command:

mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
-Dtest.build.data.basedirectory=/ram2G




[jira] [Commented] (HBASE-6435) Reading WAL files after a recovery leads to time lost in HDFS timeouts when using dead datanodes

2012-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463714#comment-13463714
 ] 

Ted Yu commented on HBASE-6435:
---

I think we can poll dev@hbase for minimal hadoop version requirement.
If 1.1 passes as the minimal version, we should remove this fix.

 Reading WAL files after a recovery leads to time lost in HDFS timeouts when 
 using dead datanodes
 

 Key: HBASE-6435
 URL: https://issues.apache.org/jira/browse/HBASE-6435
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: 6435.unfinished.patch, 6435.v10.patch, 6435.v10.patch, 
 6435.v12.patch, 6435.v12.patch, 6435.v12.patch, 6435-v12.txt, 6435.v13.patch, 
 6435.v14.patch, 6435.v2.patch, 6435.v7.patch, 6435.v8.patch, 6435.v9.patch, 
 6435.v9.patch, 6535.v11.patch


 HBase writes a Write-Ahead-Log to recover from hardware failure. This log is 
 written on hdfs.
 Through ZooKeeper, HBase gets informed, usually within 30s, that it should 
 start the recovery process. 
 This means reading the Write-Ahead-Log to replay the edits on the other 
 servers.
 In standard deployments, the HBase processes (regionservers) are deployed on 
 the same boxes as the datanodes.
 It means that when a box stops, we've actually lost one of the replicas of the 
 edits, as we lost both the regionserver and the datanode.
 As HDFS marks a node as dead only after ~10 minutes, the dead datanode still 
 appears as available when we try to read the blocks to recover. As such, we 
 delay the recovery process by 60 seconds, as the read will usually fail with a 
 socket timeout. If the file is still opened for writing, it adds an extra 20s, 
 plus a risk of losing edits if we connect with ipc to the dead DN.
 Possible solutions are:
 - shorter dead datanode detection by the NN. Requires a NN code change.
 - better dead datanode management in the DFSClient. Requires a DFS code change.
 - NN customisation to write the WAL files on another DN instead of the local 
 one.
 - reordering the blocks returned by the NN on the client side to put the 
 blocks on the same DN as the dead RS at the end of the priority queue. 
 Requires a DFS code change or a kind of workaround.
 The solution retained is the last one. Compared to what was discussed on the 
 mailing list, the proposed patch does not modify HDFS source code but adds a 
 proxy. This is for two reasons:
 - Some HDFS functions managing block order are static 
 (MD5MD5CRC32FileChecksum). Implementing the hook in the DFSClient would 
 require implementing the fix only partially, changing the DFS interface to 
 make this function non-static, or making the hook static. None of these 
 solutions is very clean. 
 - Adding a proxy allows putting all the code in HBase, simplifying dependency 
 management.
 Nevertheless, it would be better to have this in HDFS. But this solution 
 allows targeting only the latest version, and this could allow minimal 
 interface changes such as non-static methods.
 Moreover, writing the blocks to a non-local DN would be an even better 
 long-term solution.



[jira] [Commented] (HBASE-6435) Reading WAL files after a recovery leads to time lost in HDFS timeouts when using dead datanodes

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463718#comment-13463718
 ] 

nkeywal commented on HBASE-6435:


I suppose we won't want to put it as the minimum, at least to ease migration. 
But someone who considers the mttr important would have to migrate to 1.1.

 Reading WAL files after a recovery leads to time lost in HDFS timeouts when 
 using dead datanodes
 

 Key: HBASE-6435
 URL: https://issues.apache.org/jira/browse/HBASE-6435
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: 6435.unfinished.patch, 6435.v10.patch, 6435.v10.patch, 
 6435.v12.patch, 6435.v12.patch, 6435.v12.patch, 6435-v12.txt, 6435.v13.patch, 
 6435.v14.patch, 6435.v2.patch, 6435.v7.patch, 6435.v8.patch, 6435.v9.patch, 
 6435.v9.patch, 6535.v11.patch


 HBase writes a Write-Ahead-Log to recover from hardware failure. This log is 
 written on hdfs.
 Through ZooKeeper, HBase gets informed, usually within 30s, that it should 
 start the recovery process. 
 This means reading the Write-Ahead-Log to replay the edits on the other 
 servers.
 In standard deployments, the HBase processes (regionservers) are deployed on 
 the same boxes as the datanodes.
 It means that when a box stops, we've actually lost one of the replicas of the 
 edits, as we lost both the regionserver and the datanode.
 As HDFS marks a node as dead only after ~10 minutes, the dead datanode still 
 appears as available when we try to read the blocks to recover. As such, we 
 delay the recovery process by 60 seconds, as the read will usually fail with a 
 socket timeout. If the file is still opened for writing, it adds an extra 20s, 
 plus a risk of losing edits if we connect with ipc to the dead DN.
 Possible solutions are:
 - shorter dead datanode detection by the NN. Requires a NN code change.
 - better dead datanode management in the DFSClient. Requires a DFS code change.
 - NN customisation to write the WAL files on another DN instead of the local 
 one.
 - reordering the blocks returned by the NN on the client side to put the 
 blocks on the same DN as the dead RS at the end of the priority queue. 
 Requires a DFS code change or a kind of workaround.
 The solution retained is the last one. Compared to what was discussed on the 
 mailing list, the proposed patch does not modify HDFS source code but adds a 
 proxy. This is for two reasons:
 - Some HDFS functions managing block order are static 
 (MD5MD5CRC32FileChecksum). Implementing the hook in the DFSClient would 
 require implementing the fix only partially, changing the DFS interface to 
 make this function non-static, or making the hook static. None of these 
 solutions is very clean. 
 - Adding a proxy allows putting all the code in HBase, simplifying dependency 
 management.
 Nevertheless, it would be better to have this in HDFS. But this solution 
 allows targeting only the latest version, and this could allow minimal 
 interface changes such as non-static methods.
 Moreover, writing the blocks to a non-local DN would be an even better 
 long-term solution.



[jira] [Commented] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463728#comment-13463728
 ] 

Hadoop QA commented on HBASE-6702:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546677/6702.v5.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 858 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2933//console

This message is automatically generated.

 ResourceChecker refinement
 --

 Key: HBASE-6702
 URL: https://issues.apache.org/jira/browse/HBASE-6702
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: 6702.v1.patch, 6702.v4.patch, 6702.v5.patch


 This was based on some discussion from HBASE-6234.
 The ResourceChecker was added by N. Keywal to help resolve some hadoop qa 
 issues, but has since not been widely utilized. Further, with modularization we 
 have had to drop the ResourceChecker from the tests that were moved into the 
 hbase-common module, because bringing the ResourceChecker up to hbase-common 
 would involve bringing all its dependencies (which are quite far reaching).
 The question then is, what should we do with it? Get rid of it? Refactor and 
 reuse? 



[jira] [Commented] (HBASE-6025) Expose Hadoop Dynamic Metrics through JSON Rest interface

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463731#comment-13463731
 ] 

Hudson commented on HBASE-6025:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #193 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/193/])
HBASE-6025 Expose Hadoop Dynamic Metrics through JSON Rest interface; 
REAPPLY (Revision 1390240)
HBASE-6025 Expose Hadoop Dynamic Metrics through JSON Rest interface; REVERT -- 
OVERCOMMIT (Revision 1390239)
HBASE-6025 Expose Hadoop Dynamic Metrics through JSON Rest interface (Revision 
1390238)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* 
/hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/zk.jsp

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* 
/hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/table.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/formatter.rb

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* 
/hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/table.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/formatter.rb


 Expose Hadoop Dynamic Metrics through JSON Rest interface
 -

 Key: HBASE-6025
 URL: https://issues.apache.org/jira/browse/HBASE-6025
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.96.0

 Attachments: HBASE-6025-0.patch, HBASE-6025-1.patch, 
 HBASE-6025-2.patch, HBASE-6025-3.patch, HBASE-6025-4.patch, hbase-jmx2.patch, 
 hbase-jmx.patch, hbase-jmx.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6846) BitComparator bug - ArrayIndexOutOfBoundsException

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463807#comment-13463807
 ] 

Hadoop QA commented on HBASE-6846:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546687/HBASE-6846.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2934//console

This message is automatically generated.

 BitComparator bug - ArrayIndexOutOfBoundsException
 --

 Key: HBASE-6846
 URL: https://issues.apache.org/jira/browse/HBASE-6846
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.1
 Environment: HBase 0.94.1 + Hadoop 2.0.0-cdh4.0.1
Reporter: Lucian George Iordache
 Attachments: HBASE-6846.patch


 The HBase 0.94.1 BitComparator introduced a bug in the method compareTo:
 @Override
 public int compareTo(byte[] value, int offset, int length) {
   if (length != this.value.length) {
     return 1;
   }
   int b = 0;
   // Iterating backwards is faster because we can quit after one non-zero byte.
   for (int i = value.length - 1; i >= 0 && b == 0; i--) {
     switch (bitOperator) {
       case AND:
         b = (this.value[i] & value[i+offset]) & 0xff;
         break;
       case OR:
         b = (this.value[i] | value[i+offset]) & 0xff;
         break;
       case XOR:
         b = (this.value[i] ^ value[i+offset]) & 0xff;
         break;
     }
   }
   return b == 0 ? 1 : 0;
 }
 I've encountered this problem when using a BitComparator with a configured 
 this.value.length=8, and in the HBase table there were KeyValues with 
 keyValue.getBuffer().length=207911 bytes. In this case:
 for (int i = 207910; i >= 0 && b == 0; i--) {
   switch (bitOperator) {
     case AND:
       b = (this.value[207910] ... == ArrayIndexOutOfBoundsException
       break;
 That loop should use:
   for (int i = length - 1; i >= 0 && b == 0; i--) { (or this.value.length.)
 Should I provide a patch for correcting the problem?
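A hedged, self-contained sketch of the proposed fix (this is not HBase's actual BitComparator class; the class and parameter names are illustrative). The loop bound is the comparator's own length, so a large KeyValue backing buffer can no longer drive the index past the end of the comparator's byte array:

```java
// Hedged sketch, not HBase's actual BitComparator: a standalone reproduction
// of the comparison with the proposed fix (loop over the comparator's own
// length, so a large backing buffer cannot cause ArrayIndexOutOfBoundsException).
public class BitCompareSketch {
    public enum BitwiseOp { AND, OR, XOR }

    public static int compareTo(byte[] pattern, BitwiseOp op,
                                byte[] buffer, int offset, int length) {
        if (length != pattern.length) {
            return 1;
        }
        int b = 0;
        // Iterating backwards is faster because we can quit after one non-zero byte.
        for (int i = length - 1; i >= 0 && b == 0; i--) {
            switch (op) {
                case AND: b = (pattern[i] & buffer[i + offset]) & 0xff; break;
                case OR:  b = (pattern[i] | buffer[i + offset]) & 0xff; break;
                case XOR: b = (pattern[i] ^ buffer[i + offset]) & 0xff; break;
            }
        }
        // 0 means "match" (some non-zero result), 1 means "no match",
        // mirroring the original method's return convention.
        return b == 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        byte[] pattern = {0x0f};
        // A large buffer standing in for a 207911-byte KeyValue backing array;
        // with the fix, only the byte at 'offset' is touched.
        byte[] big = new byte[4096];
        big[100] = 0x01;
        System.out.println(compareTo(pattern, BitwiseOp.AND, big, 100, 1)); // non-zero AND -> 0
        System.out.println(compareTo(pattern, BitwiseOp.AND, big, 200, 1)); // zero AND -> 1
    }
}
```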

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread Jingguo Yao (JIRA)
Jingguo Yao created HBASE-6885:
--

 Summary: Typo in the Javadoc for close method of HTableInterface 
class
 Key: HBASE-6885
 URL: https://issues.apache.org/jira/browse/HBASE-6885
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.94.1
Reporter: Jingguo Yao
Priority: Minor


 "help" in "Releases any resources help or pending changes in internal buffers" 
 should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread Jingguo Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingguo Yao updated HBASE-6885:
---

Status: Patch Available  (was: Open)

 Typo in the Javadoc for close method of HTableInterface class
 -

 Key: HBASE-6885
 URL: https://issues.apache.org/jira/browse/HBASE-6885
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.94.1
Reporter: Jingguo Yao
Priority: Minor
   Original Estimate: 5m
  Remaining Estimate: 5m

  "help" in "Releases any resources help or pending changes in internal 
  buffers" should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread Jingguo Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingguo Yao updated HBASE-6885:
---

Attachment: HTableInterface-HBASE-6885.patch

 Typo in the Javadoc for close method of HTableInterface class
 -

 Key: HBASE-6885
 URL: https://issues.apache.org/jira/browse/HBASE-6885
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.94.1
Reporter: Jingguo Yao
Priority: Minor
 Attachments: HTableInterface-HBASE-6885.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

  "help" in "Releases any resources help or pending changes in internal 
  buffers" should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463886#comment-13463886
 ] 

Hadoop QA commented on HBASE-6885:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546699/HTableInterface-HBASE-6885.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2935//console

This message is automatically generated.

 Typo in the Javadoc for close method of HTableInterface class
 -

 Key: HBASE-6885
 URL: https://issues.apache.org/jira/browse/HBASE-6885
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.94.1
Reporter: Jingguo Yao
Priority: Minor
 Attachments: HTableInterface-HBASE-6885.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

  "help" in "Releases any resources help or pending changes in internal 
  buffers" should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463887#comment-13463887
 ] 

nkeywal commented on HBASE-6702:


Committed revision 1390433.

Committed the v5, except the change on docbkx/developer.xml.
It includes Jesse's comments.

 ResourceChecker refinement
 --

 Key: HBASE-6702
 URL: https://issues.apache.org/jira/browse/HBASE-6702
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: nkeywal
Priority: Critical
 Fix For: 0.96.0

 Attachments: 6702.v1.patch, 6702.v4.patch, 6702.v5.patch


 This was based on some discussion from HBASE-6234.
 The ResourceChecker was added by N. Keywal to help resolve some hadoop qa 
 issues, but has since not been widely utilized. Further, with modularization we 
 have had to drop the ResourceChecker from the tests that are moved into the 
 hbase-common module, because bringing the ResourceChecker up to hbase-common 
 would involve bringing all its dependencies (which are quite far reaching).
 The question then is, what should we do with it? Get rid of it? Refactor and 
 reuse? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5844) Delete the region servers znode after a regions server crash

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463899#comment-13463899
 ] 

nkeywal commented on HBASE-5844:


btw: I'm having a look at this to understand what's happening.

 Delete the region servers znode after a regions server crash
 

 Key: HBASE-5844
 URL: https://issues.apache.org/jira/browse/HBASE-5844
 Project: HBase
  Issue Type: Improvement
  Components: regionserver, scripts
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
 Fix For: 0.96.0

 Attachments: 5844.v1.patch, 5844.v2.patch, 5844.v3.patch, 
 5844.v3.patch, 5844.v4.patch


 Today, if the region server crashes, its znode is not deleted in ZooKeeper, 
 so the recovery process will start only after a timeout, usually 30s.
 By deleting the znode in the start script, we remove this delay and the 
 recovery starts immediately.
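A minimal shell sketch of the idea, assuming the default /hbase/rs parent znode and the <host>,<port>,<startcode> naming convention; the helper name and the `hbase zkcli` invocation are illustrative, not the committed patch:

```shell
# Hypothetical helper (illustrative, not the committed patch): build the
# ephemeral znode path for a region server, assuming the default /hbase/rs
# parent and the <host>,<port>,<startcode> naming convention.
rs_znode_path() {
  local host="$1" port="$2" startcode="$3"
  echo "/hbase/rs/${host},${port},${startcode}"
}

# In the start script, a stale znode left by a crashed process could then be
# deleted before launching, so the master begins recovery immediately instead
# of waiting for the ZooKeeper session timeout (commented out here because it
# needs a running cluster):
#   "$HBASE_HOME/bin/hbase" zkcli delete "$(rs_znode_path "$(hostname)" 60020 "$old_startcode")"
```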

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463904#comment-13463904
 ] 

Hudson commented on HBASE-6702:
---

Integrated in HBase-TRUNK #3380 (See 
[https://builds.apache.org/job/HBase-TRUNK/3380/])
HBASE-6702  ResourceChecker refinement (Revision 1390433)

 Result = FAILURE
nkeywal : 
Files : 
* /hbase/trunk/hbase-common/pom.xml
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/IntegrationTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/LargeTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/MediumTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitListener.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/SmallTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java
* /hbase/trunk/hbase-it/pom.xml
* /hbase/trunk/hbase-server/pom.xml
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/IntegrationTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/LargeTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/MediumTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitRule.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ServerResourceCheckerJUnitListener.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/SmallTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterBootOrder.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestCompare.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestFSTableDescriptorForceCreation.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHRegionLocation.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHServerAddress.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHServerInfo.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerName.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTrackerOnCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaMigrationConvertingToPB.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditorNoCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFakeKeyInFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestGet.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableUtil.java
* 

[jira] [Resolved] (HBASE-5257) Allow filter to be evaluated after version handling

2012-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-5257.
--

Resolution: Later

 Allow filter to be evaluated after version handling
 ---

 Key: HBASE-5257
 URL: https://issues.apache.org/jira/browse/HBASE-5257
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl

 There are various use cases and filter types where evaluating the filter 
 before versions are handled either does not make sense, or makes filter 
 handling more complicated.
 Also see this comment in ScanQueryMatcher:
 {code}
 /**
  * Filters should be checked before checking column trackers. If we do
  * otherwise, as was previously being done, ColumnTracker may increment its
  * counter for even that KV which may be discarded later on by Filter. This
  * would lead to incorrect results in certain cases.
  */
 {code}
 So we had Filters after the column trackers (which do the version checking), 
 and then moved it.
 This should be at the discretion of the Filter.
 We could either add a new method to FilterBase (maybe excludeVersions() or 
 something), or have a new Filter wrapper (like WhileMatchFilter) that should 
 only be used as the outermost filter and indicates the same (maybe 
 ExcludeVersionsFilter).
 See latest comments on HBASE-5229 for motivation.
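A hedged sketch of the wrapper option using hypothetical interfaces (not HBase's actual Filter API): the wrapper only advertises that version handling should run before the wrapped filter is evaluated, leaving the wrapped filter's decisions untouched.

```java
// Hypothetical stand-in for the Filter interface; names are illustrative.
interface SketchFilter {
    boolean includeKV(byte[] kv);
}

// Sketch of an "ExcludeVersionsFilter"-style wrapper: the scanner would
// query evaluateAfterVersions() to decide whether to run version handling
// (column trackers) before or after calling the filter.
class ExcludeVersionsFilterSketch implements SketchFilter {
    private final SketchFilter wrapped;

    ExcludeVersionsFilterSketch(SketchFilter wrapped) {
        this.wrapped = wrapped;
    }

    // Signals the desired evaluation order; the wrapped filter is unchanged.
    boolean evaluateAfterVersions() {
        return true;
    }

    @Override
    public boolean includeKV(byte[] kv) {
        return wrapped.includeKV(kv);
    }
}
```

The same signal could instead live on the filter base class (the excludeVersions() idea above); the wrapper form keeps existing filters untouched and marks the intent only at the outermost level.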

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-5565) Refactoring doMiniBatchPut()

2012-09-26