[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2012-09-04 Thread Gopinathan A (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447519#comment-13447519
 ] 

Gopinathan A commented on HBASE-5291:
-

@Andrew: I am interested in working on this issue.



 Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
 -

 Key: HBASE-5291
 URL: https://issues.apache.org/jira/browse/HBASE-5291
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver, security
Reporter: Andrew Purtell

 Like HADOOP-7119, the same motivations:
 {quote}
 Hadoop RPC already supports Kerberos authentication. 
 {quote}
 As does the HBase secure RPC engine.
 {quote}
 Kerberos enables single sign-on.
 Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
 HTTP SPNEGO.
 Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
 a unified authentication mechanism and single sign-on for web UI and RPC.
 {quote}
 Also like HADOOP-7119, the same solution:
 A servlet filter is configured in front of all Hadoop web consoles for 
 authentication.
 This filter verifies whether the incoming request is already authenticated by the 
 presence of a signed HTTP cookie. If the cookie is present, its signature is 
 valid, and its value has not expired, the request continues on to the 
 page invoked by the request. If the cookie is not present, is invalid, or has 
 expired, the request is delegated to an authenticator handler. The 
 authenticator handler is then responsible for requesting and validating the 
 user credentials from the user-agent. This may require one or more additional 
 interactions between the authenticator handler and the user-agent (which will 
 be multiple HTTP requests). Once the authenticator handler verifies the 
 credentials and generates an authentication token, a signed cookie is 
 returned to the user-agent for all subsequent invocations.
 The authenticator handler is pluggable, and two implementations are provided out 
 of the box: pseudo/simple and kerberos.
 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
 pseudo/simple authentication. It trusts the value of the user.name query 
 string parameter. The pseudo/simple authenticator handler supports an 
 anonymous mode which accepts any request without requiring the user.name 
 query string parameter to create the token. This is the default behavior, 
 preserving the behavior of the HBase web consoles before this patch.
 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
 implementation. This authenticator handler will generate a token only if a 
 successful Kerberos HTTP SPNEGO interaction is performed between the 
 user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
 support Kerberos HTTP SPNEGO.
 We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
 matter of wiring up the filter to our infoservers in a similar manner. 
 And from 
 https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
 {quote}
 Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
 authentication for webapps via a filter. You should consider using it. You 
 don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
 artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
 {quote}
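The signed-cookie flow described above can be sketched in plain Java. This is illustrative only, not the hadoop-auth implementation: the token format (`user:expiryMillis:signature`) and the HMAC-SHA1 signing are assumptions made for the example.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch of the signed-cookie check described above: accept the
// request only if the cookie is present, its signature verifies, and it
// has not expired; anything else falls through to the authenticator
// handler (modeled here as simply returning false).
class AuthCookieSketch {
    private final byte[] secret;

    AuthCookieSketch(String secret) {
        this.secret = secret.getBytes(StandardCharsets.UTF_8);
    }

    /** Issue a token of the assumed form user:expiryMillis:signature. */
    String sign(String user, long expiryMillis) {
        String payload = user + ":" + expiryMillis;
        return payload + ":" + hmac(payload);
    }

    /** True only for a well-formed, correctly signed, unexpired token. */
    boolean isAuthenticated(String cookie, long nowMillis) {
        if (cookie == null) return false;                   // cookie missing
        int lastColon = cookie.lastIndexOf(':');
        if (lastColon < 0) return false;                    // malformed
        String payload = cookie.substring(0, lastColon);
        String sig = cookie.substring(lastColon + 1);
        if (!hmac(payload).equals(sig)) return false;       // bad signature
        int mid = payload.lastIndexOf(':');
        if (mid < 0) return false;
        long expiry = Long.parseLong(payload.substring(mid + 1));
        return nowMillis < expiry;                          // expired?
    }

    private String hmac(String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            return Base64.getEncoder().encodeToString(
                mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

In the real filter the "reject" branch triggers the SPNEGO or pseudo handler rather than returning false; the point here is only the present/valid/unexpired three-way check.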

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6694) Test scanner batching in export job feature HBASE-6372 AND report on improvement HBASE-6372 adds

2012-09-04 Thread Alexander Alten-Lorenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Alten-Lorenz updated HBASE-6694:
--

Attachment: HBASE-6694.patch

Test added

 Test scanner batching in export job feature HBASE-6372 AND report on 
 improvement HBASE-6372 adds
 

 Key: HBASE-6694
 URL: https://issues.apache.org/jira/browse/HBASE-6694
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: Alexander Alten-Lorenz
 Attachments: HBASE-6694.patch


 From the tail of HBASE-6372, Jon raised the issue that the test added did not 
 actually test the feature.  This issue is about adding a test for HBASE-6372.  
 We should also have numbers for the improvement that HBASE-6372 brings.



[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447529#comment-13447529
 ] 

Hadoop QA commented on HBASE-6711:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543619/6711-0.96-v1.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 110 warning 
messages.

-1 javac.  The applied patch generated 5 javac compiler warnings (more than 
the trunk's current 4 warnings).

-1 findbugs.  The patch appears to introduce 7 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
  org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.regionserver.TestAtomicOperation

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2767//console

This message is automatically generated.

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.96-v1.txt, 6711-0.96-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding the KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.
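The change described above can be sketched as follows. This is an illustrative simplification of the StoreScanner loop, not the actual patch; the String stand-in for KeyValue and the limit handling are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the optimization: instead of accumulating entries in a
// temporary list just to know how many were added, count with an int
// and append directly to the caller's result list.
class ScanLoopSketch {

    // Before: copy through a temporary list (extra allocation + copy).
    static int scanWithTempList(List<String> source, List<String> results, int limit) {
        List<String> tmp = new ArrayList<>();
        for (String kv : source) {
            if (tmp.size() >= limit) break;
            tmp.add(kv);
        }
        results.addAll(tmp);
        return tmp.size();
    }

    // After: a plain counter bounds the loop; no temporary list, no copy.
    static int scanWithCounter(List<String> source, List<String> results, int limit) {
        int count = 0;
        for (String kv : source) {
            if (count >= limit) break;
            results.add(kv);
            count++;
        }
        return count;
    }
}
```

Both variants return the same count and result contents; the second simply avoids the intermediate ArrayList.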



[jira] [Commented] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447535#comment-13447535
 ] 

nkeywal commented on HBASE-6674:


bq. on the console, the time reported for small tests seems to be wrong
This is a regression in 2.12.3 vs. 2.12.2 and earlier; see SUREFIRE-909.

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 6674.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 
 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Status: Patch Available  (was: Open)

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 6674.patch, 6674.v2.patch, 
 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Attachment: 5processes.patch

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 6674.patch, 6674.v2.patch, 
 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Status: Open  (was: Patch Available)

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 6674.patch, 6674.v2.patch, 
 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Status: Open  (was: Patch Available)

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 6674.patch, 
 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Attachment: 5processes.patch

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 6674.patch, 
 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Status: Patch Available  (was: Open)

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 6674.patch, 
 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Status: Open  (was: Patch Available)

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 5processes.patch, 
 6674.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Attachment: 5processes.patch

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 5processes.patch, 
 6674.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal updated HBASE-6674:
---

Status: Patch Available  (was: Open)

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 5processes.patch, 
 6674.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Commented] (HBASE-6674) Check behavior of current surefire trunk on Hadoop QA

2012-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447579#comment-13447579
 ] 

Hadoop QA commented on HBASE-6674:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543637/5processes.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 110 warning 
messages.

-1 javac.  The applied patch generated 5 javac compiler warnings (more than 
the trunk's current 4 warnings).

-1 findbugs.  The patch appears to introduce 7 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2770//console

This message is automatically generated.

 Check behavior of current surefire trunk on Hadoop QA
 -

 Key: HBASE-6674
 URL: https://issues.apache.org/jira/browse/HBASE-6674
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Trivial
 Attachments: 5processes.patch, 5processes.patch, 5processes.patch, 
 6674.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch, 6674.v2.patch


 Not to be committed.
 Surefire 2.13 is in progress. Let's check that it works for us before it's 
 released. Locally it's acceptable.



[jira] [Updated] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jie Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jie Huang updated HBASE-6516:
-

Attachment: hbase-6516-v4.patch

Thanks. How about this one?

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516.patch, hbase-6516-v2.patch, 
 hbase-6516-v3.patch, hbase-6516-v4.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught when .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Commented] (HBASE-4364) Filters applied to columns not in the selected column list are ignored

2012-09-04 Thread Jie Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447693#comment-13447693
 ] 

Jie Huang commented on HBASE-4364:
--

[~alexb], [~lucjb] I have checked the code related to the problem you 
mentioned. According to the comments, it seems this topic has been 
discussed before. The conclusion is that
{code}
/**
 * Filters should be checked before checking column trackers. If we do
 * otherwise, as was previously being done, ColumnTracker may increment its
 * counter for even that KV which may be discarded later on by Filter. This
 * would lead to incorrect results in certain cases.
 */
if (filter != null) {
  ReturnCode filterResponse = filter.filterKeyValue(kv);
  if (filterResponse == ReturnCode.SKIP) {
return MatchCode.SKIP;
  } else if (filterResponse == ReturnCode.NEXT_COL) {
return columns.getNextRowOrNextColumn(bytes, offset, qualLength);
  } else if (filterResponse == ReturnCode.NEXT_ROW) {
stickyNextRow = true;
return MatchCode.SEEK_NEXT_ROW;
  } else if (filterResponse == ReturnCode.SEEK_NEXT_USING_HINT) {
return MatchCode.SEEK_NEXT_USING_HINT;
  }
}

MatchCode colChecker = columns.checkColumn(bytes, offset, qualLength,
    timestamp, type, kv.getMemstoreTS() > maxReadPointToTrackVersions);
{code}

If both of you are still interested in this problem, we may try to figure out 
some potential solution. Any comment?
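For readers following the snippet above, the filter-response handling reduces to a small mapping from the filter's ReturnCode to the matcher's next action, with only INCLUDE falling through to the column tracker. A stand-alone sketch (the enums here are local stand-ins for the HBase types, and SEEK_NEXT_COL stands in for the dynamic getNextRowOrNextColumn() result):

```java
// Local stand-ins for Filter.ReturnCode and ScanQueryMatcher.MatchCode,
// to show the dispatch in isolation.
class FilterDispatchSketch {
    enum ReturnCode { INCLUDE, SKIP, NEXT_COL, NEXT_ROW, SEEK_NEXT_USING_HINT }
    enum MatchCode { CHECK_COLUMN, SKIP, SEEK_NEXT_COL, SEEK_NEXT_ROW, SEEK_NEXT_USING_HINT }

    // Mirrors the if/else chain in the quoted code: every non-INCLUDE
    // response short-circuits before the ColumnTracker is consulted,
    // so the tracker's version counter is never incremented for a KV
    // the filter will discard.
    static MatchCode dispatch(ReturnCode rc) {
        switch (rc) {
            case SKIP:                 return MatchCode.SKIP;
            case NEXT_COL:             return MatchCode.SEEK_NEXT_COL;
            case NEXT_ROW:             return MatchCode.SEEK_NEXT_ROW;
            case SEEK_NEXT_USING_HINT: return MatchCode.SEEK_NEXT_USING_HINT;
            default:                   return MatchCode.CHECK_COLUMN;
        }
    }
}
```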

 Filters applied to columns not in the selected column list are ignored
 --

 Key: HBASE-4364
 URL: https://issues.apache.org/jira/browse/HBASE-4364
 Project: HBase
  Issue Type: Bug
  Components: filters
Affects Versions: 0.90.4, 0.92.0, 0.94.0
Reporter: Todd Lipcon
Priority: Critical
 Attachments: 
 HBASE-4364-failing-test-with-simplest-custom-filter.patch, 
 hbase-4364_trunk.patch, hbase-4364_trunk-v2.patch


 For a scan, if you select some set of columns using addColumns(), and then 
 apply a SingleColumnValueFilter that restricts the results based on some 
 other columns which aren't selected, then those filter conditions are ignored.



[jira] [Commented] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447743#comment-13447743
 ] 

Jonathan Hsieh commented on HBASE-6516:
---

Looks pretty good.  I'm going to fix some nits and spacing issues and let 
the hadoopqa bot give it a try.

One major thing was that this change was missing (in v3 but not v4), which 
prevented compilation. (I'm going to fix it for the next version.)

{code}
@@ -258,7 +270,7 @@
* @return The 'current' tableinfo file.
* @throws IOException
*/
-  private static FileStatus getTableInfoPath(final FileSystem fs,
+  public static FileStatus getTableInfoPath(final FileSystem fs,
   final Path tabledir)
   throws IOException {
 FileStatus [] status = FSUtils.listStatus(fs, tabledir, new PathFilter() {
{code}

{code}
+  @Test
+  public void testHbckMissingTableinfo() throws Exception {
+String table = "tabeInfo";
+FileSystem fs = null;
+Path tableinfo = null;
+ 
{code}

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516.patch, hbase-6516-v2.patch, 
 hbase-6516-v3.patch, hbase-6516-v4.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught when .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Updated] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6516:
--

Attachment: hbase-6516-v5.patch

fixed some nits.

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516.patch, hbase-6516-v2.patch, 
 hbase-6516-v3.patch, hbase-6516-v4.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught when .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Comment Edited] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447743#comment-13447743
 ] 

Jonathan Hsieh edited comment on HBASE-6516 at 9/5/12 2:14 AM:
---

Looks pretty good.  I'm going to fix some nits and spacing issues and let 
the hadoopqa bot give it a try.

One major thing was that this change was missing (in v3 but not v4), which 
prevented compilation. (I'm going to fix it for the next version.)

{code}
@@ -258,7 +270,7 @@
* @return The 'current' tableinfo file.
* @throws IOException
*/
-  private static FileStatus getTableInfoPath(final FileSystem fs,
+  public static FileStatus getTableInfoPath(final FileSystem fs,
   final Path tabledir)
   throws IOException {
 FileStatus [] status = FSUtils.listStatus(fs, tabledir, new PathFilter() {
{code}

Fixed a few minor nits like this spelling error.
{code}
+  @Test
+  public void testHbckMissingTableinfo() throws Exception {
+String table = "tabeInfo";
+FileSystem fs = null;
+Path tableinfo = null;
+ 
{code}

  was (Author: jmhsieh):
Looks pretty good.  I'm going to fix some nits and spacing issues and 
let the hadoopqa bot give it a try.

One major thing was that this change was missing (in v3 but not v4), which 
prevented compilation. (I'm going to fix it for the next version.)

{code}
@@ -258,7 +270,7 @@
* @return The 'current' tableinfo file.
* @throws IOException
*/
-  private static FileStatus getTableInfoPath(final FileSystem fs,
+  public static FileStatus getTableInfoPath(final FileSystem fs,
   final Path tabledir)
   throws IOException {
 FileStatus [] status = FSUtils.listStatus(fs, tabledir, new PathFilter() {
{code}

{code}
+  @Test
+  public void testHbckMissingTableinfo() throws Exception {
+String table = "tabeInfo";
+FileSystem fs = null;
+Path tableinfo = null;
+ 
{code}
  
 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516.patch, hbase-6516-v2.patch, 
 hbase-6516-v3.patch, hbase-6516-v4.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught when .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Assigned] (HBASE-6358) Bulkloading from remote filesystem is problematic

2012-09-04 Thread Dave Revell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Revell reassigned HBASE-6358:
--

Assignee: (was: Dave Revell)

Unassigning this ticket from myself. I think it will need a new tool, and I'm 
not able to spend the time to do a good job.

 Bulkloading from remote filesystem is problematic
 -

 Key: HBASE-6358
 URL: https://issues.apache.org/jira/browse/HBASE-6358
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.0
Reporter: Dave Revell
 Attachments: 6358-suggestion.txt, HBASE-6358-trunk-v1.diff, 
 HBASE-6358-trunk-v2.diff, HBASE-6358-trunk-v3.diff


 Bulk loading hfiles that don't live on the same filesystem as HBase can cause 
 problems for subtle reasons.
 In Store.bulkLoadHFile(), the regionserver will copy the source hfile to its 
 own filesystem if it's not already there. Since this can take a long time for 
 large hfiles, it's likely that the client will time out and retry. When the 
 client retries repeatedly, there may be several bulkload operations in flight 
 for the same hfile, causing lots of unnecessary IO and tying up handler 
 threads. This can seriously impact performance. In my case, the cluster 
 became unusable and the regionservers had to be kill -9'ed.
 Possible solutions:
  # Require that hfiles already be on the same filesystem as HBase in order 
 for bulkloading to succeed. The copy could be handled by 
 LoadIncrementalHFiles before the regionserver is called.
  # Others? I'm not familiar with Hadoop IPC so there may be tricks to extend 
 the timeout or something else.
 I'm willing to write a patch but I'd appreciate recommendations on how to 
 proceed.
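Suggestion 1 above amounts to a same-filesystem precondition before accepting the load. A minimal JDK-only sketch of that check (assumed behavior, not actual HBase code; in HDFS terms, matching scheme and authority identifies the same filesystem instance):

```java
import java.net.URI;

public class BulkLoadCheck {
    // Returns true only when the hfile and the HBase root live on the same
    // filesystem, i.e. same URI scheme and authority.
    static boolean sameFileSystem(URI hfile, URI hbaseRoot) {
        return eq(hfile.getScheme(), hbaseRoot.getScheme())
            && eq(hfile.getAuthority(), hbaseRoot.getAuthority());
    }

    private static boolean eq(String a, String b) {
        return a == null ? b == null : a.equals(b);
    }
}
```

Under this scheme the regionserver would reject a mismatched load, and a client-side tool such as LoadIncrementalHFiles would perform the copy first.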

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447821#comment-13447821
 ] 

Jimmy Xiang commented on HBASE-6691:


If cause is UndeclaredThrowableException, could cause.getCause() be null?
If so, it is better to move this before checking if cause is null; otherwise, 
it is fine with me.
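The ordering concern above can be shown with plain JDK reflection types. In this hypothetical helper, the UndeclaredThrowableException is unwrapped first, so a wrapper whose getCause() is null falls through to the null check that follows:

```java
import java.lang.reflect.UndeclaredThrowableException;

public class CauseUnwrap {
    // Unwrap before the null check: the wrapper's own cause may be null,
    // and checking for null first would let that case slip through.
    static Throwable resolve(Throwable cause) {
        if (cause instanceof UndeclaredThrowableException) {
            cause = cause.getCause();
        }
        if (cause == null) {
            throw new RuntimeException("Proxy invocation failed and getCause is null");
        }
        return cause;
    }
}
```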

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 

[jira] [Updated] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6711:
-

Attachment: (was: 6711-0.96-v1.txt)

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding the KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop; a simple int can be used instead.
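The shape of the change can be sketched with a toy scanner (illustrative only, not the actual StoreScanner code): the temporary list served purely as a size counter, so an int that counts appended results replaces it and the final copy disappears.

```java
import java.util.ArrayList;
import java.util.List;

public class ScanLimit {
    // Old shape: buffer into a temp list, then copy into the result.
    // The temp list only ever served to bound the size of the batch.
    static List<String> nextWithTempList(List<String> source, int limit) {
        List<String> tmp = new ArrayList<>();
        for (String kv : source) {
            if (tmp.size() >= limit) break;
            tmp.add(kv);
        }
        return new ArrayList<>(tmp); // extra copy the change removes
    }

    // New shape: append straight into the caller's list, count with an int.
    static int nextWithCounter(List<String> source, List<String> out, int limit) {
        int count = 0;
        for (String kv : source) {
            if (count >= limit) break;
            out.add(kv);
            count++;
        }
        return count;
    }
}
```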

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Comment Edited] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447821#comment-13447821
 ] 

Jimmy Xiang edited comment on HBASE-6691 at 9/5/12 4:15 AM:


If cause is UndeclaredThrowableException, could cause.getCause() be null?
If so, it is better to move this before checking if cause is null; otherwise, 
it is fine with me.

{noformat}
+  if (cause != null && cause instanceof UndeclaredThrowableException) {
+    cause = cause.getCause();
+  }
+  if (cause == null) {
+    throw new RuntimeException(
+        "Proxy invocation failed and getCause is null", e);
+  }
{noformat}


  was (Author: jxiang):
If cause is UndeclaredThrowableException, could cause.getCause() be null?
If so, it is better to move this before checking is cause is null; otherwise, 
it is fine with me.
  
 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 

[jira] [Created] (HBASE-6714) TestMultiSlaveReplication#testMultiSlaveReplication may fail

2012-09-04 Thread Himanshu Vashishtha (JIRA)
Himanshu Vashishtha created HBASE-6714:
--

 Summary: TestMultiSlaveReplication#testMultiSlaveReplication may 
fail
 Key: HBASE-6714
 URL: https://issues.apache.org/jira/browse/HBASE-6714
 Project: HBase
  Issue Type: Bug
  Components: replication, test
Affects Versions: 0.94.0, 0.92.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha


TestMultiSlaveReplication#testMultiSlaveReplication failed in our local build, 
citing that row was not replicated to the second peer. This is because, after 
inserting row, the log is rolled; we then look for row2 in both clusters and 
afterwards check for the existence of row in both clusters. Meanwhile, the 
replication thread for the second cluster was sleeping, and row row2 is not 
present in the second cluster from the very beginning. So the row2 check 
succeeds, and control moves on to look for row in both clusters, where it fails 
for the second cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6691:
--

Attachment: hbase-6691-v2.patch

Addressed Jimmy's issue.

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue mentioned in HBASE-6686 when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
 

[jira] [Updated] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6691:
--

Attachment: (was: hbase-6691-v2.patch)

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch


 Trunk/0.96 has a specific issue mentioned in HBASE-6686 when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
 

[jira] [Commented] (HBASE-6714) TestMultiSlaveReplication#testMultiSlaveReplication may fail

2012-09-04 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447827#comment-13447827
 ] 

Himanshu Vashishtha commented on HBASE-6714:


Since we are interested in seeing whether a row is replicated while log is 
rolled, we can add a method checkAndWait to see if it is replicated.
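The checkAndWait idea boils down to polling a condition with a bounded number of retries instead of checking once, so the test tolerates replication lag. A JDK-only sketch, with the condition abstracted as a BooleanSupplier (names are illustrative; the real test would query the peer cluster inside the lambda):

```java
import java.util.function.BooleanSupplier;

public class CheckAndWait {
    // Poll up to `retries` times, sleeping between attempts; return true as
    // soon as the condition holds (e.g. "row visible on the peer cluster").
    static boolean checkAndWait(BooleanSupplier replicated, int retries, long sleepMs) {
        for (int i = 0; i < retries; i++) {
            if (replicated.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(sleepMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        // One final check after the last sleep.
        return replicated.getAsBoolean();
    }
}
```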

 TestMultiSlaveReplication#testMultiSlaveReplication may fail
 

 Key: HBASE-6714
 URL: https://issues.apache.org/jira/browse/HBASE-6714
 Project: HBase
  Issue Type: Bug
  Components: replication, test
Affects Versions: 0.92.0, 0.94.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha

 TestMultiSlaveReplication#testMultiSlaveReplication failed in our local 
 build, citing that row was not replicated to the second peer. This is 
 because, after inserting row, the log is rolled; we then look for row2 in 
 both clusters and afterwards check for the existence of row in both 
 clusters. Meanwhile, the replication thread for the second cluster was 
 sleeping, and row row2 is not present in the second cluster from the very 
 beginning. So the row2 check succeeds, and control moves on to look for row 
 in both clusters, where it fails for the second cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6516:
--

Attachment: hbase-6516-v5a.patch

Previous version didn't include an added file.

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516.patch, hbase-6516-v2.patch, 
 hbase-6516-v3.patch, hbase-6516-v4.patch, hbase-6516-v5a.patch, 
 hbase-6516-v5.patch


 HBaseFsck checks those missing .tableinfo files in loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-2600) Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid

2012-09-04 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447846#comment-13447846
 ] 

Alex Newman commented on HBASE-2600:


Sorry everyone, I was lost at burning man.

 perhaps putting the daughters into the same row adds some transactional 
 benefits that we didn't previously have?
Indeed. Currently we can't split meta, and even so, I think we can do atomic 
operations within a region easily.

@Stack I like the info:regionid idea. I'll also put on my thinking cap about 
it. This patch requires a big rework to get it to work.

 Change how we do meta tables; from tablename+STARTROW+randomid to instead, 
 tablename+ENDROW+randomid
 

 Key: HBASE-2600
 URL: https://issues.apache.org/jira/browse/HBASE-2600
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Alex Newman
 Attachments: 
 0001-Changed-regioninfo-format-to-use-endKey-instead-of-s.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v2.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v4.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v6.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v7.2.patch, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8.1, 
 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v9.patch, 
 0001-HBASE-2600.v10.patch, 0001-HBASE-2600-v11.patch, 2600-trunk-01-17.txt, 
 HBASE-2600+5217-Sun-Mar-25-2012-v3.patch, 
 HBASE-2600+5217-Sun-Mar-25-2012-v4.patch, hbase-2600-root.dir.tgz, jenkins.pdf


 This is an idea that Ryan and I have been kicking around on and off for a 
 while now.
 If regionnames were made of tablename+endrow instead of tablename+startrow, 
 then in the metatables, doing a search for the region that contains the 
 wanted row, we'd just have to open a scanner using passed row and the first 
 row found by the scan would be that of the region we need (If offlined 
 parent, we'd have to scan to the next row).
 If we redid the meta tables in this format, we'd be using an access that is 
 natural to hbase, a scan as opposed to the perverse, expensive 
 getClosestRowBefore we currently have that has to walk backward in meta 
 finding a containing region.
 This issue is about changing the way we name regions.
 If we were using scans, prewarming client cache would be near costless (as 
 opposed to what we'll currently have to do which is first a 
 getClosestRowBefore and then a scan from the closestrowbefore forward).
 Converting to the new method, we'd have to run a migration on startup 
 changing the content in meta.
 Up to this, the randomid component of a region name has been the timestamp of 
 region creation. HBASE-2531, "32-bit encoding of regionnames waaay 
 too susceptible to hash clashes", proposes changing the randomid so that it 
 contains actual name of the directory in the filesystem that hosts the 
 region.  If we had this in place, I think it would help with the migration to 
 this new way of doing the meta because as is, the region name in fs is a hash 
 of regionname... changing the format of the regionname would mean we generate 
 a different hash... so we'd need hbase-2531 to be in place before we could do 
 this change.
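The lookup difference described above can be modeled with a sorted map (a toy stand-in for meta, not HBase code): with start-row keys, finding the region containing a row needs the greatest key at or before the row (the backward getClosestRowBefore walk); with end-row keys, it is the smallest key strictly after the row, which a forward scan finds naturally.

```java
import java.util.TreeMap;

public class MetaLookup {
    // Meta keyed by region start row: backward lookup, like getClosestRowBefore.
    static String byStartRow(TreeMap<String, String> meta, String row) {
        return meta.floorEntry(row).getValue();   // greatest start key <= row
    }

    // Meta keyed by region end row (end-exclusive): forward lookup, like a scan
    // that returns the first row at or past the sought key.
    static String byEndRow(TreeMap<String, String> meta, String row) {
        return meta.higherEntry(row).getValue();  // smallest end key > row
    }
}
```

In the end-row layout, the open-ended last region needs a sentinel key sorting after all rows (the test below uses "\uffff" for that).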

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447847#comment-13447847
 ] 

Jonathan Hsieh commented on HBASE-6516:
---

Builds against trunk tested and all tests passed for me locally. Did some minor 
tweaks for 92/94, TestHBaseFsck passes.

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks those missing .tableinfo files in loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6516:
--

Attachment: hbase-6516-94-v5.patch

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447852#comment-13447852
 ] 

Lars Hofhansl commented on HBASE-6711:
--

I ran all the tests locally. All pass.
Any objections to committing this? It's a simple change.

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV to a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.
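The change described above can be sketched as follows — a hypothetical, simplified loop, not the actual StoreScanner code: the KVs go straight into the caller's result list, and a plain int enforces the limit that the temporary list used to track.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the scan loop: instead of buffering KVs in
// a temporary list and then copying them into the caller's list, append
// directly and track the count with an int.
public class ScanLoopSketch {
    static final int LIMIT = 3;  // stand-in for the scanner's result limit

    // Returns true if the limit was reached (more rows may remain).
    static boolean next(List<String> source, List<String> outResult) {
        int count = 0;                 // replaces the temporary ArrayList
        for (String kv : source) {
            outResult.add(kv);         // no intermediate copy
            if (++count >= LIMIT) {
                return true;           // stop early to bound memory use
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        boolean hitLimit = next(List.of("a", "b", "c", "d"), out);
        System.out.println(hitLimit + " " + out);  // true [a, b, c]
    }
}
```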



[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode

2012-09-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447854#comment-13447854
 ] 

Jesse Yates commented on HBASE-5547:


@Matteo - TimeToLiveHFileCleaner.instantiateFS() definitely shouldn't be in a 
pre/post hook for chore() - you only want to instantiate the FS once. WRT 
refreshCache() as a use-case, that seems viable, but probably should be 
something that a subclass handles with its own hooks (and for refreshCache in 
particular, you wouldn't want to run it every time anyway, making putting it in 
pre/post a little tricky).

If you still think it's worth doing, want to file a new ticket and we can move 
discussion there?

 Don't delete HFiles when in backup mode
 -

 Key: HBASE-5547
 URL: https://issues.apache.org/jira/browse/HBASE-5547
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Jesse Yates
 Fix For: 0.96.0, 0.94.3

 Attachments: 5547.addendum-v3, 5547-addendum-v4.txt, 5547-v12.txt, 
 5547-v16.txt, hbase-5447-v8.patch, hbase-5447-v8.patch, hbase-5547-v9.patch, 
 java_HBASE-5547.addendum, java_HBASE-5547.addendum-v1, 
 java_HBASE-5547.addendum-v2, java_HBASE-5547_v13.patch, 
 java_HBASE-5547_v14.patch, java_HBASE-5547_v15.patch, 
 java_HBASE-5547_v4.patch, java_HBASE-5547_v5.patch, java_HBASE-5547_v6.patch, 
 java_HBASE-5547_v7.patch


 This came up in a discussion I had with Stack.
 It would be nice if HBase could be notified that a backup is in progress (via 
 a znode for example) and in that case either:
 1. rename HFiles that are to be deleted to file.bck
 2. rename the HFiles into a special directory
 3. rename them to a general trash directory (which would not need to be tied 
 to backup mode).
 That way one should be able to get a consistent backup based on HFiles (HDFS 
 snapshots or hard links would be better options here, but we do not have 
 those).
 #1 makes cleanup a bit harder.
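A rough sketch of option 3 under stated assumptions — an in-memory map stands in for the filesystem, and a boolean stands in for the znode check; this is an illustration of the idea, not the HBase cleaner API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical backup-aware deletion: while a backup flag (in HBase this
// could be a znode) is set, "deleting" an HFile moves it to a trash
// directory instead, so a running backup still sees a consistent file set.
public class BackupAwareCleaner {
    private boolean backupInProgress;                 // would be a znode check
    private final Map<String, String> fs = new HashMap<>();  // fake filesystem

    void setBackupInProgress(boolean b) { backupInProgress = b; }
    void create(String path, String data) { fs.put(path, data); }
    boolean exists(String path) { return fs.containsKey(path); }

    // Delete an HFile, unless a backup is running; then move it aside.
    void delete(String path) {
        if (backupInProgress) {
            fs.put(".trash/" + path, fs.remove(path)); // keep the bytes around
        } else {
            fs.remove(path);                           // normal deletion
        }
    }

    public static void main(String[] args) {
        BackupAwareCleaner c = new BackupAwareCleaner();
        c.create("hfile1", "data");
        c.setBackupInProgress(true);
        c.delete("hfile1");
        // the file is preserved under .trash while the backup runs
        System.out.println(c.exists(".trash/hfile1"));  // true
    }
}
```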



[jira] [Updated] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6516:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks Jie Huang!  I've committed to trunk/0.94/0.92

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks those missing .tableinfo files in loadHdfsRegionInfos() 
 function. However, no IoException will be catched while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IoException.



[jira] [Created] (HBASE-6715) TestFromClientSide.testCacheOnWriteEvictOnClose is flaky

2012-09-04 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-6715:
--

 Summary: TestFromClientSide.testCacheOnWriteEvictOnClose is flaky
 Key: HBASE-6715
 URL: https://issues.apache.org/jira/browse/HBASE-6715
 Project: HBase
  Issue Type: Test
Reporter: Jimmy Xiang
Priority: Minor


Occasionally, this test fails:

{noformat}

expected:2049 but was:2069
Stacktrace

java.lang.AssertionError: expected:2049 but was:2069
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.hadoop.hbase.client.TestFromClientSide.testCacheOnWriteEvictOnClose(TestFromClientSide.java:4248)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

{noformat}

It could be because another thread is still accessing the cache.
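One common remedy for this kind of race — shown here as a hypothetical sketch, not the actual test fix — is to poll the cache metric until it settles at the expected value instead of asserting it once while another thread may still be updating it:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

// Poll a metric until it reaches the expected value or a timeout expires,
// tolerating a concurrent thread that is still mutating the cache.
public class WaitForMetric {
    static boolean waitFor(LongSupplier metric, long expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (metric.getAsLong() == expected) {
                return true;       // metric settled at the expected value
            }
            Thread.sleep(10);      // back off before re-checking
        }
        return metric.getAsLong() == expected;  // one last check at timeout
    }

    public static void main(String[] args) throws Exception {
        AtomicLong cachedSize = new AtomicLong(2069);
        // simulate the other thread finishing its cache updates a bit later
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            cachedSize.set(2049);
        }).start();
        System.out.println(waitFor(cachedSize::get, 2049, 2000));  // true
    }
}
```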




[jira] [Updated] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6516:
--

Fix Version/s: 0.94.2
   0.96.0
   0.92.2

 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Fix For: 0.92.2, 0.96.0, 0.94.2

 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Updated] (HBASE-6568) Extract daemon thread factory from HTable into its own class

2012-09-04 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6568:
---

  Resolution: Fixed
Release Note: Adding DaemonThreadFactory (extracted from HTable)  - a 
thread factory that creates properly named, daemon threads.
  Status: Resolved  (was: Patch Available)

Closing. Remaining work is in HBASE-6637.

 Extract daemon thread factory from HTable into its own class
 

 Key: HBASE-6568
 URL: https://issues.apache.org/jira/browse/HBASE-6568
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.96.0

 Attachments: hbase-6568-addendum.patch, java_HBASE-6568-v0.patch


 The DaemonThreadFactory in HTable is a really nice utility that is useful in 
 multiple places. We should pull it out into a standalone class.
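A minimal sketch of such a factory — field and method names here are illustrative, not necessarily the exact class extracted from HTable: it hands out daemon threads with a readable name prefix so they never block JVM shutdown.

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a reusable daemon-thread factory: creates properly named,
// daemon, normal-priority threads for use with executors.
public class DaemonThreadFactory implements ThreadFactory {
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;

    public DaemonThreadFactory(String namePrefix) {
        this.namePrefix = namePrefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, namePrefix + threadNumber.getAndIncrement());
        if (!t.isDaemon()) {
            t.setDaemon(true);   // daemon threads don't block JVM shutdown
        }
        if (t.getPriority() != Thread.NORM_PRIORITY) {
            t.setPriority(Thread.NORM_PRIORITY);
        }
        return t;
    }

    public static void main(String[] args) {
        Thread t = new DaemonThreadFactory("htable-pool-").newThread(() -> {});
        System.out.println(t.isDaemon() + " " + t.getName());  // true htable-pool-1
    }
}
```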



[jira] [Updated] (HBASE-6421) [pom] add jettison and fix netty specification

2012-09-04 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6421:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 [pom] add jettison and fix netty specification
 --

 Key: HBASE-6421
 URL: https://issues.apache.org/jira/browse/HBASE-6421
 Project: HBase
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: hbase-6421-v0.patch, hbase-6421-v1.patch


 Currently, jettison isn't required for testing hbase-server, but 
 TestSchemaConfigured requires it, causing the compile phase (at least on my 
 MBP) to fail. Further, in cleaning up the poms, netty should be declared in 
 the parent hbase/pom.xml and then inherited by the child modules.
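As a sketch (coordinates and version are illustrative, not necessarily what the patch uses), the parent pom would pin the version once under dependencyManagement, and child modules would then declare the dependency without a version:

```xml
<!-- In the parent hbase/pom.xml: declare netty once so every module
     inherits a single, consistent version (version shown is illustrative). -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
      <version>3.2.4.Final</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- In a child module: no <version> element; it comes from the parent. -->
<dependencies>
  <dependency>
    <groupId>org.jboss.netty</groupId>
    <artifactId>netty</artifactId>
  </dependency>
</dependencies>
```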



[jira] [Resolved] (HBASE-5354) Source to standalone deployment script

2012-09-04 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved HBASE-5354.


Resolution: Won't Fix

No one really seems to need this, particularly since HBase can be run directly 
from source. Dropping it.

 Source to standalone deployment script
 --

 Key: HBASE-5354
 URL: https://issues.apache.org/jira/browse/HBASE-5354
 Project: HBase
  Issue Type: New Feature
  Components: build, scripts
Affects Versions: 0.94.0
Reporter: Jesse Yates
Assignee: Jesse Yates
Priority: Minor
 Attachments: bash_HBASE-5354.patch


 Automating the testing of source code in a 'real' instance can be a bit of a 
 pain, even getting it into standalone mode.
 Steps you need to go through:
 1) Build the project
 2) Copy it to the deployment directory
 3) Shutdown the current cluster (if it is running)
 4) Untar the tar
 5) Update the configs to point to a local data cluster
 6) Startup the new deployment
 Yeah, it's not super difficult, but it would be nice to just have a script to 
 make it button-push easy.



[jira] [Commented] (HBASE-6690) TestZooKeeperTableArchiveClient.testMultipleTables is flapping

2012-09-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447871#comment-13447871
 ] 

Jesse Yates commented on HBASE-6690:


Oh, it's a dup of HBASE-6707.

 TestZooKeeperTableArchiveClient.testMultipleTables is flapping
 --

 Key: HBASE-6690
 URL: https://issues.apache.org/jira/browse/HBASE-6690
 Project: HBase
  Issue Type: Test
Reporter: Chris Trezzo
Assignee: Jesse Yates
Priority: Minor

 TestZooKeeperTableArchiveClient.testMultipleTables is a flapping test. It is 
 complaining that some archived HFiles were not deleted.
 Test history: 
 https://builds.apache.org/job/HBase-TRUNK/3293/testReport/junit/org.apache.hadoop.hbase.backup.example/TestZooKeeperTableArchiveClient/testMultipleTables/history/
 Error message:
 Archived HFiles 
 (hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam)
  should have gotten deleted, but didn't, remaining 
 files:\[hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam/fc872572a1f5443eb55b6e2567cfeb1c\]



[jira] [Commented] (HBASE-6707) TEST org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient.testMultipleTables flaps

2012-09-04 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447873#comment-13447873
 ] 

Jesse Yates commented on HBASE-6707:


@Ted marked 6690 as a dup of this since this one stayed open :)

Here's the link to the overall history (definitely flapping):
* 
https://builds.apache.org/job/HBase-TRUNK/3293/testReport/junit/org.apache.hadoop.hbase.backup.example/TestZooKeeperTableArchiveClient/testMultipleTables/history/

And the most recent failure:
* 
https://builds.apache.org/job/HBase-TRUNK/3299/testReport/org.apache.hadoop.hbase.backup.example/TestZooKeeperTableArchiveClient/testMultipleTables/

 TEST 
 org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient.testMultipleTables
  flaps
 

 Key: HBASE-6707
 URL: https://issues.apache.org/jira/browse/HBASE-6707
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Sameer Vaishampayan
Priority: Critical

 https://builds.apache.org/job/HBase-TRUNK/3293/
 Error Message
 Archived HFiles 
 (hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam)
  should have gotten deleted, but didn't, remaining 
 files:[hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam/fc872572a1f5443eb55b6e2567cfeb1c]
 Stacktrace
 java.lang.AssertionError: Archived HFiles 
 (hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam)
  should have gotten deleted, but didn't, remaining 
 files:[hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam/fc872572a1f5443eb55b6e2567cfeb1c]
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertNull(Assert.java:551)
   at 
 org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient.testMultipleTables(TestZooKeeperTableArchiveClient.java:291)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)



[jira] [Assigned] (HBASE-6707) TEST org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient.testMultipleTables flaps

2012-09-04 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates reassigned HBASE-6707:
--

Assignee: Jesse Yates

 TEST 
 org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient.testMultipleTables
  flaps
 

 Key: HBASE-6707
 URL: https://issues.apache.org/jira/browse/HBASE-6707
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Sameer Vaishampayan
Assignee: Jesse Yates
Priority: Critical

 https://builds.apache.org/job/HBase-TRUNK/3293/
 Error Message
 Archived HFiles 
 (hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam)
  should have gotten deleted, but didn't, remaining 
 files:[hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam/fc872572a1f5443eb55b6e2567cfeb1c]
 Stacktrace
 java.lang.AssertionError: Archived HFiles 
 (hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam)
  should have gotten deleted, but didn't, remaining 
 files:[hdfs://localhost:59986/user/jenkins/hbase/.archive/otherTable/01ced3b55d7220a9c460273a4a57b198/fam/fc872572a1f5443eb55b6e2567cfeb1c]
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertNull(Assert.java:551)
   at 
 org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient.testMultipleTables(TestZooKeeperTableArchiveClient.java:291)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)



[jira] [Updated] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6691:
--

Attachment: hbase-6691-v2.patch

Updated to fix it the way Jimmy suggested.  Test passes.
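The quoted trace below shows the reflective proxy in HFileSystem wrapping the checked FileNotFoundException in UndeclaredThrowableException/InvocationTargetException, so a caller catching IOException never sees it. A hypothetical sketch of unwrapping such reflection wrappers to recover the original IOException — this illustrates the failure mode, not the actual patch:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.UndeclaredThrowableException;

// Peel off the wrappers that dynamic proxies and reflective calls add, so
// the original checked IOException (here, FileNotFoundException) resurfaces.
public class UnwrapReflectionError {
    static IOException unwrap(Throwable t) {
        while (t instanceof UndeclaredThrowableException
                || t instanceof InvocationTargetException) {
            t = t.getCause();           // strip one reflection wrapper
        }
        return t instanceof IOException ? (IOException) t : new IOException(t);
    }

    public static void main(String[] args) {
        // Simulate the wrapping seen in the stack trace below.
        Throwable wrapped = new UndeclaredThrowableException(
            new InvocationTargetException(
                new FileNotFoundException("File does not exist: ...")));
        IOException io = unwrap(wrapped);
        System.out.println(io instanceof FileNotFoundException);  // true
    }
}
```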

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue mentioned in HBASE-6686 when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
 

[jira] [Commented] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447889#comment-13447889
 ] 

Jimmy Xiang commented on HBASE-6691:


+1

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue mentioned in HBASE-6686 when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
 

[jira] [Commented] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447898#comment-13447898
 ] 

Hudson commented on HBASE-6516:
---

Integrated in HBase-0.94 #447 (See 
[https://builds.apache.org/job/HBase-0.94/447/])
HBASE-6516 hbck cannot detect any IOException while .tableinfo file is 
missing (Jie Huang) (Revision 1380759)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Fix For: 0.92.2, 0.96.0, 0.94.2

 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Updated] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6691:
--

Tags: 0.96
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue mentioned in HBASE-6686 when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
 

[jira] [Updated] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6691:
--

Component/s: hbck

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hbck
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This issue addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
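The UndeclaredThrowableException at the top of this trace is the classic symptom of a dynamic proxy whose invocation handler lets an InvocationTargetException escape instead of rethrowing its cause; the fix in HFileSystem unwraps it. A minimal sketch of that unwrapping pattern, with an illustrative interface and names (not the actual HFileSystem code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

public class ProxyUnwrapSketch {

    // Stand-in for the RPC interface being proxied; illustrative only.
    public interface Namenode {
        String getBlockLocations(String path) throws IOException;
    }

    // Real target whose method throws a checked exception.
    public static class Impl implements Namenode {
        public String getBlockLocations(String path) throws IOException {
            throw new FileNotFoundException("File does not exist: " + path);
        }
    }

    // Wrap the target so checked exceptions thrown by the implementation
    // surface as themselves rather than as UndeclaredThrowableException.
    public static Namenode wrap(Namenode target) {
        return (Namenode) Proxy.newProxyInstance(
                Namenode.class.getClassLoader(),
                new Class<?>[] { Namenode.class },
                (proxy, method, args) -> {
                    try {
                        return method.invoke(target, args);
                    } catch (InvocationTargetException ite) {
                        // Without this unwrap, the proxy would surface
                        // UndeclaredThrowableException wrapping ite.
                        throw ite.getCause();
                    }
                });
    }
}
```

With the unwrap in place, the caller catches FileNotFoundException directly, which is what the quarantine path needs to detect missing HFiles.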
 

[jira] [Commented] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447901#comment-13447901
 ] 

Hudson commented on HBASE-6516:
---

Integrated in HBase-0.92 #553 (See 
[https://builds.apache.org/job/HBase-0.92/553/])
HBASE-6516 hbck cannot detect any IOException while .tableinfo file is 
missing (Jie Huang) (Revision 1380760)

 Result = FAILURE
jmhsieh : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Fix For: 0.92.2, 0.96.0, 0.94.2

 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.
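The fix direction can be sketched as surfacing the missing file as a checked exception the caller can catch, instead of returning silently. HBase does have a TableInfoMissingException class, but everything else below is an illustrative stand-in (a map playing the role of the filesystem), not the actual FSTableDescriptors code:

```java
import java.io.IOException;
import java.util.Map;

public class TableInfoCheckSketch {

    // Local version of HBase's TableInfoMissingException, kept here
    // only so the sketch is self-contained.
    public static class TableInfoMissingException extends IOException {
        public TableInfoMissingException(String msg) { super(msg); }
    }

    // Stand-in for FSTableDescriptors.getTableDescriptor: throw when the
    // .tableinfo file is absent so hbck's IOException handling can see it,
    // rather than returning nothing and hiding the problem.
    public static String getTableDescriptor(Map<String, String> fs, String table)
            throws TableInfoMissingException {
        String info = fs.get(table + "/.tableinfo");
        if (info == null) {
            throw new TableInfoMissingException("No .tableinfo for " + table);
        }
        return info;
    }
}
```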

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-6691:
--

Component/s: hadoop2
   Tags:   (was: 0.96)

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hbck
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This issue addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:384)
   at 
 

[jira] [Commented] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447903#comment-13447903
 ] 

Jonathan Hsieh commented on HBASE-6691:
---

Thanks for the review, Jimmy. Committed to trunk.

 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hbck
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This issue addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1095)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1067)
   at 
 

[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447915#comment-13447915
 ] 

Lars Hofhansl commented on HBASE-6711:
--

In a microbenchmark I do see improvements after all: locally scanning rows 
with 10,000 columns. Variances are high, but without the patch each scan 
(including returning to the client) takes 9-13ms; with the patch it takes 
8.5-10ms (averages per row).


 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out this temporary list is only used to keep track of the size 
 of the result set in this loop; a simple int can be used instead.
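The change described above can be sketched as follows. This is an illustrative stand-in (plain strings instead of KeyValues, hypothetical method names), not the actual StoreScanner code:

```java
import java.util.ArrayList;
import java.util.List;

public class ScanLimitSketch {

    // Before: a temporary list is filled just to bound the result size,
    // then its entries are copied into the caller's result list.
    public static int scanWithTempList(List<String> kvs, List<String> results, int limit) {
        List<String> tmp = new ArrayList<>();
        for (String kv : kvs) {
            if (tmp.size() >= limit) break;
            tmp.add(kv);
        }
        results.addAll(tmp);   // the extra copy the patch removes
        return tmp.size();
    }

    // After: append directly to the caller's list and track the count
    // with a plain int, avoiding the intermediate allocation and copy.
    public static int scanWithCounter(List<String> kvs, List<String> results, int limit) {
        int count = 0;
        for (String kv : kvs) {
            if (count >= limit) break;
            results.add(kv);   // no intermediate list, no copy
            count++;
        }
        return count;
    }
}
```

Both variants return the same results; the second simply skips the throwaway ArrayList per loop iteration.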



[jira] [Created] (HBASE-6716) Hadoopqa is hosed

2012-09-04 Thread stack (JIRA)
stack created HBASE-6716:


 Summary: Hadoopqa is hosed
 Key: HBASE-6716
 URL: https://issues.apache.org/jira/browse/HBASE-6716
 Project: HBase
  Issue Type: Bug
  Components: build
Reporter: stack


See this thread on list: 
http://search-hadoop.com/m/PtDLC19vEd62/%2522Looks+like+HadoopQA+is+hosed%2522subj=Looks+like+HadoopQA+is+hosed+

Lots of the hadoopqa builds are failing complaining about missing dir.



[jira] [Updated] (HBASE-6716) Hadoopqa is hosed

2012-09-04 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6716:
-

Attachment: rm_support_dir.txt

Let me try this patch.  It just removes the notion of a 'support_dir'... It 
seems unused, and Todd says the dir we're copying into doesn't exist (could 
figure out another place to copy to if wanted -- for now just doing a clean 
removal of the facility).

I went back and looked at hadoop, the place this script came from.  It has the 
support_dir thing going on, so it looks like we inherited it from there way 
back.

 Hadoopqa is hosed
 -

 Key: HBASE-6716
 URL: https://issues.apache.org/jira/browse/HBASE-6716
 Project: HBase
  Issue Type: Bug
  Components: build
Reporter: stack
 Attachments: rm_support_dir.txt


 See this thread on list: 
 http://search-hadoop.com/m/PtDLC19vEd62/%2522Looks+like+HadoopQA+is+hosed%2522subj=Looks+like+HadoopQA+is+hosed+
 Lots of the hadoopqa builds are failing complaining about missing dir.



[jira] [Updated] (HBASE-6716) Hadoopqa is hosed

2012-09-04 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6716:
-

Assignee: stack
  Status: Patch Available  (was: Open)

Let me try this patch first...

 Hadoopqa is hosed
 -

 Key: HBASE-6716
 URL: https://issues.apache.org/jira/browse/HBASE-6716
 Project: HBase
  Issue Type: Bug
  Components: build
Reporter: stack
Assignee: stack
 Attachments: rm_support_dir.txt


 See this thread on list: 
 http://search-hadoop.com/m/PtDLC19vEd62/%2522Looks+like+HadoopQA+is+hosed%2522subj=Looks+like+HadoopQA+is+hosed+
 Lots of the hadoopqa builds are failing complaining about missing dir.



[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447935#comment-13447935
 ] 

Elliott Clark commented on HBASE-6711:
--

Looks good to me.  Nice find.
A lot of ArrayLists are created all over the place.  I wonder how many are 
unneeded like this one.

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out this temporary list is only used to keep track of the size 
 of the result set in this loop; a simple int can be used instead.



[jira] [Commented] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447952#comment-13447952
 ] 

Hudson commented on HBASE-6516:
---

Integrated in HBase-TRUNK #3300 (See 
[https://builds.apache.org/job/HBase-TRUNK/3300/])
HBASE-6516 hbck cannot detect any IOException while .tableinfo file is 
missing (Jie Huang) (Revision 1380761)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Fix For: 0.92.2, 0.96.0, 0.94.2

 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.



[jira] [Commented] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447956#comment-13447956
 ] 

Hudson commented on HBASE-6691:
---

Integrated in HBase-TRUNK #3301 (See 
[https://builds.apache.org/job/HBase-TRUNK/3301/])
HBASE-6691 HFile quarantine fails with missing files in hadoop 2.0 
(Revision 1380790)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java


 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hbck
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This issue addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 

[jira] [Commented] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447967#comment-13447967
 ] 

Anil Gupta commented on HBASE-6669:
---

[~v.himanshu]
Please find attached the patch for BigDecimalColumnInterpreter for review. I 
haven't worked on unit tests or formatting the source code yet; I hope it's ok 
to review the code as-is.

@Julian Wissmann,
I am attaching the java file for BigDecimalColumnInterpreter. You won't need 
to recompile HBase since you can use it directly on the client side. Let me 
know if you face any problems.

Thanks,
Anil Gupta
Software Engineer II, Intuit, Inc

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 
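As a rough illustration of what such an interpreter computes: the real class plugs into AggregationClient and decodes cell bytes, while this standalone sketch (all names hypothetical) only shows the BigDecimal arithmetic, which must use compareTo rather than equals because BigDecimal.equals is scale-sensitive:

```java
import java.math.BigDecimal;
import java.util.List;

public class BigDecimalAggSketch {

    // Exact decimal sum; avoiding double rounding error is the
    // motivation for a BigDecimal-specific interpreter.
    public static BigDecimal sum(List<BigDecimal> values) {
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal v : values) {
            total = total.add(v);
        }
        return total;
    }

    // Maximum via compareTo ("2.5".equals("2.50") is false for
    // BigDecimal, but compareTo treats them as equal).
    public static BigDecimal max(List<BigDecimal> values) {
        BigDecimal best = null;
        for (BigDecimal v : values) {
            if (best == null || v.compareTo(best) > 0) {
                best = v;
            }
        }
        return best;
    }

    public static BigDecimal min(List<BigDecimal> values) {
        BigDecimal best = null;
        for (BigDecimal v : values) {
            if (best == null || v.compareTo(best) < 0) {
                best = v;
            }
        }
        return best;
    }
}
```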

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: BigDecimalColumnInterpreter.patch

Initial Patch for BigDecimalColumnInterpreter.

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: BigDecimalColumnInterpreter.java

Source file for BigDecimalColumnInterpreter.

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: (was: BigDecimalColumnInterpreter.java)

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors

 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: (was: BigDecimalColumnInterpreter.patch)

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors

 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: BigDecimalColumnInterpreter.java

Source file

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: BigDecimalColumnInterpreter.patch

Patch file.

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Commented] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447983#comment-13447983
 ] 

Ted Yu commented on HBASE-6669:
---

@Anil:
Can you attach a patch for trunk where BigDecimalColumnInterpreter resides in 
the hbase-server module?
Please add the following annotation to the BigDecimalColumnInterpreter class:
{code}
@InterfaceAudience.Public
@InterfaceStability.Evolving
{code}
Why did you choose 0.0D / 0.0D in divideForAvg()?
{code}
+ public double divideForAvg(BigDecimal val1, Long paramLong) {
+   return (((paramLong == null) || (val1 == null)) ? (0.0D / 0.0D) : 
val1.doubleValue()/paramLong.doubleValue());
{code}
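
As background on the 0.0D / 0.0D question: in Java the double expression 0.0D / 0.0D evaluates to Double.NaN, so the two spellings produce the same value; Double.NaN is simply the more readable one. Note that NaN compares unequal to everything under ==, including itself, so Double.isNaN() must be used to test for it. A small standalone check:

```java
// Demonstrates that 0.0D / 0.0D is Double.NaN, and that NaN must be
// tested with Double.isNaN() rather than ==.
public class NanCheck {
    public static void main(String[] args) {
        double d = 0.0D / 0.0D;
        System.out.println(Double.isNaN(d));                       // true
        System.out.println(d == Double.NaN);                       // false: NaN != NaN under ==
        System.out.println(Double.valueOf(d).equals(Double.NaN));  // true: equals() treats NaN as equal
    }
}
```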
See HBASE-3678 for an Eclipse formatter.
Limit line length to 100 characters.

Thanks

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Commented] (HBASE-6659) Port HBASE-6508 Filter out edits at log split time

2012-09-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13447995#comment-13447995
 ] 

Ted Yu commented on HBASE-6659:
---

@Stack:
What do you think of my proposal above?

Thanks

 Port HBASE-6508 Filter out edits at log split time
 --

 Key: HBASE-6659
 URL: https://issues.apache.org/jira/browse/HBASE-6659
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Ted Yu
Assignee: Zhihong Ted Yu
 Fix For: 0.96.0

 Attachments: 6508-v2.txt, 6508-v3.txt, 6508-v4.txt, 6508-v5.txt, 
 6508-v7.txt, 6508-v7.txt


 HBASE-6508 is for 0.89-fb branch.
 This JIRA ports the feature to trunk.



[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448011#comment-13448011
 ] 

Lars Hofhansl commented on HBASE-6711:
--

Probably a lot. We'll whack them one by one. :)

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.
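
A hedged sketch of the change described above (this is not the actual HBASE-6711 patch; names are invented): the temporary list that existed only to bound the result-set size is replaced by a plain int counter, and entries are written straight into the caller's list.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: bound the number of collected entries with an int counter
// instead of filling a temporary list and copying it afterwards.
public class StoreScannerSketch {
    static int collect(List<String> source, List<String> out, int limit) {
        int count = 0; // replaces the temporary ArrayList used only for its size()
        for (String kv : source) {
            if (limit > -1 && count >= limit) {
                break; // same bounding behaviour, no copy step
            }
            out.add(kv); // write directly into the final result list
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> results = new ArrayList<>();
        int n = collect(List.of("kv1", "kv2", "kv3"), results, 2);
        System.out.println(n + " " + results); // 2 [kv1, kv2]
    }
}
```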



[jira] [Updated] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6711:
-

Attachment: 6711-0.94-v1.txt

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.94-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.



[jira] [Updated] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6711:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 0.94 and 0.96.

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.94-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.



[jira] [Updated] (HBASE-6412) Move external servers to metrics2 (thrift,thrift2,rest)

2012-09-04 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6412:
-

Attachment: HBASE-6412-1.patch

Talked to Stack, and he and I agree that code cleanliness is most important, 
so requiring mvn package as the replacement for mvn compile seems like a fair 
trade. Because of that, I have updated the developer docbook.

Latest patch adds an init method so that tests can reset sources. This allows 
all of the sources to be singletons and still be pretty testable. (hadoop2 
really complains if sources aren't singletons.)

Added a lot of comments.

Fixed file headers.

Moved constants to the interfaces so all strings are the same.

Changed test-patch so subsequent patches will use package rather than compile. 
(Note this won't apply for this patch as hadoop qa starts the script before 
patching.)

 Move external servers to metrics2 (thrift,thrift2,rest)
 ---

 Key: HBASE-6412
 URL: https://issues.apache.org/jira/browse/HBASE-6412
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Attachments: HBASE-6412-0.patch, HBASE-6412-1.patch


 Implement metrics2 for all the external servers:
 * Thrift
 * Thrift2
 * Rest



[jira] [Commented] (HBASE-6066) some low hanging read path improvement ideas

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448065#comment-13448065
 ] 

Lars Hofhansl commented on HBASE-6066:
--

Looked at #2. It turns out that this temporary list is actually needed, to 
retain the group of KVs that represent the current row for use with 
Filter.filterRow(...).
(But see HBASE-6711, which removes an unnecessary temporary result array from 
StoreScanner.)

 some low hanging read path improvement ideas 
 -

 Key: HBASE-6066
 URL: https://issues.apache.org/jira/browse/HBASE-6066
 Project: HBase
  Issue Type: Improvement
Reporter: Kannan Muthukkaruppan
Assignee: Michal Gregorczyk
Priority: Critical
  Labels: noob
 Attachments: metric-stringbuilder-fix.patch


 I was running some single threaded scan performance tests for a table with 
 small sized rows that is fully cached. Some observations...
 We seem to be doing several wasteful iterations over and/or building of 
 temporary lists.
 1) One such is the following code in HRegionServer.next():
 {code}
boolean moreRows = s.next(values, HRegion.METRIC_NEXTSIZE);
if (!values.isEmpty()) {
   for (KeyValue kv : values) { // wasteful in most cases
currentScanResultSize += kv.heapSize();
}
results.add(new Result(values));
 {code}
 By default the maxScannerResultSize is Long.MAX_VALUE. In those cases,
 we can avoid the unnecessary iteration to compute currentScanResultSize.
 2) An example of a wasteful temporary array, is results in
 RegionScanner.next().
 {code}
   results.clear();
   boolean returnResult = nextInternal(limit, metric);
   outResults.addAll(results);
 {code}
 results then gets copied over to outResults via an addAll(). Not sure why we 
 cannot directly collect the results in outResults.
 3) Another similar example of a wasteful array is results in 
 StoreScanner.next(), which eventually also copies its results into 
 outResults.
 4) Reduce overhead of size metric maintained in StoreScanner.next().
 {code}
   if (metric != null) {
  HRegion.incrNumericMetric(this.metricNamePrefix + metric,
copyKv.getLength());
   }
   results.add(copyKv);
 {code}
 A single call to next() might fetch a lot of KVs. We can first add up the 
 sizes of those KVs in a local variable and then, in a finally clause, 
 increment the metric in one shot, rather than updating AtomicLongs for each KV.
 5) RegionScanner.next() calls a helper RegionScanner.next() on the same 
 object. Both are synchronized methods. Synchronized methods calling nested 
 synchronized methods on the same object probably add some small 
 overhead. The inner next() calls isFilterDone(), which is also a 
 synchronized method. We should refactor the code to avoid these nested 
 synchronized methods.
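
To illustrate idea 4 above, a hedged sketch (class and field names are invented, not HBase's actual metric API): per-KV sizes are summed into a local variable and the shared AtomicLong is updated once per next() call, inside a finally block so the metric is still updated if the loop throws.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: batch per-KV size accounting into one shared-counter update.
public class MetricBatchSketch {
    static final AtomicLong SCAN_BYTES = new AtomicLong(); // stand-in for the region metric

    static void next(List<byte[]> kvs, List<byte[]> results) {
        long size = 0; // local accumulator: no contended update per KV
        try {
            for (byte[] kv : kvs) {
                results.add(kv);
                size += kv.length;
            }
        } finally {
            SCAN_BYTES.addAndGet(size); // single AtomicLong update per call
        }
    }

    public static void main(String[] args) {
        next(List.of(new byte[3], new byte[5]), new java.util.ArrayList<>());
        System.out.println(SCAN_BYTES.get()); // 8
    }
}
```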



[jira] [Created] (HBASE-6717) Remove hadoop-metrics.properties when everything has moved over.

2012-09-04 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-6717:


 Summary: Remove hadoop-metrics.properties when everything has 
moved over.
 Key: HBASE-6717
 URL: https://issues.apache.org/jira/browse/HBASE-6717
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark
Assignee: Elliott Clark






[jira] [Updated] (HBASE-6408) Naming and documenting of the hadoop-metrics2.properties file

2012-09-04 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6408:
-

Status: Patch Available  (was: Open)

 Naming and documenting of the hadoop-metrics2.properties file
 -

 Key: HBASE-6408
 URL: https://issues.apache.org/jira/browse/HBASE-6408
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Attachments: HBASE-6408-0.patch


 hadoop-metrics2.properties is currently where metrics2 loads its sinks.
 This file could be better named, e.g. hadoop-hbase-metrics2.properties.
 In addition, it needs examples like the current hadoop-metrics.properties has.



[jira] [Updated] (HBASE-6408) Naming and documenting of the hadoop-metrics2.properties file

2012-09-04 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6408:
-

Attachment: HBASE-6408-0.patch

Added some examples for the hadoop metrics2 properties. Also changed the name 
to hadoop-metrics2-hbase.properties.

 Naming and documenting of the hadoop-metrics2.properties file
 -

 Key: HBASE-6408
 URL: https://issues.apache.org/jira/browse/HBASE-6408
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Attachments: HBASE-6408-0.patch


 hadoop-metrics2.properties is currently where metrics2 loads its sinks.
 This file could be better named, e.g. hadoop-hbase-metrics2.properties.
 In addition, it needs examples like the current hadoop-metrics.properties has.



[jira] [Commented] (HBASE-6649) [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]

2012-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448117#comment-13448117
 ] 

Hadoop QA commented on HBASE-6649:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543752/6649-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2779//console

This message is automatically generated.

 [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]
 ---

 Key: HBASE-6649
 URL: https://issues.apache.org/jira/browse/HBASE-6649
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.92.3

 Attachments: 6649-1.patch, 6649-2.txt, HBase-0.92 #495 test - 
 queueFailover [Jenkins].html, HBase-0.92 #502 test - queueFailover 
 [Jenkins].html


 Have seen it twice in the recent past: http://bit.ly/MPCykB and 
 http://bit.ly/O79Dq7.
 Looking briefly at the logs hints at a pattern: in both failed test 
 instances, there was an RS crash while the test was running.



[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448122#comment-13448122
 ] 

Hudson commented on HBASE-6711:
---

Integrated in HBase-TRUNK #3302 (See 
[https://builds.apache.org/job/HBase-TRUNK/3302/])
HBASE-6711 Avoid local results copy in StoreScanner (Revision 1380868)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.94-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.



[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448125#comment-13448125
 ] 

Hudson commented on HBASE-6711:
---

Integrated in HBase-0.94 #448 (See 
[https://builds.apache.org/job/HBase-0.94/448/])
HBASE-6711 Avoid local results copy in StoreScanner (Revision 1380867)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.94-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding each KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. We can use a simple int instead.



[jira] [Commented] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448126#comment-13448126
 ] 

Anil Gupta commented on HBASE-6669:
---

[~ted_yu]

This time I created the patch from hbase-server/src/main/java. I hope it's 
OK this time. Sorry, this is the first time I am submitting a patch.
I changed 0.0D/0.0D to Double.NaN in divideForAvg(). Is this fine? Should I 
create a separate class for the unit tests or put my test cases in 
TestAggregateProtocol?

Thanks,
Anil Gupta
Software Engineer II, Intuit, Inc

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Commented] (HBASE-3976) Disable Block Cache On Compactions

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448131#comment-13448131
 ] 

Lars Hofhansl commented on HBASE-3976:
--

Any comment on my comment? :)
The use case we're most interested in is transferring the hotness of the 
memstore to the block cache (i.e., cache on flush).
If there's interest, I'll look into implementing the flags I mentioned in my 
previous comment.
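
A hedged sketch of the kind of flags Lars describes (class and method names are invented; this is not HBase's actual configuration API): a per-family policy that caches blocks written on flush, to carry the memstore's hotness into the block cache, while skipping blocks written during compaction.

```java
// Illustrative per-family cache-on-write policy: cache on flush, skip on compaction.
public class CacheOnWritePolicy {
    private final boolean cacheOnFlush;      // carry memstore hotness into the cache
    private final boolean cacheOnCompaction; // usually false, to protect hot blocks

    public CacheOnWritePolicy(boolean cacheOnFlush, boolean cacheOnCompaction) {
        this.cacheOnFlush = cacheOnFlush;
        this.cacheOnCompaction = cacheOnCompaction;
    }

    /** Decide whether a block produced by this kind of writer should be cached. */
    public boolean shouldCacheBlock(boolean writerIsCompaction) {
        return writerIsCompaction ? cacheOnCompaction : cacheOnFlush;
    }

    public static void main(String[] args) {
        CacheOnWritePolicy p = new CacheOnWritePolicy(true, false);
        System.out.println(p.shouldCacheBlock(false)); // true: flush writer
        System.out.println(p.shouldCacheBlock(true));  // false: compaction writer
    }
}
```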

 Disable Block Cache On Compactions
 --

 Key: HBASE-3976
 URL: https://issues.apache.org/jira/browse/HBASE-3976
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.90.3
Reporter: Karthick Sankarachary
Assignee: Mikhail Bautin
Priority: Minor
 Attachments: HBASE-3976.patch, HBASE-3976-unconditional.patch, 
 HBASE-3976-V3.patch


 Is there a good reason to believe that caching blocks during compactions is 
 beneficial? Currently, if block cache is enabled on a certain family, then 
 every time it's compacted, we load all of its blocks into the (LRU) cache, at 
 the expense of the legitimately hot ones.
 As a matter of fact, this concern was raised earlier in HBASE-1597, which 
 rightly points out that we should not bog down the LRU with unnecessary 
 blocks during compaction. Even though that issue has been marked as fixed, 
 it looks like it ought to be reopened.
 Should we err on the side of caution and not cache blocks during compactions, 
 period (as illustrated in the attached patch)? Or can we be selectively 
 aggressive about which blocks get cached during compaction (e.g., only 
 cache blocks from the recent files)?



[jira] [Updated] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated HBASE-6669:
--

Attachment: BigDecimalColumnInterpreter.patch

New Patch.

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch, BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 



[jira] [Commented] (HBASE-6669) Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient

2012-09-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448142#comment-13448142
 ] 

Ted Yu commented on HBASE-6669:
---

To generate a patch, from the root of your workspace, type:
{code}
svn diff 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java
{code}
Year is not needed for license:
{code}
+ * Copyright 2011 The Apache Software Foundation
{code}
Remove the following comment:
{code}
+// TODO Auto-generated method stub
{code}
Either move the return statement to the end of the if statement or enclose it 
in curly braces:
{code}
+if (val1 == null)
+  return null;
{code}
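For illustration, the braced form of that guard would read as follows (the surrounding method and its name are hypothetical, assuming BigDecimal operands as in the patch):

```java
import java.math.BigDecimal;

// Hypothetical helper illustrating the review comment: a single-statement
// if body should be wrapped in braces rather than dangling on its own line.
class NullGuard {
  static BigDecimal add(BigDecimal val1, BigDecimal val2) {
    if (val1 == null) {
      return null;  // braces make the guarded early return explicit
    }
    return val2 == null ? val1 : val1.add(val2);
  }
}
```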
The rest looks fine.
TestAggregateProtocol tests LongColumnInterpreter. You should create a new test 
file to test your class.

Thanks

 Add BigDecimalColumnInterpreter for doing aggregations using AggregationClient
 --

 Key: HBASE-6669
 URL: https://issues.apache.org/jira/browse/HBASE-6669
 Project: HBase
  Issue Type: New Feature
  Components: client, coprocessors
Reporter: Anil Gupta
Priority: Minor
  Labels: client, coprocessors
 Attachments: BigDecimalColumnInterpreter.java, 
 BigDecimalColumnInterpreter.patch, BigDecimalColumnInterpreter.patch


 I recently created a class for doing aggregations (sum, min, max, std) on 
 values stored as BigDecimal in HBase. I would like to commit the 
 BigDecimalColumnInterpreter into HBase. In my opinion this class can be used 
 by a wide variety of users. Please let me know if it's not appropriate to add 
 this class to HBase.
 Thanks,
 Anil Gupta
 Software Engineer II, Intuit, Inc 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3918) When assigning regions to an address, check the regionserver is actually online first

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448167#comment-13448167
 ] 

Lars Hofhansl commented on HBASE-3918:
--

Is this still an issue? Being able to copy an HBase root directory to another 
cluster and start HBase there seems important.

 When assigning regions to an address, check the regionserver is actually 
 online first
 -

 Key: HBASE-3918
 URL: https://issues.apache.org/jira/browse/HBASE-3918
 Project: HBase
  Issue Type: Bug
Reporter: stack

 This one came up in the case where the data was copied from one cluster to 
 another.  The first cluster was running 0.89.x.  The second 0.90.x.  On 
 startup of 0.90.x, it wanted to verify .META. was in the location -ROOT- said 
 it was at, so it tried to connect to the FIRST cluster.  The attempt failed 
 because of mismatched RPCs.  The master then actually aborted.
 {code}
 org.apache.hadoop.hbase.ipc.HBaseRPC$VersionMismatch: Protocol 
 org.apache.hadoop.hbase.ipc.HRegionInterface version mismatch. (client = 27, 
 server = 24)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:424)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:349)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:965)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getCachedConnection(CatalogTracker.java:386)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:285)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.verifyMetaRegionLocation(CatalogTracker.java:486)
 at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:442)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:389)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:283)
 2011-05-23 22:38:07,720 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6691) HFile quarantine fails with missing files in hadoop 2.0

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448174#comment-13448174
 ] 

Hudson commented on HBASE-6691:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #159 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/159/])
HBASE-6691 HFile quarantine fails with missing files in hadoop 2.0 
(Revision 1380790)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java


 HFile quarantine fails with missing files in hadoop 2.0
 ---

 Key: HBASE-6691
 URL: https://issues.apache.org/jira/browse/HBASE-6691
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hbck
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.96.0

 Attachments: 6691.patch, hbase-6691-v2.patch


 Trunk/0.96 has a specific issue, mentioned in HBASE-6686, when run against 
 hadoop 2.0. This patch addresses that problem.
 {code}
 2012-08-29 12:55:26,031 ERROR [IPC Server handler 0 on 41070] 
 security.UserGroupInformation(1235): PriviledgedActionException as:jon 
 (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
 2012-08-29 12:55:26,085 WARN  [Thread-2994] hbck.HFileCorruptionChecker(253): 
 Failed to quaratine an HFile in regiondir 
 hdfs://localhost:41070/user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy23.getBlockLocations(Unknown Source)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:882)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:152)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:112)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:955)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:664)
   at 
 org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:575)
   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:605)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:94)
   at 
 org.apache.hadoop.hbase.util.TestHBaseFsck$1$1.checkHFile(TestHBaseFsck.java:1401)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:175)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:208)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:290)
   at 
 org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:281)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:261)
   ... 27 more
 Caused by: java.io.FileNotFoundException: File does not exist: 
 /user/jon/hbase/testQuarantineMissingHFile/4332ea87d02d33e443550537722ff4fc/fam/befbe65ff30e4a46866f04a5671f0e44
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1133)
   at 
 

[jira] [Commented] (HBASE-6516) hbck cannot detect any IOException while .tableinfo file is missing

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448173#comment-13448173
 ] 

Hudson commented on HBASE-6516:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #159 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/159/])
HBASE-6516 hbck cannot detect any IOException while .tableinfo file is 
missing (Jie Huang) (Revision 1380761)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 hbck cannot detect any IOException while .tableinfo file is missing
 -

 Key: HBASE-6516
 URL: https://issues.apache.org/jira/browse/HBASE-6516
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.0, 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang
 Fix For: 0.92.2, 0.96.0, 0.94.2

 Attachments: hbase-6516-94-v5.patch, hbase-6516.patch, 
 hbase-6516-v2.patch, hbase-6516-v3.patch, hbase-6516-v4.patch, 
 hbase-6516-v5a.patch, hbase-6516-v5.patch


 HBaseFsck checks for missing .tableinfo files in the loadHdfsRegionInfos() 
 function. However, no IOException will be caught while .tableinfo is 
 missing, since FSTableDescriptors.getTableDescriptor doesn't throw any 
 IOException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448175#comment-13448175
 ] 

Hudson commented on HBASE-6711:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #159 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/159/])
HBASE-6711 Avoid local results copy in StoreScanner (Revision 1380868)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.94-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding the KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. A simple int can be used instead.
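The change can be sketched in isolation like this (hypothetical names and a String stand-in for KeyValue; not the actual StoreScanner code):

```java
import java.util.List;

// Sketch: bound the number of results with a plain int counter and append
// directly to the caller's result list, instead of first filling a temporary
// ArrayList and then copying its entries over.
class ScanSketch {
  static int nextInto(List<String> results, List<String> kvs, int limit) {
    int count = 0;  // replaces the temporary ArrayList used only for sizing
    for (String kv : kvs) {
      if (count >= limit) {
        break;           // same OOM guard the temporary list's size provided
      }
      results.add(kv);   // no second copy into the final result list
      count++;
    }
    return count;
  }
}
```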

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3913) Expose ColumnPaginationFilter to the Thrift Server

2012-09-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-3913:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

I believe this was done more generally in HBASE-4176.

 Expose ColumnPaginationFilter to the Thrift Server
 --

 Key: HBASE-3913
 URL: https://issues.apache.org/jira/browse/HBASE-3913
 Project: HBase
  Issue Type: New Feature
  Components: thrift
Reporter: Matthew Ward
Priority: Minor
  Labels: filter, thrift
 Attachments: YF-3913.patch


 Expose the ColumnPaginationFilter to the thrift server by implementing the 
 following methods:
 public List&lt;TRowResult&gt; getRowWithColumnsPaginated(byte[] tableName, byte[] 
 row, List&lt;byte[]&gt; columns, int limit, int offset);
 public List&lt;TRowResult&gt; getRowWithColumnsTsPaginated(byte[] tableName, byte[] 
 row, List&lt;byte[]&gt; columns, long timestamp, int limit, int offset);
 Also look into adding a thrift method for exposing the number of columns in 
 a particular row's family. 
 Original improvement Idea submitted on dev list and approved by Stack.
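The limit/offset semantics those methods would expose can be sketched generically (a hypothetical helper, not the Thrift API itself): skip `offset` columns, then return at most `limit` of the remainder, clamping rather than throwing when the offset runs past the end.

```java
import java.util.Collections;
import java.util.List;

// Sketch of column pagination with limit/offset, as the proposed Thrift
// methods would apply it to a row's columns.
class ColumnPage {
  static <T> List<T> page(List<T> columns, int limit, int offset) {
    if (offset >= columns.size() || limit <= 0) {
      return Collections.emptyList();  // offset beyond the row: empty page
    }
    int to = Math.min(offset + limit, columns.size());  // clamp the window
    return columns.subList(offset, to);
  }
}
```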

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6711) Avoid local results copy in StoreScanner

2012-09-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448195#comment-13448195
 ] 

stack commented on HBASE-6711:
--

+1 on patch

 Avoid local results copy in StoreScanner
 

 Key: HBASE-6711
 URL: https://issues.apache.org/jira/browse/HBASE-6711
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.2

 Attachments: 6711-0.94-v1.txt, 6711-0.96-v1.txt


 In StoreScanner the number of results is limited to avoid OOMs.
 However, this is done by first adding the KV into a local ArrayList and then 
 copying the entries in this list to the final result list.
 It turns out that this temporary list is only used to keep track of the size 
 of the result set in this loop. A simple int can be used instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3907) make it easier to add per-CF metrics; add some key per-CF metrics to start with

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448197#comment-13448197
 ] 

Lars Hofhansl commented on HBASE-3907:
--

@Kannan: Any plans of working on this? Should we keep it open?

 make it easier to add per-CF metrics; add some key per-CF metrics to start 
 with
 ---

 Key: HBASE-3907
 URL: https://issues.apache.org/jira/browse/HBASE-3907
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Kannan Muthukkaruppan
Assignee: Kannan Muthukkaruppan

 Add plumbing needed to add various types of per ColumnFamily metrics. And to 
 start with add a bunch per-CF metrics such as:
 1) Blocks read, cache hit, avg time of read for a column family.
 2) Similar stats for compaction related reads.
 3) Stats for meta block reads per CF
 4) Bloom Filter stats per CF
 etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6408) Naming and documenting of the hadoop-metrics2.properties file

2012-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448199#comment-13448199
 ] 

Hadoop QA commented on HBASE-6408:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543754/HBASE-6408-0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 110 warning 
messages.

-1 javac.  The applied patch generated 5 javac compiler warnings (more than 
the trunk's current 4 warnings).

-1 findbugs.  The patch appears to introduce 7 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.replication.TestReplication
  org.apache.hadoop.hbase.master.TestSplitLogManager

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2780//console

This message is automatically generated.

 Naming and documenting of the hadoop-metrics2.properties file
 -

 Key: HBASE-6408
 URL: https://issues.apache.org/jira/browse/HBASE-6408
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Attachments: HBASE-6408-0.patch


 hadoop-metrics2.properties is currently where metrics2 loads its sinks.
 This file could be better named hadoop-hbase-metrics2.properties.
 In addition, it needs examples like the current hadoop-metrics.properties has.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3891) TaskMonitor is used wrong in some places

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448201#comment-13448201
 ] 

Lars Hofhansl commented on HBASE-3891:
--

There seems to be little interest in this. Keep open?

 TaskMonitor is used wrong in some places
 

 Key: HBASE-3891
 URL: https://issues.apache.org/jira/browse/HBASE-3891
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.0
Reporter: Lars George
 Fix For: 0.96.0


 I have a long-running log replay in progress but none of the updates show. 
 This is caused by reusing the MonitorTask references incorrectly, and 
 manifests itself like this in the logs:
 {noformat}
 2011-05-16 15:22:18,127 WARN org.apache.hadoop.hbase.monitoring.TaskMonitor: 
 Status org.apache.hadoop.hbase.monitoring.MonitoredTaskImpl@51bfa303 appears 
 to have been leaked
 2011-05-16 15:22:18,128 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: cleanup.
 {noformat}
 The cleanup sets the completion timestamp and causes the task to be purged 
 from the list. After that the UI for example does not show any further 
 running tasks, although from the logs I can see (with my log additions):
 {noformat}
 2011-05-16 15:29:52,296 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: setStatus: Compaction 
 complete: 103.1m in 18542ms
 2011-05-16 15:29:52,296 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: setStatus: Running 
 coprocessor post-compact hooks
 2011-05-16 15:29:52,296 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: setStatus: Compaction 
 complete
 2011-05-16 15:29:52,297 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: markComplete: Compaction 
 complete
 {noformat}
 They are silently ignored as the TaskMonitor has dropped their reference. We 
 need to figure out why a supposedly completed task monitor was reused.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-3887) Add region deletion tool

2012-09-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-3887.
--

Resolution: Duplicate

Thanks for the script Ophir. I'm marking this as DUP of HBASE-5504. I linked 
back to this issue from there.

 Add region deletion tool
 

 Key: HBASE-3887
 URL: https://issues.apache.org/jira/browse/HBASE-3887
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Reporter: Ophir Cohen
Priority: Minor
 Attachments: online_delete.rb


 A region deletion tool can be very useful to remove large amount of data.
 For example, it can be used to remove all data older than specific date 
 (assuming your data sorted by dates) etc...
 This tool should be something as follows:
 Input: region key or (even better!) start and end key.
 1. Split region to isolate the keys.
 2. Disable the relevant regions.
 3. Delete files from the file system.
 4. Update .META. table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3886) HServerInfo (and ServerName) equate the same if the hostname and port are same even if IP has changed

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448207#comment-13448207
 ] 

Lars Hofhansl commented on HBASE-3886:
--

@Stack: Are you planning to work on this? Should we keep it open?

 HServerInfo (and ServerName) equate the same if the hostname and port are 
 same even if IP has changed
 -

 Key: HBASE-3886
 URL: https://issues.apache.org/jira/browse/HBASE-3886
 Project: HBase
  Issue Type: Improvement
Reporter: stack

 This is an interesting one.  HServerInfo is deprecated in TRUNK and replaced 
 effectively by a new class ServerName.  Both equate instances of HSI or SN if 
 the two instances have the same hostname and port.  Well, that's well and good 
 but what if we are getting signals from a server whose IP has changed?  In 
 this case, we'll see the server in its new location come in but we'll treat 
 it as though we'd seen it already, though its IP had changed.  We don't want 
 this.
 This facility is needed for the rare case where a server is moved from one IP 
 to another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-3918) When assigning regions to an address, check the regionserver is actually online first

2012-09-04 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-3918.
--

Resolution: Not A Problem

Resolving as 'not a problem'.  When I look at bulk assigning code that tries to 
retain assignments, I see that for each region we are to assign, we check that 
the ServerName is known -- that it is a server that has registered itself on 
cluster start:

{code}
int numRandomAssignments = 0;
int numRetainedAssigments = 0;
for (Map.Entry&lt;HRegionInfo, ServerName&gt; entry : regions.entrySet()) {
  HRegionInfo region = entry.getKey();
  ServerName oldServerName = entry.getValue();
  List&lt;ServerName&gt; localServers = new ArrayList&lt;ServerName&gt;();
  if (oldServerName != null) {
localServers = serversByHostname.get(oldServerName.getHostname());
  }
  if (localServers.isEmpty()) {
// No servers on the new cluster match up with this hostname,
// assign randomly.
ServerName randomServer = servers.get(RANDOM.nextInt(servers.size()));
...
{code}

We can open new issue if we run into this again.

 When assigning regions to an address, check the regionserver is actually 
 online first
 -

 Key: HBASE-3918
 URL: https://issues.apache.org/jira/browse/HBASE-3918
 Project: HBase
  Issue Type: Bug
Reporter: stack

 This one came up in the case where the data was copied from one cluster to 
 another.  The first cluster was running 0.89.x.  The second 0.90.x.  On 
 startup of 0.90.x, it wanted to verify .META. was in the location -ROOT- said 
 it was at, so it tried to connect to the FIRST cluster.  The attempt failed 
 because of mismatched RPCs.  The master then actually aborted.
 {code}
 org.apache.hadoop.hbase.ipc.HBaseRPC$VersionMismatch: Protocol 
 org.apache.hadoop.hbase.ipc.HRegionInterface version mismatch. (client = 27, 
 server = 24)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:424)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:349)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:965)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getCachedConnection(CatalogTracker.java:386)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:285)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.verifyMetaRegionLocation(CatalogTracker.java:486)
 at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:442)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:389)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:283)
 2011-05-23 22:38:07,720 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-3886) HServerInfo (and ServerName) equate the same if the hostname and port are same even if IP has changed

2012-09-04 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-3886.
--

Resolution: Won't Fix

Resolving.  I don't remember why I filed this originally.

If a server comes in w/ a new IP and it's 0.96, its ServerName should be 
different because it should have a different startcode.

If a server has its IP changed and it's still serving the regions it had before 
the IP change (and no restart), we shouldn't care that the IP changed.

 HServerInfo (and ServerName) equate the same if the hostname and port are 
 same even if IP has changed
 -

 Key: HBASE-3886
 URL: https://issues.apache.org/jira/browse/HBASE-3886
 Project: HBase
  Issue Type: Improvement
Reporter: stack

 This is an interesting one.  HServerInfo is deprecated in TRUNK and replaced 
 effectively by a new class ServerName.  Both equate instances of HSI or SN if 
 the two instances have the same hostname and port.  Well, that's well and good 
 but what if we are getting signals from a server whose IP has changed?  In 
 this case, we'll see the server in its new location come in but we'll treat 
 it as though we'd seen it already, though its IP had changed.  We don't want 
 this.
 This facility is needed for the rare case where a server is moved from one IP 
 to another.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3891) TaskMonitor is used wrong in some places

2012-09-04 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3891:
-

  Tags: noob
Labels: noob  (was: )

Keep it open I'd say.  Seems like a trivial bug.  Marking noob.

 TaskMonitor is used wrong in some places
 

 Key: HBASE-3891
 URL: https://issues.apache.org/jira/browse/HBASE-3891
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.0
Reporter: Lars George
  Labels: noob
 Fix For: 0.96.0


 I have a long-running log replay in progress but none of the updates show. 
 This is caused by reusing the MonitorTask references incorrectly, and 
 manifests itself like this in the logs:
 {noformat}
 2011-05-16 15:22:18,127 WARN org.apache.hadoop.hbase.monitoring.TaskMonitor: 
 Status org.apache.hadoop.hbase.monitoring.MonitoredTaskImpl@51bfa303 appears 
 to have been leaked
 2011-05-16 15:22:18,128 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: cleanup.
 {noformat}
 The cleanup sets the completion timestamp and causes the task to be purged 
 from the list. After that the UI for example does not show any further 
 running tasks, although from the logs I can see (with my log additions):
 {noformat}
 2011-05-16 15:29:52,296 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: setStatus: Compaction 
 complete: 103.1m in 18542ms
 2011-05-16 15:29:52,296 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: setStatus: Running 
 coprocessor post-compact hooks
 2011-05-16 15:29:52,296 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: setStatus: Compaction 
 complete
 2011-05-16 15:29:52,297 DEBUG 
 org.apache.hadoop.hbase.monitoring.MonitoredTask: markComplete: Compaction 
 complete
 {noformat}
 They are silently ignored as the TaskMonitor has dropped their reference. We 
 need to figure out why a supposedly completed task monitor was reused.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3886) HServerInfo (and ServerName) equate the same if the hostname and port are same even if IP has changed

2012-09-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448222#comment-13448222
 ] 

Lars Hofhansl commented on HBASE-3886:
--

Maybe the issue was a restarted server with a different IP but same name? In 
that case the master would still think the server has all these regions, when 
in fact it doesn't?

 HServerInfo (and ServerName) equate the same if the hostname and port are 
 same even if IP has changed
 -

 Key: HBASE-3886
 URL: https://issues.apache.org/jira/browse/HBASE-3886
 Project: HBase
  Issue Type: Improvement
Reporter: stack

 This is an interesting one.  HServerInfo is deprecated in TRUNK and replaced 
 effectively by a new class ServerName.  Both equate instances of HSI or SN if 
 the two instances have the same hostname and port.  Well, that's well and good, 
 but what if we are getting signals from a server whose IP has changed?  In 
 this case, we'll see the server come in at its new location, but we'll treat 
 it as though we'd seen it already, even though its IP has changed.  We don't 
 want this.
 This facility is needed for the rare case where a server is moved from one IP 
 to another.



[jira] [Resolved] (HBASE-3879) Add way to control the split position of region

2012-09-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-3879.
--

Resolution: Fixed

We have this now with RegionSplitPolicy

 Add way to control the split position of region
 ---

 Key: HBASE-3879
 URL: https://issues.apache.org/jira/browse/HBASE-3879
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Ophir Cohen
Priority: Minor

 Currently the split point is picked to be the mid key of the HFile. 
 From the code comments: 
 Midkey for this file.  We work with block boundaries only, so the
 returned midkey is an approximation only.
 It would be good to have a way to override the default behavior and use 
 another method.
 Possible implementation:
 Create interface (similar to Hadoop's partitioner) that gets StoreFile / 
 StoreFile.Reader and return the splitting point.
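 Such an interface could look roughly like this (a hypothetical sketch; 
 SplitPointPolicy and MidKeyPolicy are illustrative names, not HBase's actual 
 RegionSplitPolicy API, and the byte[]-based signature stands in for the real 
 StoreFile/StoreFile.Reader types):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical pluggable strategy, similar in spirit to Hadoop's Partitioner:
// given a store file's key range, decide where to split the region.
interface SplitPointPolicy {
    byte[] getSplitPoint(byte[] firstKey, byte[] lastKey, byte[] approximateMidKey);
}

// Default behavior as described above: just use the block-boundary
// midkey approximation the HFile already provides.
class MidKeyPolicy implements SplitPointPolicy {
    @Override
    public byte[] getSplitPoint(byte[] firstKey, byte[] lastKey, byte[] approximateMidKey) {
        return approximateMidKey;
    }
}

public class SplitPolicyDemo {
    public static void main(String[] args) {
        SplitPointPolicy policy = new MidKeyPolicy();
        byte[] split = policy.getSplitPoint(
                "aaa".getBytes(StandardCharsets.UTF_8),
                "zzz".getBytes(StandardCharsets.UTF_8),
                "mmm".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(split, StandardCharsets.UTF_8)); // mmm
    }
}
```

 A custom implementation could instead inspect the keys themselves, e.g. to 
 split on a tenant or date prefix boundary rather than the physical midpoint.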



[jira] [Resolved] (HBASE-6443) HLogSplitter should ignore 0 length files

2012-09-04 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HBASE-6443.


Resolution: Won't Fix

Won't fix this in the HBase layer.  It is kind of an HDFS issue.  If the hlog 
has size 0 but is not corrupted, HLogSplitter can handle it properly.  It is 
only when the hlog file is corrupted that HLogSplitter can't handle it.

 HLogSplitter should ignore 0 length files
 -

 Key: HBASE-6443
 URL: https://issues.apache.org/jira/browse/HBASE-6443
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0, 0.94.1


 Somehow, some WAL files have size 0. Distributed log splitting can't handle 
 it.
 HLogSplitter should ignore them.



[jira] [Comment Edited] (HBASE-6443) HLogSplitter should ignore 0 length files

2012-09-04 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13448226#comment-13448226
 ] 

Jimmy Xiang edited comment on HBASE-6443 at 9/5/12 11:07 AM:
-

Won't fix this in the HBase layer.  It is kind of an HDFS issue.  If the hlog 
has size 0 but is not corrupted, HLogSplitter can handle it properly.  It is 
only when the hlog file is corrupted that HLogSplitter can't handle it.

  was (Author: jxiang):
Won't fix is in HBase layer.  It is kind of a HDFS issue.  If the hlog has 
size 0, but not corrupted, HLogSplitter can handle it properly.  Only if the 
hlog file is corrupted, HLogSplitter can't handle it.
  
 HLogSplitter should ignore 0 length files
 -

 Key: HBASE-6443
 URL: https://issues.apache.org/jira/browse/HBASE-6443
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0, 0.94.1


 Somehow, some WAL files have size 0. Distributed log splitting can't handle 
 it.
 HLogSplitter should ignore them.


