[jira] [Updated] (HBASE-7437) Improve CompactSelection

2013-04-11 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-7437:
-

Attachment: HBASE-7437-V4.patch

Patch v4 from review board.

 Improve CompactSelection
 

 Key: HBASE-7437
 URL: https://issues.apache.org/jira/browse/HBASE-7437
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: Hiroshi Ikeda
Assignee: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-7437.patch, HBASE-7437-V2.patch, 
 HBASE-7437-V3.patch, HBASE-7437-V4.patch


 1. Using AtomicLong makes CompactSelection simpler and improves its performance.
 2. There are unused fields and methods.
 3. The fields should be private.
 4. The assertion in the method finishRequest seems wrong:
 {code}
 public void finishRequest() {
   if (isOffPeakCompaction) {
     long newValueToLog = -1;
     synchronized (compactionCountLock) {
       assert !isOffPeakCompaction : "Double-counting off-peak count for compaction";
 {code}
 The above assertion seems almost always false: it sits inside an if (isOffPeakCompaction) block, so it can only pass if another thread clears the flag in between.
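Point 1 can be sketched as follows. This is a minimal illustration, not the actual CompactSelection API: the class and method names (OffPeakCounter, startRequest, finishRequest) are hypothetical. The idea is that an AtomicLong replaces the flag/lock pair, so finishRequest needs neither a synchronized block nor the (inverted) assertion.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch of lock-free off-peak compaction counting with AtomicLong.
// Names are illustrative, not the real HBase CompactSelection fields.
public class OffPeakCounter {
    private static final AtomicLong numOutstandingOffPeakCompactions = new AtomicLong(0);

    // Starting a request increments the counter atomically; no lock needed.
    public static long startRequest() {
        return numOutstandingOffPeakCompactions.incrementAndGet();
    }

    // Finishing a request decrements it atomically.
    public static long finishRequest() {
        return numOutstandingOffPeakCompactions.decrementAndGet();
    }

    public static void main(String[] args) {
        startRequest();
        startRequest();
        finishRequest();
        System.out.println(numOutstandingOffPeakCompactions.get()); // prints 1
    }
}
```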

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8119) Optimize StochasticLoadBalancer

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628717#comment-13628717
 ] 

Hadoop QA commented on HBASE-8119:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578136/hbase-8119_v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 hadoop2.0{color}.  The patch failed to compile against the 
hadoop 2.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5257//console

This message is automatically generated.

 Optimize StochasticLoadBalancer
 ---

 Key: HBASE-8119
 URL: https://issues.apache.org/jira/browse/HBASE-8119
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.95.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.95.1

 Attachments: hbase-8119_v2.patch


 On a 5 node trunk cluster, I ran into a weird problem with 
 StochasticLoadBalancer:
 server1   Thu Mar 14 03:42:50 UTC 2013      0.0    33
 server2   Thu Mar 14 03:47:53 UTC 2013      0.0    34
 server3   Thu Mar 14 03:46:53 UTC 2013    465.0    42
 server4   Thu Mar 14 03:47:53 UTC 2013  11455.0   282
 server5   Thu Mar 14 03:47:53 UTC 2013      0.0    34
 Total: 5                                 11920    425
 Notice that server4 has 282 regions, while the others have far fewer. Also, 
 one table with 260 regions has become severely imbalanced:
 {code}
 Regions by Region Server
 Region Server Region Count
 http://server3:60030/ 10
 http://server4:60030/ 250
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7658) grant with an empty string as permission should throw an exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628722#comment-13628722
 ] 

Hudson commented on HBASE-7658:
---

Integrated in HBase-0.94 #955 (See 
[https://builds.apache.org/job/HBase-0.94/955/])
HBASE-7658 grant with an empty string as permission should throw an 
exception (Revision 1466723)

 Result = SUCCESS
mbertozzi : 
Files : 
* 
/hbase/branches/0.94/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
* /hbase/branches/0.94/src/main/ruby/hbase/security.rb
* /hbase/branches/0.94/src/main/ruby/shell/commands/grant.rb
* /hbase/branches/0.94/src/main/ruby/shell/commands/revoke.rb


 grant with an empty string as permission should throw an exception
 --

 Key: HBASE-7658
 URL: https://issues.apache.org/jira/browse/HBASE-7658
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.94.7, 0.95.1

 Attachments: HBASE-7658-0.94.patch, HBASE-7658-v0.patch, 
 HBASE-7658-v1.patch


 If someone specifies an empty permission
 {code}grant 'user', ''{code}
 AccessControlLists.addUserPermission() outputs a log message and doesn't 
 change the permission, but the user doesn't know about it.
 {code}
 if ((actions == null) || (actions.length == 0)) {
   LOG.warn("No actions associated with user '" + Bytes.toString(userPerm.getUser()) + "'");
   return;
 }
 {code}
 I think we should throw an exception instead of just logging.
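The proposed fail-fast behavior could look like the sketch below. The class and method names are hypothetical, and the simplified signature stands in for the actual AccessControlLists.addUserPermission() validation; only the throw-instead-of-log idea comes from the issue.

```java
// Hedged sketch: reject an empty permission list up front instead of only
// logging a warning the caller never sees.
public class PermissionValidator {
    public static void checkActions(byte[] user, byte[][] actions) {
        if (actions == null || actions.length == 0) {
            // Throwing surfaces the mistake to the shell user, e.g. grant 'user', ''
            throw new IllegalArgumentException(
                "No actions associated with user '" + new String(user) + "'");
        }
    }

    public static void main(String[] args) {
        try {
            checkActions("bob".getBytes(), new byte[0][]);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```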

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7824) Improve master start up time when there is log splitting work

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628721#comment-13628721
 ] 

Hudson commented on HBASE-7824:
---

Integrated in HBase-0.94 #955 (See 
[https://builds.apache.org/job/HBase-0.94/955/])
HBASE-7824 Improve master start up time when there is log splitting work 
(Jeffrey Zhong) (Revision 1466725)

 Result = SUCCESS
tedyu : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenMasterInitializing.java


 Improve master start up time when there is log splitting work
 -

 Key: HBASE-7824
 URL: https://issues.apache.org/jira/browse/HBASE-7824
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 0.94.7

 Attachments: hbase-7824.patch, hbase-7824-v10.patch, 
 hbase-7824_v2.patch, hbase-7824_v3.patch, hbase-7824-v7.patch, 
 hbase-7824-v8.patch, hbase-7824-v9.patch


 When there is log splitting work going on, master startup waits until all log 
 splitting completes, even though the log splitting has nothing to do with the 
 meta region servers.
 This is bad behavior: the master could be running while log splitting is 
 happening, yet its startup is blocked by the log splitting work.
 Since the master is a kind of single point of failure, we should start it ASAP.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628749#comment-13628749
 ] 

Hadoop QA commented on HBASE-7704:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578162/HBase-7704-v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5258//console

This message is automatically generated.

 migration tool that checks presence of HFile V1 files
 -

 Key: HBASE-7704
 URL: https://issues.apache.org/jira/browse/HBASE-7704
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Himanshu Vashishtha
Priority: Blocker
 Fix For: 0.95.1

 Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
 HBase-7704-v3.patch, HBase-7704-v4.patch


 Below was Stack's comment from HBASE-7660:
 Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
 imagine it as an addition to the hfile tool 
 http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
 bunch of args including printing out meta. We could add an option to print 
 out version only – or return 1 if version 1 or some such – and then do a bit 
 of code to just list all hfiles and run this script against each. Could MR it 
 if too many files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628750#comment-13628750
 ] 

Matt Corgan commented on HBASE-8317:


[~zjushch] thanks again for debugging - I hope it didn't take you all day.  
Your fix looks good.

It was due to missing test coverage that I added in the attached patch v1.  I 
was previously testing only the PrefixTreeArraySearcher.positionAtOrBefore 
method, but not positionAtOrAfter, which would have revealed the bug.

I modified TestPrefixTreeSearcher.testRandomSeekMisses() to test both methods.  
There is a simplified test data set in TestRowDataNub.java (it was already 
there) that fails without your fix and passes with it.

 Seek returns wrong result with PREFIX_TREE Encoding
 ---

 Key: HBASE-8317
 URL: https://issues.apache.org/jira/browse/HBASE-8317
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch


 TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
 the bug.
 An example of the bug case:
 Suppose the following rows:
 1.row3/c1:q1/
 2.row3/c1:q2/
 3.row3/c1:q3/
 4.row4/c1:q1/
 5.row4/c1:q2/
 After seeking the row 'row30', the expected peek KV is row4/c1:q1/, but the 
 actual one is row3/c1:q1/.
 I just fix this bug case in the patch. Maybe we can do more for other 
 potential problems if anyone is familiar with the code of PREFIX_TREE.
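The expected "at or after" seek semantics from the report can be sketched with plain strings standing in for encoded cells; the class and method below are illustrative, not the actual PrefixTreeArraySearcher API. Seeking 'row30' among the rows listed above should land on the first key >= the seek key, i.e. row4/c1:q1, not back on row3/c1:q1.

```java
import java.util.Arrays;
import java.util.List;

// Hedged sketch of positionAtOrAfter semantics over a sorted key list.
public class AtOrAfterSeek {
    static String positionAtOrAfter(List<String> sortedKeys, String seekKey) {
        for (String k : sortedKeys) {
            if (k.compareTo(seekKey) >= 0) return k;  // first key >= seekKey
        }
        return null;  // seek key is past the last key
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
            "row3/c1:q1", "row3/c1:q2", "row3/c1:q3", "row4/c1:q1", "row4/c1:q2");
        // '/' sorts before '0', so every row3/* key is < "row30" and the
        // correct answer is the first row4 key.
        System.out.println(positionAtOrAfter(rows, "row30"));  // prints row4/c1:q1
    }
}
```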

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Matt Corgan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Corgan updated HBASE-8317:
---

Attachment: HBASE-8317-v1.patch

 Seek returns wrong result with PREFIX_TREE Encoding
 ---

 Key: HBASE-8317
 URL: https://issues.apache.org/jira/browse/HBASE-8317
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch


 TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
 the bug.
 An example of the bug case:
 Suppose the following rows:
 1.row3/c1:q1/
 2.row3/c1:q2/
 3.row3/c1:q3/
 4.row4/c1:q1/
 5.row4/c1:q2/
 After seeking the row 'row30', the expected peek KV is row4/c1:q1/, but the 
 actual one is row3/c1:q1/.
 I just fix this bug case in the patch. Maybe we can do more for other 
 potential problems if anyone is familiar with the code of PREFIX_TREE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8220) can we record the count opened HTable for HTablePool

2013-04-11 Thread cuijianwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628751#comment-13628751
 ] 

cuijianwei commented on HBASE-8220:
---

[~Jean-Marc Spaggiari] does the newest patch test successfully on your side? 
Thanks.

 can we record the count opened HTable for HTablePool
 

 Key: HBASE-8220
 URL: https://issues.apache.org/jira/browse/HBASE-8220
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.3
Reporter: cuijianwei
 Attachments: HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt, 
 HBASE-8220-0.94.3.txt-v2, HBASE-8220-0.94.3-v2.txt, HBASE-8220-0.94.3-v3.txt, 
 HBASE-8220-0.94.3-v4.txt


 In HTablePool, we have a method getCurrentPoolSize(...) to get how many 
 opened HTables have been pooled. However, we don't know the 
 ConcurrentOpenedHTable count, i.e. the number of HTables obtained from 
 HTablePool.getTable(...) and not yet returned to HTablePool via 
 PooledTable.close(). This count may be meaningful because it indicates how 
 many HTables the application needs open, which may help us set an 
 appropriate MaxSize for the HTablePool. Therefore, we could add a 
 ConcurrentOpenedHTable counter to HTablePool.
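The proposed counter could be sketched as below. The class and method names are hypothetical, not HTablePool's real API; the point is simply to increment on checkout and decrement on return, so the current value is the number of tables in use.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch: count tables handed out by getTable() and not yet
// returned via close(). Names are illustrative.
public class PoolCheckoutCounter {
    private final AtomicInteger concurrentOpened = new AtomicInteger(0);

    public void onGetTable()   { concurrentOpened.incrementAndGet(); }
    public void onCloseTable() { concurrentOpened.decrementAndGet(); }

    // Observing the peak of this value would suggest an appropriate
    // maximum pool size for the application.
    public int getConcurrentOpened() { return concurrentOpened.get(); }

    public static void main(String[] args) {
        PoolCheckoutCounter c = new PoolCheckoutCounter();
        c.onGetTable();
        c.onGetTable();
        c.onCloseTable();
        System.out.println(c.getConcurrentOpened()); // prints 1
    }
}
```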

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8256) Add category Flaky for tests which are flaky

2013-04-11 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628752#comment-13628752
 ] 

Nicolas Liochon commented on HBASE-8256:


bq. This could be related to SUREFIRE-862. Does our surefire have this fix?
No, the fix is from August '12, and our fork is from January '12.

 Add category Flaky for tests which are flaky
 

 Key: HBASE-8256
 URL: https://issues.apache.org/jira/browse/HBASE-8256
 Project: HBase
  Issue Type: Bug
  Components: build, test
Affects Versions: 0.98.0, 0.95.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: trunk-8256_v0.patch, trunk-8256_v1.patch


 To make the Jenkins build more useful, it is good to keep it blue/green. We 
 can mark those flaky tests as flaky and not run them by default. However, 
 people can still run them. We can also set up a Jenkins build just for those 
 flaky tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7636) TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628765#comment-13628765
 ] 

Hadoop QA commented on HBASE-7636:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578137/hbase-7636.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5259//console

This message is automatically generated.

 TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0
 -

 Key: HBASE-7636
 URL: https://issues.apache.org/jira/browse/HBASE-7636
 Project: HBase
  Issue Type: Sub-task
  Components: hadoop2, test
Affects Versions: 0.95.0
Reporter: Ted Yu
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-7636.patch


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:34,276 DEBUG 
 [MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
 client.HConnectionManager$HConnectionImplementation(956): Looked up root 
 region location, connection=hconnection 0x12f19fe; 
 serverName=juno.apache.org,55531,1358768819479
 2013-01-21 11:49:34,278 INFO  
 [MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
 catalog.CatalogTracker(576): Failed verification of .META.,,1 at 
 address=juno.apache.org,57582,1358768819456; 
 org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
 in the failed servers list: juno.apache.org/67.195.138.61:57582
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628764#comment-13628764
 ] 

Hadoop QA commented on HBASE-7122:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12578161/HBase-7122-95-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5260//console

This message is automatically generated.

 Proper warning message when opening a log file with no entries (idle cluster)
 -

 Key: HBASE-7122
 URL: https://issues.apache.org/jira/browse/HBASE-7122
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.2
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.95.1

 Attachments: HBase-7122-94.patch, HBase-7122-95.patch, 
 HBase-7122-95-v2.patch, HBase-7122-95-v3.patch, HBase-7122.patch, 
 HBASE-7122.v2.patch


 In case the cluster is idle and the log has rolled (offset at 0), 
 ReplicationSource tries to open the log and gets an EOF exception. This gets 
 printed every 10 seconds until an entry is inserted into the log.
 {code}
 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(487)) - Opening log for replication 
 c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(543)) - 1 Got: 
 java.io.EOFException
   at java.io.DataInputStream.readFully(DataInputStream.java:180)
   at java.io.DataInputStream.readFully(DataInputStream.java:152)
   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1475)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1470)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:55)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
   at 
 
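One possible shape for the proposed warning fix is sketched below. The class, method, and message strings are hypothetical, not ReplicationSource's actual fields: the idea is that an EOFException at offset 0 of a zero-length (freshly rolled, idle-cluster) log is an expected condition and deserves a quieter message than a repeated WARN with a stack trace.

```java
import java.io.EOFException;

// Hedged sketch: classify an EOF on the tailed WAL as either the expected
// empty-log case or a genuinely unexpected truncation.
public class EmptyLogCheck {
    static String classify(long fileLength, long position, EOFException e) {
        if (fileLength == 0 && position == 0) {
            // Expected on an idle cluster right after a log roll.
            return "DEBUG: log is empty (idle cluster), will retry";
        }
        // A non-empty log ending early is worth a real warning.
        return "WARN: unexpected EOF: " + e.getMessage();
    }

    public static void main(String[] args) {
        System.out.println(classify(0L, 0L, new EOFException()));
    }
}
```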

[jira] [Updated] (HBASE-8279) Performance Evaluation does not consider the args passed in case of more than one client

2013-04-11 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-8279:
--

Status: Patch Available  (was: Open)

 Performance Evaluation does not consider the args passed in case of more than 
 one client
 

 Key: HBASE-8279
 URL: https://issues.apache.org/jira/browse/HBASE-8279
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.98.0, 0.94.8, 0.95.1

 Attachments: HBASE-8279_1.patch, HBASE-8279.patch


 Performance evaluation provides a way to pass the table name.
 The table name is considered when we first initialize the table - the 
 disabling and creation of tables happen with the name that we pass.
 But the write and read tests again use only the default table, and so the 
 perf evaluation fails.
 I think the problem is like this
 {code}
  ./hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
 --table=MyTable2  --presplit=70 randomRead 2
 {code}
 {code}
 13/04/04 21:42:07 DEBUG hbase.HRegionInfo: Current INFO from scan results = 
 {NAME = 
 'MyTable2,0002067171,1365126124904.bc9e936f4f8ca8ee55eb90091d4a13b6.',
  STARTKEY = '0002067171', ENDKEY = '', ENCODED = 
 bc9e936f4f8ca8ee55eb90091d4a13b6,}
 13/04/04 21:42:07 INFO hbase.PerformanceEvaluation: Table created with 70 
 splits
 {code}
 You can see that the specified table is created with the splits.
 But when the read starts
 {code}
 Caused by: org.apache.hadoop.hbase.exceptions.TableNotFoundException: 
 TestTable
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1157)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1034)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:984)
 at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:246)
 at org.apache.hadoop.hbase.client.HTable.init(HTable.java:187)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$Test.testSetup(PerformanceEvaluation.java:851)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:869)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1495)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$1.run(PerformanceEvaluation.java:590)
 {code}
 It says TestTable, which is the default table, is not found.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8279) Performance Evaluation does not consider the args passed in case of more than one client

2013-04-11 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-8279:
--

Status: Open  (was: Patch Available)

 Performance Evaluation does not consider the args passed in case of more than 
 one client
 

 Key: HBASE-8279
 URL: https://issues.apache.org/jira/browse/HBASE-8279
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.98.0, 0.94.8, 0.95.1

 Attachments: HBASE-8279_1.patch, HBASE-8279.patch


 Performance evaluation provides a way to pass the table name.
 The table name is considered when we first initialize the table - the 
 disabling and creation of tables happen with the name that we pass.
 But the write and read tests again use only the default table, and so the 
 perf evaluation fails.
 I think the problem is like this
 {code}
  ./hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
 --table=MyTable2  --presplit=70 randomRead 2
 {code}
 {code}
 13/04/04 21:42:07 DEBUG hbase.HRegionInfo: Current INFO from scan results = 
 {NAME = 
 'MyTable2,0002067171,1365126124904.bc9e936f4f8ca8ee55eb90091d4a13b6.',
  STARTKEY = '0002067171', ENDKEY = '', ENCODED = 
 bc9e936f4f8ca8ee55eb90091d4a13b6,}
 13/04/04 21:42:07 INFO hbase.PerformanceEvaluation: Table created with 70 
 splits
 {code}
 You can see that the specified table is created with the splits.
 But when the read starts
 {code}
 Caused by: org.apache.hadoop.hbase.exceptions.TableNotFoundException: 
 TestTable
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1157)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1034)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:984)
 at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:246)
 at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:187)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$Test.testSetup(PerformanceEvaluation.java:851)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:869)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1495)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$1.run(PerformanceEvaluation.java:590)
 {code}
 It says TestTable, which is the default table, was not found.
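 The mismatch described above can be sketched in plain Java. This is an 
 illustrative model, not the actual PerformanceEvaluation code: the class and 
 method names (PerfEvalSketch, tableForTest) are assumptions, and only the 
 shape of the bug - the parsed --table value never reaching the read/write 
 tests - is taken from the report.
 {code:java}
 // Hypothetical sketch of the reported bug: --table is parsed and used for
 // table creation, but the tests keep using the hard-coded default name.
 public class PerfEvalSketch {
     static final String DEFAULT_TABLE = "TestTable";
     private String tableName = DEFAULT_TABLE;

     void parseArgs(String[] args) {
         for (String arg : args) {
             if (arg.startsWith("--table=")) {
                 tableName = arg.substring("--table=".length());
             }
         }
     }

     // Buggy behavior: the test setup ignores the parsed name.
     String tableForTestBuggy() { return DEFAULT_TABLE; }

     // Fixed behavior: the name used for creation is also used by the tests.
     String tableForTest() { return tableName; }

     public static void main(String[] args) {
         PerfEvalSketch pe = new PerfEvalSketch();
         pe.parseArgs(new String[] {"--nomapred", "--table=MyTable2", "--presplit=70"});
         System.out.println(pe.tableForTest());      // MyTable2
         System.out.println(pe.tableForTestBuggy()); // TestTable -> TableNotFoundException
     }
 }
 {code}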



[jira] [Updated] (HBASE-8279) Performance Evaluation does not consider the args passed in case of more than one client

2013-04-11 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-8279:
--

Attachment: HBASE-8279_1.patch

Patch updated as per Anoop's comment.

 Performance Evaluation does not consider the args passed in case of more than 
 one client
 

 Key: HBASE-8279
 URL: https://issues.apache.org/jira/browse/HBASE-8279
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.98.0, 0.94.8, 0.95.1

 Attachments: HBASE-8279_1.patch, HBASE-8279.patch


 PerformanceEvaluation provides an option to pass the table name.
 The table name is honored when we first initialize the table - the 
 disabling and creation of the table happen with the name that we pass.
 But the write and read tests still use only the default table, so the perf 
 evaluation fails.
 I think the problem is like this:
 {code}
  ./hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
 --table=MyTable2  --presplit=70 randomRead 2
 {code}
 {code}
 13/04/04 21:42:07 DEBUG hbase.HRegionInfo: Current INFO from scan results = 
 {NAME => 
 'MyTable2,0002067171,1365126124904.bc9e936f4f8ca8ee55eb90091d4a13b6.',
  STARTKEY => '0002067171', ENDKEY => '', ENCODED => 
 bc9e936f4f8ca8ee55eb90091d4a13b6,}
 13/04/04 21:42:07 INFO hbase.PerformanceEvaluation: Table created with 70 
 splits
 {code}
 You can see that the specified table is created with the splits.
 But when the read starts
 {code}
 Caused by: org.apache.hadoop.hbase.exceptions.TableNotFoundException: 
 TestTable
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1157)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1034)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:984)
 at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:246)
 at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:187)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$Test.testSetup(PerformanceEvaluation.java:851)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:869)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1495)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation$1.run(PerformanceEvaluation.java:590)
 {code}
 It says TestTable, which is the default table, was not found.



[jira] [Updated] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-8317:


Status: Patch Available  (was: Open)

 Seek returns wrong result with PREFIX_TREE Encoding
 ---

 Key: HBASE-8317
 URL: https://issues.apache.org/jira/browse/HBASE-8317
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch


 TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
 the bug.
 An example of the bug case:
 Suppose the following rows:
 1.row3/c1:q1/
 2.row3/c1:q2/
 3.row3/c1:q3/
 4.row4/c1:q1/
 5.row4/c1:q2/
 After seeking to the row 'row30', the expected peek KV is row4/c1:q1/, but 
 the actual one is row3/c1:q1/.
 I just fix this bug case in the patch. 
 Maybe we can do more for other potential problems if anyone is familiar with 
 the code of PREFIX_TREE.



[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628773#comment-13628773
 ] 

chunhui shen commented on HBASE-8317:
-

[~mcorgan]
Thanks for confirming it.

Run HadoopQA first...

 Seek returns wrong result with PREFIX_TREE Encoding
 ---

 Key: HBASE-8317
 URL: https://issues.apache.org/jira/browse/HBASE-8317
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch


 TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
 the bug.
 An example of the bug case:
 Suppose the following rows:
 1.row3/c1:q1/
 2.row3/c1:q2/
 3.row3/c1:q3/
 4.row4/c1:q1/
 5.row4/c1:q2/
 After seeking to the row 'row30', the expected peek KV is row4/c1:q1/, but 
 the actual one is row3/c1:q1/.
 I just fix this bug case in the patch. 
 Maybe we can do more for other potential problems if anyone is familiar with 
 the code of PREFIX_TREE.



[jira] [Commented] (HBASE-7437) Improve CompactSelection

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628774#comment-13628774
 ] 

Hadoop QA commented on HBASE-7437:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578164/HBASE-7437-V4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.wal.TestHLog

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5261//console

This message is automatically generated.

 Improve CompactSelection
 

 Key: HBASE-7437
 URL: https://issues.apache.org/jira/browse/HBASE-7437
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: Hiroshi Ikeda
Assignee: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-7437.patch, HBASE-7437-V2.patch, 
 HBASE-7437-V3.patch, HBASE-7437-V4.patch


 1. Using AtomicLong makes CompactSelection simpler and improves its 
 performance.
 2. There are unused fields and methods.
 3. The fields should be private.
 4. The assertion in the method finishRequest seems wrong:
 {code}
 public void finishRequest() {
   if (isOffPeakCompaction) {
     long newValueToLog = -1;
     synchronized (compactionCountLock) {
       assert !isOffPeakCompaction : "Double-counting off-peak count for compaction";
 {code}
 The above assertion is almost always false, because the enclosing branch is 
 only entered when isOffPeakCompaction is true.
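 A minimal sketch of the direction point 1 suggests - replacing the lock-guarded 
 counter and flag with an AtomicLong. The class and method names here 
 (OffPeakTracker, beginOffPeakRequest) are illustrative assumptions, not the 
 actual patch:
 {code:java}
 import java.util.concurrent.atomic.AtomicLong;

 // Sketch: an atomic counter makes finishRequest idempotent without a lock,
 // so the double-counting the broken assert tried to guard against cannot occur.
 public class OffPeakTracker {
     private static final AtomicLong offPeakCount = new AtomicLong();
     private boolean isOffPeak;  // per-selection flag

     boolean beginOffPeakRequest() {
         isOffPeak = true;
         return offPeakCount.incrementAndGet() == 1;  // first off-peak compaction?
     }

     void finishRequest() {
         if (isOffPeak) {
             isOffPeak = false;               // clear the flag first so a second
             offPeakCount.decrementAndGet();  // finishRequest() is a no-op
         }
     }

     static long current() { return offPeakCount.get(); }

     public static void main(String[] args) {
         OffPeakTracker t = new OffPeakTracker();
         t.beginOffPeakRequest();
         t.finishRequest();
         t.finishRequest();  // double finish no longer trips an assertion
         System.out.println(current());  // 0
     }
 }
 {code}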



[jira] [Commented] (HBASE-8279) Performance Evaluation does not consider the args passed in case of more than one client

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628794#comment-13628794
 ] 

Hadoop QA commented on HBASE-8279:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578179/HBASE-8279_1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFromClientSide

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5263//console

This message is automatically generated.

 Performance Evaluation does not consider the args passed in case of more than 
 one client
 

 Key: HBASE-8279
 URL: https://issues.apache.org/jira/browse/HBASE-8279
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.98.0, 0.94.8, 0.95.1

 Attachments: HBASE-8279_1.patch, HBASE-8279.patch


 PerformanceEvaluation provides an option to pass the table name.
 The table name is honored when we first initialize the table - the 
 disabling and creation of the table happen with the name that we pass.
 But the write and read tests still use only the default table, so the perf 
 evaluation fails.
 I think the problem is like this:
 {code}
  ./hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
 --table=MyTable2  --presplit=70 randomRead 2
 {code}
 {code}
 13/04/04 21:42:07 DEBUG hbase.HRegionInfo: Current INFO from scan results = 
 {NAME => 
 'MyTable2,0002067171,1365126124904.bc9e936f4f8ca8ee55eb90091d4a13b6.',
  STARTKEY => '0002067171', ENDKEY => '', ENCODED => 
 bc9e936f4f8ca8ee55eb90091d4a13b6,}
 13/04/04 21:42:07 INFO hbase.PerformanceEvaluation: Table created with 70 
 splits
 {code}
 You can see that the specified table is created with the splits.
 But when the read starts
 {code}
 Caused by: org.apache.hadoop.hbase.exceptions.TableNotFoundException: 
 TestTable
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1157)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1034)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:984)
 at 

[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628798#comment-13628798
 ] 

Hadoop QA commented on HBASE-8317:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578176/HBASE-8317-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5262//console

This message is automatically generated.

 Seek returns wrong result with PREFIX_TREE Encoding
 ---

 Key: HBASE-8317
 URL: https://issues.apache.org/jira/browse/HBASE-8317
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch


 TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
 the bug.
 An example of the bug case:
 Suppose the following rows:
 1.row3/c1:q1/
 2.row3/c1:q2/
 3.row3/c1:q3/
 4.row4/c1:q1/
 5.row4/c1:q2/
 After seeking to the row 'row30', the expected peek KV is row4/c1:q1/, but 
 the actual one is row3/c1:q1/.
 I just fix this bug case in the patch. 
 Maybe we can do more for other potential problems if anyone is familiar with 
 the code of PREFIX_TREE.



[jira] [Updated] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread zavakid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zavakid updated HBASE-8325:
---

Description: 
I'm using the replication feature of HBase in my test environment.

When a ReplicationSource opens an empty HLog, an EOFException is thrown. Should 
we detect the empty file and process it, like we process the 
FileNotFoundException?

here's the code:
{code:java}
/**
 * Open a reader on the current path
 *
 * @param sleepMultiplier by how many times the default sleeping time is
 * augmented
 * @return true if we should continue with that file, false if we are over
 * with it
 */
protected boolean openReader(int sleepMultiplier) {
  try {
    LOG.debug("Opening log for replication " + this.currentPath.getName() +
        " at " + this.repLogReader.getPosition());
    try {
      this.reader = repLogReader.openReader(this.currentPath);
    } catch (FileNotFoundException fnfe) {
      if (this.queueRecovered) {
        // We didn't find the log in the archive directory, look if it still
        // exists in the dead RS folder (there could be a chain of failures
        // to look at)
        LOG.info("NB dead servers : " + deadRegionServers.length);
        for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
          Path deadRsDirectory =
              new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
          Path[] locs = new Path[] {
              new Path(deadRsDirectory, currentPath.getName()),
              new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
                  currentPath.getName()),
          };
          for (Path possibleLogLocation : locs) {
            LOG.info("Possible location " + possibleLogLocation.toUri().toString());
            if (this.manager.getFs().exists(possibleLogLocation)) {
              // We found the right new location
              LOG.info("Log " + this.currentPath + " still exists at " +
                  possibleLogLocation);
              // Breaking here will make us sleep since reader is null
              return true;
            }
          }
        }
        // TODO What happens if the log was missing from every single location?
        // Although we need to check a couple of times as the log could have
        // been moved by the master between the checks
        // It can also happen if a recovered queue wasn't properly cleaned,
        // such that the znode pointing to a log exists but the log was
        // deleted a long time ago.
        // For the moment, we'll throw the IO and processEndOfFile
        throw new IOException("File from recovered queue is " +
            "nowhere to be found", fnfe);
      } else {
        // If the log was archived, continue reading from there
        Path archivedLogLocation =
            new Path(manager.getOldLogDir(), currentPath.getName());
        if (this.manager.getFs().exists(archivedLogLocation)) {
          currentPath = archivedLogLocation;
          LOG.info("Log " + this.currentPath + " was moved to " +
              archivedLogLocation);
          // Open the log at the new location
          this.openReader(sleepMultiplier);
        }
        // TODO What happens the log is missing in both places?
      }
    }
  } catch (IOException ioe) {
    LOG.warn(peerClusterZnode + " Got: ", ioe);
    this.reader = null;
    // TODO Need a better way to determinate if a file is really gone but
    // TODO without scanning all logs dir
    if (sleepMultiplier == this.maxRetriesMultiplier) {
      LOG.warn("Waited too long for this file, considering dumping");
      return !processEndOfFile();
    }
  }
  return true;
}
{code}

I found the TODO comment: // TODO What happens the log is missing in both places?
Maybe we need to handle this case too?

  was:
I'm using the replication feature of HBase in my test environment.

When a ReplicationSource opens an empty HLog, an EOFException is thrown. Should 
we detect the empty file and process it, like we process the 
FileNotFoundException?

here's the code:
```
/**
 * Open a reader on the current path
 *
 * @param sleepMultiplier by how many times the default sleeping time is
 * augmented
 * @return true if we should continue with that file, false if we are over
 * with it
 */
protected boolean openReader(int sleepMultiplier) {
  try {
    LOG.debug("Opening log for replication " + this.currentPath.getName() +
        " at " + this.repLogReader.getPosition());
    try {
      this.reader = repLogReader.openReader(this.currentPath);
    } catch (FileNotFoundException fnfe) {
      if (this.queueRecovered) {
        // We didn't find the log in the archive directory, look if it still
        // exists in the dead RS folder (there could be a chain of failures
        // to look at)
  

[jira] [Created] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread zavakid (JIRA)
zavakid created HBASE-8325:
--

 Summary: ReplicationSource read a empty HLog throws EOFException
 Key: HBASE-8325
 URL: https://issues.apache.org/jira/browse/HBASE-8325
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.5
 Environment: replication enabled
Reporter: zavakid
Priority: Critical


I'm using the replication feature of HBase in my test environment.

When a ReplicationSource opens an empty HLog, an EOFException is thrown. Should 
we detect the empty file and process it, like we process the 
FileNotFoundException?

here's the code:
```
/**
 * Open a reader on the current path
 *
 * @param sleepMultiplier by how many times the default sleeping time is
 * augmented
 * @return true if we should continue with that file, false if we are over
 * with it
 */
protected boolean openReader(int sleepMultiplier) {
  try {
    LOG.debug("Opening log for replication " + this.currentPath.getName() +
        " at " + this.repLogReader.getPosition());
    try {
      this.reader = repLogReader.openReader(this.currentPath);
    } catch (FileNotFoundException fnfe) {
      if (this.queueRecovered) {
        // We didn't find the log in the archive directory, look if it still
        // exists in the dead RS folder (there could be a chain of failures
        // to look at)
        LOG.info("NB dead servers : " + deadRegionServers.length);
        for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
          Path deadRsDirectory =
              new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
          Path[] locs = new Path[] {
              new Path(deadRsDirectory, currentPath.getName()),
              new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
                  currentPath.getName()),
          };
          for (Path possibleLogLocation : locs) {
            LOG.info("Possible location " + possibleLogLocation.toUri().toString());
            if (this.manager.getFs().exists(possibleLogLocation)) {
              // We found the right new location
              LOG.info("Log " + this.currentPath + " still exists at " +
                  possibleLogLocation);
              // Breaking here will make us sleep since reader is null
              return true;
            }
          }
        }
        // TODO What happens if the log was missing from every single location?
        // Although we need to check a couple of times as the log could have
        // been moved by the master between the checks
        // It can also happen if a recovered queue wasn't properly cleaned,
        // such that the znode pointing to a log exists but the log was
        // deleted a long time ago.
        // For the moment, we'll throw the IO and processEndOfFile
        throw new IOException("File from recovered queue is " +
            "nowhere to be found", fnfe);
      } else {
        // If the log was archived, continue reading from there
        Path archivedLogLocation =
            new Path(manager.getOldLogDir(), currentPath.getName());
        if (this.manager.getFs().exists(archivedLogLocation)) {
          currentPath = archivedLogLocation;
          LOG.info("Log " + this.currentPath + " was moved to " +
              archivedLogLocation);
          // Open the log at the new location
          this.openReader(sleepMultiplier);
        }
        // TODO What happens the log is missing in both places?
      }
    }
  } catch (IOException ioe) {
    LOG.warn(peerClusterZnode + " Got: ", ioe);
    this.reader = null;
    // TODO Need a better way to determinate if a file is really gone but
    // TODO without scanning all logs dir
    if (sleepMultiplier == this.maxRetriesMultiplier) {
      LOG.warn("Waited too long for this file, considering dumping");
      return !processEndOfFile();
    }
  }
  return true;
}
``` 

I found the TODO comment: // TODO What happens the log is missing in both places?
Maybe we need to handle this case too?



[jira] [Updated] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread zavakid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zavakid updated HBASE-8325:
---

Description: 
I'm using the replication feature of HBase in my test environment.

When a ReplicationSource opens an empty HLog, an EOFException is thrown. 
This is because the Reader tries to read the SequenceFile's metadata, but 
there is no data at all, so it throws the EOFException.
Should we detect the empty file and process it, like we process the 
FileNotFoundException?

here's the code:
{code:java}
/**
 * Open a reader on the current path
 *
 * @param sleepMultiplier by how many times the default sleeping time is
 * augmented
 * @return true if we should continue with that file, false if we are over
 * with it
 */
protected boolean openReader(int sleepMultiplier) {
  try {
    LOG.debug("Opening log for replication " + this.currentPath.getName() +
        " at " + this.repLogReader.getPosition());
    try {
      this.reader = repLogReader.openReader(this.currentPath);
    } catch (FileNotFoundException fnfe) {
      if (this.queueRecovered) {
        // We didn't find the log in the archive directory, look if it still
        // exists in the dead RS folder (there could be a chain of failures
        // to look at)
        LOG.info("NB dead servers : " + deadRegionServers.length);
        for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
          Path deadRsDirectory =
              new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
          Path[] locs = new Path[] {
              new Path(deadRsDirectory, currentPath.getName()),
              new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
                  currentPath.getName()),
          };
          for (Path possibleLogLocation : locs) {
            LOG.info("Possible location " + possibleLogLocation.toUri().toString());
            if (this.manager.getFs().exists(possibleLogLocation)) {
              // We found the right new location
              LOG.info("Log " + this.currentPath + " still exists at " +
                  possibleLogLocation);
              // Breaking here will make us sleep since reader is null
              return true;
            }
          }
        }
        // TODO What happens if the log was missing from every single location?
        // Although we need to check a couple of times as the log could have
        // been moved by the master between the checks
        // It can also happen if a recovered queue wasn't properly cleaned,
        // such that the znode pointing to a log exists but the log was
        // deleted a long time ago.
        // For the moment, we'll throw the IO and processEndOfFile
        throw new IOException("File from recovered queue is " +
            "nowhere to be found", fnfe);
      } else {
        // If the log was archived, continue reading from there
        Path archivedLogLocation =
            new Path(manager.getOldLogDir(), currentPath.getName());
        if (this.manager.getFs().exists(archivedLogLocation)) {
          currentPath = archivedLogLocation;
          LOG.info("Log " + this.currentPath + " was moved to " +
              archivedLogLocation);
          // Open the log at the new location
          this.openReader(sleepMultiplier);
        }
        // TODO What happens the log is missing in both places?
      }
    }
  } catch (IOException ioe) {
    LOG.warn(peerClusterZnode + " Got: ", ioe);
    this.reader = null;
    // TODO Need a better way to determinate if a file is really gone but
    // TODO without scanning all logs dir
    if (sleepMultiplier == this.maxRetriesMultiplier) {
      LOG.warn("Waited too long for this file, considering dumping");
      return !processEndOfFile();
    }
  }
  return true;
}
{code}

I found the TODO comment: // TODO What happens the log is missing in both places?
Maybe we need to handle this case too?

  was:
I'm using the replication feature of HBase in my test environment.

When a ReplicationSource opens an empty HLog, an EOFException is thrown. Should 
we detect the empty file and process it, like we process the 
FileNotFoundException?

here's the code:
{code:java}
/**
 * Open a reader on the current path
 *
 * @param sleepMultiplier by how many times the default sleeping time is
 * augmented
 * @return true if we should continue with that file, false if we are over
 * with it
 */
protected boolean openReader(int sleepMultiplier) {
  try {
    LOG.debug("Opening log for replication " + this.currentPath.getName() +
        " at " + this.repLogReader.getPosition());
    try {
      this.reader = repLogReader.openReader(this.currentPath);
    } catch (FileNotFoundException fnfe) {
      if (this.queueRecovered) {
        // We didn't find the log in the archive directory, 

[jira] [Updated] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread zavakid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zavakid updated HBASE-8325:
---

Description: 
I'm using the replication feature of HBase in my test environment.

When a ReplicationSource opens an empty HLog, an EOFException is thrown. 
This is because the Reader tries to read the SequenceFile's metadata, but 
there is no data at all, so it throws the EOFException.
Should we detect the empty file and process it, like we process the 
FileNotFoundException?

here's the code:
{code:java}
/**
   * Open a reader on the current path
   *
   * @param sleepMultiplier by how many times the default sleeping time is augmented
   * @return true if we should continue with that file, false if we are over with it
   */
  protected boolean openReader(int sleepMultiplier) {
    try {
      LOG.debug("Opening log for replication " + this.currentPath.getName() +
          " at " + this.repLogReader.getPosition());
      try {
        this.reader = repLogReader.openReader(this.currentPath);
      } catch (FileNotFoundException fnfe) {
        if (this.queueRecovered) {
          // We didn't find the log in the archive directory, look if it still
          // exists in the dead RS folder (there could be a chain of failures
          // to look at)
          LOG.info("NB dead servers : " + deadRegionServers.length);
          for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {

            Path deadRsDirectory =
                new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
            Path[] locs = new Path[] {
                new Path(deadRsDirectory, currentPath.getName()),
                new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
                    currentPath.getName()),
            };
            for (Path possibleLogLocation : locs) {
              LOG.info("Possible location " + possibleLogLocation.toUri().toString());
              if (this.manager.getFs().exists(possibleLogLocation)) {
                // We found the right new location
                LOG.info("Log " + this.currentPath + " still exists at " +
                    possibleLogLocation);
                // Breaking here will make us sleep since reader is null
                return true;
              }
            }
          }
          // TODO What happens if the log was missing from every single location?
          // Although we need to check a couple of times as the log could have
          // been moved by the master between the checks
          // It can also happen if a recovered queue wasn't properly cleaned,
          // such that the znode pointing to a log exists but the log was
          // deleted a long time ago.
          // For the moment, we'll throw the IO and processEndOfFile
          throw new IOException("File from recovered queue is " +
              "nowhere to be found", fnfe);
        } else {
          // If the log was archived, continue reading from there
          Path archivedLogLocation =
              new Path(manager.getOldLogDir(), currentPath.getName());
          if (this.manager.getFs().exists(archivedLogLocation)) {
            currentPath = archivedLogLocation;
            LOG.info("Log " + this.currentPath + " was moved to " +
                archivedLogLocation);
            // Open the log at the new location
            this.openReader(sleepMultiplier);
          }
          // TODO What happens the log is missing in both places?
        }
      }
    } catch (IOException ioe) {
      LOG.warn(peerClusterZnode + " Got: ", ioe);
      this.reader = null;
      // TODO Need a better way to determinate if a file is really gone but
      // TODO without scanning all logs dir
      if (sleepMultiplier == this.maxRetriesMultiplier) {
        LOG.warn("Waited too long for this file, considering dumping");
        return !processEndOfFile();
      }
    }
    return true;
  }
{code}

There's a method called {code:java}processEndOfFile(){code}
Should we handle this case in it?
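The empty-file check being proposed could look roughly like this. A minimal sketch using plain java.nio instead of Hadoop's FileSystem API; EmptyLogCheck and isEmptyLog are hypothetical names for illustration, not HBase code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: before handing a log file to the SequenceFile
// reader, check whether it is empty. An empty file has no header for the
// reader to parse, so opening it directly surfaces an EOFException;
// treating it like end-of-file (as processEndOfFile does) avoids that.
public class EmptyLogCheck {
    static boolean isEmptyLog(Path log) throws IOException {
        // A zero-length file cannot contain SequenceFile metadata
        return Files.size(log) == 0;
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("hlog", ".log");
        // A freshly created file is empty: skip it instead of opening a reader
        System.out.println(isEmptyLog(log) ? "skip empty log" : "open reader");
        Files.delete(log);
    }
}
```

In the real code the length would come from FileStatus on the cluster's FileSystem; the point is only that the zero-length case can be routed to the same handling as a consumed file.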


[jira] [Commented] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628871#comment-13628871
 ] 

Jean-Marc Spaggiari commented on HBASE-8325:


Makes sense. Also, there are many TODOs here too... Might be good to see if 
there is a way to clear some of them?

 ReplicationSource read a empty HLog throws EOFException
 ---

 Key: HBASE-8325
 URL: https://issues.apache.org/jira/browse/HBASE-8325
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.5
 Environment: replication enabled
Reporter: zavakid
Priority: Critical


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Update our protobuf to 2.5 from 2.4.1

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628872#comment-13628872
 ] 

Hudson commented on HBASE-8165:
---

Integrated in hbase-0.95 #140 (See 
[https://builds.apache.org/job/hbase-0.95/140/])
HBASE-8165 Update our protobuf to 2.5 from 2.4.1; REVERT (Revision 1466762)
HBASE-8165 Update our protobuf to 2.5 from 2.4.1; REVERT (Revision 1466761)

 Result = SUCCESS
stack : 
Files : 
* /hbase/branches/0.95/hbase-server/src/test/protobuf/README.txt

stack : 
Files : 
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AggregateProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AuthenticationProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterIdProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ComparatorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ErrorHandlingProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FSProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FilterProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/LoadBalancerProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterMonitorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutation.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProcessorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RegionServerStatusProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SecureBulkLoadProtos.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/Tracing.java
* 
/hbase/branches/0.95/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* /hbase/branches/0.95/hbase-protocol/src/main/protobuf/MasterAdmin.proto
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
* 

[jira] [Commented] (HBASE-8220) can we record the count opened HTable for HTablePool

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628875#comment-13628875
 ] 

Jean-Marc Spaggiari commented on HBASE-8220:


Let me retry. I will let you know. Also, any chance to add a test to check that 
returned value is as expected?

 can we record the count opened HTable for HTablePool
 

 Key: HBASE-8220
 URL: https://issues.apache.org/jira/browse/HBASE-8220
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.3
Reporter: cuijianwei
 Attachments: HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt, 
 HBASE-8220-0.94.3.txt-v2, HBASE-8220-0.94.3-v2.txt, HBASE-8220-0.94.3-v3.txt, 
 HBASE-8220-0.94.3-v4.txt


 In HTablePool, we have a method getCurrentPoolSize(...) to get how many 
 opened HTables have been pooled. However, we don't know the count of HTables 
 obtained from HTablePool.getTable(...) and not yet returned to HTablePool by 
 PooledTable.close(). Such a ConcurrentOpenedHTable count may be meaningful 
 because it indicates how many HTables need to be open for the application, 
 which may help us set an appropriate MaxSize for the HTablePool. 
 Therefore, we can add a ConcurrentOpenedHTable counter in HTablePool.
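A minimal sketch of such a checked-out counter, assuming a pool-like wrapper; CountingPool, borrow, giveBack, and getOpenCount are illustrative names, not the HTablePool API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: track how many "tables" have been handed out by the
// pool and not yet returned. The counter goes up on checkout and down on
// return, so its value at any moment is the number of concurrently open
// tables the application actually needs.
class CountingPool {
    private final AtomicInteger openCount = new AtomicInteger(0);

    // Analogous to HTablePool.getTable(...)
    String borrow(String name) {
        openCount.incrementAndGet();
        return name;
    }

    // Analogous to PooledTable.close()
    void giveBack(String table) {
        openCount.decrementAndGet();
    }

    int getOpenCount() {
        return openCount.get();
    }
}

public class CountingPoolDemo {
    public static void main(String[] args) {
        CountingPool pool = new CountingPool();
        String t1 = pool.borrow("t1");
        pool.borrow("t2");
        pool.giveBack(t1);
        // One table is still checked out at this point
        System.out.println(pool.getOpenCount());
    }
}
```

An AtomicInteger keeps the counter thread-safe without adding a lock on the hot getTable/close path.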

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8300) TestSplitTransaction fails to delete files due to open handles left when region is split

2013-04-11 Thread Malie Yin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628910#comment-13628910
 ] 

Malie Yin commented on HBASE-8300:
--

No new tests are included because three existing tests were broken, and the 
patch fixes the broken tests.


 TestSplitTransaction fails to delete files due to open handles left when 
 region is split
 

 Key: HBASE-8300
 URL: https://issues.apache.org/jira/browse/HBASE-8300
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.95.0
 Environment: Windows
Reporter: Malie Yin
  Labels: patch
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-8300_v1-0.95.patch, hbase-8300_v1-0.95.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 This issue is related to HBASE-6823. Logs below.
 TestSplitTransaction
 org.apache.hadoop.hbase.regionserver.TestSplitTransaction
 testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction)
 java.io.IOException: Failed delete of 
 C:/springSpace/org.apache.hbase.hbase-0.95.0-SNAPSHOT/hbase-server/target/test-data/e5089331-c2bf-43d0-816d-25c6bed71f26/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/4851a041b5e9befef50c135b5659243b
   at 
 org.apache.hadoop.hbase.regionserver.TestSplitTransaction.teardown(TestSplitTransaction.java:100)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction)
 java.io.IOException: Failed delete of 
 C:/springSpace/org.apache.hbase.hbase-0.95.0-SNAPSHOT/hbase-server/target/test-data/9140a440-3925-4eaf-8d5d-62744609d775/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/6f0ef0cbe59b3fb02c081ad1ffc78a9d
   at 
 org.apache.hadoop.hbase.regionserver.TestSplitTransaction.teardown(TestSplitTransaction.java:100)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   

[jira] [Commented] (HBASE-8300) TestSplitTransaction fails to delete files due to open handles left when region is split

2013-04-11 Thread Malie Yin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628911#comment-13628911
 ] 

Malie Yin commented on HBASE-8300:
--

Any idea why "The patch does not appear to apply with p0 to p2"?


[jira] [Commented] (HBASE-8300) TestSplitTransaction fails to delete files due to open handles left when region is split

2013-04-11 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628918#comment-13628918
 ] 

Matteo Bertozzi commented on HBASE-8300:


[~my7604] The patch seems taken from hbase-server instead of the root so you 
get src/main/java/org/... instead of hbase-server/src/main/java/org/...

I'm +1 on the patch, just add a comment that says that the method will close 
the StoreFile.


[jira] [Commented] (HBASE-7658) grant with an empty string as permission should throw an exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628927#comment-13628927
 ] 

Hudson commented on HBASE-7658:
---

Integrated in HBase-TRUNK #4052 (See 
[https://builds.apache.org/job/HBase-TRUNK/4052/])
HBASE-7658 grant with an empty string as permission should throw an 
exception (addendum) (Revision 1466824)

 Result = FAILURE
mbertozzi : 
Files : 
* /hbase/trunk/hbase-server/src/main/ruby/hbase/security.rb


 grant with an empty string as permission should throw an exception
 --

 Key: HBASE-7658
 URL: https://issues.apache.org/jira/browse/HBASE-7658
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.94.7, 0.95.1

 Attachments: HBASE-7658-0.94.patch, HBASE-7658-v0.patch, 
 HBASE-7658-v1.patch


 If someone specifies an empty permission
 {code}grant 'user', ''{code}
 AccessControlLists.addUserPermission() outputs a log message and doesn't 
 change the permissions, but the user doesn't know about it.
 {code}
 if ((actions == null) || (actions.length == 0)) {
   LOG.warn("No actions associated with user '" + Bytes.toString(userPerm.getUser()) + "'");
   return;
 }
 {code}
 I think we should throw an exception instead of just logging.
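The proposed behavior could be sketched like this; PermissionCheck and validateActions are hypothetical names for illustration, not the actual AccessControlLists API:

```java
// Hypothetical sketch: reject an empty permission list with an exception
// instead of silently logging a warning and returning, so a caller of
// grant 'user', '' learns that the grant was a no-op.
public class PermissionCheck {
    static void validateActions(String user, char[] actions) {
        if (actions == null || actions.length == 0) {
            // Fail loudly rather than logging and returning
            throw new IllegalArgumentException(
                "No actions associated with user '" + user + "'");
        }
    }

    public static void main(String[] args) {
        try {
            validateActions("user", new char[0]);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The shell can then surface the exception message to the user instead of the grant appearing to succeed.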

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Update our protobuf to 2.5 from 2.4.1

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628926#comment-13628926
 ] 

Hudson commented on HBASE-8165:
---

Integrated in HBase-TRUNK #4052 (See 
[https://builds.apache.org/job/HBase-TRUNK/4052/])
HBASE-8165 Update our protobuf to 2.5 from 2.4.1; REVERT (Revision 1466760)
HBASE-8165 Update our protobuf to 2.5 from 2.4.1; REVERT (Revision 1466759)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/hbase-server/src/test/protobuf/README.txt

stack : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AggregateProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AuthenticationProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterIdProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ComparatorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ErrorHandlingProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FSProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FilterProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/LoadBalancerProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterMonitorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutation.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RegionServerStatusProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SecureBulkLoadProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/Tracing.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/MasterAdmin.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableListMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableSchemaMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/VersionMessage.java

[jira] [Commented] (HBASE-4955) Use the official versions of surefire junit

2013-04-11 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628952#comment-13628952
 ] 

Nicolas Liochon commented on HBASE-4955:


And the winner is: SUREFIRE-985


 Use the official versions of surefire & junit
 -

 Key: HBASE-4955
 URL: https://issues.apache.org/jira/browse/HBASE-4955
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.94.0
 Environment: all
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.95.1

 Attachments: 4955.v1.patch, 4955.v2.patch, 4955.v2.patch, 
 4955.v2.patch, 4955.v2.patch, 4955.v3.patch, 4955.v3.patch, 4955.v3.patch, 
 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 
 4955.v4.patch, 4955.v5.patch, 8204.v4.patch


 We currently use private versions for Surefire & JUnit since HBASE-4763.
 This JIRA tracks what we need to move to official versions.
 Surefire 2.11 is just out but, after some tests, it does not contain all 
 that we need.
 JUnit. Could be for JUnit 4.11. Issue to monitor:
 https://github.com/KentBeck/junit/issues/359: fixed in our version, no 
 feedback for an integration on trunk
 Surefire: Could be for Surefire 2.12. Issues to monitor are:
 329 (category support): fixed, we use the official implementation from the 
 trunk
 786 (@Category with forkMode=always): fixed, we use the official 
 implementation from the trunk
 791 (incorrect elapsed time on test failure): fixed, we use the official 
 implementation from the trunk
 793 (incorrect time in the XML report): Not fixed (reopen) on trunk, fixed on 
 our version.
 760 (does not take into account the test method): fixed in trunk, not fixed 
 in our version
 798 (print immediately the test class name): not fixed in trunk, not fixed in 
 our version
 799 (Allow test parallelization when forkMode=always): not fixed in trunk, 
 not fixed in our version
 800 (redirectTestOutputToFile not taken into account): not yet fixed on trunk, 
 fixed in our version
 800 & 793 are the most important to monitor; they are the only ones that are 
 fixed in our version but not on trunk.



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628955#comment-13628955
 ] 

Ted Yu commented on HBASE-7704:
---

Which release should this tool be packaged with?
Users running 0.94 would expect this tool in the same release so that proper 
action can be taken before upgrading.

 migration tool that checks presence of HFile V1 files
 -

 Key: HBASE-7704
 URL: https://issues.apache.org/jira/browse/HBASE-7704
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Himanshu Vashishtha
Priority: Blocker
 Fix For: 0.95.1

 Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
 HBase-7704-v3.patch, HBase-7704-v4.patch


 Below was Stack's comment from HBASE-7660:
 Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
 imagine it as an addition to the hfile tool 
 http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
 bunch of args including printing out meta. We could add an option to print 
 out version only – or return 1 if version 1 or some such – and then do a bit 
 of code to just list all hfiles and run this script against each. Could MR it 
 if too many files.



[jira] [Commented] (HBASE-8220) can we record the count opened HTable for HTablePool

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628962#comment-13628962
 ] 

Jean-Marc Spaggiari commented on HBASE-8220:


{code}
Tests in error: 
  testBasicRollingRestart(org.apache.hadoop.hbase.master.TestRollingRestart): 
test timed out after 30 milliseconds

Tests run: 1335, Failures: 0, Errors: 1, Skipped: 13
{code}

I retried the failed test and it passed:

{code}
---
 T E S T S
---
Running org.apache.hadoop.hbase.master.TestRollingRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 186.049 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
{code}

So it seems to be good. +1 for me, even better if you can add a test.

 can we record the count opened HTable for HTablePool
 

 Key: HBASE-8220
 URL: https://issues.apache.org/jira/browse/HBASE-8220
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.3
Reporter: cuijianwei
 Attachments: HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt, 
 HBASE-8220-0.94.3.txt-v2, HBASE-8220-0.94.3-v2.txt, HBASE-8220-0.94.3-v3.txt, 
 HBASE-8220-0.94.3-v4.txt


 In HTablePool, we have a method getCurrentPoolSize(...) to get how many 
 opened HTables have been pooled. However, we don't know ConcurrentOpenedHTable, 
 which means the count of HTables obtained from HTablePool.getTable(...) and not 
 yet returned to HTablePool by PooledTable.close(). The ConcurrentOpenedHTable may 
 be meaningful because it indicates how many HTables should be opened for the 
 application, which may help us set the appropriate MaxSize of HTablePool. 
 Therefore, we can add a ConcurrentOpenedHTable counter to HTablePool.
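A minimal sketch of the proposed in-flight counter, using a hypothetical toy Pool class (the real HTablePool API differs):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: count tables handed out and not yet returned,
// the "ConcurrentOpenedHTable" metric proposed above.
public class CountingPool {
  private final AtomicInteger concurrentlyOpened = new AtomicInteger();

  public Object getTable(String name) {
    concurrentlyOpened.incrementAndGet();  // checked out of the pool
    return new Object();                   // stand-in for an HTable
  }

  public void returnTable(Object table) {
    concurrentlyOpened.decrementAndGet();  // returned via close()
  }

  // The proposed metric: tables currently in use by the application.
  public int getConcurrentOpenedCount() {
    return concurrentlyOpened.get();
  }
}
```

Watching this counter over time would suggest an appropriate MaxSize for the pool.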



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Status: Open  (was: Patch Available)

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.
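The kind of dispatch involved can be sketched with stand-in mutation types (the real org.apache.hadoop.hbase.client Put/Delete/Increment classes are richer than these stubs):

```java
// Stand-in mutation types; stubs for illustration only.
class Put {}
class Delete {}
class Increment {}

public class RecordWriterSketch {
  // Sketch of a TableRecordWriter-style write(): accept Increment
  // alongside Put and Delete instead of rejecting it.
  static String write(Object mutation) {
    if (mutation instanceof Put) return "put";
    if (mutation instanceof Delete) return "delete";
    if (mutation instanceof Increment) return "increment";  // the proposed addition
    throw new IllegalArgumentException("Pass a Delete, a Put or an Increment");
  }

  public static void main(String[] args) {
    System.out.println(write(new Increment()));  // → increment
  }
}
```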



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Attachment: HBASE-8318-v1-trunk.patch

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Status: Patch Available  (was: Open)

Thanks for the review. I changed the text in 0.95 but forgot for the trunk.

Just attached the updated version.

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Commented] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629037#comment-13629037
 ] 

Hadoop QA commented on HBASE-8318:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12578230/HBASE-8318-v1-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 hadoop2.0{color}.  The patch failed to compile against the 
hadoop 2.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5264//console

This message is automatically generated.

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Status: Open  (was: Patch Available)

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Attachment: HBASE-8318-v1-trunk.patch

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Attachment: (was: HBASE-8318-v1-trunk.patch)

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Status: Patch Available  (was: Open)

Forgot TestFromClientSide...

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Commented] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629049#comment-13629049
 ] 

Hadoop QA commented on HBASE-8318:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12578233/HBASE-8318-v1-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 hadoop2.0{color}.  The patch failed to compile against the 
hadoop 2.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5265//console

This message is automatically generated.

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Commented] (HBASE-8303) Increase the test timeout to 60s when they are less than 20s

2013-04-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629050#comment-13629050
 ] 

Andrew Purtell commented on HBASE-8303:
---

Going to commit this today if no objection. Looking for green tests on EC2 
jenkins.

 Increase the test timeout to 60s when they are less than 20s
 ---

 Key: HBASE-8303
 URL: https://issues.apache.org/jira/browse/HBASE-8303
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.7, 0.95.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.95.1

 Attachments: 8303-0.94.patch, 8303.v1.patch, 8303.v1.patch


 Short test timeouts are dangerous because:
  - if the test is executed in the same JVM as another, GC and thread priority 
 can play a role
  - we don't know the machine used to execute the tests, nor what's running on 
 it.
 For this reason, a test timeout of 60s allows us to be on the safe side.
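The fragility described above can be demonstrated with a plain-Java timeout; JUnit's @Test(timeout = ...) enforces the same kind of limit, so the numbers below are illustrative only:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// The same task passes with a generous budget and fails with a tight
// one when the host is slow or loaded.
public class TimeoutDemo {
  static boolean runWithTimeout(Runnable task, long millis) throws Exception {
    ExecutorService ex = Executors.newSingleThreadExecutor();
    try {
      Future<?> f = ex.submit(task);
      f.get(millis, TimeUnit.MILLISECONDS);
      return true;                 // finished within the budget
    } catch (TimeoutException e) {
      return false;                // would surface as a test timeout
    } finally {
      ex.shutdownNow();
    }
  }

  public static void main(String[] args) throws Exception {
    Runnable sleeps100ms = () -> {
      try { Thread.sleep(100); } catch (InterruptedException ignored) {}
    };
    System.out.println(runWithTimeout(sleeps100ms, 60_000)); // generous: true
    System.out.println(runWithTimeout(sleeps100ms, 10));     // tight: false
  }
}
```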



[jira] [Commented] (HBASE-8300) TestSplitTransaction fails to delete files due to open handles left when region is split

2013-04-11 Thread Malie Yin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629051#comment-13629051
 ] 

Malie Yin commented on HBASE-8300:
--

do you mean to add a comment in the code or just here?


 TestSplitTransaction fails to delete files due to open handles left when 
 region is split
 

 Key: HBASE-8300
 URL: https://issues.apache.org/jira/browse/HBASE-8300
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.95.0
 Environment: Windows
Reporter: Malie Yin
  Labels: patch
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-8300_v1-0.95.patch, hbase-8300_v1-0.95.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 This issue is related to HBASE-6823. Logs below.
 TestSplitTransaction
 org.apache.hadoop.hbase.regionserver.TestSplitTransaction
 testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction)
 java.io.IOException: Failed delete of 
 C:/springSpace/org.apache.hbase.hbase-0.95.0-SNAPSHOT/hbase-server/target/test-data/e5089331-c2bf-43d0-816d-25c6bed71f26/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/4851a041b5e9befef50c135b5659243b
   at 
 org.apache.hadoop.hbase.regionserver.TestSplitTransaction.teardown(TestSplitTransaction.java:100)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction)
 java.io.IOException: Failed delete of 
 C:/springSpace/org.apache.hbase.hbase-0.95.0-SNAPSHOT/hbase-server/target/test-data/9140a440-3925-4eaf-8d5d-62744609d775/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/6f0ef0cbe59b3fb02c081ad1ffc78a9d
   at 
 org.apache.hadoop.hbase.regionserver.TestSplitTransaction.teardown(TestSplitTransaction.java:100)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at 

[jira] [Commented] (HBASE-8324) TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile

2013-04-11 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629066#comment-13629066
 ] 

Jonathan Hsieh commented on HBASE-8324:
---

jps says the MRAppMaster and YarnChild processes are spawned but these 
processes are essentially black boxes.  

 TestHFileOutputFormat.testMRIncremental* fails against hadoop2 profile
 --

 Key: HBASE-8324
 URL: https://issues.apache.org/jira/browse/HBASE-8324
 Project: HBase
  Issue Type: Sub-task
  Components: hadoop2, test
Affects Versions: 0.95.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.95.1


 Two test cases are failing:
 testMRIncrementalLoad, testMRIncrementalloadWithSplit
 {code}
 <testcase time="33.942" 
 classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
 name="testMRIncrementalLoad">
 <failure type="java.lang.AssertionError">java.lang.AssertionError
 at org.junit.Assert.fail(Assert.java:86)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at org.junit.Assert.assertTrue(Assert.java:52)
 at 
 org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
 at 
 org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
 at 
 org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoad(TestHFileOutputFormat.java:348)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 ...
   <testcase time="34.324" 
 classname="org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat" 
 name="testMRIncrementalLoadWithSplit">
 <failure type="java.lang.AssertionError">java.lang.AssertionError
 at org.junit.Assert.fail(Assert.java:86)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at org.junit.Assert.assertTrue(Assert.java:52)
 at 
 org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.runIncrementalPELoad(TestHFileOutputFormat.java:468)
 at 
 org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.doIncrementalLoadTest(TestHFileOutputFormat.java:378)
 at 
 org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:354)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 ...
 {code}



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629076#comment-13629076
 ] 

Himanshu Vashishtha commented on HBASE-7704:


One can point it at an HBase installation directory path to find any HFile v1 files. 
I think 0.95.x should be sufficient.
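A rough sketch of the scanning side only: walk a directory tree and apply a pluggable per-file check. The check here is a name-based placeholder; the real tool would instead open each file and inspect the HFile trailer's version field:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Stream;

// Sketch: recursively collect files flagged by a version check.
public class HFileV1Scanner {
  static List<Path> findFlagged(Path root, Predicate<Path> isV1) throws IOException {
    List<Path> hits = new ArrayList<>();
    try (Stream<Path> stream = Files.walk(root)) {
      stream.filter(Files::isRegularFile)
            .filter(isV1)          // placeholder for a trailer-version read
            .forEach(hits::add);
    }
    return hits;
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("scan");
    Files.createFile(dir.resolve("a.hfilev1"));
    Files.createFile(dir.resolve("b.other"));
    // Hypothetical name-based check, for demonstration only.
    List<Path> hits =
        findFlagged(dir, p -> p.getFileName().toString().endsWith(".hfilev1"));
    System.out.println(hits.size());  // → 1
  }
}
```

As Stack suggests, the per-file check could also be run as a MapReduce job when there are too many files to scan serially.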

 migration tool that checks presence of HFile V1 files
 -

 Key: HBASE-7704
 URL: https://issues.apache.org/jira/browse/HBASE-7704
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Himanshu Vashishtha
Priority: Blocker
 Fix For: 0.95.1

 Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
 HBase-7704-v3.patch, HBase-7704-v4.patch


 Below was Stack's comment from HBASE-7660:
 Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
 imagine it as an addition to the hfile tool 
 http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
 bunch of args including printing out meta. We could add an option to print 
 out version only – or return 1 if version 1 or some such – and then do a bit 
 of code to just list all hfiles and run this script against each. Could MR it 
 if too many files.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Status: Open  (was: Patch Available)

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch, 
 HBASE-8318-v2-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Attachment: HBASE-8318-v2-trunk.patch

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch, 
 HBASE-8318-v2-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.



[jira] [Updated] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-8318:
---

Status: Patch Available  (was: Open)

Fixed an issue within the Increment constructor. Thanks [~te...@apache.org] for 
pointing me to the issue.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7658) grant with an empty string as permission should throw an exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629093#comment-13629093
 ] 

Hudson commented on HBASE-7658:
---

Integrated in HBase-0.94 #956 (See 
[https://builds.apache.org/job/HBase-0.94/956/])
HBASE-7658 grant with an empty string as permission should throw an 
exception (addendum) (Revision 1466826)

 Result = SUCCESS
mbertozzi : 
Files : 
* /hbase/branches/0.94/src/main/ruby/hbase/security.rb


 grant with an empty string as permission should throw an exception
 --

 Key: HBASE-7658
 URL: https://issues.apache.org/jira/browse/HBASE-7658
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.94.7, 0.95.1

 Attachments: HBASE-7658-0.94.patch, HBASE-7658-v0.patch, 
 HBASE-7658-v1.patch


 If someone specifies an empty permission
 {code}grant 'user', ''{code}
 AccessControlLists.addUserPermission() outputs a log message and doesn't 
 change the permission, but the user doesn't know about it.
 {code}
 if ((actions == null) || (actions.length == 0)) {
   LOG.warn("No actions associated with user '" + Bytes.toString(userPerm.getUser()) + "'");
   return;
 }
 {code}
 I think we should throw an exception instead of just logging.
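A minimal sketch of the proposed behavior, assuming a validation helper (the class and method names here are hypothetical, not the actual AccessControlLists code):

```java
// Hypothetical sketch: reject an empty actions list up front instead of only
// logging a warning, so the caller learns the grant was a no-op.
class PermissionValidator {
    // Throws IllegalArgumentException when no actions are supplied; this
    // mirrors the exception-instead-of-logging behavior proposed above.
    static void checkActions(String user, byte[] actions) {
        if (actions == null || actions.length == 0) {
            throw new IllegalArgumentException(
                "No actions associated with user '" + user + "'");
        }
    }
}
```

With this shape, `grant 'user', ''` would surface an error to the shell rather than silently returning.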

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629095#comment-13629095
 ] 

Ted Yu commented on HBASE-7704:
---

Can you add release notes illustrating how this tool should be used?

 migration tool that checks presence of HFile V1 files
 -

 Key: HBASE-7704
 URL: https://issues.apache.org/jira/browse/HBASE-7704
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Himanshu Vashishtha
Priority: Blocker
 Fix For: 0.95.1

 Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
 HBase-7704-v3.patch, HBase-7704-v4.patch


 Below was Stack's comment from HBASE-7660:
 Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
 imagine it as an addition to the hfile tool 
 http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
 bunch of args including printing out meta. We could add an option to print 
 out version only – or return 1 if version 1 or some such – and then do a bit 
 of code to just list all hfiles and run this script against each. Could MR it 
 if too many files.
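The checking step described above (list hfiles, inspect each one's version, signal the presence of V1 files) reduces to a filter once versions are known. A hedged illustration follows; the version map is a stand-in for reading real HFile trailers, and `HFileV1Checker` is a hypothetical name:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative wrapper logic only; real version detection would come from
// the HFile trailer via the hfile tool.
class HFileV1Checker {
    // Returns the subset of paths whose format version is 1. A driver would
    // exit with status 1 when this list is non-empty, 0 otherwise, matching
    // the "return 1 if version 1" suggestion above.
    static List<String> findV1Files(Map<String, Integer> fileVersions) {
        List<String> v1 = new ArrayList<>();
        for (Map.Entry<String, Integer> e : fileVersions.entrySet()) {
            if (e.getValue() == 1) {
                v1.add(e.getKey());
            }
        }
        return v1;
    }
}
```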

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8140) TableMapReduceUtils#addDependencyJar fails when nested inside another MR job

2013-04-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629101#comment-13629101
 ] 

Lars Hofhansl commented on HBASE-8140:
--

It looks like since this change 
org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan has not passed a 
single time in the EC2 builds (http://54.241.6.143/job/HBase-0.94/).
The change was introduced in Build #72; since then this test times out (or doesn't 
finish).


 TableMapReduceUtils#addDependencyJar fails when nested inside another MR job
 

 Key: HBASE-8140
 URL: https://issues.apache.org/jira/browse/HBASE-8140
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.95.0

 Attachments: 0001-HBASE-8140-addendum-add-test-category.patch, 
 8140-port-jarfinder-0.94.patch, 8140-port-jarfinder-trunk.patch


 TableMapReduceUtils#addDependencyJar is used when configuring a mapreduce job 
 to make sure dependencies of the job are shipped to the cluster. The code 
 depends on finding an actual jar file containing the necessary classes. This 
 is not always the case, for instance, when run at the end of another 
 mapreduce job. In that case, dependency jars have already been shipped to the 
 cluster and expanded in the parent job's run folder. Those dependencies are 
 there, just not available as jars.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8140) TableMapReduceUtils#addDependencyJar fails when nested inside another MR job

2013-04-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629108#comment-13629108
 ] 

Nick Dimiduk commented on HBASE-8140:
-

Taking a look.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7605) TestMiniClusterLoadSequential fails in trunk build on hadoop 2

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629113#comment-13629113
 ] 

Ted Yu commented on HBASE-7605:
---

+1

 TestMiniClusterLoadSequential fails in trunk build on hadoop 2
 --

 Key: HBASE-7605
 URL: https://issues.apache.org/jira/browse/HBASE-7605
 Project: HBase
  Issue Type: Sub-task
  Components: hadoop2, test
Reporter: Ted Yu
Assignee: Jonathan Hsieh
Priority: Critical
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-7605.patch


 From HBase-TRUNK-on-Hadoop-2.0.0 #354:
   loadTest[0](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
 test timed out after 12 milliseconds
   loadTest[1](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
 test timed out after 12 milliseconds
   loadTest[2](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
 test timed out after 12 milliseconds
   loadTest[3](org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential): 
 test timed out after 12 milliseconds

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629119#comment-13629119
 ] 

Lars Hofhansl commented on HBASE-8285:
--

Fair enough. Let's get this in. +1

 HBaseClient never recovers for single HTable.get() calls with no retries when 
 regions move
 --

 Key: HBASE-8285
 URL: https://issues.apache.org/jira/browse/HBASE-8285
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.6.1
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Critical
 Fix For: 0.98.0, 0.94.7, 0.95.1

 Attachments: 8285-0.94.txt, 8285-0.94-v2.txt, 8285-0.94-v3.txt, 
 8285-trunk.txt, 8285-trunk-v2.txt


 Steps to reproduce this bug:
 1) Gracefully restart a region server, causing regions to get redistributed.
 2) Client calls to this region keep failing since the META cache is never purged 
 on the client for the region that moved.
 Reason behind the bug:
 1) Client continues to hit the old region server.
 2) The old region server throws NotServingRegionException which is not 
 handled correctly and the META cache entries are never purged for that server 
 causing the client to keep hitting the old server.
 The reason lies in ServerCallable code since we only purge META cache entries 
 when there is a RetriesExhaustedException, SocketTimeoutException or 
 ConnectException. However, there is no case check for 
 NotServingRegionException(s).
 Why is this not a problem for Scan(s) and Put(s) ?
 a) If a region server is not hosting a region/scanner, then an 
 UnknownScannerException is thrown which causes a relocateRegion() call 
 causing a refresh of the META cache for that particular region.
 b) For put(s), the processBatchCallback() interface in HConnectionManager is 
 used which clears out META cache entries for all kinds of exceptions except 
 DoNotRetryException.
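The fix direction described above, purging the cached location when a NotServingRegionException comes back, can be sketched with a toy cache. All names here are illustrative, not the actual ServerCallable/HConnectionManager code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the client's META cache of region -> server mappings.
class RegionLocationCache {
    private final Map<String, String> regionToServer = new HashMap<>();

    void cache(String region, String server) { regionToServer.put(region, server); }
    String locate(String region) { return regionToServer.get(region); }

    // Error path of an RPC attempt. Previously only timeout/connect-style
    // errors purged the entry, so a NotServingRegionException left a stale
    // mapping and every retry hit the old server.
    void onCallFailure(String region, Exception e) {
        if (e instanceof NotServingRegionException) {
            regionToServer.remove(region); // force a fresh META lookup next time
        }
    }
}

// Local stand-in for org.apache.hadoop.hbase.NotServingRegionException.
class NotServingRegionException extends Exception {}
```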

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6295) Possible performance improvement in client batch operations: presplit and send in background

2013-04-11 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-6295:
---

Status: Open  (was: Patch Available)

 Possible performance improvement in client batch operations: presplit and 
 send in background
 

 Key: HBASE-6295
 URL: https://issues.apache.org/jira/browse/HBASE-6295
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance
Affects Versions: 0.95.2
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
  Labels: noob
 Attachments: 6295.v1.patch, 6295.v2.patch


 Today the batch algo is:
 {noformat}
 for Operation o : List<Op> {
   add o to todolist
   if todolist > maxsize or o last in list
 split todolist per location
 send split lists to region servers
 clear todolist
 wait
 }
 {noformat}
 We could:
 - create immediately the final object instead of an intermediate array
 - split per location immediately
 - instead of sending when the list as a whole is full, send it when there is 
 enough data for a single location
 It would be:
 {noformat}
 for Operation o : List<Op> {
   get location
   add o to location.todolist
   if (location.todolist > maxLocationSize)
 send location.todolist to region server 
 clear location.todolist
 // don't wait, continue the loop
 }
 send remaining
 wait
 {noformat}
 It's not trivial to write if you add error management: the retried list must be 
 shared with the operations added in the todolist. But it's doable.
 It's interesting mainly for 'big' writes.
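The proposed per-location algorithm can be sketched as a toy model. This is not the actual client code; region-server RPCs are stood in by a list of sent batches, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Operations are bucketed by region location, and each bucket is flushed as
// soon as it alone is full, instead of waiting for the global list to fill.
class PerLocationBatcher<Op> {
    private final Map<String, List<Op>> buckets = new HashMap<>();
    private final int maxLocationSize;
    private final Function<Op, String> locator;        // op -> location key
    final List<List<Op>> sentBatches = new ArrayList<>(); // stand-in for RPC sends

    PerLocationBatcher(int maxLocationSize, Function<Op, String> locator) {
        this.maxLocationSize = maxLocationSize;
        this.locator = locator;
    }

    void add(Op op) {
        List<Op> todo = buckets.computeIfAbsent(locator.apply(op), k -> new ArrayList<>());
        todo.add(op);
        if (todo.size() >= maxLocationSize) {  // flush this location only; don't wait
            sentBatches.add(new ArrayList<>(todo));
            todo.clear();
        }
    }

    // The "send remaining; wait" step at the end of the loop.
    void flushRemaining() {
        for (List<Op> todo : buckets.values()) {
            if (!todo.isEmpty()) {
                sentBatches.add(new ArrayList<>(todo));
                todo.clear();
            }
        }
    }
}
```

The error-management caveat above still applies: a real implementation must merge retried operations back into these per-location buckets.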

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6295) Possible performance improvement in client batch operations: presplit and send in background

2013-04-11 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-6295:
---

Attachment: 6295.v2.patch


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6295) Possible performance improvement in client batch operations: presplit and send in background

2013-04-11 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-6295:
---

Status: Patch Available  (was: Open)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7636) TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629126#comment-13629126
 ] 

Ted Yu commented on HBASE-7636:
---

bq. understand why hdfs short-circuit-read causes this test to fail.
I think we should do the above.

I was trying to do some analysis yesterday, but the unit test against hadoop 2.0 was 
broken.

If we (HBase dev, hdfs dev) discover something that needs fixing, it had 
better go into 2.0.4-alpha.

 TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0
 -

 Key: HBASE-7636
 URL: https://issues.apache.org/jira/browse/HBASE-7636
 Project: HBase
  Issue Type: Sub-task
  Components: hadoop2, test
Affects Versions: 0.95.0
Reporter: Ted Yu
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-7636.patch


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:34,276 DEBUG 
 [MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
 client.HConnectionManager$HConnectionImplementation(956): Looked up root 
 region location, connection=hconnection 0x12f19fe; 
 serverName=juno.apache.org,55531,1358768819479
 2013-01-21 11:49:34,278 INFO  
 [MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
 catalog.CatalogTracker(576): Failed verification of .META.,,1 at 
 address=juno.apache.org,57582,1358768819456; 
 org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
 in the failed servers list: juno.apache.org/67.195.138.61:57582
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629128#comment-13629128
 ] 

Ted Yu commented on HBASE-8285:
---

@Lars:
Can you clarify your +1 is on trunk patch v1 or v2 ?

Thanks


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6295) Possible performance improvement in client batch operations: presplit and send in background

2013-04-11 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629131#comment-13629131
 ] 

Nicolas Liochon commented on HBASE-6295:


Still hacky. I worked around some issues coming from calculateBackoffTime; I 
will need to add the workarounds back. The issues were:
 - when the server is not in the list, the pause time is zero
 - when a server crashes, the number of errors equals the number of actions 
we wanted to send to this server
 - the end time is the timeout plus the date of the first error, which is not 
suitable if we have an infinite flow of actions.

The next version will solve this.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629132#comment-13629132
 ] 

Ted Yu commented on HBASE-1936:
---

Applying 1936_v2.1.patch, I got:

1 out of 6 hunks FAILED -- saving rejects to file 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java.rej

Please rebase.

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HBASE-8285:


Attachment: 8285-0.94-v4.txt

 HBaseClient never recovers for single HTable.get() calls with no retries when 
 regions move
 --

 Key: HBASE-8285
 URL: https://issues.apache.org/jira/browse/HBASE-8285
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.6.1
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Critical
 Fix For: 0.98.0, 0.94.7, 0.95.1

 Attachments: 8285-0.94.txt, 8285-0.94-v2.txt, 8285-0.94-v3.txt, 
 8285-0.94-v4.txt, 8285-trunk.txt, 8285-trunk-v2.txt



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Varun Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629133#comment-13629133
 ] 

Varun Sharma commented on HBASE-8285:
-

I attached v4 for 0.94 with Nicholas' comment incorporated into it.

[~te...@apache.org] - I still need to make similar changes for trunk and attach a 
patch. Will do that today...

Thanks
Varun


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629135#comment-13629135
 ] 

Jeffrey Zhong commented on HBASE-8325:
--

In some cases the latest WAL can have zero size. I think that situation is 
covered by the catch block {code}catch (IOException ioe){code}. After the 
function, we sleep for a while to wait for the WAL to get some data.

To me, the excessive logging from {code}LOG.warn(peerClusterZnode + " Got: ", ioe);{code} 
for EOF is not ideal, as the situation can happen in normal cases.
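A zero-length precheck along the lines suggested in this thread could look like the following. This is illustrative only, using local files rather than the Hadoop FileSystem API, and `WalPrecheck` is a hypothetical name:

```java
import java.io.File;

// A WAL that exists but has zero length cannot yet yield SequenceFile
// metadata, so the EOFException from opening it is an expected transient
// state, not an error worth a WARN on every retry.
class WalPrecheck {
    // True when the file exists but holds no data yet; the caller can sleep
    // and retry instead of surfacing an EOFException at WARN level.
    static boolean isEmptyWal(File wal) {
        return wal.exists() && wal.length() == 0;
    }
}
```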

 ReplicationSource read a empty HLog throws EOFException
 ---

 Key: HBASE-8325
 URL: https://issues.apache.org/jira/browse/HBASE-8325
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.5
 Environment: replication enabled
Reporter: zavakid
Priority: Critical

 I'm using HBase replication in my test environment.
 When a ReplicationSource opens an empty HLog, an EOFException is thrown. 
 It is because the Reader can't read the SequenceFile's metadata: there's 
 no data at all, so it throws the EOFException.
 Should we detect the empty file and process it, like we process the 
 FileNotFoundException?
 here's the code:
 {code:java}
 /**
  * Open a reader on the current path
  *
  * @param sleepMultiplier by how many times the default sleeping time is augmented
  * @return true if we should continue with that file, false if we are over with it
  */
 protected boolean openReader(int sleepMultiplier) {
   try {
     LOG.debug("Opening log for replication " + this.currentPath.getName() +
         " at " + this.repLogReader.getPosition());
     try {
       this.reader = repLogReader.openReader(this.currentPath);
     } catch (FileNotFoundException fnfe) {
       if (this.queueRecovered) {
         // We didn't find the log in the archive directory, look if it still
         // exists in the dead RS folder (there could be a chain of failures
         // to look at)
         LOG.info("NB dead servers : " + deadRegionServers.length);
         for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
           Path deadRsDirectory =
               new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
           Path[] locs = new Path[] {
               new Path(deadRsDirectory, currentPath.getName()),
               new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
                   currentPath.getName()),
           };
           for (Path possibleLogLocation : locs) {
             LOG.info("Possible location " + possibleLogLocation.toUri().toString());
             if (this.manager.getFs().exists(possibleLogLocation)) {
               // We found the right new location
               LOG.info("Log " + this.currentPath + " still exists at " +
                   possibleLogLocation);
               // Breaking here will make us sleep since reader is null
               return true;
             }
           }
         }
         // TODO What happens if the log was missing from every single location?
         // Although we need to check a couple of times as the log could have
         // been moved by the master between the checks
         // It can also happen if a recovered queue wasn't properly cleaned,
         // such that the znode pointing to a log exists but the log was
         // deleted a long time ago.
         // For the moment, we'll throw the IO and processEndOfFile
         throw new IOException("File from recovered queue is " +
             "nowhere to be found", fnfe);
       } else {
         // If the log was archived, continue reading from there
         Path archivedLogLocation =
             new Path(manager.getOldLogDir(), currentPath.getName());
         if (this.manager.getFs().exists(archivedLogLocation)) {
           currentPath = archivedLogLocation;
           LOG.info("Log " + this.currentPath + " was moved to " +
               archivedLogLocation);
           // Open the log at the new location
           this.openReader(sleepMultiplier);
         }
         // TODO What happens the log is missing in both places?
       }
     }
   } catch (IOException ioe) {
     LOG.warn(peerClusterZnode + " Got: ", ioe);
     this.reader = null;
     // TODO Need a better way to determine if a file is really gone but
     // TODO without scanning all logs dir
     if (sleepMultiplier == this.maxRetriesMultiplier) {
       LOG.warn("Waited too long for this file, considering dumping");
       return !processEndOfFile();
     }
   }
   return true;
 }
 {code}
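A minimal sketch of the fix the reporter suggests: check for a zero-length file before constructing a reader, and treat such a log as already consumed. This is illustrative only (it uses plain java.nio.file instead of HBase's FileSystem/Path API, and the method name is made up); the actual ReplicationSource would ask HDFS for the file status.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class EmptyLogCheck {
  /**
   * Returns true when the log file exists but has no bytes at all, i.e. the
   * SequenceFile header was never written and opening a reader would hit
   * EOFException. Illustrative: the real code would query HDFS, not the
   * local filesystem.
   */
  static boolean isEmptyLog(Path log) throws IOException {
    return Files.exists(log) && Files.size(log) == 0;
  }

  public static void main(String[] args) throws IOException {
    Path empty = Files.createTempFile("hlog-empty", ".log");
    Path nonEmpty = Files.createTempFile("hlog-data", ".log");
    Files.write(nonEmpty, new byte[] {1, 2, 3});
    // An empty log could be skipped (processEndOfFile) instead of letting
    // the reader throw EOFException on the missing SequenceFile header.
    System.out.println(isEmptyLog(empty));     // true
    System.out.println(isEmptyLog(nonEmpty));  // false
  }
}
```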
 there's a method called 

[jira] [Commented] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629136#comment-13629136
 ] 

Hadoop QA commented on HBASE-8285:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578242/8285-0.94-v4.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5267//console

This message is automatically generated.

 HBaseClient never recovers for single HTable.get() calls with no retries when 
 regions move
 --

 Key: HBASE-8285
 URL: https://issues.apache.org/jira/browse/HBASE-8285
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.6.1
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Critical
 Fix For: 0.98.0, 0.94.7, 0.95.1

 Attachments: 8285-0.94.txt, 8285-0.94-v2.txt, 8285-0.94-v3.txt, 
 8285-0.94-v4.txt, 8285-trunk.txt, 8285-trunk-v2.txt


 Steps to reproduce this bug:
 1) Gracefully restart a region server, causing regions to get redistributed.
 2) Client calls to these regions keep failing since the META cache is never 
 purged on the client for the regions that moved.
 Reason behind the bug:
 1) The client continues to hit the old region server.
 2) The old region server throws NotServingRegionException, which is not 
 handled correctly, and the META cache entries are never purged for that 
 server, causing the client to keep hitting the old server.
 The reason lies in the ServerCallable code, since we only purge META cache 
 entries when there is a RetriesExhaustedException, SocketTimeoutException or 
 ConnectException. However, there is no case check for 
 NotServingRegionException(s).
 Why is this not a problem for Scan(s) and Put(s)?
 a) If a region server is not hosting a region/scanner, then an 
 UnknownScannerException is thrown, which causes a relocateRegion() call, 
 refreshing the META cache for that particular region.
 b) For put(s), the processBatchCallback() interface in HConnectionManager is 
 used, which clears out META cache entries for all kinds of exceptions except 
 DoNotRetryException.
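The missing case check can be sketched as follows. All class and method names here are illustrative stubs, not the real ServerCallable/HConnectionManager API: the point is only that NotServingRegionException should purge the cached location the same way the other "region moved" errors do.

```java
import java.net.ConnectException;
import java.net.SocketTimeoutException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MetaCacheSketch {
  // Stub standing in for HBase's NotServingRegionException.
  static class NotServingRegionException extends Exception {}

  // region name -> cached server location (illustrative)
  final Map<String, String> regionLocationCache = new ConcurrentHashMap<>();

  /**
   * Purge the cached location when the failure means the region may have
   * moved. The bug: NotServingRegionException was missing from this check,
   * so a zero-retry get kept hitting the old server forever.
   */
  void handleCallFailure(String regionName, Throwable t) {
    if (t instanceof SocketTimeoutException
        || t instanceof ConnectException
        || t instanceof NotServingRegionException) {  // the missing case
      regionLocationCache.remove(regionName);
    }
  }

  public static void main(String[] args) {
    MetaCacheSketch c = new MetaCacheSketch();
    c.regionLocationCache.put("region-1", "old-server:60020");
    c.handleCallFailure("region-1", new NotServingRegionException());
    // The stale location is gone, so the next call re-reads META.
    System.out.println(c.regionLocationCache.containsKey("region-1")); // false
  }
}
```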

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-1936:
---

Attachment: trunk-1936_v2.2.patch

Trunk moves fast. [~yuzhih...@gmail.com], could you please try this one?  
Thanks.

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8314) HLogSplitter can retry to open a 0-length hlog file

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629153#comment-13629153
 ] 

Ted Yu commented on HBASE-8314:
---

I would expect HBASE-7878 to take care of lease recovery.

Which releases are you targeting ?
There seems to be some overlap between this fix and HBASE-7878.

 HLogSplitter can retry to open a 0-length hlog file
 ---

 Key: HBASE-8314
 URL: https://issues.apache.org/jira/browse/HBASE-8314
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: region-server.log, trunk-8314.patch


 In case an HLog file is of size 0 and it is under recovery, HLogSplitter 
 will fail to open it since it can't get the file length; therefore, the 
 master can't start.
 {noformat}
 java.io.IOException: Cannot obtain block length for LocatedBlock{...; 
 getBlockSize()=0; corrupt=false; offset=0; locs=[...]}
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:238)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:182)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
 at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:117)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
 {noformat}
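A hedged sketch of one way to break the retry loop (illustrative names, not the actual HLogSplitter code): recognize this specific HDFS failure mode and treat the zero-length, under-recovery log as empty instead of retrying until the master gives up. There is no dedicated exception type for it in the HDFS client, so matching the message string is the pragmatic, if fragile, option.

```java
import java.io.IOException;

public class ZeroLengthLogSketch {
  /**
   * Heuristic for the failure quoted above: the NameNode reports a last
   * block whose length is still unknown because the file is under lease
   * recovery and was never sync'ed.
   */
  static boolean isBlockLengthUnavailable(IOException e) {
    String msg = e.getMessage();
    return msg != null && msg.contains("Cannot obtain block length");
  }

  public static void main(String[] args) {
    IOException hdfsError = new IOException(
        "Cannot obtain block length for LocatedBlock{...; getBlockSize()=0}");
    // Instead of retrying to open forever (blocking master startup), the
    // splitter could treat such a log as empty and move on.
    System.out.println(isBlockLengthUnavailable(hdfsError));           // true
    System.out.println(isBlockLengthUnavailable(new IOException("EOF"))); // false
  }
}
```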

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7437) Improve CompactSelection

2013-04-11 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629151#comment-13629151
 ] 

Sergey Shelukhin commented on HBASE-7437:
-

*previous code

 Improve CompactSelection
 

 Key: HBASE-7437
 URL: https://issues.apache.org/jira/browse/HBASE-7437
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: Hiroshi Ikeda
Assignee: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-7437.patch, HBASE-7437-V2.patch, 
 HBASE-7437-V3.patch, HBASE-7437-V4.patch


 1. Using AtomicLong makes CompactSelection simpler and improves its performance.
 2. There are unused fields and methods.
 3. The fields should be private.
 4. The assertion in the method finishRequest seems wrong:
 {code}
   public void finishRequest() {
     if (isOffPeakCompaction) {
       long newValueToLog = -1;
       synchronized(compactionCountLock) {
         assert !isOffPeakCompaction : "Double-counting off-peak count for compaction";
 {code}
 The above assertion is almost always false, since the enclosing if has just 
 checked that isOffPeakCompaction is true.
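The AtomicLong direction of item 1 can be sketched as below. Names and the cap value are illustrative, not the actual patch: an atomic counter with a compare-and-set cap makes both the lock and the suspicious assertion unnecessary.

```java
import java.util.concurrent.atomic.AtomicLong;

public class OffPeakCounterSketch {
  // Number of off-peak compactions currently running (illustrative).
  private static final AtomicLong offPeakCount = new AtomicLong(0);
  private static final long MAX_OFF_PEAK = 1; // e.g. allow one at a time

  /** Try to claim an off-peak slot; returns true if claimed. */
  static boolean tryStartOffPeak() {
    long cur;
    do {
      cur = offPeakCount.get();
      if (cur >= MAX_OFF_PEAK) {
        return false; // no slot available, fall back to the peak-hours path
      }
    } while (!offPeakCount.compareAndSet(cur, cur + 1));
    return true;
  }

  /** Release the slot; no lock and no double-count assertion needed. */
  static void finishRequest(boolean wasOffPeak) {
    if (wasOffPeak) {
      offPeakCount.decrementAndGet();
    }
  }

  public static void main(String[] args) {
    boolean first = tryStartOffPeak();
    boolean second = tryStartOffPeak();
    System.out.println(first);  // true
    System.out.println(second); // false, slot already taken
    finishRequest(first);
    System.out.println(tryStartOffPeak()); // true again
  }
}
```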

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7437) Improve CompactSelection

2013-04-11 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629150#comment-13629150
 ] 

Sergey Shelukhin commented on HBASE-7437:
-

I will comment here since there's only one issue left on r :)
What is CurrentHourProvider trying to achieve? It seems complex. Why can't we 
just have a static calendar and get the current hour from it, as was done in 
the previous instance?

 Improve CompactSelection
 

 Key: HBASE-7437
 URL: https://issues.apache.org/jira/browse/HBASE-7437
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: Hiroshi Ikeda
Assignee: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-7437.patch, HBASE-7437-V2.patch, 
 HBASE-7437-V3.patch, HBASE-7437-V4.patch


 1. Using AtomicLong makes CompactSelection simpler and improves its performance.
 2. There are unused fields and methods.
 3. The fields should be private.
 4. The assertion in the method finishRequest seems wrong:
 {code}
   public void finishRequest() {
     if (isOffPeakCompaction) {
       long newValueToLog = -1;
       synchronized(compactionCountLock) {
         assert !isOffPeakCompaction : "Double-counting off-peak count for compaction";
 {code}
 The above assertion is almost always false, since the enclosing if has just 
 checked that isOffPeakCompaction is true.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8165) Update our protobuf to 2.5 from 2.4.1

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629158#comment-13629158
 ] 

Hudson commented on HBASE-8165:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #493 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/493/])
HBASE-8165 Update our protobuf to 2.5 from 2.4.1; REVERT (Revision 1466760)
HBASE-8165 Update our protobuf to 2.5 from 2.4.1; REVERT (Revision 1466759)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/hbase-server/src/test/protobuf/README.txt

stack : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AggregateProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AuthenticationProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterIdProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ComparatorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ErrorHandlingProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FSProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/FilterProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/LoadBalancerProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterMonitorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutation.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MultiRowMutationProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RPCProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RegionServerStatusProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RowProcessorProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SecureBulkLoadProtos.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/Tracing.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/MasterAdmin.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableListMessage.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableSchemaMessage.java
* 

[jira] [Commented] (HBASE-7658) grant with an empty string as permission should throw an exception

2013-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629159#comment-13629159
 ] 

Hudson commented on HBASE-7658:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #493 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/493/])
HBASE-7658 grant with an empty string as permission should throw an 
exception (addendum) (Revision 1466824)

 Result = FAILURE
mbertozzi : 
Files : 
* /hbase/trunk/hbase-server/src/main/ruby/hbase/security.rb


 grant with an empty string as permission should throw an exception
 --

 Key: HBASE-7658
 URL: https://issues.apache.org/jira/browse/HBASE-7658
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.94.7, 0.95.1

 Attachments: HBASE-7658-0.94.patch, HBASE-7658-v0.patch, 
 HBASE-7658-v1.patch


 If someone specifies an empty permission
 {code}grant 'user', ''{code}
 AccessControlLists.addUserPermission() outputs a log message and doesn't 
 change the permission, but the user doesn't know about it.
 {code}
 if ((actions == null) || (actions.length == 0)) {
   LOG.warn("No actions associated with user '" +
       Bytes.toString(userPerm.getUser()) + "'");
   return;
 }
 {code}
 I think we should throw an exception instead of just logging.
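The proposed change can be sketched as follows. This is illustrative only (the actual patch touches AccessControlLists and the shell's security.rb; the method here is a made-up stand-in): fail fast with an exception so the caller learns the grant was a no-op.

```java
public class GrantValidationSketch {
  /**
   * Validate the action list of a grant request. Throwing, rather than
   * logging and returning, surfaces the mistake to the shell user.
   */
  static void validateActions(String user, byte[] actions) {
    if (actions == null || actions.length == 0) {
      throw new IllegalArgumentException(
          "No actions associated with user '" + user + "'");
    }
  }

  public static void main(String[] args) {
    validateActions("alice", new byte[] {'R', 'W'}); // valid grant, no error
    try {
      validateActions("bob", new byte[0]); // the grant 'bob', '' case
      System.out.println("no exception");
    } catch (IllegalArgumentException e) {
      // The user now sees the problem instead of a silent no-op.
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```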

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629163#comment-13629163
 ] 

Hadoop QA commented on HBASE-8318:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12578238/HBASE-8318-v2-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5266//console

This message is automatically generated.

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch, 
 HBASE-8318-v2-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8314) HLogSplitter can retry to open a 0-length hlog file

2013-04-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629165#comment-13629165
 ] 

Jimmy Xiang commented on HBASE-8314:


HBASE-7878 should take care of lease recovery. But there is still a chance we 
can't open the file, right? For example, if the file is under recovery, we can 
have the lease but still not know the block length?

I think this fix and HBASE-7878 both help in solving some HDFS-related issues.

 HLogSplitter can retry to open a 0-length hlog file
 ---

 Key: HBASE-8314
 URL: https://issues.apache.org/jira/browse/HBASE-8314
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: region-server.log, trunk-8314.patch


 In case an HLog file is of size 0 and it is under recovery, HLogSplitter 
 will fail to open it since it can't get the file length; therefore, the 
 master can't start.
 {noformat}
 java.io.IOException: Cannot obtain block length for LocatedBlock{...; 
 getBlockSize()=0; corrupt=false; offset=0; locs=[...]}
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:238)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:182)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
 at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:117)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629167#comment-13629167
 ] 

Hadoop QA commented on HBASE-1936:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578243/trunk-1936_v2.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:red}-1 hadoop2.0{color}.  The patch failed to compile against the 
hadoop 2.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5269//console

This message is automatically generated.

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629168#comment-13629168
 ] 

Ted Yu commented on HBASE-6330:
---

TestImportExport failed in HBase-TRUNK-on-Hadoop-2.0.0 #493

 TestImportExport has been failing against hadoop 0.23/2.0 profile
 -

 Key: HBASE-6330
 URL: https://issues.apache.org/jira/browse/HBASE-6330
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.94.1, 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
  Labels: hadoop-2.0
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch, 
 hbase-6330-v2.patch, hbase-6330.v4.patch


 See HBASE-5876.  I'm going to commit the v3 patches under this name since 
 it has been two months (my bad) since the first half was committed and 
 found to be incomplete.
 ---
 4/9/13 Updated - this will take the patch from HBASE-8258 to fix this 
 specific problem.  The umbrella that used to be HBASE-8258 is now handled 
 with HBASE-6891.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629177#comment-13629177
 ] 

Ted Yu commented on HBASE-1936:
---

I was able to compile against hadoop 1.0 and 2.0 without error.
The following tests passed locally:

mt -Dtest=TestGet,TestDynamicClassLoader

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8314) HLogSplitter can retry to open a 0-length hlog file

2013-04-11 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629179#comment-13629179
 ] 

Matteo Bertozzi commented on HBASE-8314:


The patch looks good to me.
Could you add a comment near msg.contains("Cannot obtain block length") saying 
where this exception comes from (maybe a stack trace example) and what the 
possible situations are in which it is raised? If the error message changes, 
the new test will fail, so I guess the string comparison is fine...

 HLogSplitter can retry to open a 0-length hlog file
 ---

 Key: HBASE-8314
 URL: https://issues.apache.org/jira/browse/HBASE-8314
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: region-server.log, trunk-8314.patch


 In case an HLog file is of size 0 and it is under recovery, HLogSplitter 
 will fail to open it since it can't get the file length; therefore, the 
 master can't start.
 {noformat}
 java.io.IOException: Cannot obtain block length for LocatedBlock{...; 
 getBlockSize()=0; corrupt=false; offset=0; locs=[...]}
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:238)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:182)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
 at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:117)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5746) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (0.96)

2013-04-11 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629180#comment-13629180
 ] 

Sergey Shelukhin commented on HBASE-5746:
-

This is called on the context in encodeDataBlock, which is called from 
diskToCacheFormat. The context is then passed around until it reaches an 
implementation of DataBlockEncoder, which calls prepareEncoding and gets an 
output stream from it. (v1 fixes a small bug where set was called on the field 
but the method argument was passed on by accident; the two are currently 
always the same object, so both now use the same context.) By the time 
prepareEncoding has been called, the dummy header has already been written to 
that stream. Previously it would write a header passed at construction time; 
now it can be changed based on the version of the block.


 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums (0.96)
 -

 Key: HBASE-5746
 URL: https://issues.apache.org/jira/browse/HBASE-5746
 Project: HBase
  Issue Type: Sub-task
  Components: io, regionserver
Reporter: Lars Hofhansl
Assignee: Sergey Shelukhin
Priority: Critical
 Fix For: 0.95.1

 Attachments: 5720-trunk-v2.txt, HBASE-5746-v0.patch, 
 HBASE-5746-v1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5746) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (0.96)

2013-04-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-5746:


Attachment: HBASE-5746-v1.patch

 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums (0.96)
 -

 Key: HBASE-5746
 URL: https://issues.apache.org/jira/browse/HBASE-5746
 Project: HBase
  Issue Type: Sub-task
  Components: io, regionserver
Reporter: Lars Hofhansl
Assignee: Sergey Shelukhin
Priority: Critical
 Fix For: 0.95.1

 Attachments: 5720-trunk-v2.txt, HBASE-5746-v0.patch, 
 HBASE-5746-v1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8314) HLogSplitter can retry to open a 0-length hlog file

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13629182#comment-13629182
 ] 

Ted Yu commented on HBASE-8314:
---

[~nkeywal]:
Can you take a look ?

 HLogSplitter can retry to open a 0-length hlog file
 ---

 Key: HBASE-8314
 URL: https://issues.apache.org/jira/browse/HBASE-8314
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: region-server.log, trunk-8314.patch


 In case an HLog file is of size 0 and it is under recovery, HLogSplitter 
 will fail to open it since it can't get the file length; therefore, the 
 master can't start.
 {noformat}
 java.io.IOException: Cannot obtain block length for LocatedBlock{...; 
 getBlockSize()=0; corrupt=false; offset=0; locs=[...]}
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:238)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:182)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
 at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:117)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
 {noformat}
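The skip condition described above can be sketched in plain Java. This is a minimal model, not the HLogSplitter code: `shouldAttemptSplit` is a hypothetical helper, and a real fix must also account for HDFS lease recovery, where an unclosed file can report length 0 while still holding data.

```java
public class ZeroLengthLogCheck {
    // A 0-length log has no entries to replay, so skip it instead of
    // retrying the open until the master gives up.  (Hypothetical helper;
    // on HDFS, wait for lease recovery before trusting a 0 length.)
    static boolean shouldAttemptSplit(long reportedLength) {
        return reportedLength > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldAttemptSplit(0));     // empty log: skip
        System.out.println(shouldAttemptSplit(1024));  // normal log: split
    }
}
```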



[jira] [Commented] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls with no retries when regions move

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629188#comment-13629188
 ] 

Ted Yu commented on HBASE-8285:
---

{code}
+  public void deleteCachedRegionLocation(final byte [] tableName,
+  final HRegionLocation location);
{code}
HRegionLocation contains HRegionInfo which has this method:
{code}
  public byte[] getTableName() {
{code}
Do we need tableName parameter in deleteCachedRegionLocation() ?

 HBaseClient never recovers for single HTable.get() calls with no retries when 
 regions move
 --

 Key: HBASE-8285
 URL: https://issues.apache.org/jira/browse/HBASE-8285
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.6.1
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Critical
 Fix For: 0.98.0, 0.94.7, 0.95.1

 Attachments: 8285-0.94.txt, 8285-0.94-v2.txt, 8285-0.94-v3.txt, 
 8285-0.94-v4.txt, 8285-trunk.txt, 8285-trunk-v2.txt


 Steps to reproduce this bug:
 1) Gracefully restart a region server, causing regions to get redistributed.
 2) Client calls to a moved region keep failing, since the META cache is never 
 purged on the client for the region that moved.
 Reason behind the bug:
 1) The client continues to hit the old region server.
 2) The old region server throws NotServingRegionException, which is not 
 handled correctly, and the META cache entries are never purged for that server, 
 causing the client to keep hitting the old server.
 The reason lies in the ServerCallable code, since we only purge META cache entries 
 when there is a RetriesExhaustedException, SocketTimeoutException or 
 ConnectException. However, there is no case check for 
 NotServingRegionException(s).
 Why is this not a problem for Scan(s) and Put(s) ?
 a) If a region server is not hosting a region/scanner, then an 
 UnknownScannerException is thrown which causes a relocateRegion() call 
 causing a refresh of the META cache for that particular region.
 b) For put(s), the processBatchCallback() interface in HConnectionManager is 
 used which clears out META cache entries for all kinds of exceptions except 
 DoNotRetryException.
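The missing purge-and-relocate step can be modelled in a few lines of plain Java. This is a sketch of the idea only, with the META table and client cache modelled as maps; the real logic lives in ServerCallable/HConnectionManager, and the "NotServingRegion" condition is modelled here as the cached server differing from the live assignment.

```java
import java.util.HashMap;
import java.util.Map;

public class MetaCachePurge {
    // Use the cached location if present, otherwise re-read META
    // (modelled as a plain map lookup).
    static String locate(Map<String, String> cache, Map<String, String> meta,
                         String region) {
        return cache.computeIfAbsent(region, meta::get);
    }

    // The proposed fix: when the cached server says NotServingRegion
    // (modelled: cached location differs from the live one), purge the
    // cache entry and re-locate instead of retrying the stale server.
    static String getWithRetry(Map<String, String> cache,
                               Map<String, String> meta, String region) {
        String server = locate(cache, meta, region);
        if (!server.equals(meta.get(region))) {  // NotServingRegionException
            cache.remove(region);                // the purge the bug omits
            server = locate(cache, meta, region);
        }
        return server;
    }

    public static void main(String[] args) {
        Map<String, String> meta = new HashMap<>();
        meta.put("region-1", "rs-new");          // region moved to rs-new
        Map<String, String> cache = new HashMap<>();
        cache.put("region-1", "rs-old");         // stale client cache
        System.out.println(getWithRetry(cache, meta, "region-1"));
    }
}
```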



[jira] [Commented] (HBASE-8326) mapreduce.TestTableInputFormatScan times out frequently

2013-04-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629216#comment-13629216
 ] 

Nick Dimiduk commented on HBASE-8326:
-

When it does succeed, it takes a long time. From 
http://54.241.6.143/job/HBase-TRUNK/116/console:

{noformat}
Running org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 886.332 sec
{noformat}

 mapreduce.TestTableInputFormatScan times out frequently
 ---

 Key: HBASE-8326
 URL: https://issues.apache.org/jira/browse/HBASE-8326
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, test
Affects Versions: 0.98.0, 0.94.7, 0.95.1
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk

 bq. It looks like since this change 
 org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan has not passed a 
 single time in the EC2 builds (http://54.241.6.143/job/HBase-0.94/). The change 
 was introduced in Build #72; since then this test times out (or doesn't 
 finish)
 via [~lhofhansl] in [HBASE-8140 
 comment|https://issues.apache.org/jira/browse/HBASE-8140?focusedCommentId=13629101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13629101].



[jira] [Commented] (HBASE-8318) TableOutputFormat.TableRecordWriter should accept Increments

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629206#comment-13629206
 ] 

Ted Yu commented on HBASE-8318:
---

Looks good to me.

 TableOutputFormat.TableRecordWriter should accept Increments
 

 Key: HBASE-8318
 URL: https://issues.apache.org/jira/browse/HBASE-8318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-8318-v0-trunk.patch, HBASE-8318-v1-trunk.patch, 
 HBASE-8318-v2-trunk.patch


 TableOutputFormat.TableRecordWriter can take Puts and Deletes but it should 
 also accept Increments.
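The change amounts to one more branch in the writer's type dispatch. The sketch below uses stand-in classes rather than the real HBase mutation types, and the returned strings stand in for the actual HTable calls; it only illustrates the routing the patch proposes.

```java
public class RecordWriterDispatch {
    // Stand-ins for the HBase mutation types the writer receives as the
    // reduce value (hypothetical; not the real org.apache.hadoop.hbase types).
    static class Mutation {}
    static class Put extends Mutation {}
    static class Delete extends Mutation {}
    static class Increment extends Mutation {}

    // Route Increments alongside Puts and Deletes instead of rejecting them.
    static String dispatch(Mutation m) {
        if (m instanceof Put) return "table.put";
        if (m instanceof Delete) return "table.delete";
        if (m instanceof Increment) return "table.increment";  // the new case
        throw new IllegalArgumentException("Unsupported mutation: " + m);
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new Increment()));
    }
}
```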



[jira] [Commented] (HBASE-6295) Possible performance improvement in client batch operations: presplit and send in background

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629203#comment-13629203
 ] 

Hadoop QA commented on HBASE-6295:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578240/6295.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5268//console


 Possible performance improvement in client batch operations: presplit and 
 send in background
 

 Key: HBASE-6295
 URL: https://issues.apache.org/jira/browse/HBASE-6295
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance
Affects Versions: 0.95.2
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
  Labels: noob
 Attachments: 6295.v1.patch, 6295.v2.patch


 today batch algo is:
 {noformat}
 for Operation o: ListOp{
   add o to todolist
   if todolist > maxsize or o last in list
 split todolist per location
 send split lists to region servers
 clear todolist
 wait
 }
 {noformat}
 We could:
 - create immediately the final object instead of an intermediate array
 - split per location immediately
 - instead of sending when the list as a whole is full, send it when there is 
 enough data for a single location
 It would be:
 {noformat}
 for Operation o: ListOp{
   get location
   add o to todo location.todolist
   if (location.todolist > maxLocationSize)
 send location.todolist to region server 
 clear location.todolist
 // don't wait, continue the loop
 }
 send remaining
 wait
 {noformat}
 It's not trivial to write if you add error management: retried list must be 
 shared with the operations added in the todolist. But it's doable.
 It's interesting mainly for 'big' writes
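The proposed loop can be modelled in plain Java to show the behaviour change. This is a sketch under stated assumptions, not client code: `locationOf` stands in for the region lookup, "sending" just records the batch, and error management (the shared retry list the comment mentions) is omitted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PerLocationBatcher {
    static final int MAX_LOCATION_SIZE = 3;  // per-location send threshold

    // Split per location immediately and send a location's list as soon as
    // it is full, instead of waiting for a global todolist to fill up.
    static List<List<String>> batch(List<String> ops,
                                    Map<String, String> locationOf) {
        Map<String, List<String>> todo = new HashMap<>();
        List<List<String>> sent = new ArrayList<>();
        for (String op : ops) {
            String loc = locationOf.get(op);
            List<String> list = todo.computeIfAbsent(loc, k -> new ArrayList<>());
            list.add(op);
            if (list.size() >= MAX_LOCATION_SIZE) {
                sent.add(new ArrayList<>(list));  // send without waiting
                list.clear();
            }
        }
        for (List<String> rest : todo.values()) {
            if (!rest.isEmpty()) {
                sent.add(new ArrayList<>(rest));  // send remaining, then wait
            }
        }
        return sent;
    }
}
```

With four operations where three map to one region server, the full batch for that server goes out mid-loop and the leftover goes out in the final "send remaining" pass.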



[jira] [Commented] (HBASE-8140) TableMapReduceUtils#addDependencyJar fails when nested inside another MR job

2013-04-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629228#comment-13629228
 ] 

Nick Dimiduk commented on HBASE-8140:
-

opened HBASE-8326 to track this issue.

 TableMapReduceUtils#addDependencyJar fails when nested inside another MR job
 

 Key: HBASE-8140
 URL: https://issues.apache.org/jira/browse/HBASE-8140
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.95.0

 Attachments: 0001-HBASE-8140-addendum-add-test-category.patch, 
 8140-port-jarfinder-0.94.patch, 8140-port-jarfinder-trunk.patch


 TableMapReduceUtils#addDependencyJar is used when configuring a mapreduce job 
 to make sure dependencies of the job are shipped to the cluster. The code 
 depends on finding an actual jar file containing the necessary classes. This 
 is not always the case, for instance, when run at the end of another 
 mapreduce job. In that case, dependency jars have already been shipped to the 
 cluster and expanded in the parent job's run folder. Those dependencies are 
 there, just not available as jars.



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629222#comment-13629222
 ] 

Jimmy Xiang commented on HBASE-1936:


[~apurtell], are you ok with v3? We can file a follow-up jira to 
consolidate/refactor the class loaders when we are ready to fix it.

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch, trunk-1936_v3.patch






[jira] [Commented] (HBASE-8317) Seek returns wrong result with PREFIX_TREE Encoding

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629201#comment-13629201
 ] 

Ted Yu commented on HBASE-8317:
---

So the new test in patch v1 is covered by the extended test in patch v2?

 Seek returns wrong result with PREFIX_TREE Encoding
 ---

 Key: HBASE-8317
 URL: https://issues.apache.org/jira/browse/HBASE-8317
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.0
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-8317-v1.patch, hbase-trunk-8317.patch


 TestPrefixTreeEncoding#testSeekWithFixedData from the patch could reproduce 
 the bug.
 An example of the bug case:
 Suppose the following rows:
 1.row3/c1:q1/
 2.row3/c1:q2/
 3.row3/c1:q3/
 4.row4/c1:q1/
 5.row4/c1:q2/
 After seeking to the row 'row30', the expected peek KV is row4/c1:q1/, but the 
 actual one is row3/c1:q1/.
 I just fix this bug case in the patch. 
 Maybe we can do more for other potential problems if anyone is familiar with 
 the PREFIX_TREE code.
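Why row4 is the expected answer follows from plain lexicographic byte ordering of row keys: 'row30' sorts strictly between 'row3' and 'row4', so a seek to 'row30' must position the scanner on the first cell of 'row4'. A minimal illustration, using a hand-rolled unsigned byte comparison rather than HBase's comparator:

```java
public class SeekOrdering {
    // Unsigned lexicographic byte comparison, as HBase orders row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;  // shared prefix: shorter key sorts first
    }

    public static void main(String[] args) {
        byte[] row3 = "row3".getBytes();
        byte[] row30 = "row30".getBytes();
        byte[] row4 = "row4".getBytes();
        // 'row30' falls strictly between 'row3' and 'row4'.
        System.out.println(compare(row3, row30) < 0);  // true
        System.out.println(compare(row30, row4) < 0);  // true
    }
}
```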



[jira] [Updated] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-1936:
---

Attachment: trunk-1936_v3.patch

Attached v3.  Moved Base64 from hbase-server to hbase-common. Changed TestGet 
to use our version of Base64.

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch, trunk-1936_v3.patch






[jira] [Created] (HBASE-8326) mapreduce.TestTableInputFormatScan times out frequently

2013-04-11 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-8326:
---

 Summary: mapreduce.TestTableInputFormatScan times out frequently
 Key: HBASE-8326
 URL: https://issues.apache.org/jira/browse/HBASE-8326
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, test
Affects Versions: 0.98.0, 0.94.7, 0.95.1
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


bq. It looks like since this change 
org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan has not passed a 
single time in the EC2 builds (http://54.241.6.143/job/HBase-0.94/). The change was 
introduced in Build #72; since then this test times out (or doesn't finish)

via [~lhofhansl] in [HBASE-8140 
comment|https://issues.apache.org/jira/browse/HBASE-8140?focusedCommentId=13629101&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13629101].



[jira] [Commented] (HBASE-8314) HLogSplitter can retry to open a 0-length hlog file

2013-04-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629224#comment-13629224
 ] 

Jimmy Xiang commented on HBASE-8314:


[~mbertozzi], sure, will add a comment.  Thanks.

 HLogSplitter can retry to open a 0-length hlog file
 ---

 Key: HBASE-8314
 URL: https://issues.apache.org/jira/browse/HBASE-8314
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: region-server.log, trunk-8314.patch


 In case an HLog file is of size 0 and it is under recovery, HLogSplitter will 
 fail to open it since it can't get the file length; therefore, the master can't 
 start.
 {noformat}
 java.io.IOException: Cannot obtain block length for LocatedBlock{...; 
 getBlockSize()=0; corrupt=false; offset=0; locs=[...]}
 at 
 org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:238)
 at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:182)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
 at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:117)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
 {noformat}



[jira] [Commented] (HBASE-5746) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (0.96)

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629263#comment-13629263
 ] 

Hadoop QA commented on HBASE-5746:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578248/HBASE-5746-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFromClientSide

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5270//console


 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
 checksums (0.96)
 -

 Key: HBASE-5746
 URL: https://issues.apache.org/jira/browse/HBASE-5746
 Project: HBase
  Issue Type: Sub-task
  Components: io, regionserver
Reporter: Lars Hofhansl
Assignee: Sergey Shelukhin
Priority: Critical
 Fix For: 0.95.1

 Attachments: 5720-trunk-v2.txt, HBASE-5746-v0.patch, 
 HBASE-5746-v1.patch






[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629280#comment-13629280
 ] 

Hadoop QA commented on HBASE-1936:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12578254/trunk-1936_v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5271//console


 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch, trunk-1936_v3.patch






[jira] [Commented] (HBASE-8325) ReplicationSource read a empty HLog throws EOFException

2013-04-11 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629281#comment-13629281
 ] 

Himanshu Vashishtha commented on HBASE-8325:


Yes, HBase-7122 takes care of that.

 ReplicationSource read a empty HLog throws EOFException
 ---

 Key: HBASE-8325
 URL: https://issues.apache.org/jira/browse/HBASE-8325
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.5
 Environment: replication enabled
Reporter: zavakid
Priority: Critical

 I'm using HBase replication in my test environment.
 When a ReplicationSource opens an empty HLog, an EOFException is thrown. 
 This is because the Reader can't read the SequenceFile's metadata: there is 
 no data at all, so it throws the EOFException.
 Should we detect the empty file and process it, like we process the 
 FileNotFoundException?
 Here's the code:
 {code:java}
 /**
  * Open a reader on the current path
  *
  * @param sleepMultiplier by how many times the default sleeping time is
  *   augmented
  * @return true if we should continue with that file, false if we are over
  *   with it
  */
 protected boolean openReader(int sleepMultiplier) {
   try {
     LOG.debug("Opening log for replication " + this.currentPath.getName() +
         " at " + this.repLogReader.getPosition());
     try {
       this.reader = repLogReader.openReader(this.currentPath);
     } catch (FileNotFoundException fnfe) {
       if (this.queueRecovered) {
         // We didn't find the log in the archive directory, look if it still
         // exists in the dead RS folder (there could be a chain of failures
         // to look at)
         LOG.info("NB dead servers : " + deadRegionServers.length);
         for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
           Path deadRsDirectory =
               new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
           Path[] locs = new Path[] {
               new Path(deadRsDirectory, currentPath.getName()),
               new Path(deadRsDirectory.suffix(HLog.SPLITTING_EXT),
                 currentPath.getName()),
           };
           for (Path possibleLogLocation : locs) {
             LOG.info("Possible location " +
                 possibleLogLocation.toUri().toString());
             if (this.manager.getFs().exists(possibleLogLocation)) {
               // We found the right new location
               LOG.info("Log " + this.currentPath + " still exists at " +
                   possibleLogLocation);
               // Breaking here will make us sleep since reader is null
               return true;
             }
           }
         }
         // TODO What happens if the log was missing from every single location?
         // Although we need to check a couple of times as the log could have
         // been moved by the master between the checks
         // It can also happen if a recovered queue wasn't properly cleaned,
         // such that the znode pointing to a log exists but the log was
         // deleted a long time ago.
         // For the moment, we'll throw the IO and processEndOfFile
         throw new IOException("File from recovered queue is " +
             "nowhere to be found", fnfe);
       } else {
         // If the log was archived, continue reading from there
         Path archivedLogLocation =
             new Path(manager.getOldLogDir(), currentPath.getName());
         if (this.manager.getFs().exists(archivedLogLocation)) {
           currentPath = archivedLogLocation;
           LOG.info("Log " + this.currentPath + " was moved to " +
               archivedLogLocation);
           // Open the log at the new location
           this.openReader(sleepMultiplier);
         }
         // TODO What happens if the log is missing in both places?
       }
     }
   } catch (IOException ioe) {
     LOG.warn(peerClusterZnode + " Got: ", ioe);
     this.reader = null;
     // TODO Need a better way to determinate if a file is really gone but
     // TODO without scanning all logs dir
     if (sleepMultiplier == this.maxRetriesMultiplier) {
       LOG.warn("Waited too long for this file, considering dumping");
       return !processEndOfFile();
     }
   }
   return true;
 }
 {code}
 There's a method called {code:java}processEndOfFile(){code}; 
 should we add this case to it?
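The failure mode the report describes reduces to a small Java pattern: reading a header from a zero-byte stream throws EOFException, and the reader can treat that like a cleanly consumed file rather than an error. A sketch of the idea only, using an in-memory stream instead of a SequenceFile:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class EmptyLogOpen {
    // A zero-length log has no header to read.  Treating the resulting
    // EOFException (an IOException subclass, caught below) like a clean
    // end-of-file, as processEndOfFile does for fully consumed logs, is
    // the handling the report suggests.
    static boolean openOrSkip(byte[] logBytes) {
        try (DataInputStream in =
                 new DataInputStream(new ByteArrayInputStream(logBytes))) {
            in.readInt();          // stands in for reading the file header
            return true;           // reader opened
        } catch (IOException eof) {
            return false;          // empty log: skip it, don't keep retrying
        }
    }

    public static void main(String[] args) {
        System.out.println(openOrSkip(new byte[0]));            // false
        System.out.println(openOrSkip(new byte[]{0, 0, 0, 1})); // true
    }
}
```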



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629288#comment-13629288
 ] 

Matteo Bertozzi commented on HBASE-7704:


The patch looks good to me. 
The .archive and .snapshot directories don't contain any .tableinfo, so they are 
not considered in processTable().
But if the file is a link you don't check whether it is v1; this means that if 
you've cloned a table and removed the original one, the clone can have all v1 
files (not sure if this case exists, since snapshots are in 94 and 94 writes with 
v2, but maybe there is the case that the source table of the clone was not 
converted).

Some other minor notes:
- Add an Example section at the end of the help, like the other tools, to 
demonstrate how to use it.
- Could you replace the Path p with the different Path tableDir, Path 
regionDir, just to make the code a little bit more readable.

 migration tool that checks presence of HFile V1 files
 -

 Key: HBASE-7704
 URL: https://issues.apache.org/jira/browse/HBASE-7704
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Himanshu Vashishtha
Priority: Blocker
 Fix For: 0.95.1

 Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
 HBase-7704-v3.patch, HBase-7704-v4.patch


 Below was Stack's comment from HBASE-7660:
 Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
 imagine it as an addition to the hfile tool 
 http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
 bunch of args including printing out meta. We could add an option to print 
 out version only – or return 1 if version 1 or some such – and then do a bit 
 of code to just list all hfiles and run this script against each. Could MR it 
 if too many files.
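The "print out version only" check Stack describes can be sketched as a raw trailer read. This is a sketch under an assumed layout, not the tool itself: it assumes the HFile trailer ends with a 4-byte version word whose low 24 bits hold the major version (the v2 trailer layout); a real tool should go through HFile's own trailer reader instead of raw bytes.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;

public class HFileVersionCheck {
    // Read the 4-byte word that ends the file and mask out the major
    // version (assumed trailer layout; see lead-in caveat).
    static int majorVersion(String path) {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            f.seek(f.length() - 4);
            return f.readInt() & 0x00ffffff;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Test helper: write a file whose last 4 bytes encode the given major
    // version, the way the sketch expects to find it.
    static String writeFake(int major) {
        try {
            File f = File.createTempFile("hfile", ".tmp");
            f.deleteOnExit();
            try (RandomAccessFile r = new RandomAccessFile(f, "rw")) {
                r.writeInt(major);
            }
            return f.getPath();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Listing all hfiles and running this per file (or via MR, as suggested) would then report any file whose major version is 1.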



[jira] [Commented] (HBASE-7636) TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0

2013-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629303#comment-13629303
 ] 

Ted Yu commented on HBASE-7636:
---

I did a comparison on MacBook.
With 2.0.2-alpha, I got the following for 
TestDistributedLogSplitting#testThreeRSAbort:
{code}
Running org.apache.hadoop.hbase.master.TestDistributedLogSplitting
2013-04-11 13:11:43.140 java[86243:dc07] Unable to load realm info from 
SCDynamicStore
2013-04-11 13:11:43.210 java[86243:dc07] Unable to load realm info from 
SCDynamicStore
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 114.58 sec  
FAILURE!

Results :

Failed tests:   
testThreeRSAbort(org.apache.hadoop.hbase.master.TestDistributedLogSplitting): 
Timedout
{code}
With 2.0.4-SNAPSHOT, the test finished in 16.6 seconds.

 TestDistributedLogSplitting#testThreeRSAbort fails against hadoop 2.0
 -

 Key: HBASE-7636
 URL: https://issues.apache.org/jira/browse/HBASE-7636
 Project: HBase
  Issue Type: Sub-task
  Components: hadoop2, test
Affects Versions: 0.95.0
Reporter: Ted Yu
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.95.1

 Attachments: hbase-7636.patch


 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/364/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testThreeRSAbort/
  :
 {code}
 2013-01-21 11:49:34,276 DEBUG 
 [MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
 client.HConnectionManager$HConnectionImplementation(956): Looked up root 
 region location, connection=hconnection 0x12f19fe; 
 serverName=juno.apache.org,55531,1358768819479
 2013-01-21 11:49:34,278 INFO  
 [MASTER_SERVER_OPERATIONS-juno.apache.org,57966,1358768818594-0] 
 catalog.CatalogTracker(576): Failed verification of .META.,,1 at 
 address=juno.apache.org,57582,1358768819456; 
 org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
 in the failed servers list: juno.apache.org/67.195.138.61:57582
 {code}



[jira] [Updated] (HBASE-7255) KV size metric went missing from StoreScanner.

2013-04-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7255:
-

Attachment: HBASE-7255-3.patch

 KV size metric went missing from StoreScanner.
 --

 Key: HBASE-7255
 URL: https://issues.apache.org/jira/browse/HBASE-7255
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Elliott Clark
Priority: Critical
 Fix For: 0.95.1

 Attachments: HBASE-7255-0.patch, HBASE-7255-1.patch, 
 HBASE-7255-2.patch, HBASE-7255-3.patch


 In trunk due to the metric refactor, at least the KV size metric went missing.
 See this code in StoreScanner.java:
 {code}
 } finally {
   if (cumulativeMetric > 0 && metric != null) {
   }
 }
 {code}
 Just an empty if statement, where the metric used to be collected.
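A minimal stand-in for the dropped update might look like the following. This is a sketch, not the actual HBASE-7255 patch: the `METRICS` registry and `incrNumericMetric` helper are hypothetical substitutes for HBase's metrics system, and `next` only models the accumulate-then-flush shape of the scan loop.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: accumulate the KV bytes read during a scan,
// then flush them into a named counter in the finally block instead
// of silently dropping them in an empty if statement.
public class ScanMetricSketch {
  // Hypothetical registry standing in for the real metrics system.
  static final ConcurrentMap<String, AtomicLong> METRICS =
      new ConcurrentHashMap<>();

  static void incrNumericMetric(String key, long amount) {
    METRICS.computeIfAbsent(key, k -> new AtomicLong()).addAndGet(amount);
  }

  static int next(String metric, long[] kvSizes) {
    long cumulativeMetric = 0;
    try {
      for (long size : kvSizes) {
        cumulativeMetric += size; // bytes of each KeyValue returned
      }
      return kvSizes.length;
    } finally {
      // Trunk left this body empty; the fix re-adds the update.
      if (cumulativeMetric > 0 && metric != null) {
        incrNumericMetric(metric, cumulativeMetric);
      }
    }
  }
}
```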



[jira] [Commented] (HBASE-7704) migration tool that checks presence of HFile V1 files

2013-04-11 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629308#comment-13629308
 ] 

Himanshu Vashishtha commented on HBASE-7704:


Glad I asked you to review it, Matteo... Thanks. 
Will take care of the snapshot stuff in the next patch.


 migration tool that checks presence of HFile V1 files
 -

 Key: HBASE-7704
 URL: https://issues.apache.org/jira/browse/HBASE-7704
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Himanshu Vashishtha
Priority: Blocker
 Fix For: 0.95.1

 Attachments: HBase-7704-v1.patch, HBase-7704-v2.patch, 
 HBase-7704-v3.patch, HBase-7704-v4.patch


 Below was Stack's comment from HBASE-7660:
 Regards the migration 'tool', or 'tool' to check for presence of v1 files, I 
 imagine it as an addition to the hfile tool 
 http://hbase.apache.org/book.html#hfile_tool2 The hfile tool already takes a 
 bunch of args including printing out meta. We could add an option to print 
 out version only – or return 1 if version 1 or some such – and then do a bit 
 of code to just list all hfiles and run this script against each. Could MR it 
 if too many files.



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-04-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629320#comment-13629320
 ] 

Andrew Purtell commented on HBASE-1936:
---

+1 on the v3 patch and the notion of a follow-on JIRA

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.95.1

 Attachments: cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, 
 trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, 
 trunk-1936_v2.patch, trunk-1936_v3.patch





