[jira] [Commented] (HBASE-12098) User granted namespace table create permissions can't create a table

2014-09-26 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149035#comment-14149035
 ] 

Kashif J S commented on HBASE-12098:


bq. It seems that the issue existed even in 0.98.1 release. Attaching the patch.
I am using 0.98.5. I tried to reproduce the issue but could not. I
performed the steps below in a secure cluster:

1. Logged in as the HBase root user and created a namespace:
{code}
create_namespace 'myns1'
grant 'kashif', 'RWXCA'  # kashif is another user, different from the master/regionserver user
{code}

2. Logged in as kashif (client) and tried to create a new table:
{code}
create 'myns1:mytable', 'f'  # It was successful
{code}


Currently, in 0.98.5, I do not see namespace-level permission support
in the hbase shell.
Is it supported in trunk, or am I missing something?
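For context, a namespace-scoped CREATE grant is expected to satisfy a table-create check by falling back from table scope to namespace scope to global scope. Below is a deliberately simplified, hypothetical model of that fallback in Python; it is not the actual AccessController code, and the grant table and scope names are illustrative only:

```python
# Hypothetical model of permission-scope fallback for a table create.
# Not the real org.apache.hadoop.hbase.security.access implementation.

GRANTS = {
    ("kashif", "@myns1"): {"R", "W", "X", "C", "A"},  # namespace-level grant
}

def authorized(user, table, action):
    """Check table scope, then the table's namespace, then global scope."""
    ns = "@" + table.split(":", 1)[0] if ":" in table else "@default"
    for scope in (table, ns, "@global"):
        if action in GRANTS.get((user, scope), set()):
            return True
    return False
```

Under this model, `authorized('kashif', 'myns1:mytable', 'C')` succeeds via the namespace grant, which is the behavior the create-table check should exhibit; HBASE-12098 is about the check stopping at global scope instead of falling through.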

 User granted namespace table create permissions can't create a table
 

 Key: HBASE-12098
 URL: https://issues.apache.org/jira/browse/HBASE-12098
 Project: HBase
  Issue Type: Bug
  Components: Client, security
Affects Versions: 0.98.6
Reporter: Dima Spivak
Assignee: Srikanth Srungarapu
Priority: Critical
 Fix For: 2.0.0, 0.98.7, 0.99.1

 Attachments: 12098-master.txt, HBASE-12098.patch, 
 HBASE-12098_master_v2.patch


 From the HBase shell and Java API, I am seeing
 {code}ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: 
 Insufficient permissions for user 'dima' (global, action=CREATE){code}
 when I try to create a table in a namespace to which I have been granted 
 RWXCA permissions by a global admin. Interestingly enough, this only seems to 
 extend to table creation; the same user is then allowed to disable and drop a 
 table created by a global admin in that namespace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11827) Encryption support for bulkloading data into table with encryption configured for hfile format 3

2014-08-29 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14114996#comment-14114996
 ] 

Kashif J S commented on HBASE-11827:


I agree with [~apurtell] that it is indeed a security glitch. 

 Encryption support for bulkloading data into table with encryption configured 
 for hfile format 3
 

 Key: HBASE-11827
 URL: https://issues.apache.org/jira/browse/HBASE-11827
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.5
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 2.0.0, 0.98.7

 Attachments: HBASE-11827-98-v1.patch, HBASE-11827-trunk-v1.patch


 The solution would be to add support to auto-detect encryption parameters,
 similar to other parameters like compression, data block encoding, etc., when
 encryption is enabled for HFile format 3.
 The current patch does the following:
 1. Automatically detects encryption type and key in HFileOutputFormat and
 HFileOutputFormat2.
 2. Uses a Base64 encoder/decoder for URL passing of the encryption key, which
 is in bytes format.
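Point 2 can be illustrated with a small, self-contained sketch (plain Python for brevity; the actual patch does this in Java via the job configuration, and nothing below is the real HBase API):

```python
import base64
import os

# A raw data-encryption key is arbitrary bytes, so it cannot be carried
# directly in a text-valued job configuration; Base64 makes it text-safe.
raw_key = os.urandom(16)  # e.g. a 128-bit AES key

encoded = base64.b64encode(raw_key).decode("ascii")  # store in the config
decoded = base64.b64decode(encoded)                  # recover on the other side

assert decoded == raw_key  # lossless round trip for any byte value
```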





[jira] [Commented] (HBASE-11827) Encryption support for bulkloading data into table with encryption configured for hfile format 3

2014-08-29 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14114997#comment-14114997
 ] 

Kashif J S commented on HBASE-11827:


Is there no alternative apart from triggering a major compaction? With
several TB of data, rewriting all the data again doesn't seem like a good idea :)



[jira] [Created] (HBASE-11827) Encryption support for bulkloading data into table with encryption configured for hfile format 3

2014-08-26 Thread Kashif J S (JIRA)
Kashif J S created HBASE-11827:
--

 Summary: Encryption support for bulkloading data into table with 
encryption configured for hfile format 3
 Key: HBASE-11827
 URL: https://issues.apache.org/jira/browse/HBASE-11827
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.5
Reporter: Kashif J S
Assignee: Kashif J S


The solution would be to add support to auto-detect encryption parameters,
similar to other parameters like compression, data block encoding, etc., when
encryption is enabled for HFile format 3.

The current patch does the following:
1. Automatically detects encryption type and key in HFileOutputFormat and
HFileOutputFormat2.
2. Uses a Base64 encoder/decoder for URL passing of the encryption key, which
is in bytes format.





[jira] [Updated] (HBASE-11827) Encryption support for bulkloading data into table with encryption configured for hfile format 3

2014-08-26 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-11827:
---

Attachment: HBASE-11827-trunk-v1.patch

Attaching patch for trunk. Please review



[jira] [Updated] (HBASE-11827) Encryption support for bulkloading data into table with encryption configured for hfile format 3

2014-08-26 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-11827:
---

Attachment: HBASE-11827-98-v1.patch

Attaching patch for 0.98 version also. Please review



[jira] [Updated] (HBASE-11827) Encryption support for bulkloading data into table with encryption configured for hfile format 3

2014-08-26 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-11827:
---

Fix Version/s: 0.98.7
   2.0.0
   Status: Patch Available  (was: Open)



[jira] [Created] (HBASE-11589) AccessControlException handling in HBase rpc server and client. AccessControlException should be a not retriable exception

2014-07-25 Thread Kashif J S (JIRA)
Kashif J S created HBASE-11589:
--

 Summary: AccessControlException handling in HBase rpc server and 
client. AccessControlException should be a not retriable exception
 Key: HBASE-11589
 URL: https://issues.apache.org/jira/browse/HBASE-11589
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Affects Versions: 0.98.3
 Environment: SLES 11 SP1
Reporter: Kashif J S


The RPC server does not properly handle the AccessControlException thrown by an
authorizeConnection failure, and instead sends an IOException to the
HBase client.
The client then retries and ultimately gets a RetriesExhaustedException, but
without any link, information, or stack trace pointing to the AccessControlException.

In short, upon inspection of RPCServer.java, it seems the Reader's read path
in the Listener does not handle AccessControlException:

{code}
void doRead(...
...
...
try {
  // readAndProcess() throws AccessControlException from
  // processOneRpc(byte[] buf), which is not handled here?
  count = c.readAndProcess();
} catch (InterruptedException ieo) {
  throw ieo;
} catch (Exception e) {
  LOG.warn(getName() + ": count of bytes read: " + count, e);
  count = -1; // so that the (count < 0) block is executed
}
{code}
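The requested behavior (treat AccessControlException as non-retriable) can be modeled with a short sketch; the class and function names below are illustrative stand-ins, not the actual RpcRetryingCaller code:

```python
class AccessDeniedError(Exception):
    """Stand-in for AccessControlException: retrying cannot help."""

class RetriesExhausted(Exception):
    """Stand-in for RetriesExhaustedException."""

NON_RETRIABLE = (AccessDeniedError,)

def call_with_retries(fn, attempts=7):
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except NON_RETRIABLE:
            raise            # surface the real cause immediately
        except Exception as e:
            last = e         # assumed transient: retry
    raise RetriesExhausted() from last  # keep the last cause attached
```

With this shape, a call that raises AccessDeniedError fails fast with the real cause, instead of burning all attempts and reporting only a cause-less RetriesExhausted, which is the symptom shown in the client log below.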

Below are the client logs when authorizeConnection throws AccessControlException:


2014-07-24 19:40:58,768 INFO  [main] 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 of 7 
failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Call to 
host-10-18-40-101/10.18.40.101:6 failed on local exception: 
java.io.EOFException
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3302)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:113)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3329)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:605)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:496)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:430)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.IfNode.interpret(IfNode.java:117)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at 
org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:233)
at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:215)
...

[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-06-26 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14044401#comment-14044401
 ] 

Kashif J S commented on HBASE-10933:


Thanks [~jmhsieh] :)  Alright, I will modify as per your comments/concerns.
Maybe I will rename the nextKey method to something like nextByte, which may be
more meaningful for the value it returns. The other methods I will hide by
making them private.
bq. What do you mean by exclusive? – in the [a, b), [b, c) sense where b is in
the second but not the first? If we did next we'd still be able to sneak
'abc\x0' between 'abc' and 'abd'

Yes... if the region start key and end key are 'abc' and 'abd', and we did next,
we would still be able to sneak in 'abc\x0'. So nextKey may be a wrong method
name; I will rename the method to nextByte.
I will also add unit test cases for the same.
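The ordering point above can be checked with a tiny sketch: appending a zero byte yields the immediate successor in lexicographic byte order, so 'abc\x0' sorts strictly between 'abc' and 'abd' (the helper name here is illustrative):

```python
def next_key(key: bytes) -> bytes:
    """Immediate successor of a row key in lexicographic byte order."""
    return key + b"\x00"

# 'abc\x00' sorts strictly between 'abc' and 'abd': a region with end key
# 'abd' still contains it, which is why "next" of 'abc' is not 'abd'.
assert b"abc" < next_key(b"abc") < b"abd"
```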

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.99.0, 0.94.22

 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, 
 HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt, 
 TestResults-trunk.txt


 if the regioninfo file does not exist in an hbase region, then when we run hbck 
 repair or hbck -fixHdfsOrphans,
 it is not able to resolve the problem and throws a null pointer exception
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 the problem I found is that in the HBaseFsck class 
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo using the SortedMap<String, TableInfo> tablesInfo 
 object
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 we check whether a region is an orphan, and if it is, its table is not added to 
 SortedMap<String, TableInfo> tablesInfo,
 so later when using this we get a null pointer exception
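The failure mode reduces to a small sketch (names are illustrative, not the HBaseFsck API): looking up a table that was skipped for being an orphan yields null, and dereferencing it is the NPE; guarding the lookup result is the general shape of a fix:

```python
tables_info = {}  # orphan regions were skipped, so their table is absent

def adopt_orphan_unguarded(table_name):
    table_info = tables_info.get(table_name)  # None for an orphan's table
    return table_info.regions  # AttributeError here mirrors the Java NPE

def adopt_orphan_guarded(table_name):
    table_info = tables_info.get(table_name)
    if table_info is None:
        # e.g. build/register a TableInfo instead of dereferencing null
        return []
    return table_info.regions
```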





[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-06-18 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14035109#comment-14035109
 ] 

Kashif J S commented on HBASE-10933:


Thanks for the review [~lhofhansl] [~jmhsieh].

bq. This specific method isn't correct. If we have a key abc, the next key is 
actually abc\x0, not abd. Rename the method or fix the code. Please add a 
unit test to demonstrate expected behavior.
In HBase the end key is always exclusive, hence nextKey returns the next
exclusive key. If you suggest, we could rename it to nextExclusiveKey or
nextEndKey; nextKey seemed fine to me. Let me know your call.

bq. This is added and only used once in the code. Why make all the generic code 
when you only need the more specific code? Please remove.
I just thought this could be generic code; you never know, it could be required
later in development. If it is still not required, shall I remove it now?



[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-06-13 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: HBASE-10933-trunk-v2.patch

Upload trunk patch with proper formatting



[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-06-04 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14018472#comment-14018472
 ] 

Kashif J S commented on HBASE-10933:


Kindly review the patch



[jira] [Commented] (HBASE-11157) [hbck] NotServingRegionException: Received close for regionName but we are not serving it

2014-05-21 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14004580#comment-14004580
 ] 

Kashif J S commented on HBASE-11157:


IMO, for the patch you could explicitly catch NotServingRegionException
instead of IOException.
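The suggestion can be sketched neutrally (Python stand-ins for the Java exception types; the function names below are illustrative, not the actual HBaseFsckRepair code):

```python
class NotServingRegionError(Exception):
    """Stand-in for NotServingRegionException: region already not hosted."""

def close_region_silently(server_hosts_region):
    if not server_hosts_region:
        raise NotServingRegionError("Received close for region but we are not serving it")
    return "closed"

def fix_unassigned(server_hosts_region):
    try:
        return close_region_silently(server_hosts_region)
    except NotServingRegionError:
        # The desired end state (region not on this server) already holds,
        # so treat this as success instead of letting hbck hang.
        return "already-closed"
```

Catching the specific exception type, rather than a broad IOException, avoids accidentally swallowing unrelated I/O failures during the repair.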

 [hbck] NotServingRegionException: Received close for regionName but we are 
 not serving it
 ---

 Key: HBASE-11157
 URL: https://issues.apache.org/jira/browse/HBASE-11157
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.13
Reporter: dailidong
Priority: Trivial
 Attachments: HBASE-11157.patch


 If hbck closes a region and meets a NotServingRegionException, hbck will hang: we ask 
 the regionserver to close the region, but that regionserver is not serving it, so we 
 should catch this exception.
 Trying to fix unassigned region...
 Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.NotServingRegionException: Received close for regionName but we are not serving it
 at org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:3204)
 at org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:3185)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1012)
 at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:87)
 at com.sun.proxy.$Proxy7.closeRegion(Unknown Source)
 at org.apache.hadoop.hbase.util.HBaseFsckRepair.closeRegionSilentlyAndWait(HBaseFsckRepair.java:150)
 at org.apache.hadoop.hbase.util.HBaseFsck.closeRegion(HBaseFsck.java:1565)
 at org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1704)
 at org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1406)
 at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:419)
 at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:438)
 at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3670)
 at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3489)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3483)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-21 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: HBASE-10933-0.94-v2.patch

Modified the JUnit TCs for the 0.94 version for TestOfflineMetaRebuildHole and 
TestOfflineMetaRebuildOverlap. Please review the v2 patch for 0.94. 

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.99.0, 0.94.21

 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, 
 HBASE-10933-trunk-v1.patch, TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file does not exist for an HBase region, then running hbck repair 
 or hbck -fixHdfsOrphans cannot resolve the problem; it throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because, in the HBaseFsck class,
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so the orphan's table is never added to
 SortedMap<String, TableInfo> tablesInfo, and the later lookup returns null, which
 causes the NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: HBASE-10933-0.94-v1.patch

Patch for 0.94.20 version.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: TestResults-0.94.txt

Test result for 0.94 version

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: HBASE-10933-trunk-v1.patch

Patch for the trunk version as well, solving the issue of a single region with a single 
KV generating a wrong .regioninfo file with the same start and end key. Also added 
JUnit TCs for this.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: TestResults-trunk.txt

Server Test results for trunk version.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003359#comment-14003359
 ] 

Kashif J S commented on HBASE-10933:


For the 0.94.* version the patch fixes 4 issues:
1. NullPointerException as reported above.
2. When a table has 1 region and zero KVs, then if .regioninfo is deleted, running 
hbck corrupts the table and a TableNotFoundException is thrown for any subsequent 
operation on the table.
3. When a table contains 1 region and only 1 KV, then if .regioninfo is missing and 
hbck repair is run, an invalid .regioninfo with the same start and end key is created.
4. A table with multiple regions where all the region dirs are missing (see the 
testNoHdfsTable modification).
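Issue 3 turns on a .regioninfo whose start and end keys are equal. A quick sketch 
(hypothetical helper, not HBase code) of why such a range is degenerate: an empty end 
key means "until the end of the table", so an all-empty pair (a single-region table) 
is valid, while a non-empty pair with equal keys covers no rows at all.

```java
import java.util.Arrays;

public class RegionKeySketch {
    // A region spans [startKey, endKey); an empty endKey means "last region".
    // A region whose non-empty endKey equals its startKey covers no rows.
    static boolean isDegenerate(byte[] startKey, byte[] endKey) {
        return endKey.length > 0 && Arrays.equals(startKey, endKey);
    }
}
```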

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003362#comment-14003362
 ] 

Kashif J S commented on HBASE-10933:


For the TRUNK version the patch fixes the following:
1. When a table has 1 region and zero KVs, then if .regioninfo is deleted, running 
hbck corrupts the table and a TableNotFoundException is thrown for any subsequent 
operation on the table.
2. JUnit TCs have been added for various scenarios.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003365#comment-14003365
 ] 

Kashif J S commented on HBASE-10933:


For the TRUNK version the patch also fixes the following:
1. When a table contains 1 region and only 1 KV, then if .regioninfo is missing and 
hbck repair is run, an invalid .regioninfo with the same start and end key is created.
2. JUnit TCs have been added for various scenarios.


 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 {code}
 The problem occurs because, in the HBaseFsck class,
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so a table whose regions are all 
 orphans is never added to the SortedMap<String, TableInfo> tablesInfo; 
 the later lookup then returns null and we get the NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Fix Version/s: 0.94.21
   0.99.0
   Status: Patch Available  (was: In Progress)

Please review the patch. The trunk patch may also be applicable for 0.96 
version IMHO

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.98.2, 0.94.16
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.99.0, 0.94.21

 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file is missing from an HBase region, then running hbck 
 repair or hbck -fixHdfsOrphans
 does not resolve the problem; instead it throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because, in the HBaseFsck class,
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so a table whose regions are all 
 orphans is never added to the SortedMap<String, TableInfo> tablesInfo; 
 the later lookup then returns null and we get the NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-10933 started by Kashif J S.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file is missing from an HBase region, then running hbck 
 repair or hbck -fixHdfsOrphans
 does not resolve the problem; instead it throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because, in the HBaseFsck class,
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so a table whose regions are all 
 orphans is never added to the SortedMap<String, TableInfo> tablesInfo; 
 the later lookup then returns null and we get the NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-04 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13989272#comment-13989272
 ] 

Kashif J S commented on HBASE-10933:


I have the patch ready for the 0.94.19 version. This NullPointerException 
problem probably does not exist in the 0.98.* versions and trunk. Deepak 
Sharma: can you please confirm for 0.98.2?
Also, there is a problem with HBaseFsck when a table's regions have no data 
on disk, or the data is still only in memory: HBaseFsck then fails to 
resolve the INCONSISTENCY. I am working on a fix for that and will update the 
patch for review soon.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Y. SREENIVASULU REDDY
Priority: Critical

 If the .regioninfo file is missing from an HBase region, then running hbck 
 repair or hbck -fixHdfsOrphans
 does not resolve the problem; instead it throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because, in the HBaseFsck class,
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so a table whose regions are all 
 orphans is never added to the SortedMap<String, TableInfo> tablesInfo; 
 the later lookup then returns null and we get the NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-04 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S reassigned HBASE-10933:
--

Assignee: Kashif J S  (was: Y. SREENIVASULU REDDY)

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical

 If the .regioninfo file is missing from an HBase region, then running hbck 
 repair or hbck -fixHdfsOrphans
 does not resolve the problem; instead it throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because, in the HBaseFsck class,
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so a table whose regions are all 
 orphans is never added to the SortedMap<String, TableInfo> tablesInfo; 
 the later lookup then returns null and we get the NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-10 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10921:
---

Attachment: 0.94_Test_results.txt

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Affects Versions: 0.96.2, 0.94.18
Reporter: Ted Yu
Assignee: Kashif J S
 Fix For: 0.94.19, 0.96.3

 Attachments: 0.94_Test_results.txt, 0.96_Test_results.txt, 
 HBASE-10921-0.94-v1.patch, HBASE-10921-0.96-v1.patch


 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-10 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10921:
---

Attachment: 0.96_Test_results.txt

Attached is the Junit test result for the patch for versions 0.96 and 0.94

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Affects Versions: 0.96.2, 0.94.18
Reporter: Ted Yu
Assignee: Kashif J S
 Fix For: 0.94.19, 0.96.3

 Attachments: 0.94_Test_results.txt, 0.96_Test_results.txt, 
 HBASE-10921-0.94-v1.patch, HBASE-10921-0.96-v1.patch


 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-09 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13963854#comment-13963854
 ] 

Kashif J S commented on HBASE-10921:


Ishan Chhabra - Since this is a backport, the attached patch is essentially 
the same, except that it is created on top of the 0.94 branch code; a patch 
for the 0.96 branch is added as well.

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Affects Versions: 0.96.2, 0.94.18
Reporter: Ted Yu
Assignee: Kashif J S
 Fix For: 0.94.19, 0.96.3

 Attachments: HBASE-10921-0.94-v1.patch, HBASE-10921-0.96-v1.patch


 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10921:
---

Fix Version/s: 0.96.3
   0.94.19
Affects Version/s: 0.96.2
   0.94.18
   Status: Patch Available  (was: In Progress)

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Affects Versions: 0.94.18, 0.96.2
Reporter: Ted Yu
Assignee: Kashif J S
 Fix For: 0.94.19, 0.96.3

 Attachments: HBASE-10921-0.94-v1.patch, HBASE-10921-0.96-v1.patch


 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10921:
---

Attachment: HBASE-10921-0.96-v1.patch

Patch for 0.96 version also attached

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Affects Versions: 0.96.2, 0.94.18
Reporter: Ted Yu
Assignee: Kashif J S
 Fix For: 0.94.19, 0.96.3

 Attachments: HBASE-10921-0.94-v1.patch, HBASE-10921-0.96-v1.patch


 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-04-07 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961825#comment-13961825
 ] 

Kashif J S commented on HBASE-10323:


Any reason why this has not been integrated into the 0.94.* versions yet?

 Auto detect data block encoding in HFileOutputFormat
 

 Key: HBASE-10323
 URL: https://issues.apache.org/jira/browse/HBASE-10323
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE_10323-0.94.15-v1.patch, 
 HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
 HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, 
 HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, 
 HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch


 Currently, one has to specify the data block encoding of the table explicitly 
 using the config parameter 
 hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk 
 load. This option is easily missed, undocumented, and works differently 
 from compression, block size and bloom filter type, which are auto detected. 
 The solution would be to add support to auto detect datablock encoding 
 similar to other parameters. 
 The current patch does the following:
 1. Automatically detects datablock encoding in HFileOutputFormat.
 2. Keeps the legacy option of manually specifying the datablock encoding
 around as a method to override auto detections.
 3. Moves string conf parsing to the start of the program so that it fails
 fast during starting up instead of failing during record writes. It also
 makes the internals of the program type safe.
 4. Adds missing doc strings and unit tests for code serializing and
 deserializing config parameters for bloom filter type, block size and
 datablock encoding.
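The override-plus-auto-detect behavior described above can be sketched roughly as follows. This is a plain-Java simplification under stated assumptions: the map-based config and the `chooseEncoding` helper are hypothetical stand-ins, not HFileOutputFormat's actual code; only the config key name comes from the issue.

```java
import java.util.HashMap;
import java.util.Map;

public class EncodingDetectSketch {
    // Config key discussed in the issue (the legacy manual override).
    static final String OVERRIDE_KEY =
        "hbase.mapreduce.hfileoutputformat.datablock.encoding";

    // Pick the data block encoding: an explicit override in the config wins;
    // otherwise fall back to the value auto-detected from the table
    // descriptor (represented here by the 'detected' argument).
    static String chooseEncoding(Map<String, String> conf, String detected) {
        String override = conf.get(OVERRIDE_KEY);
        return (override != null) ? override : detected;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // No override set: the auto-detected encoding is used.
        System.out.println(chooseEncoding(conf, "FAST_DIFF"));
        // Override set: the manual value takes precedence.
        conf.put(OVERRIDE_KEY, "PREFIX");
        System.out.println(chooseEncoding(conf, "FAST_DIFF"));
    }
}
```

Parsing the override once at job setup, as point 3 describes, is what lets a bad value fail fast at startup rather than during record writes.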



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S reassigned HBASE-10921:
--

Assignee: Kashif J S

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Kashif J S

 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-07 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961895#comment-13961895
 ] 

Kashif J S commented on HBASE-10921:


I have the patch ready for the 0.94 version, but I am hitting an Unknown 
Server error while uploading it. I will try again tomorrow, and I will also 
submit the patch for 0.96 then.

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Kashif J S

 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-10921 started by Kashif J S.

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Kashif J S

 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96

2014-04-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10921:
---

Attachment: HBASE-10921-0.94-v1.patch

Patch for 0.94 versions

 Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 
 0.94 / 0.96
 --

 Key: HBASE-10921
 URL: https://issues.apache.org/jira/browse/HBASE-10921
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Kashif J S
 Attachments: HBASE-10921-0.94-v1.patch


 This issue is to backport auto detection of data block encoding in 
 HFileOutputFormat to 0.94 and 0.96 branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10748) hbase-daemon.sh fails to execute with 'sh' command

2014-03-14 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13934852#comment-13934852
 ] 

Kashif J S commented on HBASE-10748:


For consistency across shells, instead of hard-coding the name 
hbase-daemon.sh, you can use the 'basename' command, like below:
thiscmd=$bin/`basename ${BASH_SOURCE-$0}`
OR
thiscmd=$bin/$(basename ${BASH_SOURCE-$0})
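As a standalone illustration of the suggestion above (a hypothetical demo script, not hbase-daemon.sh itself), deriving the script's own path this way works regardless of how it is invoked:

```shell
#!/bin/sh
# demo.sh: resolve this script's own directory and name portably,
# whether invoked as "sh demo.sh", "./demo.sh", or via a full path.
bin=$(cd "$(dirname "${BASH_SOURCE:-$0}")" >/dev/null && pwd)
thiscmd="$bin/$(basename "${BASH_SOURCE:-$0}")"
echo "$thiscmd"
```

Because `basename` strips the leading directories from whatever `$0` (or `BASH_SOURCE`) happens to contain, the re-invocation inside the script no longer depends on a literal file name.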

 hbase-daemon.sh fails to execute with 'sh' command
 --

 Key: HBASE-10748
 URL: https://issues.apache.org/jira/browse/HBASE-10748
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Ashish Singhi
 Attachments: HBASE-10748.patch


 hostname:HBASE_HOME/bin # sh hbase-daemon.sh restart master
 *hbase-daemon.sh: line 188: hbase-daemon.sh: command not found*
 *hbase-daemon.sh: line 196: hbase-daemon.sh: command not found*



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-24 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13880811#comment-13880811
 ] 

Kashif J S commented on HBASE-10383:


Yes Lars Hofhansl.
I tested the patch by first modifying only the secure test cases while 
keeping the old SecureBulkLoadProtocol and SecureBulkLoadEndPoint. The secure 
test cases FAIL in that case, so the test cases look good: they now detect 
the failure, unlike before.

After applying the patch to SecureBulkLoadProtocol and SecureBulkLoadEndPoint, 
the test cases pass successfully.

Also, in my cluster setup with Kerberos, the patch works fine and secure bulk 
load succeeds.
Thanks.

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch, 
 hbase-10383-jyates-0.94-v0.patch


 Secure bulk load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-23 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13880772#comment-13880772
 ] 

Kashif J S commented on HBASE-10383:


LGTM. I will do a test to confirm all works well soon.
Do TestLoadIncrementalHFiles and TestLoadIncrementalHFilesSplitRecovery really 
need to introduce protected static boolean useSecureHBaseOverride = false; ?
IMO, the default constructor for non-secure mode already sets it to non-secure:
public LoadIncrementalHFiles(Configuration conf) throws Exception {
   this(conf, false);
}

Is there a need to modify TestLoadIncrementalHFiles and 
TestLoadIncrementalHFilesSplitRecovery?
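A minimal sketch of the constructor-delegation pattern the comment quotes — the hypothetical BulkLoader class stands in for LoadIncrementalHFiles; the one-arg constructor already defaults the secure flag to false, which is the point being made about the static override:

```java
// Hypothetical stand-in for LoadIncrementalHFiles, illustrating the quoted
// constructor delegation: the one-arg constructor defaults to non-secure
// mode, so callers do not need a separate static override flag.
public class BulkLoader {
    private final boolean useSecure;

    public BulkLoader(Object conf) {
        this(conf, false); // non-secure by default, as in the quoted constructor
    }

    public BulkLoader(Object conf, boolean useSecure) {
        this.useSecure = useSecure;
    }

    public boolean isSecure() {
        return useSecure;
    }
}
```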

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch, 
 hbase-10383-jyates-0.94-v0.patch


 Secure Bulk Load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10383) Secure Bulk Load fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)
Kashif J S created HBASE-10383:
--

 Summary: Secure Bulk Load fails for version 0.94.15
 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S


Secure Bulk Load with Kerberos enabled fails for LoadIncrementalHFiles with 
the following exception:



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10383:
---

Description: 
Secure Bulk Load with Kerberos enabled fails for complete bulk load 
(LoadIncrementalHFiles) with the following exception: ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer: 
org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
handler for protocol 
org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
 at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
 at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
 at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
 at java.lang.reflect.Method.invoke(Method.java)
 at 
org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



  was:Secure Bulk Load with kerberos enabled fails for LoadIncrementalHfile 
with following exception

Summary: Secure Bulk Load for 'completebulkload' fails for version 
0.94.15  (was: Secure Bulk Load fails for version 0.94.15)

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S

 Secure Bulk Load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876474#comment-13876474
 ] 

Kashif J S commented on HBASE-10383:


This happens because the SecureBulkLoadClient tries to invoke bulkLoadHFiles 
with arguments passed as List, Token, String, Boolean. But the 
SecureBulkLoadProtocol does not define any method with such a signature. 
SecureBulkLoadProtocol declares:
bulkLoadHFiles(List<Pair<byte[], String>> familyPaths,
 Token<?> userToken, String bulkToken)

Hence the method invocation fails.

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S

 Secure Bulk Load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877204#comment-13877204
 ] 

Kashif J S commented on HBASE-10383:


It still fails with the same exception, probably because it tries to invoke 
the method with Boolean and not boolean. I am providing another patch which 
works in my test.
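A small self-contained illustration of the Boolean-vs-boolean mismatch described above: reflective method lookup requires the exact declared parameter types, so passing the boxed `Boolean.class` fails to find a method declared with the primitive `boolean`. The `Proto` interface and method here are hypothetical stand-ins, not HBase's actual protocol:

```java
import java.lang.reflect.Method;

// Hypothetical protocol interface declaring a primitive boolean parameter,
// mirroring the shape of the mismatch discussed in the comment above.
interface Proto {
    void bulkLoadHFiles(String path, boolean assignSeqNum);
}

public class SignatureMismatch {
    public static void main(String[] args) throws Exception {
        // Looking the method up with the boxed Boolean.class fails: reflection
        // matches declared parameter types exactly, with no auto-unboxing.
        try {
            Proto.class.getMethod("bulkLoadHFiles", String.class, Boolean.class);
            System.out.println("found");
        } catch (NoSuchMethodException e) {
            System.out.println("no matching handler");
        }
        // The primitive type matches the declaration.
        Method m = Proto.class.getMethod("bulkLoadHFiles", String.class, boolean.class);
        System.out.println(m.getName());
    }
}
```

This is the same failure mode a reflection-based RPC dispatcher hits when the client-side argument types do not line up with the declared handler signature.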

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure Bulk Load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10383:
---

Attachment: HBASE-10383-v2.patch

Complete bulk load in Secure mode fix updated

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure Bulk Load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10383:
---

Priority: Critical  (was: Major)

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure Bulk Load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-11 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Attachment: HBASE-9902_v2-0.94.patch

Patch for 0.94 version

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9902.patch, HBASE-9902_v2-0.94.patch, 
 HBASE-9902_v2.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...
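The fix described in the report can be sketched as a self-contained check. Class, method, and constant names here are illustrative, not HBase's actual code; only the skew arithmetic follows the snippet quoted above:

```java
// Hedged sketch of the corrected clock-skew check: taking the absolute value
// of the skew catches a region server whose clock is AHEAD of the master's,
// not just one that is behind.
public class ClockSkewCheck {
    static final long MAX_SKEW_MS = 30000; // mirrors the 30s default discussed above

    // Returns true when the absolute skew is within the allowed maximum.
    static boolean isSkewAcceptable(long masterNow, long serverCurrentTime, long maxSkew) {
        long skew = masterNow - serverCurrentTime;
        skew = (skew < 0 ? -skew : skew); // the fix: make the skew positive first
        return skew <= maxSkew;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Region server 60s ahead of the master: the raw skew is negative, so
        // the original (skew > maxSkew) test would wrongly accept it.
        System.out.println(isSkewAcceptable(now, now + 60000, MAX_SKEW_MS)); // false
        System.out.println(isSkewAcceptable(now, now + 10000, MAX_SKEW_MS)); // true
    }
}
```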



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-11 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Attachment: (was: HBASE-9902_v2-0.94.patch)

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-11 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Attachment: HBASE-9902_v2-0.94.patch

Patch for 0.94

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9902.patch, HBASE-9902_v2-0.94.patch, 
 HBASE-9902_v2.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Fix Version/s: 0.94.14

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9902.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Status: Open  (was: Patch Available)

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Attachment: HBASE-9902_v2.patch

JUnit test case modified to use a different host:port for each region server 
in the test

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Status: Patch Available  (was: Open)

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch


 When Region server's time is ahead of Master's time and the difference is 
 more than hbase.master.maxclockskew value, region server startup is not 
 failing with ClockOutOfSyncException.
 This causes some abnormal behavior as detected by our Tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The above results in a negative value when the master's time is less than 
 the region server's time, and the if (skew > maxSkew) check fails to find 
 the skew in this case.
 Please Note: This was tested in hbase 0.94.11 version and the trunk also 
 currently has the same logic.
 The fix for the same would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S reassigned HBASE-9850:
-

Assignee: Kashif J S

Assign for fix

 Issues with UI for table compact/split operation completion. After 
 split/compaction operation using UI, the page is not automatically 
 redirecting back using IE8/Firefox.
 -

 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9850.patch


 Steps:
 1. Create a table with regions.
 2. Insert some amount of data in such a way that some HFiles are created, 
 fewer than the minimum compacted files count (say 3 HFiles exist but min 
 compaction files is 10).
 3. From the UI, perform a compact operation on the table.
 The TABLE ACTION REQUEST Accepted page is displayed.
 4. The operation fails because the compaction criteria are not met, but from 
 the UI compaction requests are continuously sent to the server. This happens 
 using IE (history.back() seems to resend the compact/split request); Firefox 
 seems OK in this case.
 5. Later, no automatic redirection to the main master page occurs.
 SOLUTION:
 table.jsp in the HBase master.
 The meta tag for the HTML contains a refresh with javascript:history.back().
 A javascript cannot execute in a meta refresh tag like the one in table.jsp 
 and snapshot.jsp:
 <meta http-equiv="refresh" content="5,javascript:history.back()" />
 This will FAIL.
 W3Schools also suggests using refresh in JavaScript rather than the meta tag.
 If the above meta is replaced as below, the behavior is OK, verified on 
 IE8/Firefox:
   <script type="text/javascript">
   <!--
   setTimeout("history.back()", 5000);
   -->
   </script>
 Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9850:
--

Attachment: HBASE-9850-0.94.14.patch

Patch for 0.94 version

 Issues with UI for table compact/split operation completion. After 
 split/compaction operation using UI, the page is not automatically 
 redirecting back using IE8/Firefox.
 -

 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9850-0.94.14.patch, HBASE-9850-trunk.patch, 
 HBASE-9850.patch


 Steps:
 1. Create a table with regions.
 2. Insert some amount of data in such a way that some HFiles are created, 
 fewer than the minimum compacted files count (say 3 HFiles exist but min 
 compaction files is 10).
 3. From the UI, perform a compact operation on the table.
 The TABLE ACTION REQUEST Accepted page is displayed.
 4. The operation fails because the compaction criteria are not met, but from 
 the UI compaction requests are continuously sent to the server. This happens 
 using IE (history.back() seems to resend the compact/split request); Firefox 
 seems OK in this case.
 5. Later, no automatic redirection to the main master page occurs.
 SOLUTION:
 table.jsp in the HBase master.
 The meta tag for the HTML contains a refresh with javascript:history.back().
 A javascript cannot execute in a meta refresh tag like the one in table.jsp 
 and snapshot.jsp:
 <meta http-equiv="refresh" content="5,javascript:history.back()" />
 This will FAIL.
 W3Schools also suggests using refresh in JavaScript rather than the meta tag.
 If the above meta is replaced as below, the behavior is OK, verified on 
 IE8/Firefox:
   <script type="text/javascript">
   <!--
   setTimeout("history.back()", 5000);
   -->
   </script>
 Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.

2013-11-08 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9850:
--

Attachment: HBASE-9850-trunk.patch

Patch for trunk

 Issues with UI for table compact/split operation completion. After 
 split/compaction operation using UI, the page is not automatically 
 redirecting back using IE8/Firefox.
 -

 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9850-0.94.14.patch, HBASE-9850-trunk.patch, 
 HBASE-9850.patch


 Steps:
 1. Create a table with regions.
 2. Insert data such that some HFiles exist, but fewer than the minimum 
 compaction file count (say 3 HFiles are present but the min compaction 
 file count is 10).
 3. From the UI, perform a compact operation on the table.
 The "TABLE ACTION REQUEST Accepted" page is displayed.
 4. The operation fails because the compaction criteria are not met, but the UI 
 keeps resending compaction requests to the server. This happens with 
 IE (history.back() seems to resend the compact/split request); Firefox seems OK 
 in this case.
 5. Afterwards, no automatic redirection back to the main master page occurs.
 SOLUTION:
 table.jsp in the HBase master.
 The HTML meta tag contains a refresh with javascript:history.back().
 JavaScript cannot execute inside a meta refresh tag like the one in table.jsp 
 and snapshot.jsp:
 <meta http-equiv="refresh" content="5,javascript:history.back()" />
 This will FAIL.
 W3Schools also suggests doing the refresh in JavaScript rather than in a meta 
 tag. If the meta tag above is replaced as below, the behavior is verified OK on 
 IE8/Firefox:
   <script type="text/javascript">
   <!--
   setTimeout("history.back()", 5000);
   //-->
   </script>
 Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Fix Version/s: (was: 0.96.0)
   0.96.1

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When the region server's time is ahead of the master's time and the difference 
 is more than the hbase.master.maxclockskew value, region server startup does 
 not fail with ClockOutOfSyncException.
 This causes some abnormal behavior, as detected by our tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The first line yields a negative value when the master's time is less than the 
 region server's time, and the "if (skew > maxSkew)" check then fails to detect 
 the skew.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-07 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Release Note: Clock skew detection is made an absolute-value comparison. Any 
time difference between master and region server beyond the allowed skew, 
whether ahead or behind, must prevent region server startup.
  Status: Patch Available  (was: Open)

Clock skew detection is made an absolute-value comparison. Any time difference 
between master and region server beyond the allowed skew, whether ahead or 
behind, must prevent region server startup.

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9902.patch


 When the region server's time is ahead of the master's time and the difference 
 is more than the hbase.master.maxclockskew value, region server startup does 
 not fail with ClockOutOfSyncException.
 This causes some abnormal behavior, as detected by our tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The first line yields a negative value when the master's time is less than the 
 region server's time, and the "if (skew > maxSkew)" check then fails to detect 
 the skew.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox

2013-11-06 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13814899#comment-13814899
 ] 

Kashif J S commented on HBASE-9850:
---

Hi Hadoop QA,

-1 tests included. The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
This was a UI issue. Manually pressing the compact/split/clone/restore buttons 
takes you to the "Request Action accepted" page.
No JUnit TC is required, I guess. Do you write automated TCs for the UI?

-1 javadoc. The javadoc tool appears to have generated 1 warning messages.
Since this patch only involves modification of JSP pages (table.jsp and 
snapshot.jsp), I think this javadoc warning is not related to this patch. 
Please confirm.

-1 site. The patch appears to cause mvn site goal to fail.
I think this is not related to this patch. Please confirm.

-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
I think this is not related to this patch, since it only involves modification 
of JSP pages (table.jsp and snapshot.jsp). Please confirm.


 Issues with UI for table compact/split operation completion. After 
 split/compaction operation using UI, the page is not automatically 
 redirecting back using IE8/Firefox.
 -

 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.1, 0.94.14

 Attachments: HBASE-9850.patch


 Steps:
 1. Create a table with regions.
 2. Insert data such that some HFiles exist, but fewer than the minimum 
 compaction file count (say 3 HFiles are present but the min compaction 
 file count is 10).
 3. From the UI, perform a compact operation on the table.
 The "TABLE ACTION REQUEST Accepted" page is displayed.
 4. The operation fails because the compaction criteria are not met, but the UI 
 keeps resending compaction requests to the server. This happens with 
 IE (history.back() seems to resend the compact/split request); Firefox seems OK 
 in this case.
 5. Afterwards, no automatic redirection back to the main master page occurs.
 SOLUTION:
 table.jsp in the HBase master.
 The HTML meta tag contains a refresh with javascript:history.back().
 JavaScript cannot execute inside a meta refresh tag like the one in table.jsp 
 and snapshot.jsp:
 <meta http-equiv="refresh" content="5,javascript:history.back()" />
 This will FAIL.
 W3Schools also suggests doing the refresh in JavaScript rather than in a meta 
 tag. If the meta tag above is replaced as below, the behavior is verified OK on 
 IE8/Firefox:
   <script type="text/javascript">
   <!--
   setTimeout("history.back()", 5000);
   //-->
   </script>
 Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-06 Thread Kashif J S (JIRA)
Kashif J S created HBASE-9902:
-

 Summary: Region Server is starting normally even if clock skew is 
more than default 30 seconds(or any configured). - Regionserver node time is 
greater than master node time
 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S


When the region server's time is ahead of the master's time and the difference 
is more than the hbase.master.maxclockskew value, region server startup does not 
fail with ClockOutOfSyncException.
This causes some abnormal behavior, as detected by our tests.

ServerManager.java#checkClockSkew:
  long skew = System.currentTimeMillis() - serverCurrentTime;
  if (skew > maxSkew) {
    String message = "Server " + serverName + " has been " +
      "rejected; Reported time is too far out of sync with master. " +
      "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
    LOG.warn(message);
    throw new ClockOutOfSyncException(message);
  }

The first line yields a negative value when the master's time is less than the 
region server's time, and the "if (skew > maxSkew)" check then fails to detect 
the skew.


Please note: this was tested on HBase 0.94.11, and trunk currently has the same 
logic.

The fix would be to make the skew a positive value first, as below:

  long skew = System.currentTimeMillis() - serverCurrentTime;
  skew = (skew < 0 ? -skew : skew);
  if (skew > maxSkew) { ...
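To make the effect of the proposed fix concrete, here is a minimal standalone 
sketch. The class and method names are hypothetical (not the actual 
ServerManager code), and the 30-second constant stands in for the configured 
hbase.master.maxclockskew:

```java
public class ClockSkewCheck {
    // Default hbase.master.maxclockskew is 30 seconds.
    static final long MAX_SKEW_MS = 30000;

    // Compare the *absolute* clock difference, so a region server is
    // rejected whether its clock is behind OR ahead of the master's.
    static boolean isSkewTooLarge(long masterTimeMs, long serverTimeMs) {
        long skew = masterTimeMs - serverTimeMs;
        skew = (skew < 0 ? -skew : skew); // same effect as Math.abs(skew)
        return skew > MAX_SKEW_MS;
    }

    public static void main(String[] args) {
        long now = 1000000L;
        // Server 60s behind master: caught both before and after the fix.
        System.out.println(isSkewTooLarge(now, now - 60000));
        // Server 60s AHEAD of master: with the buggy signed comparison,
        // skew is -60000, so "skew > maxSkew" is false and the server
        // starts anyway; the absolute value catches it.
        System.out.println(isSkewTooLarge(now, now + 60000));
        // Within the allowed 30s window: accepted.
        System.out.println(isSkewTooLarge(now, now + 10000));
    }
}
```

Math.abs(skew) would express the same normalization; the ternary simply mirrors 
the patch.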



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-06 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Fix Version/s: 0.96.0
   0.98.0

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.0


 When the region server's time is ahead of the master's time and the difference 
 is more than the hbase.master.maxclockskew value, region server startup does 
 not fail with ClockOutOfSyncException.
 This causes some abnormal behavior, as detected by our tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The first line yields a negative value when the master's time is less than the 
 region server's time, and the "if (skew > maxSkew)" check then fails to detect 
 the skew.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time

2013-11-06 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9902:
--

Attachment: HBASE-9902.patch

Patch making clock skew detection use an absolute-value comparison, for the 
0.98.0 and 0.96.0 versions.

 Region Server is starting normally even if clock skew is more than default 30 
 seconds(or any configured). - Regionserver node time is greater than master 
 node time
 

 Key: HBASE-9902
 URL: https://issues.apache.org/jira/browse/HBASE-9902
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.11
Reporter: Kashif J S
 Fix For: 0.98.0, 0.96.0

 Attachments: HBASE-9902.patch


 When the region server's time is ahead of the master's time and the difference 
 is more than the hbase.master.maxclockskew value, region server startup does 
 not fail with ClockOutOfSyncException.
 This causes some abnormal behavior, as detected by our tests.
 ServerManager.java#checkClockSkew:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   if (skew > maxSkew) {
     String message = "Server " + serverName + " has been " +
       "rejected; Reported time is too far out of sync with master. " +
       "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
     LOG.warn(message);
     throw new ClockOutOfSyncException(message);
   }
 The first line yields a negative value when the master's time is less than the 
 region server's time, and the "if (skew > maxSkew)" check then fails to detect 
 the skew.
 Please note: this was tested on HBase 0.94.11, and trunk currently has the 
 same logic.
 The fix would be to make the skew a positive value first, as below:
   long skew = System.currentTimeMillis() - serverCurrentTime;
   skew = (skew < 0 ? -skew : skew);
   if (skew > maxSkew) { ...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.

2013-11-05 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-9850:
--

Attachment: HBASE-9850.patch

Verified the patch for trunk. Tested it in the IE and Firefox browsers.

IE -- Auto-redirection to the previous page is OK. No repeated split/compact 
requests are sent from the table page to the server. Snapshot clone/restore 
requests are also OK: no more repeated requests.

Firefox -- Auto-redirection to the previous page is OK. No repeated requests 
for split/compact/clone/restore were observed earlier, and it remains OK now.

 Issues with UI for table compact/split operation completion. After 
 split/compaction operation using UI, the page is not automatically 
 redirecting back using IE8/Firefox.
 -

 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S
 Attachments: HBASE-9850.patch


 Steps:
 1. Create a table with regions.
 2. Insert data such that some HFiles exist, but fewer than the minimum 
 compaction file count (say 3 HFiles are present but the min compaction 
 file count is 10).
 3. From the UI, perform a compact operation on the table.
 The "TABLE ACTION REQUEST Accepted" page is displayed.
 4. The operation fails because the compaction criteria are not met, but the UI 
 keeps resending compaction requests to the server. This happens with 
 IE (history.back() seems to resend the compact/split request); Firefox seems OK 
 in this case.
 5. Afterwards, no automatic redirection back to the main master page occurs.
 SOLUTION:
 table.jsp in the HBase master.
 The HTML meta tag contains a refresh with javascript:history.back().
 JavaScript cannot execute inside a meta refresh tag like the one in table.jsp 
 and snapshot.jsp:
 <meta http-equiv="refresh" content="5,javascript:history.back()" />
 This will FAIL.
 W3Schools also suggests doing the refresh in JavaScript rather than in a meta 
 tag. If the meta tag above is replaced as below, the behavior is verified OK on 
 IE8/Firefox:
   <script type="text/javascript">
   <!--
   setTimeout("history.back()", 5000);
   //-->
   </script>
 Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.

2013-10-28 Thread Kashif J S (JIRA)
Kashif J S created HBASE-9850:
-

 Summary: Issues with UI for table compact/split operation 
completion. After split/compaction operation using UI, the page is not 
automatically redirecting back using IE8/Firefox.
 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S


Steps:

1. Create a table with regions.
2. Insert data such that some HFiles exist, but fewer than the minimum 
compaction file count (say 3 HFiles are present but the min compaction file 
count is 10).
3. From the UI, perform a compact operation on the table.
The "TABLE ACTION REQUEST Accepted" page is displayed.
4. The operation fails because the compaction criteria are not met, but the UI 
keeps resending compaction requests to the server. This happens with IE 
(history.back() seems to resend the compact/split request); Firefox seems OK in 
this case.
5. Afterwards, no automatic redirection back to the main master page occurs.

SOLUTION:

table.jsp in the HBase master.

The HTML meta tag contains a refresh with javascript:history.back().

JavaScript cannot execute inside a meta refresh tag like the one in table.jsp 
and snapshot.jsp:

<meta http-equiv="refresh" content="5,javascript:history.back()" />
This will FAIL.

W3Schools also suggests doing the refresh in JavaScript rather than in a meta 
tag.

If the meta tag above is replaced as below, the behavior is verified OK on 
IE8/Firefox:
  <script type="text/javascript">
  <!--
  setTimeout("history.back()", 5000);
  //-->
  </script>

Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox

2013-10-28 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13806838#comment-13806838
 ] 

Kashif J S commented on HBASE-9850:
---

The same problem exists in the HBase master's snapshot.jsp.

It should be fixed there as well.

 Issues with UI for table compact/split operation completion. After 
 split/compaction operation using UI, the page is not automatically 
 redirecting back using IE8/Firefox.
 -

 Key: HBASE-9850
 URL: https://issues.apache.org/jira/browse/HBASE-9850
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 0.94.11
Reporter: Kashif J S

 Steps:
 1. Create a table with regions.
 2. Insert data such that some HFiles exist, but fewer than the minimum 
 compaction file count (say 3 HFiles are present but the min compaction 
 file count is 10).
 3. From the UI, perform a compact operation on the table.
 The "TABLE ACTION REQUEST Accepted" page is displayed.
 4. The operation fails because the compaction criteria are not met, but the UI 
 keeps resending compaction requests to the server. This happens with 
 IE (history.back() seems to resend the compact/split request); Firefox seems OK 
 in this case.
 5. Afterwards, no automatic redirection back to the main master page occurs.
 SOLUTION:
 table.jsp in the HBase master.
 The HTML meta tag contains a refresh with javascript:history.back().
 JavaScript cannot execute inside a meta refresh tag like the one in table.jsp 
 and snapshot.jsp:
 <meta http-equiv="refresh" content="5,javascript:history.back()" />
 This will FAIL.
 W3Schools also suggests doing the refresh in JavaScript rather than in a meta 
 tag. If the meta tag above is replaced as below, the behavior is verified OK on 
 IE8/Firefox:
   <script type="text/javascript">
   <!--
   setTimeout("history.back()", 5000);
   //-->
   </script>
 Hence table.jsp and snapshot.jsp should be modified as above.



--
This message was sent by Atlassian JIRA
(v6.1#6144)