[jira] [Created] (HBASE-12686) Failures in split left the daughter regions in transition forever even after rollback

2014-12-14 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-12686:
---

 Summary: Failures in split left the daughter regions in transition 
forever even after rollback
 Key: HBASE-12686
 URL: https://issues.apache.org/jira/browse/HBASE-12686
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.98.9
Reporter: Rajeshbabu Chintaguntla
 Fix For: 0.98.10


If a split fails, both daughter regions are left in transition even after the 
rollback, which blocks balancing forever unless the master is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12696) Possible NPE in SplitTransaction when skipStoreFileRangeCheck in splitPolicy return true

2014-12-15 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-12696:
---

 Summary: Possible NPE in SplitTransaction when 
skipStoreFileRangeCheck in splitPolicy return true
 Key: HBASE-12696
 URL: https://issues.apache.org/jira/browse/HBASE-12696
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 1.0.0, 2.0.0, 0.98.9


When we close the region during a split we close all the store file readers. At 
reference-file creation time we normally reopen the reader to check the split 
row against the store file boundaries, but if skipStoreFileRangeCheck returns 
true the reader may never be opened and stays null, so the following call throws 
an NPE.
{code}
f.getReader().close(true);
{code}
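
A minimal defensive sketch (not necessarily the committed patch), assuming the fix 
is simply to guard the close call when the reader was never opened:
{code}
// Hedged sketch: close the reader only if it was actually opened, since
// skipStoreFileRangeCheck() may have skipped opening it.
StoreFile.Reader reader = f.getReader();
if (reader != null) {
  reader.close(true);
}
{code}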



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12791) HBase does not attempt to clean up an aborted split when the regionserver shutting down

2014-12-30 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-12791:
---

 Summary: HBase does not attempt to clean up an aborted split when 
the regionserver shutting down
 Key: HBASE-12791
 URL: https://issues.apache.org/jira/browse/HBASE-12791
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
Priority: Critical
 Fix For: 2.0.0, 0.98.10, 1.0.1


HBase does not clean up the daughter region directories from HDFS if the region 
server shuts down after creating them during a split.

Here are the logs:

- RS shutdown after creating the daughter regions.
{code}
2014-12-31 09:05:41,406 DEBUG [regionserver60020-splits-1419996941385] 
zookeeper.ZKAssign: regionserver:60020-0x14a9701e53100d1, 
quorum=localhost:2181, baseZNode=/hbase Transitioned node 
80c665138d4fa32da4d792d8ed13206f from RS_ZK_REQUEST_REGION_SPLIT to 
RS_ZK_REQUEST_REGION_SPLIT
2014-12-31 09:05:41,514 DEBUG [regionserver60020-splits-1419996941385] 
regionserver.HRegion: Closing 
t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.: disabling compactions  
flushes
2014-12-31 09:05:41,514 DEBUG [regionserver60020-splits-1419996941385] 
regionserver.HRegion: Updates disabled for region 
t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.
2014-12-31 09:05:41,516 INFO  
[StoreCloserThread-t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.-1] 
regionserver.HStore: Closed f
2014-12-31 09:05:41,518 INFO  [regionserver60020-splits-1419996941385] 
regionserver.HRegion: Closed t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.
2014-12-31 09:05:49,922 DEBUG [regionserver60020-splits-1419996941385] 
regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for 
table t dd9731ee43b104da565257ca1539aa8c
2014-12-31 09:05:49,922 DEBUG [regionserver60020-splits-1419996941385] 
regionserver.HRegion: Instantiated 
t,,1419996941401.dd9731ee43b104da565257ca1539aa8c.
2014-12-31 09:05:49,929 DEBUG [regionserver60020-splits-1419996941385] 
regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for 
table t 2e40a44511c0e187d357d651f13a1dab
2014-12-31 09:05:49,929 DEBUG [regionserver60020-splits-1419996941385] 
regionserver.HRegion: Instantiated 
t,row2,1419996941401.2e40a44511c0e187d357d651f13a1dab.
Wed Dec 31 09:06:30 IST 2014 Terminating regionserver
2014-12-31 09:06:30,465 INFO  [Thread-8] regionserver.ShutdownHook: Shutdown 
hook starting; hbase.shutdown.hook=true; 
fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@42d2282e
{code}
- Rollback/cleanup is skipped when the RS is stopped or stopping, so we end up 
with dirty daughter regions in HDFS.
{code}
2014-12-31 09:07:49,547 INFO  [regionserver60020-splits-1419996941385] 
regionserver.SplitRequest: Skip rollback/cleanup of failed split of 
t,,1419996880699.80c665138d4fa32da4d792d8ed13206f. because server is stopped
java.io.InterruptedIOException: Interrupted after 0 tries  on 350
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:156)
{code}

Because of this, hbck always reports inconsistencies.
{code}
ERROR: Region { meta = null, hdfs = 
hdfs://localhost:9000/hbase/data/default/t/2e40a44511c0e187d357d651f13a1dab, 
deployed =  } on HDFS, but not listed in hbase:meta or deployed on any region 
server
ERROR: Region { meta = null, hdfs = 
hdfs://localhost:9000/hbase/data/default/t/dd9731ee43b104da565257ca1539aa8c, 
deployed =  } on HDFS, but not listed in hbase:meta or deployed on any region 
server
{code}

If we try to repair, we end up with overlapping regions in hbase:meta.
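
A rough sketch of the kind of cleanup that is missing, assuming orphan daughter 
directories can be identified by comparing the table directory against the 
regions known to hbase:meta (the path and the encodedNamesInMeta set below are 
illustrative):
{code}
// Hedged sketch: remove daughter region directories left behind by an aborted
// split that are not referenced in hbase:meta. Filtering of non-region
// directories such as .tabledesc/.tmp is omitted for brevity.
FileSystem fs = FileSystem.get(conf);
Path tableDir = new Path("/hbase/data/default/t");          // illustrative table dir
for (FileStatus regionDir : fs.listStatus(tableDir)) {
  String encodedName = regionDir.getPath().getName();
  if (!encodedNamesInMeta.contains(encodedName)) {          // hypothetical set built from hbase:meta
    fs.delete(regionDir.getPath(), true);
  }
}
{code}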



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12901) Possible deadlock while onlining a region and get region plan for other region run parallel

2015-01-21 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-12901:
---

 Summary: Possible deadlock while onlining a region and get region 
plan for other region run parallel
 Key: HBASE-12901
 URL: https://issues.apache.org/jira/browse/HBASE-12901
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
Priority: Critical
 Fix For: 1.0.0, 1.1.0


There is a deadlock when the region state update after assignment completion 
(regionOnline) and the region-plan lookup for another region run in parallel. 
Before onlining we synchronize on regionStates and, inside that, on regionPlans 
to clear the region plan. At the same time the plan lookup may synchronize first 
on regionPlans and then on regionStates while getting the assignments of a 
server. This started appearing after the HBASE-12686 fix and is present in 
branch-1 and branch-1.1 only.
{code}
AM.-pool1-t33:
at 
org.apache.hadoop.hbase.master.AssignmentManager.clearRegionPlan(AssignmentManager.java:2917)
- waiting to lock 0xd0147f70 (a java.util.TreeMap)
at 
org.apache.hadoop.hbase.master.AssignmentManager.regionOffline(AssignmentManager.java:3617)
at 
org.apache.hadoop.hbase.master.AssignmentManager.regionOffline(AssignmentManager.java:1402)
at 
org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1734)
at 
org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1821)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1456)
at 
org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
AM.-pool1-t29:
at 
org.apache.hadoop.hbase.master.RegionStates.getRegionAssignments(RegionStates.java:155)
- waiting to lock 0xd010b250 (a 
org.apache.hadoop.hbase.master.RegionStates)
at 
org.apache.hadoop.hbase.master.AssignmentManager.getSnapShotOfAssignment(AssignmentManager.java:3629)
at 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.getRegionAssignmentsByServer(BaseLoadBalancer.java:1146)
at 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.createCluster(BaseLoadBalancer.java:959)
at 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.randomAssignment(BaseLoadBalancer.java:1010)
at 
org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:2228)
- locked 0xd0147f70 (a java.util.TreeMap)
at 
org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:2185)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1905)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1464)
at 
org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
AM.ZK.Worker-pool2-t41:
at 
org.apache.hadoop.hbase.master.AssignmentManager.clearRegionPlan(AssignmentManager.java:2917)
- waiting to lock 0xd0147f70 (a java.util.TreeMap)
at 
org.apache.hadoop.hbase.master.AssignmentManager.regionOnline(AssignmentManager.java:1305)
at 
org.apache.hadoop.hbase.master.AssignmentManager$4.run(AssignmentManager.java:1196)
- locked 0xd010b250 (a 
org.apache.hadoop.hbase.master.RegionStates)
at 
org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1142)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}
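
A stripped-down illustration of the inconsistent lock ordering the thread dump 
shows; this is not HBase code, the two methods merely stand in for 
regionOnline()/clearRegionPlan() and getRegionPlan()/getRegionAssignments():
{code}
// Illustrative only: the two code paths take the same two locks in opposite
// order, which is the classic deadlock shown in the stack traces above.
static final Object regionStates = new Object();
static final Object regionPlans = new Object();

static void onlineRegion() {           // AM.ZK.Worker: regionOnline()
  synchronized (regionStates) {        // holds regionStates...
    synchronized (regionPlans) {       // ...waits for regionPlans in clearRegionPlan()
      // update region state, clear the plan
    }
  }
}

static void lookUpRegionPlan() {       // AM.-pool1 worker: getRegionPlan()
  synchronized (regionPlans) {         // holds regionPlans...
    synchronized (regionStates) {      // ...waits for regionStates in getRegionAssignments()
      // build a plan from the current assignments
    }
  }
}
{code}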



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-16 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-13054:
---

 Summary: Provide more tracing information for locking/latching 
events.
 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0


Currently there is not much tracing information available for locking and 
latching events, such as row-level locking during mini-batch mutations and 
region-level locking during flush and close. It would be better to provide more 
information for such events.
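
A hedged sketch of the kind of instrumentation meant here, reusing the 
TraceScope/TraceUtil pattern that appears elsewhere in this digest; the span name 
and the surrounding lock are illustrative and the exact tracing helper differs 
between the target branches:
{code}
// Illustrative only: record how long we wait to acquire the region close lock,
// so the wait shows up as a span in tracing output. The lock is released later
// in the normal close path, outside this scope.
try (TraceScope scope = TraceUtil.createTrace("HRegion.doClose.writeLock")) {
  lock.writeLock().lock();
}
{code}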



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12667) Deadlock in AssignmentManager

2015-01-29 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-12667.
-
Resolution: Duplicate

Already fixed as part of HBASE-12901. Marking as duplicate.

 Deadlock in AssignmentManager
 -

 Key: HBASE-12667
 URL: https://issues.apache.org/jira/browse/HBASE-12667
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.0
Reporter: zhaoyunjiong

 Inconsistent lock ordering between regionPlans and regionStates caused a deadlock.
 Trunk does not have the problem since that code has already been refactored.
 master:phxhshdc11en0004:6:
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.clearRegionPlan(AssignmentManager.java:2898)
 - waiting to lock 0x00048cefe520 (a java.util.TreeMap)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.regionOnline(AssignmentManager.java:1286)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.handleRegionSplitting(AssignmentManager.java:3552)
 - locked 0x00048cf6fc10 (a 
 org.apache.hadoop.hbase.master.RegionStates)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:732)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processRegionInTransition(AssignmentManager.java:601)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRecoverLostRegions(AssignmentManager.java:2851)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:519)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:459)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:900)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:609)
 at java.lang.Thread.run(Thread.java:744)
 AM.-pool1-t10:
 at 
 org.apache.hadoop.hbase.master.RegionStates.getRegionAssignments(RegionStates.java:154)
 - waiting to lock 0x00048cf6fc10 (a 
 org.apache.hadoop.hbase.master.RegionStates)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.getSnapShotOfAssignment(AssignmentManager.java:3610)
 at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.getRegionAssignmentsByServer(BaseLoadBalancer.java:1146)
 at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.createCluster(BaseLoadBalancer.java:959)
 at 
 org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.randomAssignment(BaseLoadBalancer.java:1010)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:2209)
 - locked 0x00048cefe520 (a java.util.TreeMap)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:2166)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1886)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1445)
 at 
 org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12975) SplitTransaction, RegionMergeTransaction should have InterfaceAudience of LimitedPrivate(Coproc,Phoenix)

2015-02-04 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-12975:
---

 Summary: SplitTransaction, RegionMergeTransaction should have 
InterfaceAudience of LimitedPrivate(Coproc,Phoenix)
 Key: HBASE-12975
 URL: https://issues.apache.org/jira/browse/HBASE-12975
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Making SplitTransaction and RegionMergeTransaction limited private is required to 
support the local indexing feature in Phoenix and to ensure region colocation.

With a few method calls the coprocessors can drive a region split or merge 
without touching internals such as ZK node creation, file layout changes, or 
assignments:
1) stepsBeforePONR and stepsAfterPONR let us carry out the split.
2) meta entries can be passed through coprocessors so they are updated atomically 
with the normal split/merge.
3) rollback on failure.
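
A minimal sketch of the proposed annotation change (class body elided), assuming 
the standard HBaseInterfaceAudience constants are used:
{code}
@InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX})
public class SplitTransaction {
  // ... existing implementation unchanged ...
}
{code}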



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13667) Backport HBASE-12975 to 1.0 and 0.98 without changing coprocessors hooks

2015-05-11 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-13667:
---

 Summary: Backport HBASE-12975 to 1.0 and 0.98 without changing 
coprocessors hooks
 Key: HBASE-13667
 URL: https://issues.apache.org/jira/browse/HBASE-13667
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 0.98.13, 1.0.2


We can backport the split transaction and region merge transaction interfaces to 
branch-1.0 and 0.98 without changing the coprocessor hooks, so the backport 
stays compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13431) Move splitStoreFile to SplitTransaction interface to allow creating both reference files in case required

2015-04-08 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-13431:
---

 Summary: Move splitStoreFile to SplitTransaction interface to 
allow creating both reference files in case required
 Key: HBASE-13431
 URL: https://issues.apache.org/jira/browse/HBASE-13431
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla


 APPROACH #3 at PHOENIX-1734 helps implement local indexing without many changes 
in HBase. For split we need one kernel change that allows creating both top and 
bottom reference files for index column family store files even when the split 
key is not in the store file key range.
The changes that help here are:
1) pass a boolean to HRegionFileSystem#splitStoreFile to allow skipping the 
store file key range check.
2) move splitStoreFile, with the extra boolean parameter, to the new interface 
introduced in HBASE-12975.
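
A hedged sketch of what the interface addition could look like; the parameter 
list mirrors the existing HRegionFileSystem#splitStoreFile call plus the proposed 
boolean, but the names are illustrative, not the committed API:
{code}
// Illustrative signature only.
Path splitStoreFile(HRegionInfo hri, String familyName, StoreFile sf, byte[] splitRow,
    boolean top, boolean skipStoreFileRangeCheck) throws IOException;
{code}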




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13756) Region server is getting aborted with NoSuchMethodError when RS reporting to master

2015-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-13756.
-
Resolution: Not A Problem
  Assignee: (was: Rajeshbabu Chintaguntla)

[~apurtell]
I ran the tests locally and they are fine.
I think the HBase jars on the Jenkins machines might be stale jars from the 
HBase 1.1.0 RCs, which is why they are failing. They need to be removed from the 
local repository.


 Region server is getting aborted with NoSuchMethodError when RS reporting to 
 master
 ---

 Key: HBASE-13756
 URL: https://issues.apache.org/jira/browse/HBASE-13756
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Rajeshbabu Chintaguntla
Priority: Critical

 I observed the exception below when running the Phoenix integration tests with 
 HBase-1.1.0. I think the same can happen in a real cluster when the RS reports 
 to the master.
 {noformat}
 ABORTING region server 100.73.163.39,53415,1432394107922: Unhandled: 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ServerLoad$Builder.setNumberOfRequests(J)Lorg/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos$ServerLoad$Builder;
 Cause:
 java.lang.NoSuchMethodError: 
 org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ServerLoad$Builder.setNumberOfRequests(J)Lorg/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos$ServerLoad$Builder;
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1165)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1127)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:944)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
 at 
 org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:306)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13756) Region server is getting aborted with NoSuchMethodError when RS reporting to master

2015-05-23 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-13756:
---

 Summary: Region server is getting aborted with NoSuchMethodError when RS 
reporting to master
 Key: HBASE-13756
 URL: https://issues.apache.org/jira/browse/HBASE-13756
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
Priority: Critical


I observed the exception below when running the Phoenix integration tests with 
HBase-1.1.0. I think the same can happen in a real cluster when the RS reports 
to the master.
{noformat}
ABORTING region server 100.73.163.39,53415,1432394107922: Unhandled: 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ServerLoad$Builder.setNumberOfRequests(J)Lorg/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos$ServerLoad$Builder;
Cause:
java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ServerLoad$Builder.setNumberOfRequests(J)Lorg/apache/hadoop/hbase/protobuf/generated/ClusterStatusProtos$ServerLoad$Builder;
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1165)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1127)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:944)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:306)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2016-04-07 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-15600:
---

 Summary: Add provision for adding mutations to memstore or able to 
write to same region in batchMutate coprocessor hooks
 Key: HBASE-15600
 URL: https://issues.apache.org/jira/browse/HBASE-15600
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 1.0.5, 2.0.0, 0.98.19, 1.1.5, 1.2.2


As part of PHOENIX-1734 we need to write index updates to the same region from 
coprocessors, but writing through the batchMutate API is not allowed because of 
MVCC.

PHOENIX-2742 was raised to discuss alternative ways to write to the same region 
directly, but no proper solution came out of it.

Currently there is a provision to write WAL edits from coprocessors: we can set 
WAL edits in MiniBatchOperationInProgress.
{noformat}
  /**
   * Sets the walEdit for the operation(Mutation) at the specified position.
   * @param index
   * @param walEdit
   */
  public void setWalEdit(int index, WALEdit walEdit) {
this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
  }
{noformat}

Similarly, we can allow coprocessors to write mutations to the memstore as well, 
or else provide a batch-mutation API that allows writes from the batchMutate 
coprocessor hooks; see the sketch below.
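
A hedged sketch of one possible shape for such an API, mirroring the setWalEdit 
method above; the method and field names are illustrative, not an existing API:
{noformat}
  /**
   * Illustrative only: lets a coprocessor hand extra mutations to the region so
   * they are applied to the memstore together with the operation at this position.
   */
  public void addOperationsFromCP(int index, Mutation[] newOperations) {
    this.operationsFromCoprocessors[getAbsoluteIndex(index)] = newOperations;
  }
{noformat}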



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15387) Make HalfStoreFileReader configurable in LoadIncrementalHFiles

2016-03-03 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-15387:
---

 Summary: Make HalfStoreFileReader configurable in 
LoadIncrementalHFiles
 Key: HBASE-15387
 URL: https://issues.apache.org/jira/browse/HBASE-15387
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4


Currently we instantiate HalfStoreFileReader to split the HFile, but there might 
be a different implementation for splitting, so we can make it configurable. 
This is needed for local indexing in Phoenix (PHOENIX-2736).
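
A hedged sketch of making the reader pluggable through configuration; the 
property name is hypothetical, only the mechanism is the point:
{code}
// Illustrative only: pick the reader implementation from the configuration,
// defaulting to HalfStoreFileReader.
Class<? extends HalfStoreFileReader> readerClass = conf.getClass(
    "hbase.mapreduce.bulkload.halfstorefilereader.class",  // hypothetical property
    HalfStoreFileReader.class, HalfStoreFileReader.class);
{code}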



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15580) Tag coprocessor limitedprivate scope to StoreFile.Reader

2016-04-01 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-15580:
---

 Summary: Tag coprocessor limitedprivate scope to StoreFile.Reader
 Key: HBASE-15580
 URL: https://issues.apache.org/jira/browse/HBASE-15580
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 0.98.19, 1.1.5, 1.2.2


For Phoenix local indexing we need a custom store file reader 
(IndexHalfStoreFileReader) to distinguish it from the other store file readers, 
so we want to mark the StoreFile.Reader scope as 
InterfaceAudience.LimitedPrivate("Coprocessor").



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-29 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-19384:
---

 Summary: Results returned by preAppend hook in a coprocessor are 
replaced with null from other coprocessor even on bypass
 Key: HBASE-19384
 URL: https://issues.apache.org/jira/browse/HBASE-19384
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 3.0.0, 2.0.0-beta-1


Phoenix adds multiple coprocessors to a table; one of them implements preAppend 
and preIncrement and bypasses the operations by returning a result. But the other 
coprocessors, which have no implementation and return null, overwrite the result 
returned by the previous coprocessor with null, so the default implementation of 
the append and increment operations always runs. This was not the case with older 
versions, where bypass worked fine.
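
A hedged sketch of the pattern that breaks: the first coprocessor bypasses 
preAppend with a computed result, which later coprocessors returning null should 
not discard. Result.EMPTY_RESULT stands in for the real result Phoenix builds:
{noformat}
@Override
public Result preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
    throws IOException {
  Result computed = Result.EMPTY_RESULT; // placeholder for the real, precomputed result
  c.bypass();                            // skip the default append implementation
  return computed;                       // must survive the remaining coprocessors
}
{noformat}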



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-20635) Support to convert the shaded user permission proto to client user permission object

2018-05-24 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-20635:
---

 Summary: Support to convert the shaded user permission proto to 
client user permission object
 Key: HBASE-20635
 URL: https://issues.apache.org/jira/browse/HBASE-20635
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently we have an API in AccessControlUtil to convert the protobuf 
UserPermission to the client UserPermission, but we cannot do the same when we 
use the shaded protobufs.
{noformat}
  /**
   * Converts a user permission proto to a client user permission object.
   *
   * @param proto the protobuf UserPermission
   * @return the converted UserPermission
   */
  public static UserPermission 
toUserPermission(AccessControlProtos.UserPermission proto) {
return new UserPermission(proto.getUser().toByteArray(),
toTablePermission(proto.getPermission()));
  }
{noformat}
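
A hedged sketch of the proposed shaded counterpart, mirroring the method above; 
only the parameter's package changes to the shaded generated classes:
{noformat}
  /**
   * Illustrative only: the same conversion, but accepting the shaded protobuf type.
   */
  public static UserPermission toUserPermission(
      org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.UserPermission proto) {
    return new UserPermission(proto.getUser().toByteArray(),
        toTablePermission(proto.getPermission()));
  }
{noformat}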



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20635) Support to convert the shaded user permission proto to client user permission object

2018-06-25 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-20635.
-
Resolution: Fixed

bq. You understand the difference between hbase-protocol and 
hbase-protocol-shaded and that the shaded utils are for internal use only?
Yes understood [~stack]. Thanks.

> Support to convert the shaded user permission proto to client user permission 
> object
> 
>
> Key: HBASE-20635
> URL: https://issues.apache.org/jira/browse/HBASE-20635
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20635.patch, HBASE-20635_v2.patch, 
> PHOENIX-4528_5.x-HBase-2.0_v2.patch
>
>
> Currently we have API to build the protobuf UserPermission to client user 
> permission in AccessControlUtil but we cannot do the same when we use shaded 
> protobufs.
> {noformat}
>   /**
>* Converts a user permission proto to a client user permission object.
>*
>* @param proto the protobuf UserPermission
>* @return the converted UserPermission
>*/
>   public static UserPermission 
> toUserPermission(AccessControlProtos.UserPermission proto) {
> return new UserPermission(proto.getUser().toByteArray(),
> toTablePermission(proto.getPermission()));
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19593) Possible NPE if wal is closed during waledit append.

2017-12-22 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-19593:
---

 Summary: Possible NPE if wal is closed during waledit append.
 Key: HBASE-19593
 URL: https://issues.apache.org/jira/browse/HBASE-19593
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 3.0.0, 2.0.0-beta-1


There is a possible NPE when a WAL is closed during a WALEdit append because the 
write entry is not set on the WAL key. Here is the code: we do not set the write 
entry on the WAL key when the WAL is closed.
{noformat}
if (this.closed) {
  throw new IOException(
  "Cannot append; log is closed, regionName = " + 
hri.getRegionNameAsString());
}
MutableLong txidHolder = new MutableLong();
MultiVersionConcurrencyControl.WriteEntry we = key.getMvcc().begin(() -> {
  txidHolder.setValue(ringBuffer.next());
});
long txid = txidHolder.longValue();
try (TraceScope scope = TraceUtil.createTrace(implClassName + ".append")) {
  FSWALEntry entry = new FSWALEntry(txid, key, edits, hri, inMemstore);
  entry.stampRegionSequenceId(we);
  ringBuffer.get(txid).load(entry);
} finally {
  ringBuffer.publish(txid);
}
return txid;
{noformat}
But on failure, mvcc.complete() will be called with a null write entry, causing 
the NPE.
{noformat}
WriteEntry writeEntry = null;
try {
  long txid = this.wal.append(this.getRegionInfo(), walKey, walEdit, true);
  // Call sync on our edit.
  if (txid != 0) {
sync(txid, durability);
  }
  writeEntry = walKey.getWriteEntry();
} catch (IOException ioe) {
  if (walKey != null) {
mvcc.complete(walKey.getWriteEntry());
  }
  throw ioe;
}
{noformat}
We were able to reproduce it with mocking in one of the Phoenix test cases for 
WAL replay.
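
A hedged sketch of one possible guard (not necessarily the committed fix): only 
complete the MVCC transaction when a write entry was actually attached to the key.
{noformat}
} catch (IOException ioe) {
  // Illustrative guard only: the append may have failed before a write entry
  // was stamped on the key, so avoid passing null to mvcc.complete().
  if (walKey != null && walKey.getWriteEntry() != null) {
    mvcc.complete(walKey.getWriteEntry());
  }
  throw ioe;
}
{noformat}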



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19933) Make use of column family level attribute for skipping hfile range check before create reference during split

2018-02-04 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-19933:
---

 Summary: Make use of column family level attribute for skipping 
hfile range check before create reference during split
 Key: HBASE-19933
 URL: https://issues.apache.org/jira/browse/HBASE-19933
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0-beta-2


Currently we use the split policy to decide whether to skip the store file range 
check at reference-creation time during a split. But the full-fledged split 
policy, which needs a region reference, cannot be used in the master. As an 
alternative we need a column family attribute that can be set to true or false 
at the client level so the decision is made accordingly.
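
A hedged sketch of the proposed client-side usage; the attribute key and family 
name below are hypothetical, only the mechanism of a column-family-level value is 
the point:
{code}
// Illustrative only: flag the index column family so the split skips the
// store file range check when creating references.
ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder
    .newBuilder(Bytes.toBytes("L#0"))                      // hypothetical local-index family
    .setValue("SPLIT_SKIP_STOREFILE_RANGE_CHECK", "true")  // hypothetical attribute key
    .build();
{code}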



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-03 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-19703:
---

 Summary: Functionality added as part of HBASE-12583 is not working 
after moving the split code to master
 Key: HBASE-19703
 URL: https://issues.apache.org/jira/browse/HBASE-19703
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0-beta-2


As part of HBASE-12583 we pass the split policy to 
HRegionFileSystem#splitStoreFile so that reference files can be created even 
when the split key is outside the HFile key range. This is needed for the local 
indexing implementation in Phoenix. But after moving the split code to the 
master, we now just pass null for the split policy.
{noformat}
final String familyName = Bytes.toString(family);
final Path path_first =
regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
false, null);
final Path path_second =
regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
true, null);
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-20111) Able to split region explicitly even on shouldSplit return false from split policy

2018-03-01 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-20111:
---

 Summary: Able to split region explicitly even on shouldSplit 
return false from split policy
 Key: HBASE-20111
 URL: https://issues.apache.org/jira/browse/HBASE-20111
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0-beta-2


Currently we are able to split a region explicitly even when the split policy 
returns false from shouldSplit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-25711) Setting wrong data block encoding through ColumnFamilyDescriptorBuilder#setValue leading to servers down

2021-03-29 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-25711:
---

 Summary: Setting wrong data block encoding through 
ColumnFamilyDescriptorBuilder#setValue leading to servers down
 Key: HBASE-25711
 URL: https://issues.apache.org/jira/browse/HBASE-25711
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Setting a wrong data block encoding through ColumnFamilyDescriptorBuilder#setValue, 
instead of using ColumnFamilyDescriptorBuilder#setDataBlockEncoding, brings 
region servers down and eventually kills the master as well. This is possible 
from Phoenix, where all column family properties are passed to the descriptors 
using ColumnFamilyDescriptorBuilder#setValue.
{noformat}
Failed to open region 
my_case_sensitive_table,,1617040355998.d8a1df22970075b8863d5c39b2c1e08c., will 
report to master
java.io.IOException: java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.SDFS
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1134)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1076)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:973)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:925)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7346)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7304)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7276)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7234)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7185)
at 
org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:133)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.SDFS
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.valueOf(DataBlockEncoding.java:31)
at 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder$ModifyableColumnFamilyDescriptor.lambda$getDataBlockEncoding$2(ColumnFamilyDescriptorBuilder.java:806)
at 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder$ModifyableColumnFamilyDescriptor.lambda$getStringOrDefault$0(ColumnFamilyDescriptorBuilder.java:708)
at 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder$ModifyableColumnFamilyDescriptor.getOrDefault(ColumnFamilyDescriptorBuilder.java:716)
at 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder$ModifyableColumnFamilyDescriptor.getStringOrDefault(ColumnFamilyDescriptorBuilder.java:708)
at 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder$ModifyableColumnFamilyDescriptor.getDataBlockEncoding(ColumnFamilyDescriptorBuilder.java:805)
at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:269)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5816)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1098)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1095)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more

{noformat}
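
A hedged sketch of the misuse described above next to the validated API; "SDFS" 
is the invalid value from the stack trace and the family name is illustrative:
{noformat}
// Unvalidated raw value: accepted at table creation, fails only when the region opens.
ColumnFamilyDescriptor bad = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("0"))
    .setValue("DATA_BLOCK_ENCODING", "SDFS")
    .build();

// Validated enum: an invalid encoding cannot be expressed at all.
ColumnFamilyDescriptor good = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("0"))
    .setDataBlockEncoding(DataBlockEncoding.FAST_DIFF)
    .build();
{noformat}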



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26879) Allow Accepting snapshot location also to mapreduce jobs to run over exported snapshot.

2022-03-22 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-26879:
---

 Summary: Allow Accepting snapshot location also to mapreduce jobs 
to run over exported snapshot.
 Key: HBASE-26879
 URL: https://issues.apache.org/jira/browse/HBASE-26879
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently there is no way to provide a snapshot location to mapreduce jobs so 
that they run over an exported snapshot. It would be better to provide this 
option so that we can also scan snapshots available at the exported location.
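
A hedged sketch of how the option might surface in job setup code; the final 
argument and the MyMapper class are hypothetical and only illustrate passing the 
exported snapshot location alongside the existing arguments:
{code}
// Illustrative only: the last argument (exported snapshot root) does not exist
// in the current TableMapReduceUtil API and is what this issue proposes.
TableMapReduceUtil.initTableSnapshotMapperJob("my_snapshot", new Scan(), MyMapper.class,
    ImmutableBytesWritable.class, Result.class, job, true,
    restoreDir, new Path("s3a://bucket/exported-snapshots"));
{code}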



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (HBASE-28149) move_servers_namespaces_rsgroup is not changing the new rs group in namespace description

2023-10-12 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28149:
---

 Summary: move_servers_namespaces_rsgroup is not changing the new 
rs group in namespace description
 Key: HBASE-28149
 URL: https://issues.apache.org/jira/browse/HBASE-28149
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


{noformat}

hbase:024:0> list_rsgroups
NAME                                                          SERVER / TABLE
 rs1                                                          server 
hostname2:16020
                                                              table hbase:meta
                                                              table hbase:acl
                                                              table 
hbase:namespace
                                                              table 
hbase:rsgroup
 default                                                      server 
hostname3:16020
                                                              table ns_R:ta
 tenant_group1
 tenant_group                                                 server 
hostname1:16020
                                                              table ns:tb1
                                                              table ns:tab
                                                              table ns:t2
4 row(s)
Took 0.0129 seconds
hbase:025:0> move_servers_namespaces_rsgroup 'rs1',['hostname1:16020'], ['ns']
Took 0.0302 seconds
hbase:026:0> list_rsgroups
NAME                                                          SERVER / TABLE
 rs1                                                          server 
hostname1:16020
                                                              server 
hostname2:16020
                                                              table ns:tb1
                                                              table ns:tab
                                                              table hbase:meta
                                                              table hbase:acl
                                                              table ns:t2
                                                              table 
hbase:namespace
                                                              table 
hbase:rsgroup
 default                                                      server 
hostname2:16020
                                                              table ns_R:ta
 tenant_group1
 tenant_group
4 row(s)
Took 0.0140 seconds
hbase:027:0> describe_namespace 'ns'
DESCRIPTION
{NAME => 'ns', hbase.rsgroup.name => 'tenant_group'}
Quota is disabled
Took 0.0093 seconds
hbase:028:0>

{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27365) Minimise block addition failures due to no space in bucket cache writers queue by introducing wait time

2022-09-12 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27365:
---

 Summary: Minimise block addition failures due to no space in 
bucket cache writers queue by introducing wait time
 Key: HBASE-27365
 URL: https://issues.apache.org/jira/browse/HBASE-27365
 Project: HBase
  Issue Type: Improvement
  Components: BucketCache
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


The bucket cache uses an asynchronous caching mechanism: blocks to be cached are 
first added to a queue, and writer threads consume them from the queue and write 
them to the bucket cache. If writing to the bucket cache is slow, the writer 
queue can fill up and subsequent block additions fail. Slower storages like S3 
can introduce such latencies even if we enable bigger bucket cache sizes on 
ephemeral storage. So we can allow a configurable wait time when adding blocks to 
the queue, giving the queue a chance to free up during the wait and minimising 
block addition failures.
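
A hedged sketch of the idea in terms of a standard BlockingQueue; the queue and 
variable names are illustrative, not the actual BucketCache code:
{code}
// Illustrative only: wait up to a configurable time for space in the writer
// queue instead of failing immediately when it is full.
boolean queued = writerQueue.offer(entry, cacheWaitMillis, TimeUnit.MILLISECONDS);
if (!queued) {
  // Queue still full after the wait: skip caching this block, as before.
}
{code}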



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27365) Minimise block addition failures due to no space in bucket cache writers queue by introducing wait time

2022-10-04 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27365.
-
Resolution: Fixed

Thanks for the reviews, [~wchevreuil]. Committed to master and branch-2, 2.4 and 
2.5.

> Minimise block addition failures due to no space in bucket cache writers 
> queue by introducing wait time
> ---
>
> Key: HBASE-27365
> URL: https://issues.apache.org/jira/browse/HBASE-27365
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 3.0.0-alpha-3
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4, 2.4.15
>
>
> Currently in bucket cache asynchronous caching mechanism introduced where 
> initially the blocks to be cached will be added to queue and writer threads 
> consume the blocks from the queue and write to bucket cache. In case if block 
> writing to bucket cache is slow then there is a chance that  queue of writer 
> threads become full  and following block additions will be failed. In case of 
> slower storages like s3 might introduce latencies even if we enable bigger 
> sizes of bucket cache using ephemeral storages. So we can allow configurable 
> wait time while adding blocks to queue so that chances of queue free up is 
> possible during the wait time and block addition failures can be minimised. 
> To avoid the performance impact of wait time in regular read paths we can use 
> the wait time mainly during background operations like compactions, flushes 
> or prefetches etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27549) Upgrade Netty to 4.1.86.Final

2023-01-02 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27549:
---

 Summary: Upgrade Netty to 4.1.86.Final
 Key: HBASE-27549
 URL: https://issues.apache.org/jira/browse/HBASE-27549
 Project: HBase
  Issue Type: Bug
  Components: thirdparty
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: thirdparty-4.1.4


Netty 4.1.86.Final fixes some CVEs:
CVE-2022-41915,
CVE-2022-41881

Upgrade to the latest version.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27549) [hbase-thirdparty] Upgrade Netty to 4.1.86.Final

2023-01-03 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27549.
-
Resolution: Fixed

Thanks for the review, [~zhangduo].

> [hbase-thirdparty] Upgrade Netty to 4.1.86.Final
> 
>
> Key: HBASE-27549
> URL: https://issues.apache.org/jira/browse/HBASE-27549
> Project: HBase
>  Issue Type: Bug
>  Components: thirdparty
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: thirdparty-4.1.4
>
>
> Netty version - 4.1.86.Final has fix some CVEs.
> CVE-2022-41915,
> CVE-2022-41881
> Upgrade to latest version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27550) org.apache.hadoop.hbase.spark.TestJavaHBaseContext failing with HBase 2.5.2

2023-01-03 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27550:
---

 Summary: org.apache.hadoop.hbase.spark.TestJavaHBaseContext 
failing with HBase 2.5.2
 Key: HBASE-27550
 URL: https://issues.apache.org/jira/browse/HBASE-27550
 Project: HBase
  Issue Type: Bug
  Components: hbase-connectors
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


{noformat}

[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.spark.TestJavaHBaseContext
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.157 s 
<<< FAILURE! - in org.apache.hadoop.hbase.spark.TestJavaHBaseContext
[ERROR] org.apache.hadoop.hbase.spark.TestJavaHBaseContext  Time elapsed: 5.142 
s  <<< ERROR!
java.lang.NoClassDefFoundError: org/apache/logging/log4j/Level
    at 
org.apache.hadoop.hbase.spark.TestJavaHBaseContext.setUpBeforeClass(TestJavaHBaseContext.java:98)
Caused by: java.lang.ClassNotFoundException: org.apache.logging.log4j.Level
    at 
org.apache.hadoop.hbase.spark.TestJavaHBaseContext.setUpBeforeClass(TestJavaHBaseContext.java:98)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]   TestJavaHBaseContext.setUpBeforeClass:98 » NoClassDefFound 
org/apache/logging/...
[INFO]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
[INFO]

{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27550) [hbase-connectors] org.apache.hadoop.hbase.spark.TestJavaHBaseContext failing with HBase 2.5.x

2023-01-03 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27550.
-
Resolution: Duplicate

[~zhangduo] Thanks for the info. HBASE-27485 is a good fix, hence closing this as 
a duplicate.

> [hbase-connectors] org.apache.hadoop.hbase.spark.TestJavaHBaseContext failing 
> with HBase 2.5.x
> --
>
> Key: HBASE-27550
> URL: https://issues.apache.org/jira/browse/HBASE-27550
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-connectors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> {noformat}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hbase.spark.TestJavaHBaseContext
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.157 
> s <<< FAILURE! - in org.apache.hadoop.hbase.spark.TestJavaHBaseContext
> [ERROR] org.apache.hadoop.hbase.spark.TestJavaHBaseContext  Time elapsed: 
> 5.142 s  <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/logging/log4j/Level
>     at 
> org.apache.hadoop.hbase.spark.TestJavaHBaseContext.setUpBeforeClass(TestJavaHBaseContext.java:98)
> Caused by: java.lang.ClassNotFoundException: org.apache.logging.log4j.Level
>     at 
> org.apache.hadoop.hbase.spark.TestJavaHBaseContext.setUpBeforeClass(TestJavaHBaseContext.java:98)
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Errors:
> [ERROR]   TestJavaHBaseContext.setUpBeforeClass:98 » NoClassDefFound 
> org/apache/logging/...
> [INFO]
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> [INFO]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27509) Possible region gets stuck in CLOSING state

2022-11-24 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27509:
---

 Summary: Possible region gets stuck in CLOSING state
 Key: HBASE-27509
 URL: https://issues.apache.org/jira/browse/HBASE-27509
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 2.3.4
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


There is a chance that a region gets stuck in the CLOSING state, possibly because 
of a race between flush and close, or because a read lock acquired on the region 
is somewhere not being released.
{noformat}
"MemStoreFlusher.1" #236 prio=5 os_prio=0 tid=0x5639266a4000 nid=0x296e 
waiting on condition [0x7fdc48a63000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x7fdf42dde850> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2397)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:610)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:579)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$1000(MemStoreFlusher.java:67)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:359)

"MemStoreFlusher.0" #234 prio=5 os_prio=0 tid=0x5639266a2800 nid=0x296d 
waiting on condition [0x7fdc48b64000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x7fdf42dde850> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2397)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:610)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:579)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$1000(MemStoreFlusher.java:67)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:359)
{noformat} 
{noformat}
"RS_CLOSE_REGION-regionserver/sl73tskrnsqln00107:16020-0" #6337 daemon prio=5 
os_prio=0 tid=0x7fdc05448800 nid=0x15d1 waiting on condition 
[0x7fdc1befd000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x7fdf42dde850> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1662)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1591)
- locked <0x7fdf42ddf358> (a java.lang.Object)
at 
org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler.process(UnassignRegionHandler.java:114)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
{noformat}

From one of the region server logs, the flush had started and the flush replay 
edits were added, and then the close began.

[jira] [Created] (HBASE-27586) Bump up commons-codec to 1.15

2023-01-23 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27586:
---

 Summary: Bump up commons-codec to 1.15
 Key: HBASE-27586
 URL: https://issues.apache.org/jira/browse/HBASE-27586
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.3


commons-codec 1.15 has proper fixes for a few CVEs which may not affect HBase, 
but it is better to upgrade to ensure compliance.
For example: while [a 
fix|https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113]
 was earlier made to {{commons-codec:commons-codec}} version 1.13, it was later 
found to be incomplete. A [complete 
fix|https://github.com/apache/commons-codec/pull/29] exists in version 1.14, and 
that is the version users should upgrade to.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27585) Bump up jruby to 9.3.9.0

2023-01-23 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27585:
---

 Summary: Bump up jruby to 9.3.9.0
 Key: HBASE-27585
 URL: https://issues.apache.org/jira/browse/HBASE-27585
 Project: HBase
  Issue Type: Bug
  Components: security
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.3


Bump up JRuby to 9.3.9.0 to ensure compliance; it has fixes for multiple CVEs 
related to openssl, snakeyaml, etc.:
 * rdoc has been updated to 6.3.3 to fix all known CVEs. 
([#7396|https://github.com/jruby/jruby/issues/7396], 
[#7404|https://github.com/jruby/jruby/issues/7404])
 * rexml has been updated to 3.2.5 to fix all known CVEs. 
([#7395|https://github.com/jruby/jruby/issues/7395], 
[#7405|https://github.com/jruby/jruby/issues/7405])
 * jruby-openssl has been updated to 0.14.0 to fix weak HMAC key hashing in 
bouncycastle, which itself is updated to 1.71. 
([#7335|https://github.com/jruby/jruby/issues/7335], 
[#7385|https://github.com/jruby/jruby/issues/7385], 
[#7399|https://github.com/jruby/jruby/issues/7399])
 * psych has been updated to 3.3.4 to fix CVE-2022-38752 in the SnakeYAML 
library, which itself is updated to 1.33. 
([#7386|https://github.com/jruby/jruby/issues/7386], 
[#7388|https://github.com/jruby/jruby/issues/7388], 
[#7400|https://github.com/jruby/jruby/issues/7400])
 * rubygems has been updated to 3.2.33 and bundler updated to 2.2.33 to address 
CVE-2021-43809. ([#7397|https://github.com/jruby/jruby/issues/7397], 
[#7401|https://github.com/jruby/jruby/issues/7401])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27694) Exclude the older versions of netty pulling from Hadoop dependencies

2023-03-08 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27694:
---

 Summary: Exclude the older versions of netty pulling from Hadoop 
dependencies
 Key: HBASE-27694
 URL: https://issues.apache.org/jira/browse/HBASE-27694
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently netty version 3.10.6 is getting pulled in from the hdfs dependencies, 
and Sonatype-like tools report its CVEs against HBase. To get rid of this it 
would be better to exclude netty wherever the hdfs or mapred client jars are used.


 * org.apache.hbase : hbase-it : jar : tests : 2.5.2
 ** org.apache.hadoop : hadoop-mapreduce-client-core : 3.2.2
 *** io.netty : netty : 3.10.6.final
 ** org.apache.hbase : hbase-endpoint : 2.5.2
 *** org.apache.hadoop : hadoop-hdfs : jar : tests : 3.2.2
 **** io.netty : netty : 3.10.6.final
 *** org.apache.hadoop : hadoop-hdfs : 3.2.2
 **** io.netty : netty : 3.10.6.final
 * org.apache.hadoop : hadoop-mapreduce-client-jobclient : 3.2.2
 ** io.netty : netty : 3.10.6.final
 ** org.apache.hadoop : hadoop-mapreduce-client-common : 3.2.2
 *** io.netty : netty : 3.10.6.final
 * org.apache.hadoop : hadoop-mapreduce-client-jobclient : jar : tests : 3.2.2
 ** io.netty : netty : 3.10.6.final
 * org.apache.hadoop : hadoop-mapreduce-client-hs : 3.2.2
 ** io.netty : netty : 3.10.6.final
 ** org.apache.hadoop : hadoop-mapreduce-client-app : 3.2.2
 *** io.netty : netty : 3.10.6.final
 *** org.apache.hadoop : hadoop-mapreduce-client-shuffle : 3.2.2
 **** io.netty : netty : 3.10.6.final



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27698) Migrate meta locations from zookeeper may not always be possible if we migrate from 1.x HBase

2023-03-09 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27698:
---

 Summary: Migrate meta locations from zookeeper may not always be 
possible if we migrate from 1.x HBase
 Key: HBASE-27698
 URL: https://issues.apache.org/jira/browse/HBASE-27698
 Project: HBase
  Issue Type: Bug
  Components: migration
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


In HBase 1.x versions the meta server location is removed from zookeeper when 
the server is stopped. In such cases migrating to the 2.5.x branches may not 
create any meta entries in the master data. So in case we cannot find the meta 
location in zookeeper, we can derive the meta location from the WAL directories 
carrying the .meta extension and add it to the master data, as sketched below.
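
A minimal sketch of that WAL-directory scan, assuming the usual 
<rootdir>/WALs/<servername> layout and that meta WAL files carry a ".meta" 
suffix; the class and method names are illustrative only, and only the Hadoop 
FileSystem API is used.
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MetaWalLocator {
  /** Returns the server directory names under walsRoot that contain a ".meta" WAL. */
  public static List<String> findCandidateMetaServers(Configuration conf, Path walsRoot)
    throws IOException {
    FileSystem fs = FileSystem.get(conf);
    List<String> candidates = new ArrayList<>();
    for (FileStatus serverDir : fs.listStatus(walsRoot)) {
      if (!serverDir.isDirectory()) {
        continue;
      }
      for (FileStatus wal : fs.listStatus(serverDir.getPath())) {
        if (wal.getPath().getName().endsWith(".meta")) {
          // This server wrote a meta WAL, so it is a candidate last meta location.
          candidates.add(serverDir.getPath().getName());
          break;
        }
      }
    }
    return candidates;
  }
}
{code}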



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27667) Normalizer can skip picking presplit regions while preparing merge plans

2023-02-24 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27667:
---

 Summary: Normalizer can skip picking presplit regions while 
preparing merge plans
 Key: HBASE-27667
 URL: https://issues.apache.org/jira/browse/HBASE-27667
 Project: HBase
  Issue Type: Improvement
  Components: Normalizer
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4


The Normalizer can be used to merge the regions which become empty post TTL expiry.
But it is picking the presplit regions as well. We can skip picking presplit 
regions while preparing the merge plans by looking at the number of storefiles 
compared to the size: presplit regions have zero storefiles, whereas the TTL 
expired regions have one storefile with zero size (see the sketch below).
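
A minimal sketch of that heuristic (illustrative only, not the Normalizer code 
itself); storeFileCount and storeFileSizeMb stand in for values the Normalizer 
would read from the region metrics.
{code:java}
public final class PresplitRegionCheck {
  private PresplitRegionCheck() {
  }

  /**
   * A presplit region has never been written to, so it has zero store files,
   * while a TTL-expired region typically keeps one store file of zero size.
   */
  public static boolean looksPresplit(int storeFileCount, long storeFileSizeMb) {
    return storeFileCount == 0 && storeFileSizeMb == 0;
  }

  /** An empty region is a merge candidate only if it is not presplit. */
  public static boolean isEmptyMergeCandidate(int storeFileCount, long storeFileSizeMb) {
    return storeFileCount > 0 && storeFileSizeMb == 0;
  }
}
{code}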



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27669) chaos-daemon.sh should make use of the hbase script to start/stop chaosagent

2023-02-26 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27669:
---

 Summary: chaos-daemon.sh should make use of the hbase script to 
start/stop chaosagent
 Key: HBASE-27669
 URL: https://issues.apache.org/jira/browse/HBASE-27669
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently chaos-daemon.sh just adds the libs from HBASE_HOME, which fails 
because the hadoop dependencies are not present in the classpath. It would be 
better to use the hbase script, which adds all the relevant jars from the 
different classpaths such as HBASE_CLASSPATH and the Hadoop paths, so that 
things work properly. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27675) Document zookeeper based cluster manager

2023-02-27 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27675:
---

 Summary: Document zookeeper based cluster manager
 Key: HBASE-27675
 URL: https://issues.apache.org/jira/browse/HBASE-27675
 Project: HBase
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27674) Chaos Service Improvements

2023-02-27 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27674:
---

 Summary: Chaos Service Improvements
 Key: HBASE-27674
 URL: https://issues.apache.org/jira/browse/HBASE-27674
 Project: HBase
  Issue Type: Task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


We can improve the usage of the chaos service to run random operations on a 
real cluster to verify its stability. The following things can be done.

1) Make use of the hbase script in the existing chaos-daemon.sh script instead 
of directly using the java command.

2) We can add a script to chaos server runner to run the script in the 
background.

3) Document usage of zookeeper based cluster manager mainly in the environments 
where ssh cannot be performed.

4) sudo is not required to kill the user's own process, so the commands need 
not be run with sudo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27667) Normalizer can skip picking presplit regions while preparing merge plans

2023-04-18 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27667.
-
Fix Version/s: (was: 2.6.0)
   (was: 3.0.0-alpha-4)
   (was: 2.5.5)
   Resolution: Won't Fix

As [~ndimiduk] mentioned on the PR, it would be better not to check the 
storefile count and size as a metric for identifying a presplit region. Hence 
closing as Won't Fix, as there are no alternative metadata checks to find the 
presplit regions. Will reopen if any fix becomes possible.

> Normalizer can skip picking presplit regions while preparing merge plans
> 
>
> Key: HBASE-27667
> URL: https://issues.apache.org/jira/browse/HBASE-27667
> Project: HBase
>  Issue Type: Improvement
>  Components: Normalizer
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> Normalizer can be used to merge the regions which become empty post TTL 
> expiry.
> But it's picking the presplit regions as well. We can skip picking presplit 
> regions while preparing the merge plans by looking at the number of 
> storefiles along with size. Presplit regions have zero storefiles whereas 
> the TTL expired regions have 1 storefile with zero size.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27810) HBCK throws RejectedExecutionException when closing ZooKeeper resources

2023-05-01 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27810.
-
Resolution: Fixed

Pushed to branch-2.4+

Thanks [~andor]  for the patch.

> HBCK throws RejectedExecutionException when closing ZooKeeper resources
> ---
>
> Key: HBASE-27810
> URL: https://issues.apache.org/jira/browse/HBASE-27810
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 2.6.0, 3.0.0-alpha-3, 2.4.17, 2.5.4
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
>
> HBCK throws RejectedExecutionException at the end of the run, because the 
> order of closing ZooKeeper resources was swapped in HBASE-27426.
> In the ZKWatcher.java close() method, first it shuts down the zkEventProcessor 
> and, once that is fully shut down, it closes the RecoverableZooKeeper (the ZK 
> client). The watcher receives the close event, which cannot be submitted to 
> the shut-down event processor and so throws an exception.
> I think we need to check whether the executor is able to receive jobs before 
> submitting (see the sketch below).
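
A minimal sketch of that guard, assuming the event processor is a standard 
java.util.concurrent.ExecutorService; the class and method names here are 
illustrative only, not the committed fix.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

final class SafeSubmit {
  private SafeSubmit() {
  }

  static void submitIfRunning(ExecutorService zkEventProcessor, Runnable event) {
    // Skip events that arrive after close() has already begun shutting down the
    // executor; during close it is fine to drop them.
    if (zkEventProcessor.isShutdown()) {
      return;
    }
    try {
      zkEventProcessor.submit(event);
    } catch (RejectedExecutionException ree) {
      // Shutdown raced with the check above; dropping the event is still fine.
    }
  }
}
{code}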



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27751) [hbase-operator-tools] TestMissingTableDescriptorGenerator fails with HBase 2.5.3

2023-03-31 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27751.
-
Fix Version/s: hbase-operator-tools-1.3.0
   Resolution: Fixed

Committed to master. Thanks for the patch [~nihaljain.cs].

> [hbase-operator-tools] TestMissingTableDescriptorGenerator fails with HBase 
> 2.5.3
> -
>
> Key: HBASE-27751
> URL: https://issues.apache.org/jira/browse/HBASE-27751
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-operator-tools
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
> Fix For: hbase-operator-tools-1.3.0
>
>
> hbase-operator-tools fails to compile against hbase 2.5.3 with following test 
> failures.
> {code:java}
> [INFO] Running org.apache.hbase.TestMissingTableDescriptorGenerator
> [ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 30.149 s <<< FAILURE! - in 
> org.apache.hbase.TestMissingTableDescriptorGenerator
> [ERROR] 
> testTableinfoGeneratedWhenNoTableSpecified(org.apache.hbase.TestMissingTableDescriptorGenerator)
>   Time elapsed: 16.734 s  <<< ERROR!
> java.lang.IllegalArgumentException: 
> hdfs://localhost:51882/user/nihaljain/test-data/de8af727-6c02-7a95-9beb-027d18fc6603/data/default/test-1/.tabledesc/.tableinfo.01.639
>   at 
> org.apache.hbase.TestMissingTableDescriptorGenerator.testTableinfoGeneratedWhenNoTableSpecified(TestMissingTableDescriptorGenerator.java:145)
> [ERROR] 
> shouldGenerateTableInfoBasedOnFileSystem(org.apache.hbase.TestMissingTableDescriptorGenerator)
>   Time elapsed: 6.794 s  <<< ERROR!
> java.lang.IllegalArgumentException: 
> hdfs://localhost:51961/user/nihaljain/test-data/5ade0aa1-cb9a-a1da-b700-fe808eeda3b9/data/default/test-1/.tabledesc/.tableinfo.01.666
>   at 
> org.apache.hbase.TestMissingTableDescriptorGenerator.shouldGenerateTableInfoBasedOnFileSystem(TestMissingTableDescriptorGenerator.java:120)
> [ERROR] 
> shouldGenerateTableInfoBasedOnCachedTableDescriptor(org.apache.hbase.TestMissingTableDescriptorGenerator)
>   Time elapsed: 6.621 s  <<< ERROR!
> java.lang.IllegalArgumentException: 
> hdfs://localhost:52022/user/nihaljain/test-data/d858258b-6ba1-8e4f-c118-4e30d8a5136f/data/default/test-1/.tabledesc/.tableinfo.01.666
>   at 
> org.apache.hbase.TestMissingTableDescriptorGenerator.shouldGenerateTableInfoBasedOnCachedTableDescriptor(TestMissingTableDescriptorGenerator.java:107)
> {code}
> Steps to reproduce, run following against hbase-operator-tools master:
> {code:java}
> mvn clean install -Dhbase.version=2.5.3 -Dhbase-thirdparty.version=4.1.4 
> {code}
> The goal is to allow hbase-operator-tools to compile with hbase 2.5.3 without 
> any failures



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27726) ruby shell not handled SyntaxError exceptions properly

2023-03-29 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27726.
-
Resolution: Fixed

> ruby shell not handled SyntaxError exceptions properly
> --
>
> Key: HBASE-27726
> URL: https://issues.apache.org/jira/browse/HBASE-27726
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.5.2
>Reporter: chiranjeevi
>Assignee: Rishabh Murarka
>Priority: Minor
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.4
>
>
> hbase:002:0> create 't2', 'cf'
> 2023-03-14 04:54:50,061 INFO  [main] client.HBaseAdmin: Operation: CREATE, 
> Table Name: default:t2, procId: 2140 completed
> Created table t2
> Took 1.1503 seconds
> => Hbase::Table - t2
> hbase:003:0> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
> SyntaxError: (hbase):3: syntax error, unexpected tIDENTIFIER
> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
>  ^~~
>   eval at org/jruby/RubyKernel.java:1091
>   evaluate at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/workspace.rb:85
>   evaluate at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:385
>     eval_input at uri:classloader:/irb/hirb.rb:115
>  signal_status at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:647
>     eval_input at uri:classloader:/irb/hirb.rb:112
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:246
>   loop at org/jruby/RubyKernel.java:1507
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:232
>  catch at org/jruby/RubyKernel.java:1237
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:231
>     eval_input at uri:classloader:/irb/hirb.rb:111
>    run at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:428
>  catch at org/jruby/RubyKernel.java:1237
>    run at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:427
>  at classpath:/jar-bootstrap.rb:226



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27754) [HBCK2] generateMissingTableDescriptorFile should throw write permission error and fail

2023-03-28 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-27754.
-
Fix Version/s: hbase-operator-tools-1.3.0
   Resolution: Fixed

> [HBCK2] generateMissingTableDescriptorFile should throw write permission 
> error and fail
> ---
>
> Key: HBASE-27754
> URL: https://issues.apache.org/jira/browse/HBASE-27754
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-operator-tools, hbck2
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: hbase-operator-tools-1.3.0
>
>
> Try running hbck2 generateMissingTableDescriptorFile with a user not having 
> permissions to write to HDFS. 
> *Actual* 
> The tool completes with success message, while it actually does not really 
> generate/write the files, as it does not even have permissions.
> *Expected* 
> Tool should throw error and should not log task is success 'Table descriptor 
> written successfully. Orphan table  fixed.'
> *Debug dump* 
> Upon enabling debug logging, we can see incorrect behaviour.
> {code:java}
> 2023-03-24T19:03:16,890 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: 
> IPC Client (199657303) connection to hostname/ip_address:port_num from root 
> sending #31 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
> 2023-03-24T19:03:16,893 DEBUG [IPC Client (199657303) connection to 
> hostname/ip_address:port_num from root] ipc.Client: IPC Client (199657303) 
> connection to hostname/ip_address:port_num from root got value #31
> 2023-03-24T19:03:16,894 DEBUG [main] ipc.ProtobufRpcEngine: Call: getFileInfo 
> took 4ms
> 2023-03-24T19:03:16,894 DEBUG [main] hdfs.DFSClient: 
> /apps/hbase/data/data/default/ittable-2090120905/.tmp/.tableinfo.10: 
> masked={ masked: rw-r--r--, unmasked: rw-rw-rw- }
> 2023-03-24T19:03:16,895 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: 
> IPC Client (199657303) connection to hostname/ip_address:port_num from root 
> sending #32 org.apache.hadoop.hdfs.protocol.ClientProtocol.create
> 2023-03-24T19:03:16,897 DEBUG [IPC Client (199657303) connection to 
> hostname/ip_address:port_num from root] ipc.Client: IPC Client (199657303) 
> connection to hostname/ip_address:port_num from root got value #32
> 2023-03-24T19:03:16,898 DEBUG [main] retry.RetryInvocationHandler: Exception 
> while invoking call #32 ClientNamenodeProtocolTranslatorPB.create over null. 
> Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException: Permission denied: user=root, 
> access=WRITE, 
> inode="/apps/hbase/data/data/default/ittable-2090120905/.tmp":hdfs:hdfs:drwxr-xr-x
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1896)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1880)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1839)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:323)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2457)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:791)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:478)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1031)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:959)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2963)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1587) 

[jira] [Created] (HBASE-27735) Considering Normalizer on in case of zk data is null leading to unnecessary meta table scans

2023-03-20 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27735:
---

 Summary: Considering Normalizer on in case of zk data is null 
leading to unnecessary meta table scans 
 Key: HBASE-27735
 URL: https://issues.apache.org/jira/browse/HBASE-27735
 Project: HBase
  Issue Type: Bug
  Components: Normalizer
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently when the zk data is null the normalizer is considered on, which 
leads to unnecessary hbase:meta scans. It would be better to scan through meta 
only when the normalizer has been enabled explicitly (a sketch follows the 
snippets below).
{noformat}
  public boolean isNormalizerOn() {
byte[] upData = super.getData(false);
try {
  // if data in ZK is null, use default of on.
  return upData == null || parseFrom(upData).getNormalizerOn();
} catch (DeserializationException dex) {
  LOG
.error("ZK state for RegionNormalizer could not be parsed " + 
Bytes.toStringBinary(upData));
  // return false to be safe.
  return false;
}
  }
{noformat}
{noformat}
  public boolean normalizeRegions(final NormalizeTableFilterParams ntfp,
final boolean isHighPriority) throws IOException {
if (regionNormalizerManager == null || 
!regionNormalizerManager.isNormalizerOn()) {
  LOG.debug("Region normalization is disabled, don't run region 
normalizer.");
  return false;
}
if (skipRegionManagementAction("region normalizer")) {
  return false;
}
if (assignmentManager.hasRegionsInTransition()) {
  return false;
}

final Set<TableName> matchingTables = getTableDescriptors(new LinkedList<>(),
  ntfp.getNamespace(), ntfp.getRegex(), ntfp.getTableNames(), false).stream()
.map(TableDescriptor::getTableName).collect(Collectors.toSet());
final Set<TableName> allEnabledTables =
  tableStateManager.getTablesInStates(TableState.State.ENABLED);
{noformat}
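
A minimal sketch of the proposed behavior, reusing the isNormalizerOn() method 
shown above (illustrative only, not the committed change): when the ZK data is 
null, do not assume the switch is on, so no meta scan is triggered until the 
normalizer has been enabled explicitly.
{code:java}
  public boolean isNormalizerOn() {
    byte[] upData = super.getData(false);
    if (upData == null) {
      // No explicit switch state stored in ZK yet; treat the normalizer as off.
      return false;
    }
    try {
      return parseFrom(upData).getNormalizerOn();
    } catch (DeserializationException dex) {
      LOG.error("ZK state for RegionNormalizer could not be parsed "
        + Bytes.toStringBinary(upData));
      // return false to be safe.
      return false;
    }
  }
{code}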



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27635) Shutdown zookeeper logs coming via ReadOnlyZKClient when hbase shell started

2023-02-10 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27635:
---

 Summary: Shutdown zookeeper logs coming via ReadOnlyZKClient when 
hbase shell started
 Key: HBASE-27635
 URL: https://issues.apache.org/jira/browse/HBASE-27635
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4


When the hbase shell is started with HBase 2.5.2 there is too much logging of 
zk connection related details, classpaths etc., even though we enabled the 
ERROR log level for the zookeeper package.

{noformat}

2023-02-10 17:34:25,211 INFO  
[ReadOnlyZKClient-host1:2181,host2:2181,host3:2181@0x15c16f19] 
zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.5.9-5-a433770fc7b303332f10174221799495a26bbca2, 
built on 02/07/2023 13:02 GMT
2023-02-10 17:34:25,212 INFO  
[ReadOnlyZKClient-host1:2181,host2:2181,host3:2181@0x15c16f19] 
zookeeper.ZooKeeper: Client environment:host.name=host1
2023-02-10 17:34:25,212 INFO  
[ReadOnlyZKClient-host1:2181,host2:2181,host3:2181:2181@0x15c16f19] 
zookeeper.ZooKeeper: Client environment:java.version=1.8.0_352
2023-02-10 17:34:25,212 INFO  
[ReadOnlyZKClient-host1:2181,host2:2181,host3:2181@0x15c16f19] 
zookeeper.ZooKeeper: Client environment:java.vendor=Red Hat, Inc.
2023-02-10 17:34:25,212 INFO  
[ReadOnlyZKClient-host1:2181,host2:2181,host3:2181@0x15c16f19] 
zookeeper.ZooKeeper: Client 
environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.352.b08-2.el7_9.x86_64/jre

{noformat}

In the same way it would be better to change the 
org.apache.hadoop.hbase.zookeeper package log level to error.
{noformat}

# Set logging level to avoid verboseness
org.apache.logging.log4j.core.config.Configurator.setAllLevels('org.apache.zookeeper',
 log_level)
org.apache.logging.log4j.core.config.Configurator.setAllLevels('org.apache.hadoop',
 log_level)

org.apache.logging.log4j.core.config.Configurator.setAllLevels('org.apache.hadoop.hbase.zookeeper',
 log_level)

{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27793) hbck can report unknown servers as inconsistencies

2023-04-13 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27793:
---

 Summary: hbck can report unknown servers as inconsistencies
 Key: HBASE-27793
 URL: https://issues.apache.org/jira/browse/HBASE-27793
 Project: HBase
  Issue Type: Bug
  Components: hbck
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently hbck does not report unknown servers. It would be helpful to report 
those as inconsistencies, so that the hbck2 scheduleRecoveries option can be 
used directly to recover from the unknown servers; otherwise, taking action on 
the inconsistencies reported due to unknown servers may cause corruption if not 
done properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27889) Possible NPE while getting cluster status from master

2023-05-24 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-27889:
---

 Summary: Possible NPE while getting cluster status from master
 Key: HBASE-27889
 URL: https://issues.apache.org/jira/browse/HBASE-27889
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


 
{noformat}
2023-05-23 13:31:33,840 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=395,queue=35,port=16000] ipc.RpcServer: 
Unexpected throwable object 
java.lang.NullPointerException
    at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos$ServerTask$Builder.setStatus(ClusterStatusProtos.java:14120)
    at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toServerTask(ProtobufUtil.java:3565)
    at 
org.apache.hadoop.hbase.ClusterMetricsBuilder.lambda$toClusterStatus$4(ClusterMetricsBuilder.java:80)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
    at 
org.apache.hadoop.hbase.ClusterMetricsBuilder.toClusterStatus(ClusterMetricsBuilder.java:80)
    at 
org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:980)
    at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
    at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
{noformat}
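
The trace points at the generated ServerTask builder rejecting a null status 
string in ProtobufUtil.toServerTask. A purely illustrative guard of that kind 
is sketched below; the two interfaces are stand-ins for the real HBase and 
protobuf types, not the actual fix.
{code:java}
final class ServerTaskConversionSketch {
  /** Stand-in for the server-side task view (assumed accessor). */
  interface ServerTaskView {
    String getStatus();
  }

  /** Stand-in for the generated protobuf builder, whose setters reject null. */
  interface ServerTaskProtoBuilder {
    ServerTaskProtoBuilder setStatus(String status);
  }

  static void copyStatus(ServerTaskView task, ServerTaskProtoBuilder builder) {
    String status = task.getStatus();
    // Protobuf string setters throw NullPointerException on null, so substitute
    // an empty string when the task has no status yet.
    builder.setStatus(status == null ? "" : status);
  }
}
{code}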



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28371) Suppress the noisy logging on HBaseAdmin#postOperationResult

2024-02-15 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28371:
---

 Summary: Suppress the noisy logging on 
HBaseAdmin#postOperationResult
 Key: HBASE-28371
 URL: https://issues.apache.org/jira/browse/HBASE-28371
 Project: HBase
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


While testing 2.6.0 it was found that for every admin operation there is an 
INFO log of the operation status, which is noisy. It would be better to 
suppress it by making it debug (a sketch follows the examples below).
{noformat}

hbase:002:0> create 't','f'
2024-02-16 06:02:38,887 INFO  [main] client.HBaseAdmin: Operation: CREATE, 
Table Name: default:t, procId: 65 completed
Created table t

{noformat}
{noformat}

hbase:006:0> flush 't'
2024-02-16 06:03:16,294 INFO  [main] client.HBaseAdmin: Operation: FLUSH, Table 
Name: default:t, procId: 68 completed
Took 0.3733 seconds

{noformat}
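
A minimal sketch of the suggested change in HBaseAdmin#postOperationResult 
(illustrative only; the logger setup and message shape are placeholders): log 
the completed operation at DEBUG instead of INFO so the shell stays quiet by 
default.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class OperationResultLoggingSketch {
  private static final Logger LOG =
    LoggerFactory.getLogger(OperationResultLoggingSketch.class);

  static void logCompleted(String operation, String tableName, long procId) {
    // Previously logged at INFO, which shows up for every shell command.
    LOG.debug("Operation: {}, Table Name: {}, procId: {} completed", operation,
      tableName, procId);
  }
}
{code}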



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28350) Unable to start running hbase-it tests with JDK 17

2024-02-07 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28350:
---

 Summary: Unable to start running hbase-it tests with JDK 17 
 Key: HBASE-28350
 URL: https://issues.apache.org/jira/browse/HBASE-28350
 Project: HBase
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


It could be because of the CMS flags used to start running the hbase-it tests.
{noformat}
[ERROR] Please refer to apache/hbase/hbase-it/target/failsafe-reports for the 
individual test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, 
[date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] The forked VM terminated without properly saying goodbye. VM crash or 
System.exit called?
[ERROR] Command was /bin/sh -c cd 'apache/hbase/hbase-it' && 
'/Library/Java/JavaVirtualMachines/jdk-17.jdk/Contents/Home/bin/java' 
'-enableassertions' '-Xmx4g' '-Djava.security.egd=file:/dev/./urandom' 
'-XX:+CMSClassUnloadingEnabled' '-verbose:gc' '-XX:+PrintCommandLineFlags' 
'-XX:+PrintFlagsFinal' '-jar' 
'apache/hbase/hbase-it/target/surefire/surefirebooter-20240208114255880_3.jar' 
'apache/hbase/hbase-it/target/surefire' '2024-02-08T11-42-55_824-jvmRun1' 
'surefire-20240208114255880_1tmp' 'surefire_0-20240208114255880_2tmp'
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:643)
[ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:285)
[ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
[ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1203)
[ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1055)
[ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:871)
[ERROR] at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:126)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute2(MojoExecutor.java:328)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:316)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:174)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.access$000(MojoExecutor.java:75)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor$1.run(MojoExecutor.java:162)
[ERROR] at 
org.apache.maven.plugin.DefaultMojosExecutionStrategy.execute(DefaultMojosExecutionStrategy.java:39)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:159)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:105)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:73)
[ERROR] at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:53)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:118)
[ERROR] at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:261)
[ERROR] at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:173)
[ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:101)
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:906)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:283)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:206)
[ERROR] at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
[ERROR] at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] at java.base/java.lang.reflect.Method.invoke(Method.java:568)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:283)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:226)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:407)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:348)

{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-28341) [JDK17] Fix Failure TestLdapHttpServer

2024-02-11 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-28341.
-
Resolution: Fixed

Pushed to master, 3.x and 2.x branches. Thanks for review [~zhangduo].

> [JDK17] Fix Failure TestLdapHttpServer
> --
>
> Key: HBASE-28341
> URL: https://issues.apache.org/jira/browse/HBASE-28341
> Project: HBase
>  Issue Type: Sub-task
> Environment: 
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 4.0.0-alpha-1, 2.7.0, 2.5.8, 3.0.0-beta-2
>
>
> TestLdapHttpServer is failing with JDK17 because of internal APIs usage.
> {code:java}
> [INFO] Running org.apache.hadoop.hbase.http.TestLdapHttpServer
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.18 
> s <<< FAILURE! - in org.apache.hadoop.hbase.http.TestLdapHttpServer
> [ERROR] org.apache.hadoop.hbase.http.TestLdapHttpServer  Time elapsed: 7.165 
> s  <<< ERROR!
> java.lang.IllegalAccessError: class 
> org.apache.directory.server.core.security.CertificateUtil (in unnamed module 
> @0x25bbf683) cannot access class sun.security.x509.X500Name (in module 
> java.base) because module java.base does not export sun.security.x509 to 
> unnamed module @0x25bbf683
>   at 
> org.apache.directory.server.core.security.CertificateUtil.createTempKeyStore(CertificateUtil.java:334)
>   at 
> org.apache.directory.server.factory.ServerAnnotationProcessor.instantiateLdapServer(ServerAnnotationProcessor.java:158)
>   at 
> org.apache.directory.server.factory.ServerAnnotationProcessor.createLdapServer(ServerAnnotationProcessor.java:318)
>   at 
> org.apache.directory.server.factory.ServerAnnotationProcessor.createLdapServer(ServerAnnotationProcessor.java:351)
>   at 
> org.apache.directory.server.core.integ.CreateLdapServerRule$2.evaluate(CreateLdapServerRule.java:112)
>   at 
> org.apache.directory.server.core.integ.CreateDsRule$2.evaluate(CreateDsRule.java:124)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
> [INFO] 
> [INFO] Results:
> [INFO] 
> [ERROR] Errors: 
> [ERROR]   TestLdapHttpServer » IllegalAccess class 
> org.apache.directory.server.core.security.CertificateUtil (in unnamed module 
> @0x25bbf683) cannot access class sun.security.x509.X500Name (in module 
> java.base) because module java.base does not export sun.security.x509 to 
> unnamed module @0x25bbf683
> [INFO] 
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {code}
> Adding the following to the jvm flags works.
> {code}
> +  --add-opens java.base/sun.security.x509=ALL-UNNAMED
> +  --add-opens java.base/sun.security.util=ALL-UNNAMED
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28279) Bump up jetty-server to 9.4.53.v20231009

2023-12-22 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28279:
---

 Summary: Bump up jetty-server to 9.4.53.v20231009
 Key: HBASE-28279
 URL: https://issues.apache.org/jira/browse/HBASE-28279
 Project: HBase
  Issue Type: Bug
  Components: thirdparty
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Bump up jetty-server to 9.4.53.v20231009 to avoid CVE-2023-36478



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-28279) Bump up jetty-server, jetty-http to 9.4.53.v20231009

2024-01-05 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-28279.
-
Resolution: Fixed

Committed to master. Thanks for review [~bbeaudreault], [~nihaljain.cs]

> Bump up jetty-server, jetty-http to 9.4.53.v20231009
> 
>
> Key: HBASE-28279
> URL: https://issues.apache.org/jira/browse/HBASE-28279
> Project: HBase
>  Issue Type: Bug
>  Components: thirdparty
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: thirdparty-4.1.6
>
>
> Bump up jetty-server to 9.4.53.v20231009 to avoid CVE-2023-36478



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28341) [JDK17] Fix Failure TestLdapHttpServer

2024-02-01 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28341:
---

 Summary: [JDK17] Fix Failure TestLdapHttpServer
 Key: HBASE-28341
 URL: https://issues.apache.org/jira/browse/HBASE-28341
 Project: HBase
  Issue Type: Sub-task
 Environment: TestLdapHttpServer is failing with JDK17 because of 
internal APIs usage.
{code:java}
[INFO] Running org.apache.hadoop.hbase.http.TestLdapHttpServer
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.18 s 
<<< FAILURE! - in org.apache.hadoop.hbase.http.TestLdapHttpServer
[ERROR] org.apache.hadoop.hbase.http.TestLdapHttpServer  Time elapsed: 7.165 s  
<<< ERROR!
java.lang.IllegalAccessError: class 
org.apache.directory.server.core.security.CertificateUtil (in unnamed module 
@0x25bbf683) cannot access class sun.security.x509.X500Name (in module 
java.base) because module java.base does not export sun.security.x509 to 
unnamed module @0x25bbf683
at 
org.apache.directory.server.core.security.CertificateUtil.createTempKeyStore(CertificateUtil.java:334)
at 
org.apache.directory.server.factory.ServerAnnotationProcessor.instantiateLdapServer(ServerAnnotationProcessor.java:158)
at 
org.apache.directory.server.factory.ServerAnnotationProcessor.createLdapServer(ServerAnnotationProcessor.java:318)
at 
org.apache.directory.server.factory.ServerAnnotationProcessor.createLdapServer(ServerAnnotationProcessor.java:351)
at 
org.apache.directory.server.core.integ.CreateLdapServerRule$2.evaluate(CreateLdapServerRule.java:112)
at 
org.apache.directory.server.core.integ.CreateDsRule$2.evaluate(CreateDsRule.java:124)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   TestLdapHttpServer » IllegalAccess class 
org.apache.directory.server.core.security.CertificateUtil (in unnamed module 
@0x25bbf683) cannot access class sun.security.x509.X500Name (in module 
java.base) because module java.base does not export sun.security.x509 to 
unnamed module @0x25bbf683
[INFO] 
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

{code}

Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28397) Make use of jenkins libraries to reuse the stage definition code to run with multiple JDKs

2024-02-22 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28397:
---

 Summary: Make use of jenkins libraries to reuse the stage 
definition code to run with multiple JDKs
 Key: HBASE-28397
 URL: https://issues.apache.org/jira/browse/HBASE-28397
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


As per the discussion at 
https://github.com/apache/hbase/pull/5689#pullrequestreview-1892363058, it 
would be better to make use of jenkins libraries to reuse most of the code in 
the Jenkinsfiles to support multiple JDKs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28406) Move namespace to RS group can support multiple options to move tables under it

2024-02-27 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28406:
---

 Summary: Move namespace to RS group can support multiple options 
to move tables under it
 Key: HBASE-28406
 URL: https://issues.apache.org/jira/browse/HBASE-28406
 Project: HBase
  Issue Type: Improvement
  Components: rsgroup
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.0.0-alpha-1, 3.0.0-beta-2


As discussed here
[https://github.com/apache/hbase/pull/5661#issuecomment-1949090826]

Move namespaces to rsgroup can have multiple options like:
 # *Move All Tables:* This option involves moving all tables within a namespace 
to the new RS group, regardless of the current RS group they belong to. This 
would effectively consolidate all tables of the namespace into the new RS 
group. This is the same as the current implementation so it can be default 
behavior.

 # {*}Move Tables present in current RS group{*}: With this option, only tables 
belonging to the current RS group of the namespace would be moved to the new RS 
group. This provides more granular control, allowing users to choose specific 
tables to move based on their needs. If the namespace does not belong to any RS 
group, the namespace tables in the default RS group would be moved to the new 
RS group.

 # *Move No Tables:* This option involves changing the RS group of the 
namespace without moving any tables. Existing tables would remain in their 
current RS group. This could be useful if there's a desire to separate the 
namespaces by RS group but without immediately moving the tables.

With these options we can give the user control to choose the proper option 
based on their requirements (an illustrative sketch follows). 
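
Purely illustrative: one way the three options above could be modeled as an 
enum passed alongside the namespace move; the enum and its use in any admin 
call are hypothetical, not an existing HBase API.
{code:java}
public enum NamespaceTableMoveOption {
  /** Move every table of the namespace, whatever its current RS group (today's behavior). */
  MOVE_ALL_TABLES,
  /** Move only the tables sitting in the namespace's current (or default) RS group. */
  MOVE_TABLES_IN_CURRENT_GROUP,
  /** Change the namespace's RS group without moving any existing tables. */
  MOVE_NO_TABLES
}
{code}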



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-26372) [JDK17] Jenkins build support

2024-03-01 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-26372.
-
Fix Version/s: 4.0.0-alpha-1
   3.0.0-beta-2
   Resolution: Fixed

Handled as part of HBASE-27949. Hence closing.

> [JDK17] Jenkins build support
> -
>
> Key: HBASE-26372
> URL: https://issues.apache.org/jira/browse/HBASE-26372
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.0.0-alpha-1, 3.0.0-beta-2
>
> Attachments: Screenshot from 2024-02-23 22-58-51.png
>
>
> We'll need to update our build infrastructure to include a JDK17 environment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-28350) [JDK17] Unable to run hbase-it tests with JDK 17

2024-03-01 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-28350.
-
Resolution: Fixed

> [JDK17] Unable to run hbase-it tests with JDK 17 
> -
>
> Key: HBASE-28350
> URL: https://issues.apache.org/jira/browse/HBASE-28350
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 4.0.0-alpha-1, 2.7.0, 3.0.0-beta-2, 2.5.9
>
>
> It could be because of CMS flags used to start running it tests
> {noformat}
> [ERROR] Please refer to apache/hbase/hbase-it/target/failsafe-reports for the 
> individual test results.
> [ERROR] Please refer to dump files (if any exist) [date].dump, 
> [date]-jvmRun[N].dump and [date].dumpstream.
> [ERROR] The forked VM terminated without properly saying goodbye. VM crash or 
> System.exit called?
> [ERROR] Command was /bin/sh -c cd 'apache/hbase/hbase-it' && 
> '/Library/Java/JavaVirtualMachines/jdk-17.jdk/Contents/Home/bin/java' 
> '-enableassertions' '-Xmx4g' '-Djava.security.egd=file:/dev/./urandom' 
> '-XX:+CMSClassUnloadingEnabled' '-verbose:gc' '-XX:+PrintCommandLineFlags' 
> '-XX:+PrintFlagsFinal' '-jar' 
> 'apache/hbase/hbase-it/target/surefire/surefirebooter-20240208114255880_3.jar'
>  'apache/hbase/hbase-it/target/surefire' '2024-02-08T11-42-55_824-jvmRun1' 
> 'surefire-20240208114255880_1tmp' 'surefire_0-20240208114255880_2tmp'
> [ERROR] Error occurred in starting fork, check output in log
> [ERROR] Process Exit Code: 1
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:643)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:285)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1203)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1055)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:871)
> [ERROR]   at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:126)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.doExecute2(MojoExecutor.java:328)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.doExecute(MojoExecutor.java:316)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:174)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.access$000(MojoExecutor.java:75)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor$1.run(MojoExecutor.java:162)
> [ERROR]   at 
> org.apache.maven.plugin.DefaultMojosExecutionStrategy.execute(DefaultMojosExecutionStrategy.java:39)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:159)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:105)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:73)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:53)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:118)
> [ERROR]   at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:261)
> [ERROR]   at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:173)
> [ERROR]   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:101)
> [ERROR]   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:906)
> [ERROR]   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:283)
> [ERROR]   at org.apache.maven.cli.MavenCli.main(MavenCli.java:206)
> [ERROR]   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> [ERROR]   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
> [ERROR]   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:283)
> 

[jira] [Created] (HBASE-28438) Add support for splitting a region into multiple regions

2024-03-12 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28438:
---

 Summary: Add support for splitting a region into multiple regions
 Key: HBASE-28438
 URL: https://issues.apache.org/jira/browse/HBASE-28438
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


We have a requirement of splitting one region into hundreds of regions at a 
time to distribute the load of hot data. Today we need to split a region, wait 
for the split to complete, then split the two daughters again, and so on, which 
is a time consuming activity. 
It would be better to support splitting a region into more than two regions so 
that a single operation can split the region.
To do that we need to take care of the following:
1) Supporting admin APIs that take multiple split keys (see the sketch after 
this list)
2) Implementing a new procedure to create the new regions, creating the meta 
entries and updating them in meta
3) Closing the parent region and opening the split regions
4) Updating the post-split compaction, and the readers, to use a portion store 
file reader based on the range to scan rather than the half store file reader
5) Making sure the catalog janitor also cleans up the parent region once all 
the daughter regions have split properly
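
A hypothetical sketch of the admin-facing part of item 1 above; the interface, 
method name, parameters and return type are assumptions for illustration, not 
an existing API.
{code:java}
import java.util.List;
import java.util.concurrent.Future;

public interface MultiSplitAdminSketch {
  /**
   * Split the given region at all of the supplied split keys in one operation,
   * producing splitKeys.size() + 1 daughter regions instead of chaining
   * repeated binary splits.
   */
  Future<Void> splitRegionAsync(byte[] regionName, List<byte[]> splitKeys);
}
{code}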



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28528) Improvements in HFile prefetch

2024-04-17 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28528:
---

 Summary: Improvements in HFile prefetch
 Key: HBASE-28528
 URL: https://issues.apache.org/jira/browse/HBASE-28528
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently hfile prefetch on open is configurable cluster wide. It would be 
better to make it configurable per table. It would also be better to have 
region filters which allow specifying which regions' data should be prefetched. 
This will be useful when there are hot regions whose data prefetching can help 
meet low latency requirements (an illustrative sketch follows).
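
Purely illustrative: one possible shape for a table-level prefetch switch and a 
region filter, using hypothetical "PREFETCH_BLOCKS_ON_OPEN" and 
"PREFETCH_REGION_FILTER" table properties; neither key is an existing HBase 
setting, only the Admin and TableDescriptorBuilder calls are real.
{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class TablePrefetchConfigSketch {
  static void enablePrefetch(Admin admin, TableName table) throws IOException {
    TableDescriptor current = admin.getDescriptor(table);
    TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
      // hypothetical table-level override of the cluster-wide prefetch-on-open flag
      .setValue("PREFETCH_BLOCKS_ON_OPEN", "true")
      // hypothetical filter restricting prefetch to hot regions by start-key prefix
      .setValue("PREFETCH_REGION_FILTER", "startKeyPrefix=user_")
      .build();
    admin.modifyTable(updated);
  }
}
{code}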



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28530) Better not use threads when parallel seek enabled and only one storescanner to seek

2024-04-17 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created HBASE-28530:
---

 Summary: Better not use threads when parallel seek enabled and 
only one storescanner to seek
 Key: HBASE-28530
 URL: https://issues.apache.org/jira/browse/HBASE-28530
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


When parallel seek is enabled, we seek through the scanners using multiple 
threads and wait on the countdown latch for the seek to complete on all the 
scanners. It would be better not to use threads when there is only one scanner 
to seek. It might not be a significant improvement, but it will be useful when 
a region has a single store file post major compaction (a sketch follows the 
snippet below).

{code:java}
  private void parallelSeek(final List<? extends KeyValueScanner> scanners, final Cell kv)
    throws IOException {
    if (scanners.isEmpty()) return;
    int storeFileScannerCount = scanners.size();
    CountDownLatch latch = new CountDownLatch(storeFileScannerCount);
    List<ParallelSeekHandler> handlers = new ArrayList<>(storeFileScannerCount);
    for (KeyValueScanner scanner : scanners) {
      if (scanner instanceof StoreFileScanner) {
        ParallelSeekHandler seekHandler = new ParallelSeekHandler(scanner, kv,
          this.readPt, latch);
        executor.submit(seekHandler);
        handlers.add(seekHandler);
      } else {
        scanner.seek(kv);
        latch.countDown();
      }
    }

    try {
      latch.await();
    } catch (InterruptedException ie) {
      throw (InterruptedIOException) new InterruptedIOException().initCause(ie);
    }
  }
{code}
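
A minimal sketch of the suggested short-circuit (illustrative only), meant to 
slot into the method shown above before the parallel path: with a single 
scanner, seek inline and skip the executor and CountDownLatch entirely.
{code:java}
  private boolean seekInlineIfSingle(List<? extends KeyValueScanner> scanners, Cell kv)
    throws IOException {
    if (scanners.size() == 1) {
      // Only one scanner: seeking on the calling thread avoids the handler,
      // executor submission and latch wait used by the parallel path.
      scanners.get(0).seek(kv);
      return true;
    }
    return false;
  }
{code}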




--
This message was sent by Atlassian Jira
(v8.20.10#820010)