[ 
https://issues.apache.org/jira/browse/HBASE-19804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335183#comment-16335183
 ] 

stack commented on HBASE-19804:
-------------------------------

Punting from beta-2 since there is a workaround. We register a bunch of mbeans 
when we come up. We'd have to add sub=PORT_NO to all of the beans so you could 
register multiple RegionServers in the same context.
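
A rough sketch of what that could look like (not actual HBase code; the class, 
the port lookup, and the names below are made up for illustration): fold the RS 
port into the registered source name so a second RegionServer in the same JVM 
does not collide on "RegionServer,sub=Server".

{code:java}
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsSource;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MetricsRegistry;

// Illustration only: a trivial source registered under a per-port name.
public final class PerPortSourceSketch implements MetricsSource {
  private final MetricsRegistry registry = new MetricsRegistry("RegionServer");

  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    registry.snapshot(collector.addRecord(registry.info()), all);
  }

  // rpcPort stands in for however the embedding app knows the RS port.
  public static void registerFor(int rpcPort) {
    DefaultMetricsSystem.instance().register(
        "RegionServer,sub=Server,port=" + rpcPort,
        "Metrics about a single RegionServer",
        new PerPortSourceSketch());
  }
}
{code}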

> [hbase-indexer] Metrics source RegionServer,sub=Server already exists!
> ----------------------------------------------------------------------
>
>                 Key: HBASE-19804
>                 URL: https://issues.apache.org/jira/browse/HBASE-19804
>             Project: HBase
>          Issue Type: Improvement
>          Components: hbase-indexer
>    Affects Versions: 2.0.0-beta-1
>            Reporter: stack
>            Assignee: stack
>            Priority: Major
>             Fix For: 2.0.0
>
>
> The hbase-indexer runs multiple RegionServers per JVM. In the old days, it 
> ran its own cut-down "RegionServer". In 2.0.0, we made it so it could run an 
> actual RegionServer, just with services disabled. The latter has an issue if 
> you run more than one instance per JVM and it is NOT a minihbasecluster 
> instance. It fails with:
> {code:java}
> 1:09:13.371 PM  ERROR  HRegionServer  
> Failed init
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> RegionServer,sub=Server already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
>   at 
> org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:115)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.<init>(MetricsRegionServerSourceImpl.java:101)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.<init>(MetricsRegionServerSourceImpl.java:93)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createServer(MetricsRegionServerSourceFactoryImpl.java:69)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServer.<init>(MetricsRegionServer.java:56)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1519)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:954)
>   at com.ngdata.sep.impl.SepConsumer$1.run(SepConsumer.java:203){code}
>  
> If you look at the DefaultMetricsSystem code (found by [~whoschek]), you'll 
> see this:
> {code:java}
> synchronized ObjectName newObjectName(String name) {
>     try {
>       if (mBeanNames.map.containsKey(name) && !miniClusterMode) {
>         throw new MetricsException(name +" already exists!");
>       }
>       return new ObjectName(mBeanNames.uniqueName(name));
>     } catch (Exception e) {
>       throw new MetricsException(e);
>     }
>   }{code}
> i.e. if we are in a mini cluster context, we will not fail registering the 
> second bean instance.
>  
> If you look in master startup in HMasterCommandLine, you will see:
>  
> {code:java}
> // If 'local', defer to LocalHBaseCluster instance.  Starts master
> // and regionserver both in the one JVM.
> if (LocalHBaseCluster.isLocal(conf)) {
>   DefaultMetricsSystem.setMiniClusterMode(true);
> ....{code}
> ... will ensure we don't get the above exception in minihbasecluster context.
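> A minimal sketch of the workaround an embedder can apply today, assuming it 
> controls RegionServer startup the way SepConsumer does (illustration only, 
> not a proper fix):
> {code:java}
> import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
> 
> // Flip the mini-cluster flag before constructing any embedded RegionServer
> // so a duplicate source name gets uniquified instead of throwing
> // "already exists!". The per-RS metrics then get renamed/conflated, which
> // is why a real config is wanted.
> DefaultMetricsSystem.setMiniClusterMode(true);
> // ... then start the embedded RegionServers as before.
> {code}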
>  
> So, the idea here is to make running more than one RS per JVM cleaner than 
> the above hack. It needs to be a config too: a config that says don't fail 
> startup on a second mbean registration just because there are two RSes in 
> the one context. (A later issue will be the accounting of metrics per RS; if 
> there is more than one RS, we should make a unique mbean per RS in the JVM.)
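> A hypothetical shape for that config (the property name below is made up for 
> illustration, not an agreed-on key):
> {code:java}
> // Hypothetical key in the embedder's configuration:
> //   hbase.metrics.allow.duplicate.sources = true
> Configuration conf = HBaseConfiguration.create();
> if (conf.getBoolean("hbase.metrics.allow.duplicate.sources", false)) {
>   // Same effect as the HMasterCommandLine hack above, but driven by config
>   // rather than being tied to 'local' mode.
>   DefaultMetricsSystem.setMiniClusterMode(true);
> }
> {code}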



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
