[jira] [Comment Edited] (AMBARI-24382) Ambari shouldn't deploy "Undefined" Config Values
[ https://issues.apache.org/jira/browse/AMBARI-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566938#comment-16566938 ]

Hari Sekhon edited comment on AMBARI-24382 at 8/2/18 3:22 PM:
--------------------------------------------------------------

I've just found out that this doesn't happen if I change hbase-env.sh in the default config group, probably because it doesn't touch any setting that would cause hbase-site.xml to be regenerated and include Undefined values.

I suspect the fix is simply to test each hbase-site.xml setting for being properly defined, and to omit any setting with an Undefined value from being written into hbase-site.xml, which should be straightforward for a dev who knows the code base well.

was (Author: harisekhon):
I've just found out that this doesn't happen if I change hbase-env.sh in the default config group, probably because it doesn't touch any setting that would cause hbase-site.xml to be regenerated and include Undefined values. I would think the fix is simply to test each hbase-site.xml setting for being properly defined, and to omit any setting with an Undefined value from being written into hbase-site.xml, which should be straightforward for a dev who knows the code base well.

> Ambari shouldn't deploy "Undefined" Config Values
> -------------------------------------------------
>
>                 Key: AMBARI-24382
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24382
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.5.2
>            Reporter: Hari Sekhon
>            Priority: Blocker
>
> Ambari should not deploy any properties with an "Undefined" value (this breaks the
> HBase Master as shown below).
> When using a config group for HBase RegionServers to enable Bucket Cache only
> on RegionServers (see AMBARI-24370), and then enabling a bunch of settings
> related to using the bucket cache in the default config, Ambari will infer
> that there should be a bucketcache setting and injects the following
> properties with a literal value of "Undefined" into hbase-site.xml:
> {code:java}
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>Undefined</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>Undefined</value>
> </property>
> {code}
> which breaks HMaster restart:
> {code:java}
> 2018-07-30 13:26:08,283 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
> 	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2824)
> 	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:235)
> 	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> 	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2838)
> Caused by: java.lang.NumberFormatException: For input string: "Undefined"
> 	at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
> 	at sun.misc.FloatingDecimal.parseFloat(FloatingDecimal.java:122)
> 	at java.lang.Float.parseFloat(Float.java:451)
> 	at org.apache.hadoop.conf.Configuration.getFloat(Configuration.java:1400)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:597)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:566)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:650)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:239)
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:591)
> 	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:425)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> 	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2819)
> 	... 5 more
> {code}
> The above two settings are only added to a config group for RegionServers, but
> when the following settings are applied to the default config group, thinking
> they should be ignored by Masters because bucket cache isn't enabled on the
> Master, it turns out that Ambari infers they should exist, leaves them
> Undefined, but deploys them anyway (these settings are from the OpenTSDB HBase
> performance tuning guide, btw):
> {code:java}
> hbase.rs.cacheblocksonwrite=true
> hbase.rs.evictblocksonclose=false
> hfile.block.bloom.cacheonwrite=true
> hfile.block.index.cacheonwrite=true
> hbase.block.data.cachecompressed=true
> hbase.bucketcache.blockcache.single.percentage=.99
> hbase.bucketcache.blockcache.multi.percentage=0
> hbase.bucketcache.blockcache.memory.percentage=.01
> {code}
> I've worked around
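The fix suggested in the comment above — test each hbase-site.xml setting for being properly defined and omit any setting with an Undefined value — could be sketched roughly as follows. This is a hypothetical helper, not Ambari's actual code; the function name and the treatment of empty strings are illustrative assumptions:

```python
# Sketch: drop placeholder values before serializing hbase-site.xml.
# "Undefined" is the literal Ambari was observed to inject for
# properties it could not resolve for a host's config group.

UNRESOLVED = {"Undefined", ""}

def render_hbase_site(properties):
    """Render an hbase-site.xml body, omitting unresolved properties."""
    lines = ["<configuration>"]
    for name, value in sorted(properties.items()):
        if value is None or str(value).strip() in UNRESOLVED:
            continue  # omit rather than deploy a broken literal
        lines.append("  <property>")
        lines.append("    <name>%s</name>" % name)
        lines.append("    <value>%s</value>" % value)
        lines.append("  </property>")
    lines.append("</configuration>")
    return "\n".join(lines)
```

With the offending keys from this report, `hbase.bucketcache.size=Undefined` would simply be dropped from the rendered file, while properly defined settings are kept; since HBase falls back to its defaults for absent properties, omission is the safe behaviour.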
[jira] [Comment Edited] (AMBARI-24382) Ambari shouldn't deploy "Undefined" Config Values
[ https://issues.apache.org/jira/browse/AMBARI-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16564986#comment-16564986 ]

Hari Sekhon edited comment on AMBARI-24382 at 8/1/18 8:48 AM:
--------------------------------------------------------------

HBase Masters are currently unconfigurable, as any new config revision in the default config group or hmaster sub-group will inject a literal Undefined and break the HMasters. This is a pretty severe config management bug.

It came about because I had to enable Bucket Cache only on RegionServers via a sub-group, because the HMasters' hardware doesn't have enough RAM for the Bucket Cache and the direct memory allocation it requires. This could become quite common, as it is hardly an esoteric configuration to have differently sized masters and slaves, and therefore separate config groups for each.

was (Author: harisekhon):
HBase Masters are currently unconfigurable, as any new config revision in the default config group or hmaster sub-group will inject a literal Undefined and break the HMasters. This is a pretty severe config management bug. It came about because I had to enable Bucket Cache only on RegionServers via a sub-group, because the HMasters' hardware doesn't have enough RAM for the Bucket Cache and the direct memory allocation it requires, so this is something that could become quite common, as it is hardly an esoteric configuration to have smaller masters and separate config groups for slaves.
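The root cause shown in the stack trace quoted in this issue is `Float.parseFloat("Undefined")` inside Hadoop's `Configuration.getFloat`, reached when HBase reads `hbase.bucketcache.size`. A minimal Python analogue of the same failure mode (illustrative only; the real code path is the Java one in the trace):

```python
def get_float(conf, name, default):
    """Mimic Hadoop Configuration.getFloat: a missing property returns
    the default, but an unparseable value raises (Java throws
    NumberFormatException, aborting HMaster construction)."""
    raw = conf.get(name)
    if raw is None:
        return default
    return float(raw)  # raises ValueError on the literal "Undefined"

# The deployed config contains the injected placeholder:
conf = {"hbase.bucketcache.size": "Undefined"}
try:
    get_float(conf, "hbase.bucketcache.size", 0.0)
    crashed = False
except ValueError:
    crashed = True  # this is where the HMaster dies
```

Note the asymmetry: had the property been absent entirely, the default would have been returned harmlessly, which is why omitting unresolved properties (rather than deploying a placeholder string) avoids the crash.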
[jira] [Comment Edited] (AMBARI-24382) Ambari shouldn't deploy "Undefined" Config Values
[ https://issues.apache.org/jira/browse/AMBARI-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16564986#comment-16564986 ]

Hari Sekhon edited comment on AMBARI-24382 at 8/1/18 8:47 AM:
--------------------------------------------------------------

HBase Masters are currently unconfigurable, as any new config revision in the default config group or hmaster sub-group will inject a literal Undefined and break the HMasters. This is a pretty severe config management bug. It came about because I had to enable Bucket Cache only on RegionServers via a sub-group, because the HMasters' hardware doesn't have enough RAM for the Bucket Cache and the direct memory allocation it requires, so this is something that could become quite common, as it is hardly an esoteric configuration to have smaller masters and separate config groups for slaves.

was (Author: harisekhon):
HBase Masters are currently unconfigurable, as any new config revision in the default config group or hmaster sub-group will inject a literal Undefined and break the HMasters. This is a pretty severe config management bug. Reminder: it came about because I had to enable Bucket Cache only on RegionServers via a sub-group, because the HMasters' hardware doesn't have enough RAM for the Bucket Cache, so this is something that could become quite common, as it is hardly an esoteric configuration to have smaller masters and separate config groups for slaves.
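Until the bug is fixed, a pre-restart sanity check on the deployed file can catch the bad literal before it takes a master down. A hypothetical standalone checker, not part of Ambari (the property/value layout follows the hbase-site.xml format shown in the issue description):

```python
import xml.etree.ElementTree as ET

def find_undefined(hbase_site_xml):
    """Return the names of properties whose <value> is the
    literal 'Undefined' in an hbase-site.xml document."""
    root = ET.fromstring(hbase_site_xml)
    bad = []
    for prop in root.iter("property"):
        name = prop.findtext("name", default="?")
        value = (prop.findtext("value") or "").strip()
        if value == "Undefined":
            bad.append(name)
    return bad
```

Running this against the pushed /etc/hbase/conf/hbase-site.xml after each Ambari config deployment, and refusing to restart the HMaster if it returns anything, would turn the silent breakage into an explicit pre-flight failure.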
[jira] [Comment Edited] (AMBARI-24382) Ambari shouldn't deploy "Undefined" Config Values
[ https://issues.apache.org/jira/browse/AMBARI-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16563846#comment-16563846 ]

Hari Sekhon edited comment on AMBARI-24382 at 7/31/18 3:41 PM:
---------------------------------------------------------------

This keeps happening if any change is introduced at the default config group level. I've even tried moving the HMasters to their own sub-group outside the default config group, but the same thing happens: all values undefined for their group are picked up and inserted with literal "Undefined" values.

was (Author: harisekhon):
This keeps happening if any change is introduced at the default config group level. I've even tried moving the HMasters to their own sub-group outside the default config group, but the same thing happens: all values undefined for their group are picked up and inserted with literal "Undefined" values. When combined with the limitation in AMBARI-24393, it means I cannot apply main settings to HBase any more without completely breaking the HMasters' hbase-site.xml, e.g. I cannot change the handler count or anything else on the main HBase Settings page.
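One plausible model of how the literal arises — an assumption about Ambari's internals, not something this issue confirms — is that when Ambari infers dependent properties for a host's effective config (default group merged with its sub-group) and cannot resolve a value, it writes a placeholder instead of omitting the property. A toy merge reproducing the reported behaviour under that assumption:

```python
PLACEHOLDER = "Undefined"  # the literal observed in the deployed files

def effective_config(default_group, sub_group, inferred_keys):
    """Toy model: the sub-group overrides the default group; any
    inferred dependent key with no resolved value is filled with a
    placeholder instead of being omitted (the hypothesized bug)."""
    merged = dict(default_group)
    merged.update(sub_group)
    for key in inferred_keys:
        merged.setdefault(key, PLACEHOLDER)
    return merged

# Bucket Cache is enabled only in the RegionServer sub-group:
rs = effective_config({}, {"hbase.bucketcache.size": "16384"},
                      ["hbase.bucketcache.size"])
# The HMaster group never sets it, so the inferred key becomes a placeholder:
hm = effective_config({}, {}, ["hbase.bucketcache.size"])
```

Under this model the fix is a one-line change in spirit: skip unresolved inferred keys rather than `setdefault`-ing them to a placeholder.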
[jira] [Comment Edited] (AMBARI-24382) Ambari shouldn't deploy "Undefined" Config Values
[ https://issues.apache.org/jira/browse/AMBARI-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16563846#comment-16563846 ]

Hari Sekhon edited comment on AMBARI-24382 at 7/31/18 3:38 PM:
---------------------------------------------------------------

This keeps happening if any change is introduced at the default config group level. I've even tried moving the HMasters to their own sub-group outside the default config group, but the same thing happens: all values undefined for their group are picked up and inserted with literal "Undefined" values. When combined with the limitation in AMBARI-24393, it means I cannot apply main settings to HBase any more without completely breaking the HMasters' hbase-site.xml, e.g. I cannot change the handler count or anything else on the main HBase Settings page.

was (Author: harisekhon):
This keeps happening if any change is introduced at the default config group level. I've even tried moving the HMasters to their own sub-group outside the default config group, but the same thing happens: all values undefined for their group are picked up and inserted with literal "Undefined" values.