-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/62224/#review185099
-----------------------------------------------------------
Ship it!

Ship It!

- Robert Nettleton


On Sept. 11, 2017, 5:51 p.m., Madhuvanthi Radhakrishnan wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/62224/
> -----------------------------------------------------------
> 
> (Updated Sept. 11, 2017, 5:51 p.m.)
> 
> 
> Review request for Ambari, Amarnath reddy pappu, Jayush Luniya, and Robert Nettleton.
> 
> 
> Bugs: AMBARI-21865
>     https://issues.apache.org/jira/browse/AMBARI-21865
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> In a NameNode HA environment, exporting a Blueprint throws an NPE/Server Error when some configurations are missing.
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java b4e102737f
> 
> 
> Diff: https://reviews.apache.org/r/62224/diff/1/
> 
> 
> Testing
> -------
> 
> Installed Ambari-2.4.3 with HDP-2.5.3.0.
> 
> Added the following properties to "Custom hdfs-site":
> dfs.nameservices=nonha,hacluster
> dfs.namenode.rpc-address.nonha=host-1.openstacklocal:8020
> dfs.ha.namenodes.hacluster=nn1,nn2
> dfs.namenode.rpc-address.hacluster.nn2=host-2.openstacklocal:8020
> dfs.namenode.rpc-address.hacluster.nn1=host-tt-3.openstacklocal:8020
> dfs.namenode.http-address.nonha=host-tt-1.openstacklocal:50070
> dfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> dfs.nameservices.internal=nonha
> 
> Added the fix/null-check and then exported the blueprint, which succeeded. Some properties are not exported.
> Here is a snippet of the blueprint's hdfs-site section:
> "dfs.namenode.rpc-address.hacluster.nn2": "%HOSTGROUP::host_group_2%:8020",
> "dfs.namenode.rpc-address.hacluster.nn1": "%HOSTGROUP::host_group_3%:8020",
> "dfs.nameservices": "nonha,hacluster",
> "dfs.namenode.http-address.nonha": "host-1.openstacklocal:50070",
> "dfs.nameservices.internal": "nonha",
> "dfs.namenode.rpc-address": "%HOSTGROUP::host_group_1%:8020",
> "dfs.namenode.https-address": "%HOSTGROUP::host_group_1%:50470",
> "dfs.namenode.rpc-address.nonha": "host-1.openstacklocal:8020",
> "dfs.http.policy": "HTTP_ONLY",
> "dfs.client.failover.proxy.provider.hacluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
> "dfs.namenode.http-address": "%HOSTGROUP::host_group_1%:50070",
> "dfs.datanode.https.address": "0.0.0.0:50475",
> "dfs.ha.namenodes.hacluster": "nn1,nn2",
> "dfs.namenode.secondary.http-address": "%HOSTGROUP::host_group_2%:50090",
> "dfs.datanode.http.address": "0.0.0.0:50075"
> 
> As you can see, because dfs.ha.namenodes.nonha does not exist, no properties of the form dfs.namenode.rpc-address.nonha.<namenode> are exported.
> 
> 
> Thanks,
> 
> Madhuvanthi Radhakrishnan
> 
> 
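For context on what the fix amounts to: the Testing notes above describe adding a null check so that a nameservice listed in dfs.nameservices but lacking a dfs.ha.namenodes.<nameservice> entry is skipped instead of triggering an NPE during blueprint export. The sketch below is only an illustration of that idea under that assumption; the class and method names here are hypothetical and are not the actual patch, which is in BlueprintConfigurationProcessor.java (see the diff link above).

import java.util.HashMap;
import java.util.Map;

/** Hypothetical illustration of the null-check described above. */
public class NameServiceNullCheckSketch {

    /**
     * Maps each HA nameservice listed in dfs.nameservices to its configured
     * NameNode ids. A nameservice without a dfs.ha.namenodes.<ns> entry
     * (such as "nonha" in the test setup above) is skipped rather than
     * dereferenced, which is the kind of missing configuration that caused
     * the original NPE.
     */
    static Map<String, String> namenodesPerNameService(Map<String, String> hdfsSite) {
        Map<String, String> result = new HashMap<>();
        String nameServices = hdfsSite.get("dfs.nameservices");
        if (nameServices == null || nameServices.isEmpty()) {
            return result; // non-HA cluster: nothing to map
        }
        for (String nameService : nameServices.split(",")) {
            String namenodes = hdfsSite.get("dfs.ha.namenodes." + nameService);
            if (namenodes == null || namenodes.isEmpty()) {
                continue; // missing HA config for this nameservice: skip it
            }
            result.put(nameService, namenodes);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> hdfsSite = new HashMap<>();
        hdfsSite.put("dfs.nameservices", "nonha,hacluster");
        hdfsSite.put("dfs.ha.namenodes.hacluster", "nn1,nn2");
        // No dfs.ha.namenodes.nonha entry, mirroring the test setup above.
        System.out.println(namenodesPerNameService(hdfsSite)); // prints {hacluster=nn1,nn2}
    }
}

With a guard of this kind in place, the exported blueprint simply omits dfs.namenode.rpc-address.nonha.<namenode> entries, matching the behaviour shown in the hdfs-site snippet above.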
