> On June 16, 2016, 6:34 p.m., Di Li wrote:
> > ambari-web/app/controllers/main/admin/highAvailability/nameNode/step9_controller.js, line 146
> > <https://reviews.apache.org/r/48734/diff/1/?file=1420112#file1420112line146>
> >
> >     I am under the impression that the time it takes for the NN to exit safemode is largely determined by the amount of data in HDFS, not by whether the DNs are started before the NN.
> >
> >     Would it be safer to have some logic that checks whether the NameNode is out of safemode? On a cluster with terabytes of data in HDFS, it may take the NN quite some time (a few minutes, depending on the cluster's performance) to exit safemode.
> 
> Victor Galgo wrote:
>     Hi Di Li! Thanks for taking a look at this.
>     
>     The problem here is more complicated than it looks.
>     
>     *Here is the basic handling of safemode:*
>     During "Start All", as part of the NameNode start we wait until the NameNode leaves safemode before declaring the start successful.
>     
>     *However, in the HA wizard:*
>     We start the NameNodes at a point when the DataNodes are stopped, which means the NN won't leave safemode then; that's why we skip that wait on NN start in the HA wizard.
>     Later, when we do "Start All" (the last step of the wizard), the NameNodes are already started, so no wait for them to leave safemode is triggered when the DNs are started.
>     
>     My solution stops the NNs before "Start All", which means that when "Start All" runs in the HA wizard, the NN start step will ensure the NNs leave safemode (since the DNs are already started at that point).
> 
> Di Li wrote:
>     Hello Victor,
>     
>     Thanks for the explanation. I may be asking something obvious to experienced eyes, so please bear with me.
>     Could you please
>     1. point me to the logic that "during 'Start All', on NameNode start we wait until safemode is off before declaring the start successful", and
>     2. point me to the logic that skips #1 when the DNs aren't running.
>     
>     I looked at the HDFS NameNode Python scripts; the "wait_for_safemode_off" method seems to be called only during upgrades. I could have missed something, so please let me know.
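The check Di Li describes can be sketched in isolation. The snippet below is a minimal illustration, not the Ambari implementation: it parses the output of `hdfs dfsadmin -safemode get`, which prints status lines such as "Safe mode is OFF", and polls until no NameNode reports safemode as ON. The helper names (`is_safemode_off`, `poll_safemode_off`) and the timeout values are hypothetical.

```python
import subprocess
import time

def is_safemode_off(dfsadmin_output):
    # "hdfs dfsadmin -safemode get" prints one status line per NameNode,
    # e.g. "Safe mode is OFF in nn1.example.com/10.0.0.1:8020".
    # Treat safemode as off only when at least one status line is present
    # and none of them reports ON.
    lines = [l for l in dfsadmin_output.splitlines() if "Safe mode is" in l]
    return bool(lines) and all("Safe mode is OFF" in l for l in lines)

def poll_safemode_off(timeout_secs=600, poll_secs=10):
    # Hypothetical polling loop in the spirit of Ambari's
    # wait_for_safemode_off: keep asking until the NameNode(s) report
    # safemode off, or give up once the timeout expires.
    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        out = subprocess.check_output(
            ["hdfs", "dfsadmin", "-safemode", "get"]).decode()
        if is_safemode_off(out):
            return True
        time.sleep(poll_secs)
    return False
```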
ensure_safemode_off = True

# True if this is the only NameNode (non-HA) or if it's the Active one in HA
is_active_namenode = True

if params.dfs_ha_enabled:
  Logger.info("Waiting for the NameNode to broadcast whether it is Active or Standby...")

  if check_is_active_namenode(hdfs_binary):
    Logger.info("Waiting for the NameNode to leave Safemode since High Availability is enabled and it is Active...")
  else:
    # we are the STANDBY NN
    ensure_safemode_off = False

check_is_active_namenode will return False (after a lot of retries) for both NameNodes, since neither of them is even out of safemode yet. That sets ensure_safemode_off to False, which makes the script skip the block below:

# wait for Safemode to end
if ensure_safemode_off:
  wait_for_safemode_off(hdfs_binary)


- Victor


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/48734/#review138047
-----------------------------------------------------------


On June 15, 2016, 4:41 p.m., Victor Galgo wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/48734/
> -----------------------------------------------------------
> 
> (Updated June 15, 2016, 4:41 p.m.)
> 
> 
> Review request for Ambari, Andriy Babiichuk, Alexandr Antonenko, Andrew Onischuk, Di Li, Dmitro Lisnichenko, Jayush Luniya, Robert Levas, Sandor Magyari, Sumit Mohanty, Sebastian Toader, and Yusaku Sako.
> 
> 
> Bugs: AMBARI-17182
>     https://issues.apache.org/jira/browse/AMBARI-17182
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> On the last step of enabling HA, "Start All", the following happens:
> 
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 147, in <module>
>     ApplicationTimelineServer().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 43, in start
>     self.configure(env) # FOR SECURITY
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 54, in configure
>     yarn(name='apptimelineserver')
>   File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
>     return fn(*args, **kwargs)
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 276, in yarn
>     mode=0755
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 463, in action_create_on_execute
>     self.action_delayed("create")
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 460, in action_delayed
>     self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 259, in action_delayed
>     self._set_mode(self.target_status)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 366, in _set_mode
>     self.util.run_command(self.main_resource.resource.target, 'SETPERMISSION', method='PUT', permission=self.mode, assertable_result=False)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 195, in run_command
>     raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT 'http://testvgalgo.org:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "RetriableException",
>     "javaClassName": "org.apache.hadoop.ipc.RetriableException",
>     "message": "org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot set permission for /ats/done. Name node is in safe mode.\nThe reported blocks 675 needs additional 16 blocks to reach the threshold 0.9900 of total blocks 697.\nThe number of live datanodes 20 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached."
>   }
> }
> 
> 
> This happens because the NN is not yet out of safemode at the moment ATS starts, since the DNs have only just started.
> 
> To fix this, "stop NameNodes" has to be triggered before "Start All".
> 
> If this is done, "Start All" ensures that the DataNodes start before the NNs, and that the NNs are out of safemode before ATS starts.
> 
> 
> Diffs
> -----
> 
>   ambari-web/app/controllers/main/admin/highAvailability/nameNode/step9_controller.js 24677e4 
>   ambari-web/app/messages.js 6465812 
> 
> Diff: https://reviews.apache.org/r/48734/diff/
> 
> 
> Testing
> -------
> 
> Calling set on destroyed view
> Calling set on destroyed view
> Calling set on destroyed view
> Calling set on destroyed view
> 
>   28668 tests complete (34 seconds)
>   154 tests pending
> 
> [INFO]
> [INFO] --- apache-rat-plugin:0.11:check (default) @ ambari-web ---
> [INFO] 51 implicit excludes (use -debug for more details).
> [INFO] Exclude: .idea/**
> [INFO] Exclude: package.json
> [INFO] Exclude: public/**
> [INFO] Exclude: public-static/**
> [INFO] Exclude: app/assets/**
> [INFO] Exclude: vendor/**
> [INFO] Exclude: node_modules/**
> [INFO] Exclude: node/**
> [INFO] Exclude: npm-debug.log
> [INFO] 1425 resources included (use -debug for more details)
> Warning: org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser: Property 'http://www.oracle.com/xml/jaxp/properties/entityExpansionLimit' is not recognized.
> Compiler warnings:
>   WARNING: 'org.apache.xerces.jaxp.SAXParserImpl: Property 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
> Warning: org.apache.xerces.parsers.SAXParser: Feature 'http://javax.xml.XMLConstants/feature/secure-processing' is not recognized.
> Warning: org.apache.xerces.parsers.SAXParser: Property 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.
> Warning: org.apache.xerces.parsers.SAXParser: Property 'http://www.oracle.com/xml/jaxp/properties/entityExpansionLimit' is not recognized.
> [INFO] Rat check: Summary of files. Unapproved: 0 unknown: 0 generated: 0 approved: 1425 licence.
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 1:31.015s
> [INFO] Finished at: Sun Jun 12 14:37:47 EEST 2016
> [INFO] Final Memory: 13M/407M
> [INFO] ------------------------------------------------------------------------
> 
> Also, to test this I have installed a 3-node cluster and enabled NameNode HA on it.
> 
> 
> Thanks,
> 
> Victor Galgo
> 
>
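As a sanity check on the numbers in the SafeModeException quoted in the description: with a block threshold of 0.9900 and 697 total blocks, the NameNode needs ceil(0.99 * 697) = 691 reported blocks to leave safemode, and 691 - 675 = 16, which matches "needs additional 16 blocks". A client hitting this failure over WebHDFS can recognize it from the 403 response body. The sketch below is a hypothetical illustration of both points, not Ambari's hdfs_resource.py logic; the function names are invented.

```python
import json
import math

def blocks_needed(reported, total, threshold=0.99):
    # Blocks still required before the NameNode's safemode exit
    # condition (threshold * total, rounded up) is satisfied.
    required = int(math.ceil(threshold * total))
    return max(0, required - reported)

def is_safemode_rejection(status_code, body):
    # WebHDFS wraps the SafeModeException in a RetriableException and
    # returns HTTP 403; detecting that lets the caller retry later
    # instead of failing outright.
    if status_code != 403:
        return False
    try:
        exc = json.loads(body).get("RemoteException", {})
    except ValueError:
        return False
    return "SafeModeException" in exc.get("message", "")
```

With the values from the error message, `blocks_needed(675, 697)` reproduces the reported shortfall of 16 blocks.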