Try ntpdate -b -p8 <whichever server>. However, you flat-out should not be seeing 13 minutes; something else is wrong. I suggest running ntpdate -d -b -p8 <whichever server> and looking at the results.
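For reference, ntpdate's debug output reports the measured offset in seconds, so a 13-minute skew would show up as an offset near 780. A quick sketch for pulling the offset field out of such a line (the sample line below is illustrative, not captured from your servers -- the real command needs root and network access):

```shell
# Illustrative ntpdate-style sample line; 192.0.2.10 is a placeholder address.
line='server 192.0.2.10, stratum 2, offset 780.123456, delay 0.02571'

# Extract the offset field (seconds) that follows the word "offset".
offset=$(echo "$line" | sed 's/.*offset \([0-9.+-]*\),.*/\1/')
echo "offset: $offset s"   # prints: offset: 780.123456 s
```

Dividing that by 60 gives the skew in minutes; anything much over a second or two on a cluster node is worth fixing before retrying the job.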
On Thu, Feb 26, 2015 at 10:54 AM, Jan van Bemmelen <[email protected]> wrote:

> Hi Tariq,
>
> This is not really a Hadoop issue, but a more general Linux time
> question. Here's how to manually get the time synchronised:
>
> /etc/init.d/ntp stop (or whatever way you prefer to kill ntpd)
> ntpdate 0.centos.pool.ntp.org
>
> This should sync the time with the CentOS pool NTP server and output a
> line indicating the time difference. If this doesn't fix the time
> difference between your machines, you can check online for info on how
> to set up NTP, or set the time manually using the 'date' command; see
> 'man date' for more info. Once the 13-minute time difference has been
> corrected, restart your job.
>
> Regards,
> Jan
>
> On 26 Feb 2015, at 19:39, [email protected] wrote:
>
> Thanks Jan,
>
> I followed the link and rebooted the node. Still no success.
>
> The time on this node is about 13 minutes behind the other nodes. Any
> other suggestions, please? This node is working as my namenode.
>
> On Thu, Feb 26, 2015 at 6:31 PM, Jan van Bemmelen <[email protected]> wrote:
>
>> Hi Tariq,
>>
>> You seem to be using Debian or Ubuntu. The documentation here will
>> guide you through setting up NTP:
>> http://www.cyberciti.biz/faq/debian-ubuntu-linux-install-ntpd/
>> When you have finished these steps you can check the systems' clocks
>> using the 'date' command. The differences between the servers should
>> be minimal.
>>
>> Regards,
>> Jan
>>
>> On 26 Feb 2015, at 19:19, [email protected] wrote:
>>
>> Thanks Jan. I did the following:
>>
>> 1) Manually set the timezone of all the nodes using "sudo
>>    dpkg-reconfigure tzdata"
>> 2) Rebooted the nodes
>>
>> Still getting the same exception. How can I configure NTP?
>>
>> Regards,
>> Tariq
>>
>> On Thu, Feb 26, 2015 at 5:33 PM, Jan van Bemmelen <[email protected]> wrote:
>>
>>> Could you check for any time differences between your servers?
>>> If so, please install and run NTP, and retry your job.
>>>
>>> Regards,
>>> Jan
>>>
>>> On 26 Feb 2015, at 17:57, [email protected] wrote:
>>>
>>> I am getting "Unauthorized request to start container. This token is
>>> expired." How do I resolve it? The problem is reported on different
>>> forums, but I could not find a solution.
>>>
>>> Below is the execution log:
>>>
>>> 15/02/26 16:41:02 INFO impl.YarnClientImpl: Submitted application application_1424968835929_0001
>>> 15/02/26 16:41:02 INFO mapreduce.Job: The url to track the job: http://101-master15:8088/proxy/application_1424968835929_0001/
>>> 15/02/26 16:41:02 INFO mapreduce.Job: Running job: job_1424968835929_0001
>>> 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 running in uber mode : false
>>> 15/02/26 16:41:04 INFO mapreduce.Job: map 0% reduce 0%
>>> 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 failed with state FAILED due to: Application application_1424968835929_0001 failed 2 times due to Error launching appattempt_1424968835929_0001_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
>>> This token is expired.
>>> current time is 1424969604829 found 1424969463686
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
>>> at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
>>> at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
>>> at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>> . Failing the application.
>>> 15/02/26 16:41:04 INFO mapreduce.Job: Counters: 0
>>> Time taken: 0 days, 0 hours, 0 minutes, 9 seconds.
