Team,

The problem was a blocked port between YARN and the NodeManager.

After opening the port so the two could communicate, the job ran smoothly.
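
For anyone hitting the same symptom: a quick way to confirm this kind of
blockage is to check that the NodeManager hosts and the ResourceManager can
reach each other on the relevant ports. A rough sketch with generic tools
(host names are placeholders, and 8031 / 8042 are only the stock Hadoop
defaults for the resource-tracker and NodeManager web ports, so check
yarn-site.xml for the real values):

  nc -zv <resourcemanager-host> 8031   # from a NodeManager host: RM resource tracker
  nc -zv <nodemanager-host> 8042       # from the RM host: NodeManager web UI

If either connection is refused or times out, a firewall or security-group
rule between the hosts is the usual suspect.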


-Sankara Telukutla

On Friday, June 5, 2015, Xu, Qian A <[email protected]> wrote:

> I'm a bit late jumping into this thread.
>
> When you start a job with `start job -j 1 -s`, you can see its progress on
> the screen, along with a link to the job on YARN. You will find more
> valuable information at that link. If a job blocks at the very beginning, it
> is usually either an exception thrown while extracting individual records
> (which can take quite a long time to surface) or a YARN problem, which is
> your case.
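>
> For reference, the same details can also be pulled with the stock yarn CLI
> instead of the web UI. A rough sketch, using the application ID from the run
> quoted further down (the logs command assumes log aggregation is enabled):
>
>   yarn application -status application_1433186285881_0007
>   yarn logs -applicationId application_1433186285881_0007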
>
> Thanks
> Qian Xu (Stanley)
>
>
> -----Original Message-----
> From: SHANKAR REDDY [mailto:[email protected]]
> Sent: Wednesday, June 03, 2015 3:12 AM
> To: [email protected]
> Subject: Re: Sqoop2 job hung for long time
>
> This clearly looks like a problem with YARN, which is unable to start even
> the sample word-count example.
>
>
>
> Kind Regards,
> Sankara Telukutla
>
>
> On Tue, Jun 2, 2015 at 11:13 AM, SHANKAR REDDY <
> [email protected]>
> wrote:
>
> > Hi Richard,
> > I have attached the logs from YARN and Sqoop2.
> >
> > After recycling the cluster I see that new log files are created. The
> > attached logs are fresh ones taken after the server restart.
> >
> > In most places I see the error below in the logs.
> >
> > Caused by: java.lang.UnsupportedOperationException: Usage threshold is
> > not supported
> >
> > Please suggest.
> >
> >
> > Kind Regards,
> > Sankara Telukutla
> >
> >
> > On Tue, Jun 2, 2015 at 12:27 AM, Zhou, Richard
> > <[email protected]>
> > wrote:
> >
> >> Hey, it seems that the resource manager cannot create a container.
> >> Regarding the log files in
> >>
> >> /var/log/hadoop-yarn
> >> /var/log/sqoop2
> >>
> >> you should not delete the log files manually. CDH appends to an existing
> >> log file, but if there is no log file in the folder, CDH will not create
> >> a new one.
> >> You will find the error message in CM (cluster -> yarn -> resource
> >> manager -> log file link in the summary tab).
> >> You need to touch a new log file, change its owner to yarn, and change
> >> its mode to 644.
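> >>
> >> For example, for the file shown in the listing below (same steps, just
> >> spelled out; substitute whatever log file name CM expects on your host):
> >>
> >>   touch /var/log/hadoop-yarn/hadoop-cmf-yarn2-RESOURCEMANAGER-server-654.novalocal.log.out
> >>   chown yarn:yarn /var/log/hadoop-yarn/hadoop-cmf-yarn2-RESOURCEMANAGER-server-654.novalocal.log.out
> >>   chmod 644 /var/log/hadoop-yarn/hadoop-cmf-yarn2-RESOURCEMANAGER-server-654.novalocal.log.out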
> >>
> >> [root@server-654 richard]# ll /var/log/hadoop-yarn/hadoop-cmf-yarn2-RESOURCEMANAGER-server-654.novalocal.log.out
> >> -rw-r--r-- 1 yarn yarn 54243 Jun  2 14:42 /var/log/hadoop-yarn/hadoop-cmf-yarn2-RESOURCEMANAGER-server-654.novalocal.log.out
> >>
> >>
> >> Regards
> >> Richard
> >>
> >>
> >> -----Original Message-----
> >> From: SHANKAR REDDY [mailto:[email protected]]
> >> Sent: Tuesday, June 02, 2015 12:51 PM
> >> To: [email protected]
> >> Subject: Re: Sqoop2 job hung for long time
> >>
> >> Hi Richard,
> >> Please see below.
> >>
> >> sqoop:000> set option -name verbose -value true
> >> Verbose option was changed to true
> >> sqoop:000>
> >> sqoop:000> start job -jid 2 -s
> >> Submission details
> >> Job ID: 2
> >> Server URL: http://localhost:12000/sqoop/
> >> Created by: ubuntu
> >> Creation date: 2015-06-02 04:41:56 UTC
> >> Lastly updated by: ubuntu
> >> External ID: job_1433186285881_0007
> >>   http://ip-172-31-1-201.us-west-1.compute.internal:8088/proxy/application_1433186285881_0007/
> >> Source Connector schema: Schema{name=clp_sandbox.HADOOP_TEST,columns=[
> >>   FixedPoint{name=SERIAL_NO,nullable=true,type=FIXED_POINT,byteSize=4,signed=true},
> >>   FixedPoint{name=EMPLOYEE_ID,nullable=true,type=FIXED_POINT,byteSize=4,signed=true},
> >>   Text{name=NAME,nullable=true,type=TEXT,charSize=null}]}
> >> 2015-06-02 04:41:56 UTC: BOOTING  - Progress is not available
> >> 2015-06-02 04:42:07 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:42:17 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:42:27 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:42:37 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:42:47 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:42:57 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:43:07 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:43:17 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:43:27 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:43:37 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:43:47 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:43:57 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:44:07 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:44:17 UTC: BOOTING  - 0.00 %
> >> 2015-06-02 04:44:27 UTC: BOOTING  - 0.00 %
> >>
> >>
> >> ..
> >> I cleaned up the logs before doing the above steps and found that no new
> >> logs are created in the locations below.
> >>
> >> /var/log/hadoop-yarn
> >> /var/log/sqoop2
> >>
> >>
> >> Is there anything I can verify?
> >>
> >> Kind Regards,
> >> Sankara Telukutla
> >>
> >>
> >> On Mon, Jun 1, 2015 at 8:10 PM, Zhou, Richard
> >> <[email protected]>
> >> wrote:
> >>
> >> > Hey, would you send out the logs, including the Sqoop log and the YARN
> >> > log? They should be in /var/log/ if Cloudera 5.4.1 is installed.
> >> > Also run the command "set option -name verbose -value true" to enable
> >> > verbose output, then re-run the job with "start job -jid 2 -s" to show
> >> > more information.
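> >> >
> >> > Since the application is sitting in ACCEPTED, it is also worth checking
> >> > whether the ResourceManager actually sees any NodeManager capacity. A
> >> > rough sketch (8088 is simply the default RM web port, as in the link in
> >> > your mail):
> >> >
> >> >   yarn node -list
> >> >   curl http://ec2-52-8-94-128.us-west-1.compute.amazonaws.com:8088/ws/v1/cluster/metrics
> >> >
> >> > activeNodes should be non-zero and availableMB large enough for the
> >> > MapReduce AM container; otherwise the NodeManagers are not registering
> >> > with the ResourceManager.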
> >> >
> >> >
> >> > Regards
> >> > Richard
> >> >
> >> > -----Original Message-----
> >> > From: SHANKAR REDDY [mailto:[email protected]]
> >> > Sent: Tuesday, June 02, 2015 9:16 AM
> >> > To: [email protected]
> >> > Subject: Sqoop2 job hung for long time
> >> >
> >> > Team,
> >> >
> >> > I have a Sqoop2 job that transfers data from MySQL to HDFS. The job I
> >> > started has not been responding for a long time, and it seems there is
> >> > a problem with YARN, which is unable to pick it up. Could you please
> >> > help me rectify this problem?
> >> >
> >> > Version details:
> >> > SQOOP2: 1.99.5
> >> > Cloudera: 5.4.1
> >> >
> >> > application_1433186285881_0004
> >> > <http://ec2-52-8-94-128.us-west-1.compute.amazonaws.com:8088/cluster/app/application_1433186285881_0004>
> >> > User: sqoop2, Name: Sqoop: Test Job-copy, Type: MAPREDUCE,
> >> > Queue: root.sqoop2, StartTime: Mon Jun 1 17:18:39 -0700 2015,
> >> > FinishTime: N/A, State: ACCEPTED, FinalStatus: UNDEFINED,
> >> > Tracking UI: UNASSIGNED
> >> > <http://ec2-52-8-94-128.us-west-1.compute.amazonaws.com:8088/cluster/#>
> >> >
> >> >
> >> > And the job information:
> >> > sqoop:000> show job -jid 2
> >> > 1 job(s) to show:
> >> > Job with id 2 and name Test Job-copy (Enabled: true, Created by null at
> >> > 5/21/15 8:45 AM, Updated by null at 6/2/15 12:18 AM)
> >> > Using link id 1 and Connector id 4
> >> >   From database configuration
> >> >     Schema name: clp_sandbox
> >> >     Table name: HADOOP_TEST
> >> >     Table SQL statement:
> >> >     Table column names:
> >> >     Partition column name: EMPLOYEE_ID
> >> >     Null value allowed for the partition column: true
> >> >     Boundary query:
> >> >   Throttling resources
> >> >     Extractors: 2
> >> >     Loaders: 1
> >> >   ToJob configuration
> >> >     Override null value: false
> >> >     Null value:
> >> >     Output format: SEQUENCE_FILE
> >> >     Compression format: NONE
> >> >     Custom compression format:
> >> >     Output directory: /hadooptest
> >> >
> >> >
> >> >
> >> > Let me know if there is any other information I can provide.
> >> >
> >> >
> >> >
> >> > -Shankar
> >> >
> >>
> >
> >
>


-- 
Regards,
Sankara Reddy Telukutla
