Re: hung AM due to timeline timeout

2016-08-09 Thread Slava Markeyev
I do depend on the Tez UI. I did move to the rolling level db store, which
helped with failures, but it was still slow to empty the queue after the dag
finished. I'm testing increasing tez.yarn.ats.max.events.per.batch and
seeing some positive results from that.
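As a sketch, that knob is a tez-site.xml property; the value below is illustrative only, not a tested recommendation (the default is small, 5 as of Tez 0.7):

```xml
<!-- tez-site.xml: publish more ATS events per batch when draining the queue -->
<property>
  <name>tez.yarn.ats.max.events.per.batch</name>
  <value>500</value>
</property>
```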

-Slava

On Sun, Aug 7, 2016 at 3:50 PM, Bikas Saha  wrote:

> @Hitesh,  Is there an option to limit the time spent clearing the backlog?
>
> @Slava - If you are not using the Tez UI then perhaps you can disable
> timeline integration or you could use an option, if such exists per
> question above, to limit the final sync into ATS.
>
> Bikas
>
> > Subject: Re: hung AM due to timeline timeout
> > From: hit...@apache.org
> > Date: Wed, 3 Aug 2016 19:10:53 -0700
> > To: user@tez.apache.org
>
> >
> > It might be worth filing a YARN jira to get it backported to 2.6.x and
> 2.7.x. At the very least, it will simplify rebuilding the timeline-server
> jar against the CDH version that you are running.
> >
> > — Hitesh
> >
> > > On Aug 3, 2016, at 4:42 PM, Slava Markeyev 
> wrote:
> > >
> > > Thanks for the info Hitesh. Unfortunately it seems that RollingLevelDB
> is only in trunk. I may have to backport it to 2.6.2 (version I use). I did
> notice that the leveldb does grow to tens of GB which may be an indication
> of pruning not happening often enough (or at all?). I also need to fix the
> logging as the logs for the timeline server don't seem to be very active
> beyond it starting up.
> > >
> > > For the job I posted before, here is the associated eventQueueBacklog
> log line:
> > > 2016-08-03 19:23:27,932 [INFO] [AMShutdownThread]
> |ats.ATSHistoryLoggingService|: Stopping ATSService,
> eventQueueBacklog=17553
> > > I'll look into lowering tez.yarn.ats.event.flush.timeout.millis while
> trying to look into the timelineserver.
> > >
> > > Thanks for your help,
> > > Slava
> > >
> > > On Wed, Aug 3, 2016 at 2:45 PM, Hitesh Shah  wrote:
> > > Hello Slava,
> > >
> > > Can you check for a log line along the lines of "Stopping ATSService,
> eventQueueBacklog=" to see how backed up the event queue to YARN
> timeline is?
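
As an illustration (standard shell tools, not a Tez feature), the backlog count can be pulled out of a shutdown line of that form:

```shell
# Sample AM shutdown line (format as quoted in this thread);
# extract the eventQueueBacklog count from it.
line='2016-08-03 19:23:27,932 [INFO] [AMShutdownThread] |ats.ATSHistoryLoggingService|: Stopping ATSService, eventQueueBacklog=17553'
echo "$line" | grep -o 'eventQueueBacklog=[0-9]*' | cut -d= -f2
# prints: 17553
```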
> > >
> > > I have noticed this in quite a few installs with YARN Timeline where
> YARN Timeline is using the simple Level DB impl and not the RollingLevelDB
> storage class. The YARN timeline ends up hitting some bottlenecks around
> the time when the data purging happens ( takes a global lock on level db ).
> The Rolling level db storage impl solved this problem by using separate
> level dbs for different time intervals and just throwing out the whole level
> db instead of trying to do a full scan+purge.
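
For reference, switching stores is a yarn-site.xml change along these lines (class name as it appears in Hadoop trunk at this time; verify it exists in your build before using it):

```xml
<!-- yarn-site.xml: use the rolling level db store instead of the simple one -->
<property>
  <name>yarn.timeline-service.store-class</name>
  <value>org.apache.hadoop.yarn.server.timeline.RollingLevelDBTimelineStore</value>
</property>
```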
> > >
> > > Another workaround, though not a great one, is to set
> “tez.yarn.ats.event.flush.timeout.millis” to a value of, say, 60000, i.e. 1
> min. This implies that the Tez AM will try for at most 1 min to flush the
> queue to YARN timeline before giving up and shutting down the Tez AM.
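
As a concrete sketch of that workaround in tez-site.xml:

```xml
<!-- tez-site.xml: stop flushing events to ATS after 60s instead of retrying indefinitely -->
<property>
  <name>tez.yarn.ats.event.flush.timeout.millis</name>
  <value>60000</value>
</property>
```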
> > >
> > > A longer term option is the YARN Timeline version 1.5 work currently
> slated to be released in hadoop 2.8.0 which uses HDFS for writes instead of
> the current web service based approach. This has far better write
> throughput, albeit with a delay on the read path as the Timeline
> server scans HDFS for new updates. The tez changes for this are already
> available in the source code under the hadoop28 profile though the
> documentation for this is still pending.
> > >
> > > thanks
> > > — Hitesh
> > >
> > >
> > >
> > >
> > >
> > > > On Aug 3, 2016, at 2:02 PM, Slava Markeyev <
> slava.marke...@upsight.com> wrote:
> > > >
> > > > I'm running into an issue that occurs fairly often (but not
> consistently reproducible) where yarn reports a negative value for memory
> allocation eg (-2048) and a 0 vcore allocation despite the AM actually
> running. For example, the AM reports a runtime of 1hr, 29min, 40sec while
> the dag ran only 880 seconds.
> > > >
> > > > After some investigating I've noticed that the AM has repeated
> issues contacting the timeline server after the dag is complete (error
> trace below). This seems to be delaying the shutdown sequence. It seems to
> retry every minute before either giving up or succeeding, but I'm not sure
> which. What's the best way to debug why this is happening and
> potentially shorten the timeout retry period, as I'm more concerned with
> job completion than with logging to the timeline server? This doesn't seem
> to be happening consistently to all tez jobs, only some.
> > > >
> > > > I'm using hive 1.1.0 and tez 0.7.1 on cdh5.4.10 (hadoop 2.6).
> > > >
> > > > 2016-08-03 19:18:22,881 [INFO] [ContainerLauncher #112] |impl.
> ContainerManagementProtocolProxy|: Opening proxy : node:45454
> > > > 2016-08-03 19:18:23,292 [WARN] [HistoryEventHandlingThread]
> |security.UserGroupInformation|: PriviledgedActionException as:x
> (auth:SIMPLE) cause:java.net.SocketTimeoutException: Read timed out
> > > > 2016-08-03 19:18:23,292 [ERROR] 

Re: Some resource about tez architecture and design document

2016-08-09 Thread Hitesh Shah
The following 2 links should help you get started. It might be best to start with
the SIGMOD paper and one of the earlier videos.

https://cwiki.apache.org/confluence/display/TEZ/How+to+Contribute+to+Tez
https://cwiki.apache.org/confluence/display/TEZ/Presentations%2C+publications%2C+and+articles+about+Tez

thanks
— Hitesh

> On Aug 9, 2016, at 8:03 AM, darion.yaphet  wrote:
> 
> Hi team:
> I'm a beginner learning the Tez source code. Are there any resources on Tez
> architecture or design documents to introduce it?
> thanks
> 



Some resource about tez architecture and design document

2016-08-09 Thread darion.yaphet
Hi team:
I'm a beginner learning the Tez source code. Are there any resources on Tez
architecture or design documents to introduce it?
thanks