Yes, both of those things, and maybe a bit more explanation of how they were implemented for Hive/Pig. Also, the workflow.workflowcontext column looks like a blob of JSON which I guess ends up in some model in the web app? But how is it constructed? (the regex code in MapReduceJobHistoryUpdater.java is not exactly straightforward :) )
thanks
A
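P.S. from what I can tell so far, Hive and Pig don't build that JSON themselves -- they just set a handful of per-job conf properties (mapreduce.workflow.id, mapreduce.workflow.name, mapreduce.workflow.node.name, plus one mapreduce.workflow.adjacency.<node> entry per DAG edge), and MapReduceJobHistoryUpdater's regex reassembles those into the workflowcontext JSON. So I'm guessing a custom process would hook in with something like the sketch below (the property names are just my reading of the Hadoop 1 code, and the ids/values are made up -- please correct me if I'm off):

    import org.apache.hadoop.mapred.JobConf;

    public class WorkflowTagging {
        // Tag a job's conf so the JobHistory log carries workflow info.
        public static JobConf tagForJobsView(JobConf conf) {
            // one workflow id shared by every MR job in the DAG
            conf.set("mapreduce.workflow.id", "myapp_20140129_0001");
            conf.set("mapreduce.workflow.name", "my-workflow");
            // the DAG node this particular MR job corresponds to
            conf.set("mapreduce.workflow.node.name", "stage-1");
            // one adjacency entry per outgoing edge: stage-1 -> stage-2
            conf.set("mapreduce.workflow.adjacency.stage-1", "stage-2");
            return conf;
        }
    }

which (if I'm reading the updater right) would end up in workflow.workflowcontext shaped something like:

    {"workflowId":"myapp_20140129_0001",
     "workflowName":"my-workflow",
     "workflowEntityName":"stage-1",
     "workflowDag":{"entries":[{"source":"stage-1","targets":["stage-2"]}]}}

Is that roughly right?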
From: Billie Rinaldi <[email protected]>
Reply-To: <[email protected]>
Date: Wed, 29 Jan 2014 07:03:32 -0800
To: <[email protected]>
Subject: Re: Jobs view .. how to hook into it....

Sure. Which part is confusing? The adjacencies? Or why you would use it at all?

On Tue, Jan 28, 2014 at 4:47 PM, Aaron Cody <[email protected]> wrote:
> thanks Billie - do you think you could go into a little more detail about the
> workflow DAG stuff on the wiki? it's a little cryptic (to me anyway) :)
>
> From: Billie Rinaldi <[email protected]>
> Reply-To: <[email protected]>
> Date: Mon, 20 Jan 2014 07:40:04 -0800
> To: <[email protected]>
> Subject: Re: Jobs view .. how to hook into it....
>
> In Hadoop 1 only, there is a log4j appender on the JobTracker/JobHistory that
> inserts the data into postgres (or whichever db you have configured). The
> code is in contrib/ambari-log4j.
>
> Billie
>
>
> On Fri, Jan 17, 2014 at 1:59 PM, Aaron Cody <[email protected]> wrote:
>> hello
>> I'm looking at integrating my own process into the Ambari Jobs view and I
>> can see how the web side of things works .. i.e. the view makes REST calls to
>> the server which in turn results in a query to postgres to get the job stats.
>> But what is not so clear is how those job/task stats get into postgres in
>> the first place.
>> Q: for example, with MapReduce .. is Hadoop/JobTracker somehow inserting the
>> job/task info into postgres directly? Or is there some other mechanism in
>> Ambari that is listening for map reduce jobs/tasks to start/finish?
>>
>> any hints on where to look in the source tree would be greatly appreciated
>> TIA
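P.P.S. for anyone else digging into this later: the Hadoop 1 hook Billie describes is wired up in the JobTracker's log4j.properties. The sketch below is my best recollection of that wiring -- the appender class and property names should be verified against contrib/ambari-log4j for your version, and the values are placeholders:

    # placeholder values -- point these at your own RCA database
    ambari.jobhistory.database=jdbc:postgresql://localhost:5432/ambarirca
    ambari.jobhistory.driver=org.postgresql.Driver
    ambari.jobhistory.user=mapred
    ambari.jobhistory.password=mapred
    ambari.jobhistory.logger=DEBUG,JHA

    # the appender from contrib/ambari-log4j that writes job/task events to the db
    log4j.appender.JHA=org.apache.ambari.log4j.hadoop.mapreduce.jobhistory.JobHistoryAppender
    log4j.appender.JHA.database=${ambari.jobhistory.database}
    log4j.appender.JHA.driver=${ambari.jobhistory.driver}
    log4j.appender.JHA.user=${ambari.jobhistory.user}
    log4j.appender.JHA.password=${ambari.jobhistory.password}

    # route the JobTracker's JobHistory logger through the appender
    log4j.logger.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=${ambari.jobhistory.logger}
    log4j.additivity.org.apache.hadoop.mapred.JobHistory$JobHistoryLogger=false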
