1) It goes to the task log, which you can access through the Hadoop web UI.
It does not go to your client console (unless your println is in the
constructor or one of the few other methods that are invoked on the client
side to set things up or check for correctness).

2) It's always a DataBag, and you do need to cast it as such. I wrote this
blog post to explain a bit what the group schema looks like and what you can
and can't do with it:
http://squarecog.wordpress.com/2010/05/11/group-operator-in-apache-pig/
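To illustrate both points, here is a minimal sketch of such a UDF. It assumes
you call it on the output of a GROUP, so the second field of the input tuple
is the bag of grouped tuples; the class name CountBag is made up for
illustration, and the sketch needs the Pig jars on the classpath to compile.

```java
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

// Hypothetical UDF: counts the tuples in the bag that GROUP produced.
public class CountBag extends EvalFunc<Long> {
    @Override
    public Long exec(Tuple input) throws IOException {
        if (input == null || input.size() < 2) {
            return null;
        }
        // This goes to the Hadoop task log, not your client console.
        System.err.println("CountBag called on: " + input);

        // The second column of a GROUP result is always a DataBag,
        // so the cast is safe in that context.
        DataBag bag = (DataBag) input.get(1);

        long count = 0;
        for (Tuple t : bag) {  // DataBag is Iterable<Tuple>
            count++;
        }
        return count;
    }
}
```

You would then invoke it in Pig Latin with something like
`FOREACH grouped GENERATE group, CountBag(values);` (relation and field
names here are hypothetical).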

On Sun, Oct 9, 2011 at 8:03 PM, Walter Chang <[email protected]> wrote:

> Hi,
>
> I have two questions when using udf:
> 1. If I do System.err.println, where does it print to? Is there a log
> file?
>
> 2. If I do a GROUP, what's the type of the second column of the generated
> table? Is it always a DataBag? So if I use it in my UDF, I need to cast it
> back to a DataBag and use an iterator to access each tuple, is that correct?
>
> Thanks a lot,
>
> Weide
>
