Hi,

I've written my own Processor to handle my log format, following this wiki
page, and I've run into a couple of gotchas:
http://wiki.apache.org/hadoop/DemuxModification

1. The default processor is not the TsProcessor as documented, but the
DefaultProcessor (see line 83 of Demux.java). This causes headaches because,
when using DefaultProcessor, data always goes under minute "0" in HDFS,
regardless of when in the hour it was created.

2. When implementing a custom parser as shown in the wiki, how do you
register the class so it gets included in the job that's submitted to the
Hadoop cluster? The only way I've been able to do this is to put my class in
the package org.apache.hadoop.chukwa.extraction.demux.processor.mapper and
then manually add that class to the chukwa-core-0.3.0.jar that is on my
data processor, which is a pretty rough hack. Otherwise, I get
ClassNotFoundExceptions in my mapper.
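
For reference, here's roughly what the processor looks like. The class and
data-type names (MyLogProcessor, "MyLogType") are placeholders, and I'm going
from memory on the exact AbstractProcessor details (buildGenericRecord, the
protected key field), so treat this as a sketch rather than gospel:

package org.apache.hadoop.chukwa.extraction.demux.processor.mapper;

import org.apache.hadoop.chukwa.extraction.engine.ChukwaRecord;
import org.apache.hadoop.chukwa.extraction.engine.ChukwaRecordKey;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MyLogProcessor extends AbstractProcessor {

  @Override
  protected void parse(String recordEntry,
      OutputCollector<ChukwaRecordKey, ChukwaRecord> output,
      Reporter reporter) throws Throwable {
    // The real code parses the timestamp out of the log line;
    // hard-coded here just for the sketch.
    long timestamp = System.currentTimeMillis();

    ChukwaRecord record = new ChukwaRecord();
    // buildGenericRecord() and the protected 'key' field are what I see
    // inherited from AbstractProcessor in 0.3.0 -- double-check the signature.
    buildGenericRecord(record, recordEntry, timestamp, "MyLogType");
    record.add("body", recordEntry);

    output.collect(key, record);
  }
}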

thanks,
Bill
