Just to let you know: we found a small flaw in our code. Specifically, the
compression part seems to be causing the issue.

Thank you for helping me to troubleshoot the problem.

2011/6/24 Patricio Echagüe <[email protected]>

> Paul, the trace is from the task log at
>
> logs/hadoop/userlogs/<job>/.....
>
> Sean: I'm investigating. I don't rule out that it could be Brisk-related, though.
>
> All I'm running is: mahout
> org.apache.mahout.clustering.syntheticcontrol.canopy.Job
>
>
> On Fri, Jun 24, 2011 at 11:54 AM, Sean Owen <[email protected]> wrote:
>
>> OK, well, what input/output is it reading from? It's either missing, isn't
>> the right format, is corrupted, or maybe it's the _SUCCESS thing again.
>> It's got to be something like that. I don't know -- I can't see what you're
>> doing from here. But it would be good to know if you spy a problem.
>>
>> 2011/6/24 Patricio Echagüe <[email protected]>
>>
>> > I see. Unfortunately all the trace I see is the one I pasted.
>> >
>> > It looks like it is writing to a local file when it fails.
>>
>
>
