Hi Robert

I actually mean both: scenarios where multiple jobs are running on the
cluster, and where the same job could be running on multiple task managers.
How can we make sure that each job logs to a different file, so that logs
are not mixed and it's easy to debug a particular job?  Something like
Hadoop YARN, where each attempt of a task produces a separate log file.
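To illustrate the kind of separation I mean, here is a minimal, hypothetical sketch (not Flink-specific; it uses only `java.util.logging`) where each job gets its own logger backed by its own file handler, so lines from different jobs land in different files:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Sketch only: "jobId" is an assumed per-job identifier (e.g. a job name
// or Flink job ID). Each job's logger writes to its own file and does not
// forward records to the shared root handlers.
public class PerJobLogging {

    static Logger loggerForJob(String jobId) throws IOException {
        Logger logger = Logger.getLogger("job." + jobId);
        FileHandler handler = new FileHandler("job-" + jobId + ".log");
        handler.setFormatter(new SimpleFormatter());
        logger.addHandler(handler);
        logger.setUseParentHandlers(false); // keep lines out of the shared log
        return logger;
    }

    public static void main(String[] args) throws IOException {
        Logger a = loggerForJob("jobA");
        Logger b = loggerForJob("jobB");
        a.info("record processed by job A"); // goes to job-jobA.log only
        b.info("record processed by job B"); // goes to job-jobB.log only
    }
}
```

In a real deployment the same effect is usually achieved through the logging framework's configuration (e.g. routing on a per-job context value) rather than creating handlers by hand; the sketch just shows the one-file-per-job idea.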

Regards
Sumit Chawla


On Thu, Jul 14, 2016 at 6:11 AM, Robert Metzger <rmetz...@apache.org> wrote:

> Hi Sumit,
>
> What exactly do you mean by pipeline?
> Are you talking about cases where multiple jobs are running concurrently on
> the same TaskManager, or are you referring to parallel instances of a Flink
> job?
>
> On Wed, Jul 13, 2016 at 9:49 PM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
>
>> Hi All
>>
>> Does Flink provide any ability to streamline logs being generated from a
>> pipeline?  How can we keep the logs from two pipelines separate so that
>> it's easy to debug the pipeline execution (something dynamic to
>> automatically partition the logs per pipeline)?
>> Regards
>> Sumit Chawla
>>
>>
>