Hi Amarnatha,
There are a couple of options I can think of.
1. Why don't you just set up a simple daemon to watch hadoop.log and
mirror it to /tmp/myurls.log, e.g. tail -F logs/hadoop.log > /tmp/myurls.log
(-F rather than -f so the tail survives the daily log rotation).
2. Check out conf/log4j.properties; you will see the configuration
for hadoop.log in there. 'Maybe' you can change this location, rebuild your
deployment, and that will solve your issue.
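For option 2, a minimal sketch of the kind of change involved. The appender name and property keys below are assumptions (they vary between Nutch versions), so match them against what is actually in your conf/log4j.properties:

```properties
# Hypothetical sketch -- DRFA and the exact keys are assumptions; check
# your own conf/log4j.properties for the real appender name.
# The file appender normally resolves ${hadoop.log.dir}/${hadoop.log.file};
# pointing it at another path redirects the crawl log there instead.
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=/tmp/myurls.log
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
```

After editing, redeploy (ant runtime) so the changed properties file ends up in the runtime you actually launch bin/crawl from.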
I'm sure there are several other ways as well.
hth
Lewis

On Thu, Sep 6, 2018 at 8:58 AM <user-digest-h...@nutch.apache.org> wrote:

> From: Amarnatha Reddy <polu.a...@gmail.com>
> To: user@nutch.apache.org
> Cc:
> Bcc:
> Date: Tue, 4 Sep 2018 22:13:58 +0530
> Subject: redirect bin/crawl log output to some other file
> Hi All,
>
> We are using the bin/crawl command to crawl and index data into Solr.
> Currently the output is written to the default logs/hadoop.log file, so my
> requirement is: how can I write the log data to a different file?
>
>
> bin/crawl -i -D solr.server.url=http://localhost:8983/solr/jeepkr -s urls/
> crawl/ 1  --> this writes the log details to the default path logs/hadoop.log
>
> How can I set the log path by passing it as part of bin/crawl?
>
> ex: bin/crawl -i -D solr.server.url=http://localhost:8983/solr/jeepkr -s
> urls/ crawl/ 1  >/tmp/myurls.log
> --
>
>
