I would recommend a small daemon that tails the existing Squid log 
file and pumps the data into the database of your choice.

The File::Tail Perl module is a good fit for this job. It is a kind 
of tail -f on steroids.
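To illustrate the idea (File::Tail itself is Perl, so this is a rough Python sketch, not the module's API): follow the log file like tail -f, parse each native-format access.log line into (client_ip, url), and store it. The log path and the use of Python's dbm module (which can sit on top of Berkeley DB or GDBM) are assumptions for the example.

```python
import dbm
import os
import time

def follow(path, interval=1.0):
    """Yield lines appended to path, like tail -f.
    Sketch only: does not handle log rotation."""
    f = open(path)
    f.seek(0, os.SEEK_END)  # start at end of file, like tail -f
    while True:
        line = f.readline()
        if not line:
            time.sleep(interval)  # wait for Squid to write more
            continue
        yield line

def parse_access_line(line):
    """Extract (client_ip, url) from a native-format Squid access.log line.
    Native format: time elapsed client code/status bytes method URL ..."""
    fields = line.split()
    if len(fields) < 7:
        return None
    return fields[2], fields[6]

def run(log_path="/var/log/squid/access.log", db_path="traffic.db"):
    # Assumed paths; adjust to your setup.
    with dbm.open(db_path, "c") as db:
        for line in follow(log_path):
            parsed = parse_access_line(line)
            if parsed:
                client, url = parsed
                db[f"{client} {url}".encode()] = b"1"
```

The same structure carries over directly to File::Tail: its read() call blocks until a new line arrives, replacing the follow() loop above.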

Regards
Henrik

On Thursday 05 June 2003 18.21, Lucas Brasilino wrote:
> Hi All:
>
>       Maybe I'm crossposting but anyway....
> I'm intending to develop an application to store all
> users' traffic in a database, to make it easier to create reports.
> I'm considering two solutions:
>
> 1) Create a daemon which will read all of Squid's logs through a
> pipe in /dev, configure this pipe as Squid's log file, and store
> users' IPs and URLs in the database, or
>
> 2) Create a daemon which will connect to Squid's port, send a
> "GET cache_object://hostname/filedescriptors", and store only the
> Description fields whose content matches "^http://"
> (URLs).
>
>       I'm wondering which would be the better way to accomplish
> this. Suggestions? :)
>
> PS: as a database I'll use Berkeley DB or GDBM, which are MUCH
> lighter.
>
>
> thanks in advance

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]
