Re: [squid-users] how to send one information or warning page to the every authenticated client

2005-02-15 Thread OTR Comm
Hello,

I can send Login Alerts to all users, Special Messages to all users,
Group Messages to all members of a given group and Personal Messages to
individuals.  The Login Alerts appear to all users when they login to
the system, and they only appear once.

Special Messages appear both to people when they login and to users who
are already logged in the next time they submit a GET to Squid.  That
is, as soon as the Squid administrator posts the Special Message, all
users who are currently logged into the system will receive the message
at their next GET, but users who are not logged on will receive the
message when they do logon.

Group Messages only appear to members of a particular group when they
login, or after the next GET if they are already logged on.  The same
as for Special Messages, but targeted to members of a specific group.

Personal Messages are directed to a specific user, and that user will
receive the personal message at the next GET if they are logged on when
the administrator posts the message, or when they do login if they are
not logged in when the message is posted.

How this all works is really off topic for Squid, but I would be more
than willing to communicate directly with you about how my system
works.  I developed all the code, and it is OpenSource, so drop me a
note at [EMAIL PROTECTED] if you are interested.  I can set you up
with an account in my system and demonstrate how it works.

I use Squid 2.5, Linux, Perl, Customized SquidGuard and MySQL.  I
authenticate through MySQL.  Not really close to your configuration, but
the paradigm is portable.

Murrah Boswell

Srinivasa Chary wrote:
 
 Hi All,
 
   How can I send one informational default HTML page to each client when he
 tries to access the internet through the Squid proxy by giving a username
 and password in any browser?
   Actually I want to send a warning message to clients who are
 authenticated by a username and password to access the internet.
 
 I am using squid 2.5 and samba 3 and Windows 2003 for authentication.
 
 Thanks in advance.
 
 Regards,
 M.Srinivasa Chary
 Telecommunication Engineer
 Infotech Divison
 National Telephone Services
 GSM: +968 9263127.


[squid-users] Re: Correction: [squid-users] how to send one information or warning page to the every authenticated client

2005-02-15 Thread OTR Comm
I use Standard C, not Perl in this system.

Murrah Boswell




[squid-users] PushCache Patch: store_swapout.c question

2004-12-13 Thread OTR Comm
Hello,

I have asked this question before, but I still can't figure it out!

When a file is written to cache, where does Squid figure out which cache
directory to store the file in?

I see in store_swapout.c where e->swap_filen and e->swap_dirn are
assigned, but if I write these out with a debug statement, I get:

snip
dirno 0, fileno 
snip

and on my system, this file went to:

/usr/local/squid/var/cache/00/00/

what I need to know is where does squid generate the '00/00' part of
the path?  And how does 'dirno 0' enter into this calculation?

What I am trying to do is make a record in MySQL of each file pushed to
the server so I can go back to each of the cache entries by path and
filename, and work with the specific file.
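
For reference, later messages in this archive pin the mapping down: the
'00/00' pair is computed from swap_filen and the cache_dir L1/L2
parameters.  A minimal C sketch of that mapping (not actual Squid source;
the defaults L1=16, L2=256 are assumed):

#include <stdio.h>

/* Sketch of the ufs path mapping: swap file number -> "L1dir/L2dir/file".
 * L1 and L2 are the cache_dir parameters from squid.conf. */
static void swap_path(char *buf, size_t len, int filn, int L1, int L2)
{
    snprintf(buf, len, "%02X/%02X/%08X",
             ((filn / L2) / L2) % L1,   /* first-level directory */
             (filn / L2) % L2,          /* second-level subdirectory */
             filn);                     /* file name */
}

int main(void)
{
    char buf[64];
    swap_path(buf, sizeof buf, 0, 16, 256);
    printf("%s\n", buf);   /* file number 0 maps to 00/00/00000000 */
    return 0;
}

With dirno 0 and a low file number everything lands under 00/00, which
matches the path above.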

Thanks,
Murrah Boswell


Re: [squid-users] store.log, store_log.c, storeLog() question

2004-11-02 Thread OTR Comm

 Which version of Squid is this based on?

2.5.STABLE5-CVS

Thanks,
Murrah Boswell


Re: [squid-users] store.log, store_log.c, storeLog() question

2004-11-01 Thread OTR Comm

 
 See fs/ufs/store_dir_ufs.c

I have store_dir.c in the pushcache patch, but not store_dir_ufs.c.

 
  I see in storeLog() where e->swap_dirn and e->swap_filen are written to
  store.log, but e->swap_dirn doesn't give me the directory/subdirectory.
 
 swap_filen gives you the cache file number, which is directly mapped to
 directory/subdirectory by the L1 / L2 parameters of the ufs/aufs/diskd
 store implementations.

I don't see where in store_dir.c squid determines which cache directory
and subdirectory to store a given cache file in.

Could you help me a little more here?

Thanks,
Murrah Boswell


[squid-users] store.log, store_log.c, storeLog() question

2004-10-29 Thread OTR Comm
Hello,

Where is it that squid determines the cache directory, subdirectory, and
filename to store a cache file?

I see in storeLog() where e->swap_dirn and e->swap_filen are written to
store.log, but e->swap_dirn doesn't give me the directory/subdirectory.

I guess it is in either store.c, store_client.c, store_swapout.c, or
store_dir.c, but I can't find where.  Could someone help me with this
please?

I need to know the path from '/var/cache' to any given cache file for
another system that I am working on that interfaces to the pushcache
patch for squid.  I would like to be able to get at this from within
store_log.c if possible, but need more info than e->swap_dirn gives.  Is
there possibly some way to get at var/cache/directory/subdirectory in
storeLog()?

Thanks,
Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-08 Thread OTR Comm
Hello,


 No headers at all, just a "HTTP/1.0 200 OK" line?

That's right!

  Does this mean that squid (the pushcache patched version) is ignoring my
  Proxy-Connection: Keep-Alive header?
 
 Yes, and in addition not being very friendly about it..

Oh well, looks like I have some coding to do to make it a little more
sociable.

Where in the true squid code (version 2.5) does the header information
get returned?  That is, what modules in the pushcache patch do I need to
start looking in?  Are http.c, client_side.c, and HttpHeader.c good
places to start?

Thanks,
Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-08 Thread OTR Comm
Hello,

 The places in Squid generating HTTP replies (which is somewhat different
 from relaying an upstream reply) are:
 
 ftp.c
 gopher.c
 errorpage.c
 cachemgr.c
 internal.c
 
 All use httpReplySetHeaders() to initiate the HTTP reply headers, and
 then add/delete what is required from there..

Okay!

The PushCache patched version never calls httpReplySetHeaders().

I don't understand something (lots actually).  What is the actual switch
that tells squid to keep the connection alive?  That is, how does
sending a reply header trigger this keep alive state?

The PushCache version does call clientProcessRequest() (client_side.c),
but it gets caught by:

snip
} else if (r->method == METHOD_PUT) {
snip

that must not have the logic to tell squid to keep the connection
alive.  If I knew how squid actually keeps the connection alive, then I
could include it here.

One thing I just noticed in the PushCache patch is that I call
comm_add_close_handler(), but I don't call httpReadReply() when I am
pushing files.  Does httpReadReply() have something to do with keeping
the connection alive?  I do call clientReadRequest() on FD 12, but then
I call comm_add_close_handler() on FD 12 without calling
httpReadReply().

When I am doing GETs, both clientReadRequest() and httpReadReply() get
called on different FDs.

BTW, the first and last FD always seems to be FD 12.  This is true with
GETs and pushes (i.e., PUTs).  Also, the last thing a GET session does
is clientReadRequest() on FD 12, and always comes back with:

no data to process ((11) Resource temporarily unavailable)

I never get this with the pushes!


Thanks,
Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-08 Thread OTR Comm

 It depends, but for the connection to be able to be kept persistent in
 HTTP/1.0 the following criteria need to be fulfilled:
 
 a) The reply-body must have a known length, as indicated by the
 Content-length header.

Would I have a reply-body if I am pushing?  I think I understand this
with a GET request, but I don't see how it applies to a PUT directive to
the server.

 
 b) The client must have requested a persistent connection via the
 (Proxy-)Connection: keep-alive header.

Got this!

 
 c) The server must acknowledge the request to keep the connection
 persistent by answering with a (Proxy-)Connection: keep-alive header.

Where and how does the server acknowledge this?
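
To illustrate (hypothetical host; the length is chosen to match the body
exactly), a persistent HTTP/1.0 exchange would look like this, with the
server's acknowledgement in (c) being the keep-alive header it sends back:

PUT http://example.host/object HTTP/1.0
Content-Type: text/html
Content-Length: 13
Proxy-Connection: keep-alive

<html></html>

HTTP/1.0 200 OK
Content-Length: 0
Proxy-Connection: keep-alive

The reply-body in (a) refers to the server's response; a 200 with no body
still satisfies the known-length requirement via Content-Length: 0.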

Thanks,
Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-07 Thread OTR Comm
Hello,

Henrik Nordstrom wrote:
 
 On Wed, 6 Oct 2004, OTR Comm wrote:
 
  I have the cache.log and snort dumps for both squid GETs and my
  PushCache PUTs sessions for the same pages if that would be helpful. The
  combined dumps are 12 pages so I shouldn't post them at the list here.
 
 Sorry, I was not aware you are using the PushCache patch.
 
 Is the PUT response from PushCache persistent? If not the connection will
 be closed by Squid immediately after sending the reply and it won't be
 possible to send another request on the same connection. To negotiate a
 persistent connection you need to include the Proxy-Connection:
 keep-alive header.

I do have Proxy-Connection: Keep-Alive in my header.  Do I need to
include the Keep-Alive: timeout=15, max=100 header?

Also, do I need to include an acl for PUT in squid.conf?  If so, what is
the syntax and how do I apply it?

I tried:

snip
acl put_okay method PUT
put_okay allow our_networks
put_okay deny all
snip

Squid started without complaint, but when I stop squid, it reports:

parseConfigFile: line 1469 unrecognized: 'put_okay allow our_networks'
parseConfigFile: line 1471 unrecognized: 'put_okay deny all'


Do I need to add PUT as an extension method?
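
For what it's worth, squid.conf has no standalone 'put_okay allow ...'
directive; ACL-based rules attach to http_access lines, so a corrected
sketch of the config above would be:

snip
acl put_okay method PUT
http_access allow put_okay our_networks
http_access deny put_okay
snip

Multiple ACL names on one allow line are ANDed, so this permits PUT only
from our_networks.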

Thanks,
Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-07 Thread OTR Comm


Henrik Nordstrom wrote:
 
 On Thu, 7 Oct 2004, OTR Comm wrote:
 
  I do have Proxy-Connection: Keep-Alive in my header.
 
 Good.
 
 Do you also get one back from Squid? If not the pushCache patch needs to
 be extended.

I don't get any kind of response back from squid except "HTTP/1.0 200
OK" after the file has been pushed.

That is, when I do a GET from my browser through squid, I do see that
squid replies with:

HTTP/1.0 200 OK
snip
X-Cache: MISS from mcw.isp-systems.lcl
Proxy-Connection: keep-alive
snip

but squid doesn't respond similarly when I push the files.  I just get
the "HTTP/1.0 200 OK" message.

Does this mean that squid (the pushcache patched version) is ignoring my
Proxy-Connection: Keep-Alive header?

 
  Also, do I need to include an acl for PUT in squid.conf?  If so, what is
  the syntax and how do I apply it?
 
 I have never used the PushCache patch to Squid. What does
 squid.conf.default say about push access controls after you install your
 PushCache patched Squid?

The squid.conf.default that comes with the patch doesn't have any
information about push access controls.

Thanks,
Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-06 Thread OTR Comm
Hello,

Henrik Nordstrom wrote:
 
 On Tue, 5 Oct 2004, OTR Comm wrote:
 
  Like I said, during a multiple file push session, the first entry shows
  up in cache.log and gets cached, but the other entries do not show up in
  cache.log, and do not get cached.  But all the entries show up in snort
  as they pass through port 3128.
 
 What does the traffic look like between Squid and your web server?

I'm not quite sure what you are asking here, but when I set my browser
to use squid as a proxy, the traffic flows smoothly, and all the
requested pages and images get cached properly.  Then when I revisit the
pages, squid serves the pages and images out of cache.

I have the cache.log and snort dumps for both squid GETs and my
PushCache PUTs sessions for the same pages if that would be helpful. The
combined dumps are 12 pages so I shouldn't post them at the list here.

Thanks,
Murrah Boswell


[squid-users] GET/PUT Question - Work Around

2004-10-06 Thread OTR Comm
Hello,

I made a work-around for my problem!  Now I create a new socket
instantiation before pushing each file to squid and close the socket
after I receive an 'HTTP/1.0 200 OK' response.

I realize that this is not the most efficient way to communicate through
a socket, but it allows me to continue with a proof-of-concept that I am
working on.

I can revisit this problem later, but the work-around will be okay for
now.

Thanks,

Murrah Boswell


[squid-users] GET/PUT Question - Update

2004-10-06 Thread OTR Comm
Hello,

It appears that web servers also open and close the socket while
servicing GETs.  I watched snort dumps of different traffic sessions
between squid and other web servers, and they all opened and closed the
interface during their particular session with squid.

So, it seems like my work-around is not far off the mark.  It **is**
still inefficient, but it does work.

Thanks again,

Murrah Boswell


Re: [squid-users] GET/PUT Question - AGAIN

2004-10-05 Thread OTR Comm
Hello,

Henrik Nordstrom wrote:
 
 What does the requests you send to Squid look like?

A typical request looks like:

PUT http://learning.plans/cachepurger HTTP/1.0
Date: Wed Oct 06 01:14:10 2004 GMT
Server: ISP Systems CachePusher 1.0
Content-Type: text/html
Content-Length: 2232
Last-Modified: Mon Oct 04 02:47:34 2004 GMT
Proxy-Connection: keep-alive


<HTML>
<HEAD>
<TITLE>Squid Cache Purger DOWNLOAD</TITLE>
</HEAD>
<body bgcolor="seashell" link="blue" vlink="green" leftmargin="50"
topmargin="0">
<center>
<table>
<tr><th><font size="4">ISP Systems' Squid Cache Purger</font></th></tr>
<tr></tr>
<tr><td>&nbsp;</td></tr>

<tr><td align="left">
<font size="3">

<b>Squid Cache Purger allows users to purge entries in the Squid cache
by searching for key words and/or urls.<br>
Users will be presented with a list of found entries and then be allowed
to select which cached entries to purge.<p>
I developed the script to support another project I am working on, but
I<br>
thought this module might be useful as a stand alone to other people.<p>

Developed by: ISP Systems<br>
Version: beta<br>
Language: Perl<br>
License: <a href="http://www.gnu.org/copyleft/gpl.html"><b>GNU General
Public License (GPL)</b></a><p>
Download: <a
href="cachepurger/cachepurger.tar.gz"><b>cachepurger.tar.gz</b></a><br>
Updated: 3 Mar 2004 20:33
<p>
Installation instructions are included in the tarball, or you can view
the <a href="cachepurger/README"><b>README</b></a> file.
</b></font>
</td></tr>
<tr><td>&nbsp;</td></tr>

</table>

<p>
<table>
<tr><th><font size="4">Other Projects</font></th></tr>
<tr></tr>
<tr><td>&nbsp;</td></tr>

<tr><td align="left">
<font size="3">
<a href="squidsearch/"><b>Squid Search Engine</b></a><br>
</td></tr>

<p>
<tr><td align="left">
<b>SquidSearch allows users to search for key words in the Squid
cache.<p>
It searches the binary files that make up the cache, and pulls key
words from the META tags and body of the cached files.
Links to the stored files are created by parsing the meta data in the
header of the cache files until
the<br> STORE_META_URL token is found.
</td></tr>

<tr><td>&nbsp;</td></tr>
<tr><td align="left">
<font size="3">
<a href="http://www.bogopop.com/bogopop/"><b>BogoPop Spam Filtering
System</b></a><br>
</td></tr>
<tr><td align="left">
<font size="3">
<b>BogoPop is an MS Windows based email spam filtering system based on
Naive Bayesian categorization theory with an
inverse chi-square convergence accelerator.<p>
</b></font>
</td></tr>

</table>

</center>
</body>
</HTML>

Like I said, during a multiple file push session, the first entry shows
up in cache.log and gets cached, but the other entries do not show up in
cache.log, and do not get cached.  But all the entries show up in snort
as they pass through port 3128.

 
 ngrep is a generally better tool for looking at the HTTP protocol details.

I have never used it!  But I will look into it!


Thanks,
Murrah Boswell


[squid-users] GET/PUT Question - AGAIN

2004-10-04 Thread OTR Comm
Hello,

I managed to get squid to accept a single file that I can push to it
from a MS Windows application that I am developing.

Now I am trying to send two during the same session.  But I am having
problems.

BTW: squid is listening on 216.19.43.110:3128, and my Windows platform
is pushing from 192.168.1.254

When I push the first file, squid comes back with 'HTTP/1.0 200 OK'

Using snort to monitor port 3128, I get:

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

10/04-19:06:13.740922 216.19.43.110:3128 -> 192.168.1.254:2160
TCP TTL:64 TOS:0x0 ID:34359 IpLen:20 DgmLen:40 DF
***A**** Seq: 0x40A75EE0  Ack: 0x7701A35C  Win: 0x2DA0  TcpLen: 20

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

10/04-19:06:13.744075 216.19.43.110:3128 -> 192.168.1.254:2160
TCP TTL:64 TOS:0x0 ID:34360 IpLen:20 DgmLen:58 DF
***AP*** Seq: 0x40A75EE0  Ack: 0x7701A35C  Win: 0x2DA0  TcpLen: 20
48 54 54 50 2F 31 2E 30 20 32 30 30 20 4F 4B 0D  HTTP/1.0 200 OK.
0A 00..

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

10/04-19:06:13.744403 216.19.43.110:3128 -> 192.168.1.254:2160
TCP TTL:64 TOS:0x0 ID:34361 IpLen:20 DgmLen:40 DF
***A***F Seq: 0x40A75EF2  Ack: 0x7701A35C  Win: 0x2DA0  TcpLen: 20

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

10/04-19:06:13.744573 192.168.1.254:2160 -> 216.19.43.110:3128
TCP TTL:128 TOS:0x0 ID:60795 IpLen:20 DgmLen:40 DF
***A**** Seq: 0x7701A35C  Ack: 0x40A75EF3  Win: 0xFADE  TcpLen: 20

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

after squid receives and caches the first file.  Then when I send the
second file, I see it go past in snort, but squid doesn't acknowledge
that it came in.  No errors, nothing shows up as I am tailing on the
cache.log (with custom debugging hooks), and nothing shows up in
store.log or access.log.  The file sizes are correct, and the 'PUT'
header contains the correct Content-Length for the second file, but it
will not cache.

What I see in snort after the second file transfers is a response back
from squid:

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

10/04-19:06:22.789730 216.19.43.110:3128 -> 192.168.1.254:2160
TCP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:40 DF
*****R** Seq: 0x40A75EF3  Ack: 0x0  Win: 0x0  TcpLen: 20

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

I don't know why squid is sending back ID:0, Ack:0x0, and Win:0x0

Am I missing something that my PUSH client is supposed to send squid to
tell it to get ready for another file?

Thanks,
Murrah Boswell


[squid-users] GET/PUT Question

2004-10-01 Thread OTR Comm
Hello,

Could someone tell me how squid recognizes the end of a GET sequence?  I
assume the end of a PUT session is similar?

For example, when squid sends a GET request for a specific html page,
the answering server sends the requested data, but how does squid know
when the data transmission stream for that particular page has ended?

To keep it simple, let's suppose that the requested page doesn't have any
graphics, so we just have one GET from squid.

What I am doing is working with Jon Kay's pushcache version of
Squid 2.5.  Currently to get squid to cache a single file that I push, I
have to terminate the file with '\r\n\n', but this appears to close the
socket after the file is pushed.  I believe what I need to do is mimic
the GET process, and delimit each file in-stream.  Then close the
socket at the end of the push session. Does this make sense?
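
For reference, HTTP delimits the body by byte count rather than by an
in-stream terminator: the receiver reads exactly Content-Length bytes
after the blank line that ends the headers, and the next request can then
start on the same connection.  A minimal sketch (hypothetical URL; note
the length matches the body exactly):

PUT http://example.host/page.html HTTP/1.0
Content-Type: text/html
Content-Length: 13
Proxy-Connection: keep-alive

<html></html>

So no '\r\n\n' terminator should be needed once the Content-Length
accounting is right.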

Anyway, I hope someone can help here!

Thanks,
Murrah Boswell


Re: [squid-users] Squid and controlling access to internet in a classroom

2004-03-20 Thread OTR Comm
Hello,

Geir Fossum wrote:
 
 Hi,
 
 I'm running the computersystems at a school.
 I wonder if there is a simple way to let the teachers toggle internet access
 on/off via a webpage and a ON/OFF button ?
 Which in turn reconfigures Squid and restarts the service.

I am working on a system, currently called gafilterfish, that is being
developed to do exactly this (among other things).

It is an interface between Squid and a mysql version of squidGuard that
I wrote.

The system supports custom group designations (like individual
classrooms) and specifies individual group managers (like classroom
teachers).  Then when the students are entered into the system, their
group designation controls what access they have to the internet.

For example, say you have a grade level 3 taught by Mrs. Sue Morton,
then the administrator of the system can create a group called
'gradlevel3Morton.'  Then, if a student is entered into the system and
put into group 'gradlevel3Morton', Mrs. Morton controls the level of
access to the Internet for her classroom at any given time.  This is all
done through web based interfaces.

It is also possible for an individual to belong to multiple groups, so a
student could be in multiple 'classes.'

Now suppose that Mrs. Morton's grade level 3 class is taught between
1000 and 1100 and a particular student is in group 'gradlevel3SueMorton'
and also in 'gradlevel3LarrySmith.'  It is trivial to make the system
check for time windows when checking 'privileges' for the student. 
Although the system does not check time windows currently, it is on my
TO-DO list, and will be implemented soon.

Another feature of the system is that it supports realtime, group based
messages.  That is, Mrs. Morton can log into the group management
interface and compose a message for her class, and then every student
logged into the system will receive that message immediately.  If one of
her students is not currently logged on, they will receive that message
(plus all other unreceived messages) the next time they log on.

Group managers also can setup group based whitelists and have control
over which sites are blocked.

The system also supports timed overrides for blacklisted sites for
individual users with override privileges, and if a user has personal
management privileges, that user can permanently override a site that is
normally blacklisted.

I am developing this system primarily to support schools, but it would
apply to any organization that can be partitioned into groups, classes,
divisions, etc.

Like I said earlier, I am still working on the system, but it is 90%
functionally complete.  I am not an html coder, so the web interfaces
are simple, but if you want to check it out, email me at
[EMAIL PROTECTED] and I will set you up with a user account to log
in.

I am sure I haven't explained the system adequately, that is not my long
suit, but I hope you get the direction of where I am headed with it.

Thanks,

Murrah Boswell


[squid-users] redirect.cc question

2004-03-19 Thread OTR Comm
Hello,

I am using Squid Version 3.0-PRE3-CVS

Does anyone know what could cause 'r->client_addr' in redirect.cc to
evaluate to the gateway address of the machine that Squid is running on
even for people coming in from outside the machine?

That is, when Squid calls the redirector (squidGuard), the client
address is always the same (209.145.210.129), i.e., the gateway address
of the Squid host box.

I assume that client_addr comes from:

r->client_addr = conn.getRaw() != NULL ? conn->log_addr : no_addr;

in the redirectStart function in redirect.cc.

Could the problem have to do with how I have NAT setup, or something to
do with the nameserver?

Thanks,

Murrah Boswell


Re: [squid-users] redirect.cc question - Never Mind

2004-03-19 Thread OTR Comm
Hello,

Disregard this question, it was in my NAT configuration.

Thanks,

Murrah Boswell



Re: [squid-users] swap.state and perl unpack question

2004-03-07 Thread OTR Comm
Hello,

 please note that field alignment applies as per the requirements of your
 platform, so there may be padding between smaller and larger fields. On x86
 this padding is seen between the op and swap_file_number fields.

I am on an x86 machine.

 
 Detailed layout on x86:   offset(length)
 
  0(1)  char op;
  1(3)  char padding[3]
  4(4)  int swap_file_number;
  8(4)  time_t timestamp;
 12(4)  time_t lastref;
 16(4)  time_t expires;
 20(4)  time_t lastmod;
 24(4)  size_t swap_file_sz;
 28(2)  u_short refcount;
 30(2)  u_short flags;
 32(16) unsigned char key[MD5_DIGEST_CHARS];
 

I still can't read the swap.state records properly.

I now have in code:

$binary_layout = "A1 x3 i l l l l l s s A16";
$recordlen = length pack $binary_layout, "";

open SWAP_STATE, $swap_state;

if ( ! read SWAP_STATE, $record, $recordlen ) {
  print "Failed to read initial record in $swap_state\n";
  die;
}

($metaop,$metapad,$metafilenum,$metatimestamp,$metalastref,$metaexpires,$metalastmod,
$metafilesz,$metarefcount,$metaflags,$metakey) = unpack $binary_layout,
$record;

print DEBUG "binary_layout: $binary_layout\n";
print DEBUG "recordlen: $recordlen\n";
print DEBUG "metaop: $metaop\n";
print DEBUG "metafilenum: $metafilenum\n";
print DEBUG "metatimestamp: $metatimestamp\n";
print DEBUG "metalastref: $metalastref\n";
print DEBUG "metaexpires: $metaexpires\n";
print DEBUG "metalastmod: $metalastmod\n";

and for output I get:

binary_layout: A1 x3 i l l l l l s s A16
recordlen: 48
metaop: 
metafilenum: 1076866014
metatimestamp: 1076866015
metalastref: -1
metaexpires: 1062471704
metalastmod: 5992

Obviously I have done something wrong, but I can't see it.  Could
someone help please?

 
 Most if not all of this information is also logged in swap.log where you
 also have access to the URL etc, provided it's known by Squid at the
 time.

I would rather use the swap.state data since it has more information,
but I looked at using swap.log.

I didn't have the cache_swap_log option set in squid.conf, but I turned
it on and pointed it to /usr/local/squid/var/logs/swap.log

I restarted (not reloaded) squid and swap.log.00 got created.  When it
first got created, it had the same size as my swap.state file, but as
soon as I went to some sites not cached, the swap.log file updated, but
the swap.state file remained unchanged (both size and timestamp).  Are
swap.log and swap.state mutually exclusive?

Now, I restarted squid again, and still the swap.state file remained
unchanged.  So it appears that if I have swap.log enabled, swap.state is
not updated, right?  If not, how/when is swap.state updated?


Also, what is the binary layout for the swap.log?



Thanks,

Murrah Boswell


Re: [squid-users] swap.state and perl unpack question

2004-03-07 Thread OTR Comm
Hello,

I got the unpacking fixed!

 
 ($metaop,$metapad,$metafilenum,$metatimestamp,$metalastref,$metaexpires,$metalastmod,
 $metafilesz,$metarefcount,$metaflags,$metakey) = unpack $binary_layout,
 $record;

I had my unpack incorrect, it should be

($metaop,$metafilenum,$metatimestamp,$metalastref,$metaexpires,$metalastmod,
$metafilesz,$metarefcount,$metaflags,$metakey) = unpack $binary_layout,
$record;

and then I get for output:
snip
binary_layout: A1 x3 i l l l l l s s A16
recordlen: 48
metaop: ^A
metafilenum: 001B
metatimestamp: 1076866068
metalastref: 1076866068
metaexpires: -1
metalastmod: 1053759007
metafilesz: 3179
metarefcount: 1
snip

I am not handling the 'op' field correctly, but I will work on that
later.
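
One plausible fix, assuming the numeric opcode is what is wanted, is to
unpack the first byte with Perl's "C" format instead of "A1":

# "C" returns the op byte as an unsigned integer; the "^A" above is the
# raw character for opcode 1.  The rest of the template is unchanged.
$binary_layout = "C x3 i l l l l l s s A16";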

But the questions about swap.log and swap.state are still open.

 
 
  Most if not all of this information is also logged in swap.log where you
  also have access to the URL etc, provided it's known by Squid at the
  time.
 
 I would rather use the swap.state data since it has more information,
 but I looked at using swap.log.
 
 I didn't have the cache_swap_log option set in squid.conf, but I turned
 it on and pointed to /usr/local/squid/var/logs/swap.log
 
 I restarted (not reloaded) squid and swap.log.00 got created.  When it
 first got created, it had the same size as my swap.state file, but as
 soon as I went to some sites not cached, the swap.log file updated, but
 the swap.state file remained unchanged (both size and timestamp).  Are
 swap.log and swap.state mutually exclusive?
 
 Now, I restarted squid again, and still the swap.state file remained
 unchanged.  So it appears that if I have swap.log enabled, swap.state is
 not updated, right?  If not, how/when is swap.state updated?
 
 Also, what is the binary layout for the swap.log?
 

Thanks,

Murrah Boswell


Re: [squid-users] swap.state and perl unpack question

2004-03-07 Thread OTR Comm
Hello,


 Note: I meant store.log above.

Thank you for the clarification and your earlier help!  I figured out
the unpacking, so I can read the records in swap.state now.  I also
figured out how I can take the filenum and recreate the path to it
relative to ../var/cache.

$full_path = sprintf("%02X/%02X/%08X\n",
($filenum / $L2**2) % $L1,
($filenum / $L2) % $L2, $filenum);

where $L1=32 and $L2=512, since I have

cache_dir ufs /usr/local/squid/var/cache 500 32 512


The filenum, timestamp, and lastmod are important for another system
that I am working on, and now I can get to them, thank you,


Murrah Boswell


[squid-users] Question: squidclient in ping mode

2004-03-01 Thread OTR Comm
Hello,

When squidclient is used in ping mode, e.g.

/usr/local/squid/bin/squidclient -g 5 -h 209.145.208.8 -p 8939
http://216.19.43.110
2004-03-01 11:33:11 [1]: 0.131 secs, 7.557252 KB/s
2004-03-01 11:33:12 [2]: 0.001 secs, 997.00 KB/s
2004-03-01 11:33:13 [3]: 0.001 secs, 997.00 KB/s
2004-03-01 11:33:14 [4]: 0.001 secs, 997.00 KB/s
2004-03-01 11:33:15 [5]: 0.001 secs, 997.00 KB/s

what is actually being pinged here?

I know this site (http://216.19.43.110) is not in the squid cache on
209.145.208.8.

When I ping 216.19.43.110 from the host, I get:

[EMAIL PROTECTED] root]# ping 216.19.43.110
PING 216.19.43.110 (216.19.43.110) from 209.145.208.8 : 56(84) bytes of
data.
64 bytes from 216.19.43.110: icmp_seq=1 ttl=247 time=48.5 ms
64 bytes from 216.19.43.110: icmp_seq=2 ttl=247 time=34.5 ms
64 bytes from 216.19.43.110: icmp_seq=3 ttl=247 time=35.6 ms

Can someone explain what squidclient in ping mode is doing and how I can
interpret its output?


Thanks,

Murrah Boswell


[squid-users] Purging Cache Question

2004-02-28 Thread OTR Comm
Hello,

Am I correct in understanding that the first thing that Squid does when
it receives a purge request (after it has verified proper authority,
host, port, etc.) is remove the cache reference from memory with
removeClientStoreReference(sc, http) and then do the removal from
swap.state and the L2 directory by setting the ipcache_entry expire time
to squid_curtime in ipcacheInvalidate(const char *name)?

Also, how does ip_table get to ipcache_get(const char *name) in
ipcache.cc, and where is hash_lookup defined?  That is from:

static ipcache_entry *
ipcache_get(const char *name)
{
    if (ip_table != NULL)
        return (ipcache_entry *) hash_lookup(ip_table, name);
    else
        return NULL;
}



Thanks,

Murrah Boswell


[squid-users] Cache Update Question

2004-02-28 Thread OTR Comm
Hello,

When a cache item is updated, is the L2 file for that item 'touch'ed?
I.e., is the date of the file changed?


Thanks,

Murrah Boswell


Re: [squid-users] swap.state question

2004-02-27 Thread OTR Comm
Hello,


  Is the swap.state a database where Squid does its cache checking and
  the L2 directory files where the data is presented from?
 
 Yes.

What database format is swap.state in?

Are there any existing tools that can pinpoint a particular entry in the
cache by name?

If not, can you point me to code routines in 3.0 that address swap.state
during a purge?


Thanks,

Murrah Boswell


[squid-users] errorpage.cc and errorConvert question

2004-02-26 Thread OTR Comm
Hello,

I added an additional case to errorConvert, just for debugging, i.e.,

case 'C':
if (r->auth_user_request) {
  p = "[UNKNOWN]";
} else {
  p = "[unknown]";
}
break;

I wanted to see if r->auth_user_request is true in errorConvert.

Then I modified my query string in ERR_FORWARDING_DENIED to pickup the
value for 'C', i.e.,

URL=http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%U&ident=%C

But %C doesn't pick up either value from the case statement.  %U does
pick up the URL, but it is like the case for 'C' is ignored.

What have I missed here?


Thanks,
Murrah Boswell

By the way, just an editorial observation: at other case statements in
errorConvert, 'unknown' is misspelled as 'unkown' at cases 'M' and 'P'.


Re: [squid-users] TAG:deny_info - another question

2004-02-26 Thread OTR Comm
Hello,

 Unfortunately there is no % tag for the user name. Should not be hard to
 add one I guess. See src/errorpage.c.

I am using squid-3.0 so I looked in errorpage.cc and found the
errorConvert(char token, ErrorState * err) function.

I am not too good with c++ so please excuse my ignorance and basic
questions.

I see how the URL is setup in the case for 'U'.
I see that

snip
HttpRequest *r = err->request;
snip


I see in HttpRequest.h that HttpRequest is a class with

snip
String extacl_user; /* User name returned by extacl lookup */
snip

Now, can I setup another case in errorConvert for the username (maybe
'C' for client ID) and reference 'r->extacl_user' to get the username?

Thanks,

Murrah Boswell


Re: [squid-users] TAG:deny_info - another question - Solved

2004-02-26 Thread OTR Comm
Hello,

 Examples on how most of these can be accessed can be found in
 ClientHttpRequest::logRequest() and clientPrepareLogWithRequestDetails()
 (both found in client_side.cc) where the information is prepared for
 logging in access.log.

Thanks Henrik - This was the lead I needed!

I found the code for access to the username in
clientPrepareLogWithRequestDetails and added another case in
errorConvert to pass the username:

snip

case 'C':
if (r->auth_user_request) {
  if (authenticateUserRequestUsername(r->auth_user_request))
    p = xstrdup(authenticateUserRequestUsername(r->auth_user_request));
  authenticateAuthUserRequestUnlock(r->auth_user_request);
  r->auth_user_request = NULL;
} else {
  p = "[unknown]";
}
break;

snip

So now my query string:

URL=http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%U&ident=%C

passes the username in %C

RESULTS FROM FD_Handler.cgi : 'QUERY_STRING :
url=http://www.usatoday.com/&ident=otrcomm'


Thanks for your help and patience,

Murrah Boswell


Re: [squid-users] Tag: deny_info question

2004-02-25 Thread OTR Comm
Hello,

I have problems understanding deny_info.


 
 You can always negate acls with ! if this makes your life easier.
 deny_info looks for the last ACL on the access line where the request was
 denied. Any ! should not be specified to deny_info.

I have a rule like so:

deny_info http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi
password

but then none of my users ever receive the authentication prompt and the
browser acts like it is an endless loop trying to get to
http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi.  Both the
access.log and store.log are loaded now with references to
FD_Handler.cgi. But, like I said, the browser doesn't ever present the
authentication prompt.

If I change the rule to !password, users can authenticate, but the
deny_info rule is ignored and the standard Forwarding Denied error page
is presented when a non-cached page is requested.

I must have missed something major about how deny_info works, and/or how
to define the ACL.

How can I redirect the Forwarding Denied error to FD_Handler.cgi, and
still allow all my users to authenticate?  I am confused.

Thanks,

Murrah Boswell


Re: [squid-users] Tag: deny_info question

2004-02-25 Thread OTR Comm
Hello,

 deny_info uses the last acl on the http_access line denying access, so by
 defining dummy acls which always matches you can have detailed control
 per http_access line which deny_info message is used.

Can you give me an example of a dummy acl that always matches?

Currently for http_access I have:

snip

http_access allow password

http_access deny ADVERTISE

http_access allow our_networks

# And finally deny all other access to this proxy  
http_access deny all

snip

and would I append this always matching dummy acl to the 'http_access
deny all' rule?


Thanks,

Murrah Boswell


Re: [squid-users] Tag: deny_info question

2004-02-25 Thread OTR Comm
Hello,

Still not working!

 
 acl somename src 0.0.0.0/0

I setup an acl like so:

acl fderror src 0.0.0.0/0

 
 I think you want somehting like this:
 
 http_access deny ADVERTISE
 http_access allow our_networks password
 http_access deny all
 
 As for when/how to use deny_info this depends on what you want to
 accomplish.

Then I added:

deny_info http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%s
fderror

but still Forwarding Denied errors are directed to the standard error
page, not to FD_Handler.cgi.

What I want to accomplish is have my users sent to
http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%s instead
of displaying the ERR_FORWARDING_DENIED page when they try to access a
site that is not in the cache.  Remember I have miss_access only allowed
for one user and a few ACLs, but none of my other users are allowed
miss_access.

Basically what I am trying to accomplish is a log of all URLs requested
but not found in the cache without having to parse the access.log or
store.log.

I setup some rules like so for a test:

acl nogoogle dstdomain .google.com
http_access deny nogoogle

deny_info http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%s
nogoogle

and this did indeed send me to FD_Handler.cgi when I tried to access
http://www.google.com and it had %s defined correctly (although it was
unescaped).

So I see that deny_info does indeed work (not that I doubted it), but I
can't get it to work for my requirement.

Thanks,

Murrah Boswell


Re: [squid-users] Tag: deny_info question

2004-02-25 Thread OTR Comm
Hello,

  Then I added:
 
  deny_info http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%s
  fderror
 
 Did you also add fderror last on the access line for which you want this
 redirection to happen?

What I have is:

acl fderror src 0.0.0.0/0

http_access deny ADVERTISE fderror

deny_info http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%s
fderror

 
 And have you patched your Squid to support deny_info on miss_access?

I am using squid-3.0-PRE3, so the patches didn't apply, correct?


One thing though, I am not sure how the acl for fderror is supposed to
trigger on a Forward Denied situation.


Thanks,

Murrah Boswell


[squid-users] TAG: deny_info - workaround

2004-02-25 Thread OTR Comm
Hello,

I opted for the easy way out.  I rewrote the standard
ERR_FORWARDING_DENIED to be a redirector:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Forwarding Denied</title>
<meta http-equiv="Refresh" content="0;
URL=http://216.19.43.110/cgi-bin/squidsearch/FD_Handler.cgi?url=%U">
</head>
<body bgcolor="SEASHELL" text="#804040" link="#008080">
<p>
<UL>
<LI>
<STRONG>
Forwarding Denied.
</STRONG>
</UL>
</p>
</body></html>


Not as elegant as I would like, but it works so I can catalog the URLs
that are not cached.

Henrik, thanks for your help and patience,

Murrah Boswell


[squid-users] TAG:deny_info - another question

2004-02-25 Thread OTR Comm
Hello,

Is it possible to get squid to also send the user ident when it 'calls'
ERR_FORWARDING_DENIED?  That is, the URL is sent in %U, but can I
get the user ident also?


Thanks,

Murrah Boswell


Re: [squid-users] Does squid try to update is cache automatically

2004-02-24 Thread OTR Comm
Hello,

Jim_Brouse/[EMAIL PROTECTED] wrote:
 
 Does squid try to go out and update webpages that users frequently visit,
 like if I visit www.kernel.org in the morning will it keep checking whether
 it has an up to date page even if a user is not requesting that site or
 does squid only update it cache when users request a particular site?

Squid does not automatically check for updated pages, but you can do this
with wget in a cron job.

You can put a list of urls in a file and then use the
'--input-file=file' or '-i file' option with wget to fetch them.  For
example, the file could contain http://www.kernel.org.

Other options to look at:

-r - recursive
--timestamping
-p or --page-requisites
-nd or --no-directories - Do not create a hierarchy of directories when
retrieving recursively 
-nH or --no-host-directories
-q - quiet download
--tries=number - Set retries number

man wget shows all the options.

Then set http_proxy = http://1.2.3.4:port_num/ in wgetrc to whatever
your squid server is and run wget from a cron job whenever you want during
the day.  Squid will get updated.
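
A minimal sketch of that setup (the proxy address, schedule, and file
path here are hypothetical):

snip
# ~/.wgetrc: send wget traffic through Squid
http_proxy = http://192.168.1.1:3128/

# crontab entry: re-fetch the URL list nightly at 02:00
0 2 * * * wget -q -nd --delete-after --input-file=/etc/prefetch-urls.txt
snip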

I am not an expert on wget, but I do use it to load/update Squid.


Murrah Boswell


Re: [squid-users] Redirecting Windows Update

2004-02-22 Thread OTR Comm
Hello,

Scott Phalen wrote:
 
 I have two Windows Update servers internal in my network.  By changing
 certain registry keys I can have the clients use those servers to download
 updates.  Unfortunately I have 3000+ computers and changing them all is
 nearly impossible.
 
 Does anyone have a config I could use to redirect all
 windowsupdate.microsoft.com updates OR a link to show me how to create this
 config?

I believe that this is a routing issue, not a Squid issue specifically.

I believe that you would have better luck looking at iptables with NAT
and Redirect.  I am not an iptables guru, but a guy I work with is, and
he does magic with routing issues and iptables.

Look at
http://www.linuxquestions.org/questions/archive/3/2003/01/1/39970 to get
an idea of what iptables can do if you don't already know.

It appears that what you want to do is capture port 80 traffic from your
network pointed to windowsupdate.microsoft.com (i.e., 207.46.134.90 or
207.46.134.92) and redirect it to one of your Update Servers (i.e., some
ip addresses on your network), correct?
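
Roughly, that redirect would be an iptables DNAT rule along these lines
(the internal server address is hypothetical, and this is untested here):

snip
# rewrite outbound port-80 traffic aimed at windowsupdate to an internal server
iptables -t nat -A PREROUTING -p tcp -d 207.46.134.90 --dport 80 \
  -j DNAT --to-destination 192.168.1.10:80
snip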

Subscribe to the netfilter/iptables user mailing list at
http://www.netfilter.org/mailinglists.html and ask those guys your
question.  I am sure there are people there who have had and solved
similar problems, and maybe yours specifically.


Murrah Boswell


Re: [squid-users] miss_access Revisited - NEVER MIND

2004-02-22 Thread OTR Comm
I think I figured it out:

acl squidsearch url_regex [-i]
^http://216.19.43.110/cgi-bin/squidsearch/squidsearch2.pl

miss_access allow squidsearch


and turn these back on

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


Thanks,

Murrah Boswell


[squid-users] Master Cache Server Feeding Client Cache Servers

2004-02-20 Thread OTR Comm
Hello,

Is it possible to have a master cache server that updates client cache
servers based upon queries sent from the clients to the master?

That is, if a query to a client cache server has a MISS, then the client
will query the master server for data.

Along similar lines, is it possible to have a master cache server update
client servers on a scheduled basis (say nightly)?

Thanks,

Murrah Boswell


[squid-users] Squid: The Definitive Guide

2004-02-20 Thread OTR Comm
Hello,

For anyone who is interested, amazon.com just shipped my copy of 'Squid:
The Definitive Guide' by Duane Wessels, a month earlier than they
originally expected.

Murrah Boswell


Re: [squid-users] Don't Know Topic For This Question - Got it!

2004-02-17 Thread OTR Comm
  acl wget_prog proxy_auth wget
  acl our_networks src 192.168.1.0/24 ...
  redirector_access deny wget_prog
 
  http_access allow password
  http_access allow wget_prog
  #http_access allow our_networks  (Commented out for this test)
  http_access deny all
 
  miss_access deny our_networks
 
 Hmm.. this does not look right. You need to allow wget miss_access and
 deny everyone else.
 
 Then you need to allow everyone who should be allowed to use the Squid
 server at all in http_access, this includes all your clients including
 wget.

The following configuration works, but I am not sure if the ordering of
the rules is correct or the most efficient ordering:


acl wget_prog proxy_auth wget
acl our_networks src 192.168.1.0/24 ...
redirector_access deny wget_prog

http_access allow password
http_access allow wget_prog
http_access allow our_networks
http_access deny all

miss_access allow wget_prog
miss_access deny password


Thanks,

Murrah Boswell


Re: [squid-users] Don't Know Topic For This Question - Got it!

2004-02-17 Thread OTR Comm


Henrik Nordstrom wrote:
 
 On Tue, 17 Feb 2004, OTR Comm wrote:
 
  acl wget_prog proxy_auth wget
  acl our_networks src 192.168.1.0/24 ...
  redirector_access deny wget_prog
 
  http_access allow password
 
 How is password defined? If this is all users then no further http_access
 rules is needed.

acl password proxy_auth REQUIRED

So once password is allowed, it is redundant to have the rule

http_access allow wget_prog

right?


 
  miss_access allow wget_prog
  miss_access deny password
 
 The deny password is not needed.

I got rid of ** miss_access deny password **

wget_prog is defined like so,

acl wget_prog proxy_auth wget

and this is what I use to control miss_access, right?


Thanks,

Murrah Boswell


Re: [squid-users] Re: Don't Know Topic For This Question

2004-02-17 Thread OTR Comm
 Wwwoffle should be useful for this job.

Thanks for the lead, it does look interesting, but I need to stay with
Squid and wget for other functionality in the system that I am
developing.

Thank you,

Murrah Boswell


[squid-users] Don't Know Topic For This Question

2004-02-16 Thread OTR Comm
Hello,

I am trying to set up a squid system that uses wget to 'feed' data into
and then allows authenticated users to access the cached data in the
system but not go beyond the cached data.  That is, if the data is
available in the cache, then it is presented to the user, but if the data
is not available, then squid will not go offsite to retrieve the data.
The only way that data can get into the cache is through wget.

So, how can I stop squid from getting data offsite except for the user
that wget comes in as?

I currently have wget configured to access squid as username 'wget' and
this works.  Now I need to limit outside access through squid to just
the 'wget' user.

Do I use some flavor of http_access acl?


Thanks,

Murrah Boswell


Re: [squid-users] Don't Know Topic For This Question

2004-02-16 Thread OTR Comm
Henrik Nordstrom wrote:
 
 On Mon, 16 Feb 2004, OTR Comm wrote:
 
  So, how can I stop suqid from getting data offsite except for user that
  wget comes in as?
 
 See miss_access.
 
 You probably also want to enable offline_mode unless user access is always
 via a neighbor cache using this cache as sibling.

Okay! Something's not working though!

I have:

acl wget_prog proxy_auth wget
acl our_networks src 192.168.1.0/24 ...
redirector_access deny wget_prog

http_access allow password
http_access allow wget_prog
#http_access allow our_networks  (Commented out for this test)
http_access deny all

miss_access deny our_networks

offline_mode on

and then I run wget from the command line:

wget -r -q -nd -l 4 --delete-after http://www.deatech.com/cobcottage/

I see an entry in store.log, but there is nothing in the cache and links
are not recursed

1076985107.052 RELEASE -1  67BE901E5113F4EB321042E39658E0E6  403
1076985107 0 1076985107 text/html 1917/1917 GET
http://www.deatech.com/cobcottage/

and when I try to access http://www.deatech.com/cobcottage/ from my
browser, I get:

snip
ERROR
The requested URL could not be retrieved

Forwarding Denied.
snip

It doesn't look like wget has access to the outside through squid either
and no one has access to the cache.

What have I done wrong?


Thanks,

Murrah Boswell


[squid-users] store.log question

2004-02-16 Thread OTR Comm
I just rebooted my server and restarted squid and several consecutive
entries like this appeared in my store.log:

snip
1076992457.790 RELEASE -1  CE8669ED92F5F708F87752CD9666DFAF  
? ? ? ? ?/? ?/? ? ?
snip

What does this mean?  Has something been purged from the cache?


Thanks,

Murrah Boswell


[squid-users] cache_dir L1 L2 question

2004-02-14 Thread OTR Comm
Hello,

I have an 80GB drive on a system that I would like to dedicate to a squid
server.  The notes in squid.conf say that I should subtract 20% and use
that number for the Mbytes field for cache_dir.  So I would have 64000.

The question is, what is a reasonable L1 and L2 to put for this setting?

Also, I don't understand the different storage types (ufs, aufs, diskd,
etc.), but for the system I want to set up, would one be
preferred?
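
Purely as an illustration (the average object size is an assumption, and
the follow-up below defers sizing to the Squid book), one plausible line
for this disk would be:

snip
# 64000 MB cache; 64 x 256 = 16384 second-level directories, which keeps
# each directory to a few hundred objects at an assumed ~13 KB/object
cache_dir ufs /usr/local/squid/var/cache 64000 64 256
snip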


Thank you,

Murrah Boswell


Re: [squid-users] cache_dir L1 L2 question

2004-02-14 Thread OTR Comm
  I have a 80GB drive on a system that I would like to dedicate to a squid
  server.  The notes in squid.conf say that I should subtract 20% and use
  that number for the Mbytes field for cache_dir.  So I would have 64000.
 
 You should do that to start with.  After Squid has been running with
 a full cache you can think about increasing the cache size.

You mean set the cache size to 64GB to start with, right?

 Also, please read: http://www.oreilly.com/catalog/squid/chapter/index.html

I bought your book a few weeks ago, now I am waiting for Amazon to ship
it.


Thanks,

Murrah Boswell


Re: [squid-users] Squid and wget and redirector

2004-02-11 Thread OTR Comm
 ident is not related to authentication. You want proxy_auth.

Thanks muchly, that did it.

 If there was no complain on the acl line then verify spelling.

I moved redirector_access deny wget_prog to follow 'acl wget_prog
proxy_auth wget' and that fixed the error.

I also added 'http_access allow wget_prog'

Thanks again,

Murrah Boswell


Re: [squid-users] Squid and wget

2004-02-10 Thread OTR Comm
 The proper way would be to set up Squid on its default port of 3128 or a
 common proxy port like 8080. Then set the proxy variable for wget (something
 like http_proxy=http://squidmachine:3128) so wget will request URLs through
 Squid.
 
 If you're trying to pre-fetch data, you might also be interested in the
 following wget options:
 --delete-after will delete the files after download
 -r does a recursive crawl
 -l for recursion depth
 -nd no directory creation, to speed up the fetching

Thanks,

I completely overlooked the .wgetrc file that let me configure
http_proxy, proxy-user, and proxy-password.

I ran:

wget -nd --delete-after http://www.indo.com/culture/dance_music.html

and the page was loaded into the squid cache, i.e.,

snip from squid cache using purge
/usr/local/orsquid/var/cache/00/00/0012   021144
http://www.indo.com/culture/dance_music.html
snip

wget returned:

snip
100%[===]
20,770        36.49K/s 

11:35:08 (36.45 KB/s) - `dance_music.html' saved [20770/20770]
snip

and then I ran the same command, wget -nd --delete-after
http://www.indo.com/culture/dance_music.html to see if wget would then
pull from the cache, and got:

snip
100%[===]
20,770        --.--K/s 

11:36:19 (89.22 MB/s) - `dance_music.html' saved [20770/20770]
snip

So I assume that the --.--K/s in the second run indicates that wget did
pull from the squid cache, right?


Thanks,
Murrah Boswell


Re: [squid-users] Squid and wget

2004-02-10 Thread OTR Comm
 The easiest way to check is to look at access.log and the 4th column.
 If it says TCP_MISS/200 then it means Squid is getting the file directly.
 if it says TCP_HIT/200 or TCP_IMS_HIT/304 then Squid is getting it from the
 disk cache or memory cache, respectively.

My 4th column in access.log says TCP_MEM_HIT/200 when Squid finds it in
cache, but I get the point and thank you very much for your help.

Thanks,

Murrah Boswell


[squid-users] Squid and wget and redirector

2004-02-10 Thread OTR Comm
Hello,

So now I have wget 'feeding' to Squid's cache.

I use a redirector with squid, so every request is going to my
redirector (as it should).  However, I believe that I am overloading my
redirector.

Is there any way to tell squid not to use the redirector_program if the
session is coming from wget?

Or should I setup another instantiation of Squid just for wget that uses
a different port, but shares the cache with the 'redirector'
instantiation?


Thanks,

Murrah Boswell


Re: [squid-users] Squid and wget and redirector

2004-02-10 Thread OTR Comm


Henrik Nordstrom wrote:
 
 On Tue, 10 Feb 2004, OTR Comm wrote:
 
  Is there any way to tell squid not to use the redirector_program if the
  session is coming from wget?
 
 See redirector_access.

I have authentication turned on and I have a user named wget.  I
authenticate through an external program, mysql_auth.

I have wget setup to access squid as proxy-username 'wget'. So if I
don't want user wget to access the redirector, I tried:

acl wget_prog ident wget
redirector_access deny wget_prog

but I get errors:

2004/02/10 16:28:03| squid.conf line 1043: redirector_access deny
wget_prog
2004/02/10 16:28:03| aclParseAccessLine: ACL name 'wget_prog' not found.
2004/02/10 16:28:03| squid.conf line 1043: redirector_access deny
wget_prog
2004/02/10 16:28:03| aclParseAccessLine: Access line contains no ACL's,
skipping

What have I done wrong here?

Thanks,

Murrah Boswell


Re: [squid-users] Squid and wget and redirector

2004-02-10 Thread OTR Comm
OTR Comm wrote:
 
 Henrik Nordstrom wrote:
 
  On Tue, 10 Feb 2004, OTR Comm wrote:
 
   Is there any way to tell squid not to use the redirector_program if the
   session is coming from wget?
 
  See redirector_access.
 
 I have authentication turned on and I have a user named wget.  I
 authenticate through an external program, mysql_auth.
 
 I have wget setup to access squid as proxy-username 'wget'. So if I
 don't want user wget to access the redirector, I tried:
 
 acl wget_prog ident wget
 redirector_access deny wget_prog
 

Never mind, I figured it out.

'acl wget_prog ident wget' was wrong,

I needed:

acl wget_prog proxy_auth wget
redirector_access deny wget_prog

and

http_access allow wget_prog

Thanks,

Murrah Boswell


[squid-users] Progress on Squid and Search Engines

2004-02-09 Thread OTR Comm
Okay, I figured out (sort of) how the purge utility is extracting the
url from the file header.

Now, how does squid know which cache file goes with which object?  For
example, I completely erased all my cache files and started over with a
clean slate.  Then I went to http://www.openldap.org/ and checked the
cache.  There were four files, one for the html code
(.../var/cache/00/00/) and three for the three gifs
(0001,0002, and 0003) that appear at the website.

Now my question is, how does squid know which of these cached gifs is
which when it is reconstructing the openldap.org index.html page when/if
it needs to?  I mean, is there some kind of mapping internal to the
cached files, or some other table, that squid uses for reconstruction? 
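
My working guess (still to be confirmed against the store code) is that
squid never reconstructs the page at all: the browser re-requests each
gif by its own URL, and squid looks every URL up in its index
(swap.state plus the in-memory hash), keyed by an MD5 hash over the
request method and the URL.  A toy illustration of that key idea, using
OpenSSL's MD5 just for the hashing (the method code 1 is an assumed
value for METHOD_GET):

#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

int main(void)
{
    const unsigned char method = 1;     /* assumed METHOD_GET code */
    const char *url = "http://www.openldap.org/";
    unsigned char key[MD5_DIGEST_LENGTH];
    MD5_CTX ctx;
    int i;

    MD5_Init(&ctx);
    MD5_Update(&ctx, &method, 1);       /* one method byte ... */
    MD5_Update(&ctx, url, strlen(url)); /* ... followed by the URL */
    MD5_Final(key, &ctx);

    /* Print the 16-byte store key as hex, one key per URL. */
    for (i = 0; i < MD5_DIGEST_LENGTH; i++)
        printf("%02X", key[i]);
    putchar('\n');
    return 0;
}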


Thanks,

Murrah Boswell


Re: [squid-users] Squid and Search Engines

2004-02-08 Thread OTR Comm
 Technically it should be possible, but you need to write another
 retriever spider for the engine that knows how to read the squid cache
 files instead of fetching from the web or indexing local files.
 
 The format of the cache files is described in the programmers guide and
 IIRC there is even a perl module in CPAN for reading these files.

That was my next question, i.e., how do I read the cache?
Do you by any chance know the name of the CPAN module?

I looked at CPAN and found the Cache-2.01 module; is this the one?

 The developer list for the preferred search engine is a better place to
 ask, I think.  There are no modifications required to Squid, but the
 search engine needs to be slightly modified to know how to read the
 Squid cache data.
 
 Each file in the cache contains
 
 a) Meta data like the URL of the file, size, time cached etc. Of this the
 search engine needs to use the URL as name of the indexed object.
 
 b) The object HTTP headers.
 
 c) The object contents. This is what needs to be indexed.
 
 b+c is the HTTP reply as received by Squid.

When I do a 'file' on a particular cache file, I get back that it is
DBase 3 format.  Is this correct, or is this just the closest that Linux
can get in determining the type of file?  The question really is, how do
I put the cached file back into its original format, with its original
title, for presentation to the server?

I looked at the 'purge' utility written by Jens-S. Vöckler since it can
decipher the squid cache, but I don't understand how it is working.

For example, I have a cache file:

/usr/local/squid/var/cache/00/09/092D

with header information:

^Co
Content-Length: 2173
Content-Type: image/gif
Last-Modified: Sun, 11 Jan 2004 05:20:46 GMT
Accept-Ranges: bytes
ETag: 5db8d2aa2d8c31:627d33
Server: Microsoft-IIS/6.0
Date: Thu, 22 Jan 2004 03:02:01 GMT
Connection: close
snip

and from that, the 'purge' utility returns the URL of:

http://www.whitehouse.org/kids/images/tn-palm.gif

How is the URL deciphered?  For the life of me, I can't figure it out.

I read in the Programming Guide that "A cache swap file consists of two
parts: the cache metadata, and the object data."

Could you please point me to the code in squid that will show me how to
get at and decipher the metadata?

I am sorry to be such a bother, but I get totally lost in the squid
code, so pointers to the correct modules to look in will be very much
appreciated.
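
In case it helps anyone following along, here is my rough sketch of
pulling the URL out of the metadata, based on the TLV layout in the
Programming Guide: one magic byte (0x03, the ^C visible in the dump
above), a native int with the total header size, then a list of
(1-byte type, native-int length, value) entries.  The type number for
the URL entry is an assumption to verify against squid's enums:

#include <stdio.h>
#include <stdlib.h>

#define STORE_META_URL 4    /* assumed type number; check the source */

int main(int argc, char **argv)
{
    FILE *fp;
    unsigned char magic, type;
    int hdr_sz, len, off;

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: %s /path/to/swapfile\n", argv[0]);
        return 1;
    }
    /* Header: magic byte, then total metadata size as a native int. */
    if (fread(&magic, 1, 1, fp) != 1 ||
        fread(&hdr_sz, sizeof(hdr_sz), 1, fp) != 1 || magic != 0x03) {
        fprintf(stderr, "%s: does not look like a swap file\n", argv[1]);
        return 1;
    }
    /* Walk the TLVs until the metadata header is exhausted. */
    for (off = 5; off < hdr_sz; off += 5 + len) {
        if (fread(&type, 1, 1, fp) != 1 ||
            fread(&len, sizeof(len), 1, fp) != 1)
            break;
        if (type == STORE_META_URL) {
            char *url = calloc(1, len + 1);
            if (url && fread(url, 1, len, fp) == (size_t) len)
                printf("URL: %s\n", url);
            free(url);
        } else {
            fseek(fp, len, SEEK_CUR);   /* skip other metadata */
        }
    }
    fclose(fp);
    return 0;
}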

Thanks,
Murrah Boswell


[squid-users] Squid and Search Engines

2004-02-07 Thread OTR Comm
Hello,

Is it possible to set up a search engine, like Glimpse or Swish-e, to
catalog and search against the information in the Squid cache?

To minimize bandwidth on certain remote locations, I would like to
develop a spider to 'feed' information into squid cache, and then have
the search engine work off the cache instead of sending the user out on
the Internet for data.

The spider would gather information based on preplanned, scheduled 'hunt
topics' and load the squid cache, which would be the source of query for
the search engine.  Then through some manipulation of apache's
mod_rewrite module, the data from squid's cache would appear to be
coming from the web.

Does this even make sense?  Should I ask this at the Squid Development
list?


Thanks,

Murrah Boswell


[squid-users] Squid and Glimpse/Harvest Question

2004-02-05 Thread OTR Comm
Hello,

Does anyone know of a way, or even if it is possible, to have Squid
interfaced to the Glimpse webbot, or any webbot or spider?  Or directly
coupled to the Glimpse search engine?

I am working on a proof of concept to see if I can couple the
Glimpse/Harvest distributed search engine to Squid.  The idea is to have
a Squid caching server set up at a school and have the search engine
spider/webbot gather information based on a learning plan defined by a
teacher and feed this data into Squid.  Then the Harvest engine would
perform its queries against the Squid server, and hits would point to
data resident in the Squid cache.

I see this as a way to minimize bandwidth usage in remote school systems
where bandwidth is so costly, but also as an attempt to optimize the
Internet as a learning tool.

This would not be like trying to inject raw stream data into squid,
since the webbot would have urls for all data it would be storing in
squid.

Then the next step is to develop a system that would create
personalized, topic specific portfolios for each student based on what
part of the 'learning plan' they are in.  So, whenever they logged into
the system, they would be presented with an updated portfolio on their
subject topic (or topics).

Hope this makes sense,

Murrah Boswell


Re: [squid-users] Squid: The Definitive Guide now available

2004-02-05 Thread OTR Comm
 O'Reilly released Squid: The Definitive Guide a couple of days
 ago.  Here is their web page for it:
 
 http://www.oreilly.com/catalog/squid/
 
 Has anyone received a copy yet?  Looks like Amazon and Barnes and
 Noble aren't yet shipping it.

I ordered a copy from Amazon today and they said it would ship in a
month or two.  A few weeks ago, Amazon said it was in final review, but
I could get on the list for delivery.  So, at least they do know it is
available.  Maybe they just don't have it in their distribution system
yet.  O'Reilly did have it (they didn't say when they would ship
though), but there were other books I needed at Amazon.

I wait in anticipation,
Murrah Boswell


Re: [squid-users] Caching P2P

2004-01-04 Thread OTR Comm
No offense, but could you guys take this discussion out of this list
please?

It is interesting, but IMHO, it really doesn't belong here.

Thanks,
Murrah Boswell

[EMAIL PROTECTED] wrote:
 
 What you don't realize is that the majority of the traffic with p2p is *not*
 the downloads themselves but instead is the 100s of clients/servers
 contacting each other and exchanging directory information. The chatter is
 constant and unrelenting. Caching p2p content is problematic in more ways
 than one. A few movies will fill your cache. You'd have to either 1) discover
 which ports are in use, as they are variable and random, or 2) assume that
 every port *might* have content.
 
 In reality you'd be better off just running your own supernode on your
 network and have your customers/users connect to you. That, effectively, is
 your cache. Of course you'll likely get sued, but it's a better concept than
 a p2p cache.
 
 BC


[squid-users] Squid Authentication : Again

2003-12-31 Thread OTR Comm
Hello,

I am trying to figure something out.  When Squid is configured to
authenticate, how does it keep up with the different sessions for
individual users who have logged on?

I have looked into this before, and actually asked the group questions
about various aspects of Squid authentication, but I really need to know
how Squid keeps up with individual authenticated users.

The reason I ask (and I have asked this before) is: is there any way to
have Squid keep up with individual sessions without authentication?

N2H2, the company that wrote the Bess Filtering system, uses Squid
without authentication and a filtering helper like squidGuard that
supports overrides of blocked sites.  Users who have authority to
override sites log in, and then somehow Squid can distinguish those
users.  How can Squid do this?

Has N2H2 written some type of wrapper around Squid you think?

I have asked N2H2 for a copy of their Squid code, but they put me off
and then lately they told me that I have to talk to their legal
department.  Even though Squid is under GPL, they still want me to jump
through hoops with their legal department.

Does anyone have any ideas about how I can get Squid to recognize
particular user sessions without requiring authentication?

I have written a system that does just about everything that the Secure
Computing/N2H2 Smartfilter and Bess systems do, but I have to have my
users log in to Squid so Squid passes the user info to my helper.

Any help will be greatly appreciated,

Murrah Boswell


[squid-users] Squid Auth Question

2003-12-21 Thread OTR Comm
Hello,

I have read in the Squid code someplace, but can't remember where now:
what does Squid expect back from the authentication program?  I use
ncsa_auth, and I see in ncsa_auth.c:

snip
u = (user_data *) hash_lookup(hash, user);
if (u == NULL) {
    printf("ERR\n");
} else if (strcmp(u->passwd, (char *) crypt(passwd, u->passwd)) == 0) {
    printf("OK\n");
} else if (strcmp(u->passwd, (char *) crypt_md5(passwd, u->passwd)) == 0) {
    printf("OK\n");
} else {
    printf("ERR\n");
}
snip

So if Squid 'sees' an 'OK' from the auth program, it passes the user,
and if Squid 'sees' an 'ERR' it calls the auth program again.  Is this
correct?
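
If it helps to see the whole loop, a minimal helper built on that same
stdin/stdout protocol looks roughly like this (the hard-coded
credentials are purely illustrative; a real helper would check a
password file or database the way ncsa_auth does):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[8192];
    char user[4096], pass[4096];

    /* Squid keeps the helper running and writes one
     * "username password" pair per line on stdin. */
    while (fgets(line, sizeof(line), stdin) != NULL) {
        if (sscanf(line, "%4095s %4095s", user, pass) == 2 &&
            strcmp(user, "demo") == 0 && strcmp(pass, "secret") == 0)
            printf("OK\n");
        else
            printf("ERR\n");
        fflush(stdout);   /* squid reads replies line by line */
    }
    return 0;
}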

Thanks,

Murrah Boswell


Re: [squid-users] Too few redirector processes are running

2003-12-19 Thread OTR Comm
Yes, Squid has permission to run the redirector.  Squid runs as 'nobody'
and the redirector is owned by 'nobody' with 755 permissions.

Murrah Boswell

Duane Wessels wrote:
 
  Is this telling me that my redirectors are dying from an error in the
  redirector code, or what?
 
 Make sure that the Squid userid is able to execute the redirector
 program.  Check file and directory permissions.
 
 Duane W.


[squid-users] AGAIN: Re: [squid-users] Too few redirector processes are running

2003-12-19 Thread OTR Comm
Do you think it could have something to do with the convoluted way that
ar.atwola.com is generating the dynamic ads since all I get are broken
links to their ads?  I mean, what does the link
http://ar.atwola.com/link/93183014/aol do anyway?

Murrah Boswell

Duane Wessels wrote:
 
  Is this telling me that my redirectors are dying from an error in the
  redirector code, or what?
 
 Make sure that the Squid userid is able to execute the redirector
 program.  Check file and directory permissions.
 
 Duane W.


[squid-users] Too few redirector processes are running

2003-12-18 Thread OTR Comm
Hello,

I have redirector_children 10 in my config file, but I keep getting the
entry below, over and over again in my cache.log when I go to
http://money.cnn.com/best/bplive:

snip
2003/12/18 23:01:30| Starting Squid Cache version 3.0-PRE3-CVS for
i686-pc-linux-gnu...
2003/12/18 23:01:30| Process ID 6587
2003/12/18 23:01:30| With 1024 file descriptors available
2003/12/18 23:01:30| DNS Socket created at 0.0.0.0, port 32785, FD 4
2003/12/18 23:01:30| Adding nameserver 192.168.1.5 from /etc/resolv.conf
2003/12/18 23:01:30| Adding nameserver 216.19.2.80 from /etc/resolv.conf
2003/12/18 23:01:30| Adding nameserver 63.169.42.12 from
/etc/resolv.conf
2003/12/18 23:01:30| Adding nameserver 209.140.24.33 from
/etc/resolv.conf
2003/12/18 23:01:30| helperOpenServers: Starting 10 'orsquidGuard'
processes
2003/12/18 23:01:30| helperOpenServers: Starting 10 'ncsa_auth'
processes
2003/12/18 23:01:30| Unlinkd pipe opened on FD 29
2003/12/18 23:01:30| Swap maxSize 102400 KB, estimated 7876 objects
2003/12/18 23:01:30| Target number of buckets: 393
2003/12/18 23:01:30| Using 8192 Store buckets
2003/12/18 23:01:30| Max Mem  size: 8192 KB
2003/12/18 23:01:30| Max Swap size: 102400 KB
2003/12/18 23:01:30| Rebuilding storage in /usr/local/orsquid/var/cache
(CLEAN)
2003/12/18 23:01:30| Using Least Load store dir selection
2003/12/18 23:01:30| Set Current Directory to /usr/local/squid/var/cache
2003/12/18 23:01:30| Loaded Icons.
2003/12/18 23:01:30| Accepting  HTTP connections at 0.0.0.0, port 8940,
FD 31.
2003/12/18 23:01:30| WCCP Disabled.
2003/12/18 23:01:30| Ready to serve requests.
2003/12/18 23:01:30| Done reading /usr/local/orsquid/var/cache swaplog
(1692 entries)
2003/12/18 23:01:30| Finished rebuilding storage from disk.
2003/12/18 23:01:30|  1692 Entries scanned
2003/12/18 23:01:30| 0 Invalid entries.
2003/12/18 23:01:30| 0 With invalid flags.
2003/12/18 23:01:30|  1692 Objects loaded.
2003/12/18 23:01:30| 0 Objects expired.
2003/12/18 23:01:30| 0 Objects cancelled.
2003/12/18 23:01:30| 0 Duplicate URLs purged.
2003/12/18 23:01:30| 0 Swapfile clashes avoided.
2003/12/18 23:01:30|   Took 0.0 seconds (46286.4 objects/sec).
2003/12/18 23:01:30| Beginning Validation Procedure
2003/12/18 23:01:30|   Completed Validation Procedure
2003/12/18 23:01:30|   Validated 1692 Entries
2003/12/18 23:01:30|   store_swap_size = 14540k
2003/12/18 23:01:30| WARNING: redirector #1 (FD 5) exited
2003/12/18 23:01:31| WARNING: redirector #2 (FD 6) exited
2003/12/18 23:01:31| WARNING: redirector #3 (FD 7) exited
2003/12/18 23:01:31| WARNING: redirector #4 (FD 8) exited
2003/12/18 23:01:31| WARNING: redirector #5 (FD 9) exited
2003/12/18 23:01:31| WARNING: redirector #6 (FD 10) exited
2003/12/18 23:01:31| storeDirWriteCleanLogs: Starting...
2003/12/18 23:01:31|   Finished.  Wrote 1692 entries.
2003/12/18 23:01:31|   Took 0.0 seconds (839702.2 entries/sec).
FATAL: Too few redirector processes are running
Squid Cache (Version 3.0-PRE3-CVS): Terminated abnormally.
CPU Usage: 0.090 seconds = 0.040 user + 0.050 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 470
Memory usage for squid via mallinfo():
total space in arena:2918 KB
Ordinary blocks: 2897 KB  9 blks
Small blocks:   0 KB  0 blks
Holding blocks:  2396 KB 13 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  20 KB
Total in use:5293 KB 181%
Total free:20 KB 1%

snip

I bumped redirector_children to 40 and still got the same message
repeated over and over.

Is this telling me that my redirectors are dying from an error in the
redirector code, or what?

Thanks,

Murrah Boswell


[squid-users] # of redirector children before system overload ??

2003-12-16 Thread OTR Comm
Hello,

What is the maximum number of redirector children that squid can
support?  I assume that squid doesn't have a limit, but what is the
system overhead associated with each redirector child?

I am just wondering how many redirector children I should specify for a
system that is potentially supporting 5,000+ users.

Thanks,

Murrah Boswell


[squid-users] Size Of Download Control

2003-12-13 Thread OTR Comm
Hello,

Does squid record or know about the size (in bytes) of each GET?  If
not, is there any way to get this info from the Web Server?

If so, what structure does it use to record this info?  I want to pass
the size of each GET to a modified version of squidGuard so I can do
some quota checking!

I assume that squid does keep track of the size of the GETs (maybe I am
wrong), so if I knew the structure that holds this info, I would rewrite
the code to pass it to squidGuard and rework squidGuard to deal with
this new info.
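
In the meantime, a crude after-the-fact version of this quota check can
be pulled straight from access.log (field numbers assume the native log
format, where field 5 is the reply size in bytes and field 8 the
authenticated user; verify against your own log):

awk '{bytes[$8] += $5} END {for (u in bytes) print u, bytes[u]}' \
    /usr/local/squid/var/logs/access.log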

Thanks,
Murrah Boswell


[squid-users] redirect_children not independent ??

2003-11-22 Thread OTR Comm
Hello,

I have squid running with squidGuard as a redirector and ncsa_auth for
authentication.  I have put some debugging hooks (mostly some counters)
in the squidGuard code, and find that even if I start multiple sessions
with different userIDs under squid, all the squidGuard sessions share
the values for the counters.

That is, if I start a session in squid and go to five (5) blacklisted
sites, one of my counters has moved to five (5).  Then if I start
another squid session for a different user (while the first one is still
active), the counter for blacklisted sites visited is still set at five
(5) from the first session, and if the second user visits a blacklisted
site, the counter moves to six (6).  So it appears as though both
sessions together visited six (6) sites.

This seems to imply that the redirect_children are not really unique
threads from squid; is this correct?
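
One possible explanation I still need to test: the children are
separate forked processes with private memory, but squid hands each
request to the first idle helper, so under light load both users'
requests may all be flowing through child #1 and bumping the same
in-process counters.  For reference, the per-child protocol is tiny; a
stripped-down redirector (with a hypothetical blocked host) looks
roughly like this:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[16384];
    unsigned long hits = 0;   /* private to THIS child process */

    /* squid writes one request per line:
     *   URL client-ip/fqdn ident method */
    while (fgets(line, sizeof(line), stdin) != NULL) {
        hits++;
        if (strstr(line, "blocked.example.com") != NULL)
            printf("http://localhost/blocked.html\n");  /* rewrite */
        else
            printf("\n");                 /* empty line: no change */
        fflush(stdout);
    }
    return 0;
}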

Thanks,
Murrah Boswell


[squid-users] What is 'signal 6' ?

2003-11-20 Thread OTR Comm
Hello,

I am using squid-3.0-PRE3-20031028.

I get the following message pairs periodically (different process ID
each time) in my syslog:

Squid Parent: child process 6889 exited due to signal 6
Squid Parent: child process 8118 started

and it appears that squid has restarted itself.  However, the original
squid process, the one started as root with process ID 6887, remained
active.  It is just that all the child processes restarted.

I hope I have described this clearly!

Does anyone know what is going on here?
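
For reference, signal 6 is SIGABRT, which is what abort() and failed
assertions raise, so the lines just above the restart in cache.log
(typically an "assertion failed" message) are usually the real clue:

$ kill -l 6
ABRT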

Thanks,
Murrah Boswell


Re: [squid-users] Dumb Cache Question

2003-11-14 Thread OTR Comm
Henrik Nordstrom wrote:

 Because
 
 a) Redirectors are really meant for redirecting the requests to known
 local mirrors.
 
 b) The rules in your redirector may differ for different users.
 
  When does squid check its cache for the information on any given
  request?  Is it after the call to squidGuard?
 
 Yes, and it needs to be for proper redirector function, even when using
 SquidGuard for blocking purposes.

Thank you, this makes perfect sense.  So redirectors like squidGuard,
albeit very useful for my purposes, are really repurposing (in a sense)
the original redirector design in squid?

So if I understand correctly, one use for a redirector could be load
balancing on a network; is that correct?  Either way, how else might the
redirector be used, aside from blocking purposes?

Murrah Boswell


[squid-users] Dumb Cache Question

2003-11-13 Thread OTR Comm
Hello,

This may seem like a dumb question, but...

I have squid running with authentication and with squidGuard as a
redirect program.  All this is working okay.  I have set some debugging
hooks in the squidGuard code to watch operation and how squid and
squidGuard interface.

My question is this: if squid is a caching proxy, how come it sends all
GETs to the redirector?  That is, even a site that is not blocked by
the squidGuard blacklist is passed to squidGuard for checking.  For
example, every time I go to my own web site (http://www.wildapache.net),
I see all the GETs go through squidGuard.

When does squid check its cache for the information on any given
request?  Is it after the call to squidGuard?

I guess I do not understand how squid works.  It seems to me that squid
would check its cache first before it called the redirector, but it
doesn't seem to work this way.  Could someone please explain to me the
functional model for squid and the justification for the model, or
direct me to a site that can explain this?  A functional flow diagram
would be helpful if one exists on the web.

Thanks,
Murrah Boswell


[squid-users] squid/squidGuard/MySQL Problem

2003-10-08 Thread OTR Comm
Hello,

I am running squid-3.0-PRE3-20030924 and have squidGuard-1.2 as my
redirector.  I have hacked squidGuard to support MySQL queries.  When I
run squidGuard in test mode from the command line, I can query MySQL
with no problems, and I can write to a 'debugging' file while I am
testing my code development.

My problem is that when I call this hacked version of squidGuard from
squid, none of the MySQL queries work and none of my 'debugging' files
are being written to.  This is only when squid makes the call to
squidGuard.

squid, squidGuard, and my apache server all run as nobody in this test
environment.  All squid and squidGuard files are owned by nobody.root,
and both owner and group have rwx permissions on files and directories.

I can 'su nobody' and run squidGuard from the command line, and the
MySQL queries work okay and the 'debugging' files get written to.

I asked this question at the squidGuard list, but didn't get any help.
I really believe it is a squid problem, so I am asking the question
here.

This issue is holding me up in my code development, so if anyone knows
what might be going on here, please let me know.

Also, if there are ways to put squid in a 'stepper debugging mode,' I
would appreciate help in how to do this.  I would like to watch squid
call squidGuard, and then step into squidGuard and step through the code
as it executes.  Is this possible?  Is this something that I can compile
in at build time?
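
One approach (paths and pid illustrative) is to build everything with
-g, run squid in the foreground, and attach gdb to the helper child
rather than to squid itself.  A likely culprit for the behavior above:
a helper spawned by squid inherits squid's working directory and a
mostly empty environment, so relative paths and MySQL socket or
environment settings that work from a shell can silently fail.

# terminal 1: run squid in the foreground with light debug output
squid -N -d1

# terminal 2: attach to the running squidGuard child (pid from ps)
gdb -p 1234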

Thanks,
Murrah Boswell


[squid-users] Authentication Question

2003-10-02 Thread OTR Comm
Hello,

I have my squid configuration set to require authentication.

Does anyone know how squid physically puts up the box requesting the
username and password?  I know squid passes the information put into
this box to the selected authentication program (like ncsa_auth), but
how does squid make the box display in the first place?

I would like to know this down at code level if possible.  That is, what
routine displays the login box and sends the information to the
authentication program.
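
For what it is worth, the box itself is drawn by the browser, not by
squid: when a request arrives without credentials, squid answers with a
407 challenge, roughly like this (the realm text is illustrative):

HTTP/1.0 407 Proxy Authentication Required
Proxy-Authenticate: Basic realm="Squid proxy-caching web server"

The browser then pops up the login box and re-sends the request with a
Proxy-Authorization: Basic header carrying base64(user:password), which
is what squid decodes and hands to the helper.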

I am using ncsa_auth, and when I run the binary that was compiled with
squid from the command line with my passwordfile as an argument, i.e.,

./ncsa_auth /usr/local/squid/etc/passwd

it waits for me to enter a username/password pair separated by a space.
If the username authenticates, ncsa_auth comes back with OK.

Now I assume that squid slurps in the username/password pair and calls
ncsa_auth with the passwordfile and then passes the username/password
pair and waits for the response.  I just need to know where squid is
doing all of this.

Thank you,
Murrah Boswell

-- 
*Before I criticize a man, I walk a mile in his shoes.
 That way, if he gets angry, he's a mile away and barefoot.