Re: [squid-users] redirect based on url (302)

2018-11-16 Thread Amos Jeffries
On 16/11/18 9:22 PM, uppsalanet wrote:
> Just for documentation purposes, Amos's suggestion works perfectly:
> # Ext magazine domains
> debug_options 11,10 58,10 82,10
> acl 302 http_status 302
> acl browzine dstdomain .browzine.com .thirdiron.com
> http_access allow browzine
> 
> external_acl_type whitelist_add ttl=10 %SRC %<{Location} /etc/squid/add2db.pl
> 
> acl add_to_whitelist external whitelist_add
> http_reply_access allow browzine 302 add_to_whitelist
> http_reply_access allow all
> # Ext magazine domains
> 
> Why it's not working for me is that the site I'm reaching has turned on
> HTTPS encryption. The TLS-encrypted tunnel prevents me from seeing HTTP
> headers, which means I cannot distinguish individual responses :-(
> 


The only way around that is to intercept and decrypt the HTTPS using
Squid's SSL-Bump features.
 

SSL-Bump requires that you are in a situation where you can install
trusted CA certificates into all client devices. Even if the decryption is
possible, there are legal implications which vary around the world, so
please do check with a lawyer before going ahead with it.
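
(For completeness, a minimal SSL-Bump sketch in Squid 4 syntax, assuming a
locally generated CA at /etc/squid/bump-ca.pem that has been imported into
every client; the port, paths and cache sizes below are only illustrative,
not a tested configuration:)

 http_port 3128 ssl-bump tls-cert=/etc/squid/bump-ca.pem \
   generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
 sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
 acl step1 at_step SslBump1
 ssl_bump peek step1
 ssl_bump bump all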

Amos


Re: [squid-users] redirect based on url (302)

2018-11-16 Thread uppsalanet
Just for documentation purposes, Amos's suggestion works perfectly:
# Ext magazine domains
debug_options 11,10 58,10 82,10
acl 302 http_status 302
acl browzine dstdomain .browzine.com .thirdiron.com
http_access allow browzine

external_acl_type whitelist_add ttl=10 %SRC %<{Location} /etc/squid/add2db.pl

acl add_to_whitelist external whitelist_add
http_reply_access allow browzine 302 add_to_whitelist
http_reply_access allow all
# Ext magazine domains

Why it's not working for me is that the site I'm reaching has turned on
HTTPS encryption. The TLS-encrypted tunnel prevents me from seeing HTTP
headers, which means I cannot distinguish individual responses :-(

/F







Re: [squid-users] redirect based on url (302)

2018-11-08 Thread uppsalanet
I'm stuck again :-(

It stopped working for some reason. I'm not able to trap the 302 anymore.
This is my squid.conf (snippet):

# Ext magazine domains
debug_options 11,10 58,10 82,10
acl 302 http_status 302
acl browzine dstdomain .browzine.com .thirdiron.com
http_access allow browzine

external_acl_type whitelist_add ttl=10 %SRC %<{Location} /etc/squid/add2db.pl
acl add_to_whitelist external whitelist_add
http_reply_access allow browzine 302 add_to_whitelist
http_reply_access allow all

The 302 reply I am trying to trap, for
https://api.thirdiron.com/v2/libraries/223/articles/201309075/content,
looks like this:

HTTP/1.1 302 Found
Server: Cowboy
Connection: keep-alive
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Methods: DELETE,GET,PATCH,POST,PUT
Location: http://www.tandfonline.com/doi/full/10.1080/00020184.2018.1459287
Set-Cookie:
connect.sid=s%3AygAG53nVxrcphMYobmgFN4WIHWa2dgv0.29L5g8MvGC6Awk3pE5JZ4xKYcSqyI3L7vAiUXbAUmHM;
Path=/; HttpOnly
Date: Thu, 08 Nov 2018 15:39:10 GMT
Via: 1.1 vegur

I'm probably doing something wrong :-)
Regards
Fredrik







Re: [squid-users] redirect based on url (302)

2018-10-31 Thread Amos Jeffries
On 31/10/18 11:27 PM, uppsalanet wrote:
> Hi Amos,
> Is there a git that I can use to push stuff up?
> 

Do you mean to make a change PR against the official code?

The key details for people wanting to assist with Squid development are
linked from here: 



> I think you need to split the string in another way; look at this
> example:
> #!/usr/bin/perl
> use strict;
> use warnings;
> 
> $|=1;
> while (<>) {
>  my $string = $_;
>  print "Received '\$_' = ".$_."\n";   
> 
>  $string =~ m/^(\d+)\s(.*)$/;
>  print "After regexp '\$string' = ".$string."\n";
>  print "After regexp '\$1' = ".$1."\n";   
>  print "After regexp '\$2' = ".$2."\n"; 
> 
>  ### Original split from source ###
>  ### This doesn't split anything; looks like elements of an array?
>  #my ($cid, $uid) = ($1, $2);
> 
>  ### Split the string ###
>  ### These two split on one or more whitespace characters
>  #my ($cid, $uid) = split(/\s+/ ,$_);
>  my ($cid, $uid) = split;
>  $cid =~ s/%(..)/pack("H*", $1)/ge;
>  $uid =~ s/%(..)/pack("H*", $1)/ge;
>  print "After split \$cid = ".$cid."\n";
>  print "After split \$uid = ".$uid."\n";
> }
> 
> Output from above with input value '130.238.000.00 muse.jhu.edu -':
> Received '$_' = 130.238.000.00 muse.jhu.edu -
> After regexp '$string' = 130.238.000.00 muse.jhu.edu -
> Use of uninitialized value $1 in concatenation (.) or string at
> ./sed_test_reg.pl line 13, <> line 1.
> After regexp '$1' = 
> Use of uninitialized value $2 in concatenation (.) or string at
> ./sed_test_reg.pl line 14, <> line 1.
> After regexp '$2' = 
> After split $cid = 130.238.000.00
> After split $uid = muse.jhu.edu
> 


$cid should be the concurrency channel ID.  Configured with the
"concurrency=N" option to external_acl_type in squid.conf. (Seems I
missed another bit of the config.)
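
(For reference, a hedged example of what that would look like in squid.conf,
reusing the whitelist_add helper line from earlier in this thread; the
channel count of 10 is only illustrative:)

 external_acl_type whitelist_add concurrency=10 ttl=10 %SRC %<{Location} /etc/squid/add2db.pl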

If you want to assist with fixing the helper, it could do with a change
to auto-detect whether the first column is a CID (numeric only) or not
(anything but whitespace following a numeral).
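
(A rough sketch of that auto-detection, not the shipped helper code: treat a
purely numeric first column as the channel ID and everything else as the
lookup key. The lookup itself is elided.)

#!/usr/bin/perl
use strict;
use warnings;

$|=1;
while (my $line = <STDIN>) {
 chomp $line;
 my @col = split /\s+/, $line;
 my $cid = '';
 # a concurrency channel ID is digits only; "130.238.0.0" is not
 $cid = shift @col if @col && $col[0] =~ /^\d+$/;
 my $uid = join ' ', @col;
 # ... look up $uid here, then answer on the same channel ...
 print(($cid ne '' ? "$cid " : '') . "OK\n");
}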


Amos


Re: [squid-users] redirect based on url (302)

2018-10-31 Thread uppsalanet
Hi Amos,
Is there a git that I can use to push stuff up?

I think you need to split the string in another way; look at this
example:
#!/usr/bin/perl
use strict;
use warnings;

$|=1;
while (<>) {
 my $string = $_;
 print "Received '\$_' = ".$_."\n";   

 $string =~ m/^(\d+)\s(.*)$/;
 print "After regexp '\$string' = ".$string."\n";
 print "After regexp '\$1' = ".$1."\n";   
 print "After regexp '\$2' = ".$2."\n"; 

 ### Original split from source ###
 ### This doesn't split anything; looks like elements of an array?
 #my ($cid, $uid) = ($1, $2);

 ### Split the string ###
 ### These two split on one or more whitespace characters
 #my ($cid, $uid) = split(/\s+/ ,$_);
 my ($cid, $uid) = split;
 $cid =~ s/%(..)/pack("H*", $1)/ge;
 $uid =~ s/%(..)/pack("H*", $1)/ge;
 print "After split \$cid = ".$cid."\n";
 print "After split \$uid = ".$uid."\n";
}

Output from above with input value '130.238.000.00 muse.jhu.edu -':
Received '$_' = 130.238.000.00 muse.jhu.edu -
After regexp '$string' = 130.238.000.00 muse.jhu.edu -
Use of uninitialized value $1 in concatenation (.) or string at
./sed_test_reg.pl line 13, <> line 1.
After regexp '$1' = 
Use of uninitialized value $2 in concatenation (.) or string at
./sed_test_reg.pl line 14, <> line 1.
After regexp '$2' = 
After split $cid = 130.238.000.00
After split $uid = muse.jhu.edu

Cheers
Fredrik





Re: [squid-users] redirect based on url (302)

2018-10-30 Thread Amos Jeffries
On 31/10/18 12:49 AM, uppsalanet wrote:
> Thanks, 
> I missed that I needed to install squid-helpers ("yum install squid-helpers") :-)
> Now it's there.
> 
> Now I use it like this:
> 
> external_acl_type whitelist ttl=60 children-max=1 %SRC %DST
> /usr/lib64/squid/ext_sql_session_acl --user root --password config  --table
> sessions --cond "" --debug
> 
> But receive this:
> 2018/10/30 12:38:37.279| 82,9| external_acl.cc(600) aclMatchExternal:
> acl="whitelist"
> 2018/10/30 12:38:37.280| 82,9| external_acl.cc(629) aclMatchExternal: No
> helper entry available
> 2018/10/30 12:38:37.280| 82,2| external_acl.cc(663) aclMatchExternal:
> whitelist("130.238.171.59 muse.jhu.edu -") = lookup needed
> 2018/10/30 12:38:37.280| 82,2| external_acl.cc(667) aclMatchExternal:
> "130.238.171.59 muse.jhu.edu -": queueing a call.
> 2018/10/30 12:38:37.280| 82,2| external_acl.cc(1031) Start: fg lookup in
> 'whitelist' for '130.238.171.59 muse.jhu.edu -'

Oh darn. Sorry, I forgot about the implicit %DATA parameters on external
ACL yet again. One of the things on my long todo list is to make that
optionally ignored.

For now the easiest fix/workaround is to have your custom helper append
that " -" string to the IDs in the database.
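
(Illustration only: inside a helper like the add2db.pl from this thread, the
stored key could simply be built with the trailing " -" so it matches the
"%SRC %DST -" string Squid sends. $dbh, $ip and $domain are placeholders,
and INSERT IGNORE assumes MySQL.)

 # append the implicit "-" %DATA column so the stored id matches
 # the "IP domain -" lookup key Squid sends
 my $id = "$ip $domain -";
 $dbh->do("INSERT IGNORE INTO sessions (id) VALUES (?)", undef, $id);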

Amos


Re: [squid-users] redirect based on url (302)

2018-10-30 Thread uppsalanet
Thanks, 
I missed that I needed to install squid-helpers ("yum install squid-helpers") :-)
Now it's there.

Now I use it like this:

external_acl_type whitelist ttl=60 children-max=1 %SRC %DST
/usr/lib64/squid/ext_sql_session_acl --user root --password config  --table
sessions --cond "" --debug

But receive this:
2018/10/30 12:38:37.279| 82,9| external_acl.cc(600) aclMatchExternal:
acl="whitelist"
2018/10/30 12:38:37.280| 82,9| external_acl.cc(629) aclMatchExternal: No
helper entry available
2018/10/30 12:38:37.280| 82,2| external_acl.cc(663) aclMatchExternal:
whitelist("130.238.171.59 muse.jhu.edu -") = lookup needed
2018/10/30 12:38:37.280| 82,2| external_acl.cc(667) aclMatchExternal:
"130.238.171.59 muse.jhu.edu -": queueing a call.
2018/10/30 12:38:37.280| 82,2| external_acl.cc(1031) Start: fg lookup in
'whitelist' for '130.238.171.59 muse.jhu.edu -'
2018/10/30 12:38:37.280| 82,4| external_acl.cc(1071) Start:
externalAclLookup: looking up for '130.238.171.59 muse.jhu.edu -' in
'whitelist'.
2018/10/30 12:38:37.280| Starting new whitelist helpers...
2018/10/30 12:38:37.282| 82,4| external_acl.cc(1086) Start:
externalAclLookup: will wait for the result of '130.238.171.59 muse.jhu.edu
-' in 'whitelist' (ch=0x26782c8).
2018/10/30 12:38:37.282| 82,2| external_acl.cc(670) aclMatchExternal:
"130.238.171.59 muse.jhu.edu -": return -1.
Received: Channel=, UID=''
Query: SELECT '' as 'user', '' as 'tag' FROM sessions WHERE (id = ?)
UID queried: ''
Rows: 0
2018/10/30 12:38:37.420| 82,2| external_acl.cc(958) externalAclHandleReply:
reply={result=Unknown, other: "ERR message="unknown UID ''""}

Looking into the code of ext_sql_session_acl at line 190:

my ($cid, $uid) = ($1, $2);

I assume this will split $_ into $cid and $uid. But the debug output says:

Received: Channel=, UID=''
Query: SELECT '' as 'user', '' as 'tag' FROM sessions WHERE (id = ?)
UID queried: ''
Rows: 0

Have I done something wrong?
/Fredrik








Re: [squid-users] redirect based on url (302)

2018-10-23 Thread Amos Jeffries
On 24/10/18 3:31 AM, uppsalanet wrote:
> Thanks Amos for all your help.
> I've done a few of your suggested steps:
> * Create the database.
> createdb.sql
> 
> * The acl to fill up the database with values works fine :-)
> external_acl_type whitelist_add ttl=10 %SRC %<{Location} /etc/squid/add2db.pl
> add2db.pl
> 
> So now I fill up the database with records like this:
> dbdump.txt
> 
> My question is how I get the domains out of it? I don't really understand
> this part:
>  external_acl_type whitelist ttl=60 %SRC %DST \
>    /usr/lib/squid/ext_session_db_acl \
>    --dsn ... --user ... --password ... \
>    --table sessions --cond ""
> 
> Do I need to write another script for that
> "/usr/lib/squid/ext_session_db_acl"?

Nope, Squid should have come with that helper. It may not be at that
exact path though.

All you should have to do now is find where that helper binary actually
is and set up those parameters so it can access your DB.
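
(If it helps anyone searching the archives later, something like the
following usually shows where the package installed it; the package name and
paths are assumptions and vary by distribution:)

 rpm -ql squid | grep -i session
 find /usr/lib64/squid /usr/lib/squid -name 'ext_*session*acl' 2>/dev/null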

Amos


Re: [squid-users] redirect based on url (302)

2018-10-23 Thread uppsalanet
Thanks Amos for all your help.
I've done a few of your suggested steps:
* Create the database.
createdb.sql

* The acl to fill up the database with values works fine :-)
external_acl_type whitelist_add ttl=10 %SRC %<{Location} /etc/squid/add2db.pl
add2db.pl

So now I fill up the database with records like this:
dbdump.txt

My question is how I get the domains out of it? I don't really understand
this part:
 external_acl_type whitelist ttl=60 %SRC %DST \
   /usr/lib/squid/ext_session_db_acl \
   --dsn ... --user ... --password ... \
   --table sessions --cond ""

Do I need to write another script for that
"/usr/lib/squid/ext_session_db_acl"?

squid -v
squid_version.txt

Thanks in advance
Fredrik





Re: [squid-users] redirect based on url (302)

2018-10-08 Thread Eliezer Croitoru

Amos, I probably missed a couple of lines.
It's doable, but if there is a specific set of domains or URLs
then I will need to try it and see what works and how.


Eliezer

On 2018-09-24 12:30, Amos Jeffries wrote:

On 24/09/18 6:38 PM, uppsalanet wrote:

Hi Amos,
Today I have a conf like this:

acl *LIB_domains* dstdomain .almedalsbiblioteket.se .alvin-portal.org
.bibliotekuppsala.se
http_access allow *LIB_domains*


Now I also need to open for *.browzine.com*. The problem with
*.browzine.com* is that it is a portal with many links to other sites.
So I basically need to open up and maintain 400 sites in a squid ACL.

I would like to take another approach then (but I don't know if it's
possible):
I know that browzine.com will reply 302 when trying to access a link on
their site. *So I would like to accept all redirect (302) sites from
browzine.com*.


Aha, that is clearer. Thank you.

I think you can possibly achieve this, but *only* because of those 302
existing. If the site were just a collection of links it would be very
much more difficult.


What I am thinking of is to use a custom external ACL script that
creates a temporary browsing session for a client when the 302 arrives
then the SQL session helper to allow matching traffic through for the
followup request from that client.

You will need a database with a table created like this:

 CREATE TABLE sessions (
  id VARCHAR(256) NOT NULL PRIMARY KEY,
  enabled DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
)

You need to write a script which receives an IP and a URL from Squid,
extracts the domain name from the URL, then adds a string "$ip $domain"
to that table as the id column, then returns the "OK" result to Squid.

The page at  has
details of the SQL session helper that uses that table to check for
whitelisted domains.


Your config would look like:

 acl 302 http_status 302
 acl browzine dstdomain .browzine.com

 external_acl_type whitelist_add %SRC %{Location} \
  /path/to/whitelist_script

 acl add_to_whitelist external whitelist_add

 http_reply_access allow browzine 302 add_to_whitelist
 http_reply_access allow all


 external_acl_type whitelist ttl=60 %SRC %DST \
   /usr/lib/squid/ext_session_db_acl \
   --dsn ... --user ... --password ... \
   --table sessions --cond ""

 acl whitelisted external whitelist
 http_access allow whitelisted


To have sessions expire simply remove them from the database table.
Squid will start rejecting traffic there within 60 seconds of the 
removal.


HTH
Amos


--

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


Re: [squid-users] redirect based on url (302)

2018-10-07 Thread Amos Jeffries
On 8/10/18 4:37 AM, Eliezer Croitoru wrote:
> Hey Amos,
> 
> I still believe that if squid will manage the connections and the ICAP
> service will maintain the ACL list based on these 302
> it would be much faster than opening new connections to the WWW.

Where are you getting this "new connections to the WWW" idea?

My suggestion does not involve any extra connections.

Amos


Re: [squid-users] redirect based on url (302)

2018-10-06 Thread Amos Jeffries
On 7/10/18 8:41 AM, Eliezer Croitoru wrote:
> Amos,
> 
> Would an ICAP service that sits on the RESPMOD vector be a better
> solution than opening a new session?
> 

"Opening a new session" is what any such ICAP would have to do. It is
also overkill for that small action.

Amos


Re: [squid-users] redirect based on url (302)

2018-10-06 Thread Eliezer Croitoru

Amos,

Would an ICAP service that sits on the RESPMOD vector be a better
solution than opening a new session?


Thanks,
Eliezer

On 2018-09-24 12:30, Amos Jeffries wrote:

On 24/09/18 6:38 PM, uppsalanet wrote:

Hi Amos,
Today I have a conf like this:

acl *LIB_domains* dstdomain .almedalsbiblioteket.se .alvin-portal.org
.bibliotekuppsala.se
http_access allow *LIB_domains*


Now I also need to open for *.browzine.com*. The problem with
*.browzine.com* is that it is a portal with many links to other sites.
So I basically need to open up and maintain 400 sites in a squid ACL.

I would like to take another approach then (but I don't know if it's
possible):
I know that browzine.com will reply 302 when trying to access a link on
their site. *So I would like to accept all redirect (302) sites from
browzine.com*.


Aha, that is clearer. Thank you.

I think you can possibly achieve this, but *only* because of those 302
existing. If the site were just a collection of links it would be very
much more difficult.


What I am thinking of is to use a custom external ACL script that
creates a temporary browsing session for a client when the 302 arrives
then the SQL session helper to allow matching traffic through for the
followup request from that client.

You will need a database with a table created like this:

 CREATE TABLE sessions (
  id VARCHAR(256) NOT NULL PRIMARY KEY,
  enabled DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
)

You need to write a script which receives an IP and a URL from Squid,
extracts the domain name from the URL, then adds a string "$ip $domain"
to that table as the id column, then returns the "OK" result to Squid.

The page at  has
details of the SQL session helper that uses that table to check for
whitelisted domains.


Your config would look like:

 acl 302 http_status 302
 acl browzine dstdomain .browzine.com

 external_acl_type whitelist_add %SRC %{Location} \
  /path/to/whitelist_script

 acl add_to_whitelist external whitelist_add

 http_reply_access allow browzine 302 add_to_whitelist
 http_reply_access allow all


 external_acl_type whitelist ttl=60 %SRC %DST \
   /usr/lib/squid/ext_session_db_acl \
   --dsn ... --user ... --password ... \
   --table sessions --cond ""

 acl whitelisted external whitelist
 http_access allow whitelisted


To have sessions expire simply remove them from the database table.
Squid will start rejecting traffic there within 60 seconds of the 
removal.


HTH
Amos


--

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


Re: [squid-users] redirect based on url (302)

2018-09-24 Thread Amos Jeffries
On 24/09/18 6:38 PM, uppsalanet wrote:
> Hi Amos,
> Today I have a conf like this:
> 
> acl *LIB_domains* dstdomain .almedalsbiblioteket.se .alvin-portal.org
> .bibliotekuppsala.se
> http_access allow *LIB_domains*
> 
> 
> Now I also need to open for *.browzine.com*. The problem with
> *.browzine.com* is that it is a portal with many links to other sites. So I
> basically need to open up and maintain 400 sites in a squid ACL.
> 
> I would like to take another approach then (but I don't know if it's
> possible):
> I know that browzine.com will reply 302 when trying to access a link on
> their site. *So I would like to accept all redirect (302) sites from
> browzine.com*. 

Aha, that is clearer. Thank you.

I think you can possibly achieve this, but *only* because of those 302
existing. If the site were just a collection of links it would be very
much more difficult.


What I am thinking of is to use a custom external ACL script that
creates a temporary browsing session for a client when the 302 arrives
then the SQL session helper to allow matching traffic through for the
followup request from that client.

You will need a database with a table created like this:

 CREATE TABLE sessions (
  id VARCHAR(256) NOT NULL PRIMARY KEY,
  enabled DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
)

You need to write a script which receives an IP and a URL from Squid,
extracts the domain name from the URL, then adds a string "$ip $domain"
to that table as the id column, then returns the "OK" result to Squid.
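
(A minimal sketch of such a helper, under the assumptions used in this
thread: Perl with DBI against a MySQL "sessions" table and the two-token
"%SRC %{Location}" input format. The DSN, the credentials and the add2db.pl
name are placeholders, not a tested implementation; as it turned out later
in the thread, the stored id may also need a trailing " -" to match the
implicit %DATA column.)

#!/usr/bin/perl
# read "client-IP Location-URL" lines from Squid, store "IP domain"
# in the sessions table, and answer OK
use strict;
use warnings;
use DBI;
use URI;

$|=1;
my $dbh = DBI->connect("DBI:mysql:database=squid", "user", "password",
                       { RaiseError => 1, AutoCommit => 1 });

while (my $line = <STDIN>) {
 chomp $line;
 my ($ip, $location) = split /\s+/, $line, 2;
 my $domain = eval { URI->new($location)->host } // '';
 if (defined $ip && $domain ne '') {
  # ignore duplicate rows for the same IP/domain pair
  $dbh->do("INSERT IGNORE INTO sessions (id) VALUES (?)",
           undef, "$ip $domain");
 }
 print "OK\n";
}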

The page at  has
details of the SQL session helper that uses that table to check for
whitelisted domains.


Your config would look like:

 acl 302 http_status 302
 acl browzine dstdomain .browzine.com

 external_acl_type whitelist_add %SRC %{Location} \
  /path/to/whitelist_script

 acl add_to_whitelist external whitelist_add

 http_reply_access allow browzine 302 add_to_whitelist
 http_reply_access allow all


 external_acl_type whitelist ttl=60 %SRC %DST \
   /usr/lib/squid/ext_session_db_acl \
   --dsn ... --user ... --password ... \
   --table sessions --cond ""

 acl whitelisted external whitelist
 http_access allow whitelisted


To have sessions expire simply remove them from the database table.
Squid will start rejecting traffic there within 60 seconds of the removal.
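
(For example, a cron job could periodically run something like this against
that table; the 10-minute window and the MySQL date syntax are assumptions:)

 DELETE FROM sessions WHERE enabled < NOW() - INTERVAL 10 MINUTE;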

HTH
Amos


Re: [squid-users] redirect based on url (302)

2018-09-24 Thread uppsalanet
Hi Amos,
Today I have a conf like this:

acl *LIB_domains* dstdomain .almedalsbiblioteket.se .alvin-portal.org
.bibliotekuppsala.se
http_access allow *LIB_domains*


Now I also need to open for *.browzine.com*. The problem with
*.browzine.com* is that it is a portal with many links to other sites. So I
basically need to open up and maintain 400 sites in a squid ACL.

I would like to take another approach then (but I don't know if it's
possible):
I know that browzine.com will reply 302 when trying to access a link on
their site. *So I would like to accept all redirect (302) sites from
browzine.com*. 

Hope that clarifies, and thanks in advance
Fredrik





Re: [squid-users] redirect based on url (302)

2018-09-21 Thread Amos Jeffries
On 22/09/18 2:43 AM, uppsalanet wrote:
> Hi,
> We use squid to limit web traffic to a few internal sites; the computers are
> in public areas. That works well. Now I have a new case:
> 
> If a user goes to the page "https://browzine.com" and chooses to view a
> magazine they get redirected (302) to another site. I would like to open for
> that redirect if it's "https://browzine.com" (api.thirdiron.com) that does
> the redirect.


Can you explain that differently please?


Amos