Hi all,
I am a newbie to Squid and I am wondering how to do the following: I tried the FAQ
and the user mailing list archives on slow response and did not solve my issue.
I have proxy code [openssl] and I need to integrate Squid such that all HTTP
requests are forwarded to Squid through our proxy
On mån, 2008-07-07 at 09:39 +1000, Mark Nottingham wrote:
FWIW, I've tested it, and have been using it in production on a fair
number of boxes for a little while; so far so good. Like H says, the
main thing lacking is Expect/Continue support.
Expect is there in the minimal conforming mode
On mån, 2008-07-07 at 11:37 +0800, Roy M. wrote:
1. Since memory is now 100% used, how do I know if there is a cache
miss in mem (48.3%), and
how many % of them will trigger an LRU eviction in the memory cache?
What do you mean by trigger a LRU? That Squid removes the LRU object to
make room for new content?
If
On mån, 2008-07-07 at 10:05 +0530, Geetha_Priya wrote:
This is regarding the posting: Request header contains NULL characters.
http://www.mail-archive.com/squid-users@squid-cache.org/msg16754.html
I see that back in 2004 the Mozilla browser gave this error. But have there
been any improvements to this? I
On sön, 2008-07-06 at 22:05 -0700, Shaine wrote:
Following is my script.
#!/usr/bin/perl
# no buffered output, auto flush
use strict;
use warnings;
my ($temp, $array, @array, $param_1, $param_2, $param_3, $new_uri);
$|=1;
$temp = <STDIN>;
while (<STDIN>) {
@array = split(/
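For comparison, the loop above can be sketched in Python (illustrative only; the rewrite rule and host names here are hypothetical, not from the original script). Squid's url_rewrite helper protocol feeds one request per line on stdin and expects the rewritten URL, or an empty line meaning "no change", on stdout:

```python
import sys

def handle_line(line):
    # Squid 2.x url_rewrite helper input format: URL ip/fqdn ident method
    parts = line.split()
    if not parts:
        return ""  # empty reply tells Squid to leave the URL unchanged
    url = parts[0]
    # Placeholder rule (hypothetical hosts): send one host to a mirror.
    return url.replace("http://example.com/", "http://mirror.example.com/")

if __name__ == "__main__":
    # Equivalent of $| = 1 in Perl: flush after every reply so Squid
    # is never left waiting on a buffered answer.
    for line in sys.stdin:
        print(handle_line(line), flush=True)
```

The per-line flush matters: with buffered output the helper appears hung and Squid stalls requests behind it.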
On mån, 2008-07-07 at 12:20 +0530, Geetha_Priya wrote:
I have proxy code [openssl] and I need to integrate Squid such that
all HTTP requests are forwarded to Squid through our proxy only. The
purpose of the proxy is more for controlling HTTP access at this point.
Ok.
Client -> proxy -> Squid
Thanks for your reply. I agree the empty request is not a concern and is not a
part of Squid.
The issue is:
accessing websites through the proxy and Squid gives the following response,
along with being slow:
1. No graphical images are obtained for a requested page; I don't see
subsequent requests for
On mån, 2008-07-07 at 13:13 +0900, KwangYul Seo wrote:
Hi,
Is it possible to use squid with
ziproxy(http://ziproxy.sourceforge.net/)?
Should work, assuming ziproxy does things correctly and does not mess up
on ETag..
If so, what is the usual configuration?
Squid using ziproxy as a
Hi,
thanks for the hint, I added
http_port 127.0.0.1:3128
to my config. Now I can access port 3128 with telnet or squidclient, but
receive an access denied:
/var/log/squid/access.log:
127.0.0.1 - - [07/Jul/2008:10:16:43 +0200] GET
cache_object://localhost/info HTTP/1.0 403 1430 - -
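One common cause of a 403 on cache_object requests is ACL ordering in squid.conf: the manager ACL must be allowed for localhost before the generic deny. The stock Squid 2.x rules look roughly like the following (shown as a reference sketch, not necessarily this installation's fix):

```
acl manager proto cache_object
acl localhost src 127.0.0.1/32
http_access allow manager localhost
http_access deny manager
```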
On mån, 2008-07-07 at 13:01 +0530, Geetha_Priya wrote:
Accessing websites through the proxy and Squid gives the following response,
along with being slow:
1. No graphical images are obtained for a requested page; I don't see
subsequent requests for obtaining images through Squid. It gets the main page
Thank you Henrik. Yes, that script is very simple and it is working now. But
I have another requirement: to capture the client IP, which comes via the URL.
I am a bit confused at this point because I had a different idea. So now can
you direct me to how to capture the client IP with that Perl script which you have
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Monday, July 07, 2008 1:55 PM
To: Geetha_Priya
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Integrating squid with OpenSSL:very slow response
On mån, 2008-07-07 at 13:01 +0530, Geetha_Priya wrote:
Hi,
I have set up a transparent intercepting proxy (Squid 2.6 branch) on
RedHat EL5. It has 2 NICs. One is connected to the router. The other is
connected to the LAN. The clients' gateway is the LAN IP address of the proxy
server. Clients have 2 DNS entries. It works fine. If I remove the DNS
entries of the clients' PCs, it
no, it´s now possible without dns ... browser need to resolve address
to ip to start connections
On Mon, Jul 7, 2008 at 6:19 AM, Indunil Jayasooriya [EMAIL PROTECTED] wrote:
Hi,
I have setup transparent intercepting proxy (squid 2.6 branch) in
RedHat EL5. It has 2 NICs. One is connected to
On Mon, Jul 7, 2008 at 3:19 PM, Alexandre Correa
[EMAIL PROTECTED] wrote:
no, it´s now possible without dns ... browser need to resolve address
to ip to start connections
Thanks for your quick response. How can I achieve it?
All clients use IE and Firefox.
Hope to hear from you.
--
Thank
Well, I based my argument on the 10 instances of reverse proxies
I'm running. They have 266,268,230 objects and 3.7 TB of space. CPU
usage is always around 0.2 according to Ganglia. So unless you have
some other statistics to prove CPU is that important, I'm sticking with my
argument that disk
Hi!
On Monday 07 July 2008, Indunil Jayasooriya wrote:
no, it´s now possible without dns ... browser need to resolve address
to ip to start connections
There is a typo! The word should be "not", not "now"!
The clients - no matter what they are - need to resolve the DNS name to an IP
address to
On fre, 2008-07-04 at 10:30 +0200, Henrik Nordstrom wrote:
On tor, 2008-07-03 at 12:39 +0200, [EMAIL PROTECTED] wrote:
Hi,
I also had problems with umlauts. We use our Lotus Domino server as the LDAP
server, and since an update from version 6.5 to 8, our users have been unable to
authenticate
On mån, 2008-07-07 at 10:19 +0200, David Obando wrote:
Hi,
thanks for the hint, I added
http_port 127.0.0.1:3128
to my config. Now I can access port 3128 with telnet or squidclient, but
receive an access denied:
/var/log/squid/access.log:
127.0.0.1 - - [07/Jul/2008:10:16:43 +0200]
On mån, 2008-07-07 at 14:48 +0530, Geetha_Priya wrote:
Yes, we use OpenSSL libraries and created a proxy server that supports
persistent connections. Earlier we had wcol as an HTTP prefetcher, but since we
had problems with long URLs and limited capabilities, we decided to move
to Squid. Now we are
On mån, 2008-07-07 at 10:03 +, Shain Lee wrote:
Thank you Henrik. Yes, that script is very simple and it is
working now. But I have another requirement: to capture the client IP, which
comes via the URL. I am a bit confused at this point because I had a
different idea. So now can you direct me to how to
On mån, 2008-07-07 at 15:27 +0530, Indunil Jayasooriya wrote:
no, it´s now possible without dns ... browser need to resolve address
to ip to start connections
Thanks for your quick response. How can I achieve it?
Only by configuring the clients to use the proxy.
Regards
Henrik
Hi,
I found out, I had to configure an acl in squidGuard.conf:
dbhome /var/lib/squidguard/db
logdir /var/log/squid
#
# DESTINATION CLASSES:
#
src local {
ip 127.0.0.1
}
dest good {
}
dest local {
}
acl {
local {
pass all
}
default {
On mån, 2008-07-07 at 07:34 -0300, Michel wrote:
ok, a reverse proxy does not do so very much, so sure, it depends on what you
do with the
machine
The known configurations which can easily push Squid to CPU-bound limits
are
a) reverse proxy setups with a reasonably small but very frequently
In other words please file a bug report at http://bugs.squid-cache.org/
I filed Bug 2403.
As advised, I turned via back on and it fixed the problem.
Thx a lot Henrik,
JD
I did it the same way, but the client IP doesn't come in the second position;
it's in the third.
my ($url, $x, $ip) = split(/ /);
But the Squid guide says it should be the second element. Why this
confusion? URL ip-address/fqdn ident method.
If that third position is not constant, everything will go wrong. I
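For reference, with the documented helper input line `URL ip-address/fqdn ident method`, a plain whitespace split puts the client IP inside the second field, packed together with the fqdn, rather than in a field of its own. A sketch of the parsing (Python here rather than Perl, purely for illustration):

```python
def client_ip(helper_line):
    # Squid 2.x url_rewrite helper input: URL ip-address/fqdn ident method
    fields = helper_line.split()
    url, ip_fqdn, ident, method = fields[0], fields[1], fields[2], fields[3]
    # The second field packs "ip/fqdn"; the IP is the part before the slash.
    return ip_fqdn.split("/", 1)[0]
```

With this split the Perl equivalent would be `my ($url, $ip_fqdn) = split(/ /);` followed by stripping everything after the `/`.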
I have successfully implemented a reverse proxy for my HTTP site. My
question is whether or not there is an option so that it accepts on
the basis of the domain: basically, instead of having www.example.com, just
have example.com and it will serve any of the subdomains if they exist.
Oh, you are using a URL rewriter..
I would do it differently:
url_rewrite_access deny manager
This way you can still use squidclient on your published URLs and have
Squid react as expected on them, including URL rewrites...
On mån, 2008-07-07 at 14:25 +0200, David Obando wrote:
Hi,
I
Greetings!
I try to authenticate using digest_ldap, and I get this error on
/var/log/squid3/cache.log:
2008/07/07 09:25:36| helperHandleRead: unexpected read from
digestauthenticator #1, 32 bytes '6e0856007bdf46e7c908985ea25f'
2008/07/07 09:25:36| helperHandleRead: unexpected read from
Hi,
On 7/7/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:
What do you mean by trigger a LRU? That Squid removes the LRU object to
make room for new content?
Yes, since it is very easy to fill 100% of the memory cache.
Sometimes we might want to know if LRU eviction occurs in memory too
frequently in a
On mån, 2008-07-07 at 07:50 -0500, Thomas E. Maleshafske wrote:
I have successfully implemented a reverse proxy for my HTTP site. My
question is whether or not there is an option so that it accepts on
the basis of the domain: basically, instead of having www.example.com, just
have
On tis, 2008-07-08 at 00:04 +0800, Roy M. wrote:
Sometimes we might want to know if LRU eviction occurs in memory too
frequently on a production server; then we might consider adding more memory,
or adjusting the max. memory object size to reduce LRU evictions for better
performance.
It should happen as frequently as
Hi,
On 7/8/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:
It should happen as frequently as you have new content entering the cache.
I would think the LRU age is more interesting, telling how long the
oldest object has stayed in the cache...
Sure, but sometimes it would be interesting to see
On tis, 2008-07-08 at 00:47 +0800, Roy M. wrote:
Sure, but sometimes it would be interesting to see whether, by adjusting the
max. memory size, you could reduce or increase the LRU evictions per
second. (Of course, I don't have real knowledge of whether LRU is costly in
terms of CPU cycles.)
It will be the same
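As a generic illustration of what "triggering LRU" means (this is a toy model, not Squid's actual replacement code), an LRU cache simply evicts the least recently touched object when a new one arrives and the cache is full; counting those evictions is the rate being discussed:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least-recently-used key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.evictions = 0  # eviction counter, akin to watching the LRU rate

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry
            self.evictions += 1
        self.data[key] = value
```

In this model, Henrik's point holds: every new object entering a full cache causes exactly one eviction, so the eviction rate tracks the rate of new content, not a separate cost knob.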
It's Trend Micro's way of telling the ICAP server (IWSS) that the ICAP
client (the proxy) is capable of forwarding the response from the ICAP
server before the entire object has been sent to the ICAP server.
Most others assume this by default without requiring the private X-TE:
trailers header.
Hi again...
I don't know what happened (if I changed something
or if I wrongly thought it was fixed), but the siblings are not
talking anymore... at all. :(
No error message, no denied...
So let's start from the beginning...
configure --prefix=$PREFIX \
--enable-time-hack \
Henrik Nordstrom wrote:
On mån, 2008-07-07 at 07:50 -0500, Thomas E. Maleshafske wrote:
I have successfully implemented a reverse proxy for my HTTP site. My
question is whether or not there is an option so that it accepts on
the basis of the domain: basically, instead of having
Hi All,
I have a machine here that is running 3.0.STABLE4 and I wish to upgrade
it to STABLE7. I compiled and installed STABLE4 with no problems.
However while attempting to compile the latest release I am getting lots
of errors during the configure script which are repeatedly saying to
On mån, 2008-07-07 at 13:43 -0500, Thomas E. Maleshafske wrote:
I have the vhost directive defined but still have to list each
separate subdomain.
List where?
In squid.conf, or in your web server?
I might have found a solution using Pound but
haven't tested it yet; I still have
Hi,
At 21.26 07/07/2008, Frog wrote:
Hi All,
I have a machine here that is running 3.0.STABLE4 and I wish to upgrade
it to STABLE7. I compiled and installed STABLE4 with no problems.
However while attempting to compile the latest release I am getting lots
of errors during the configure script
Henrik Nordstrom wrote:
On mån, 2008-07-07 at 13:43 -0500, Thomas E. Maleshafske wrote:
I have the vhost directive defined but still have to list each
separate subdomain.
List where?
In squid.conf, or in your web server?
I might have found a solution using Pound but
Hi guys,
In the access.log Squid shows TCP_DENIED entries for some parts of websites.
I'm authenticating my users using NTLM, and all entries in access.log that
DENIED part of the site do not show the standard domain\username in the log,
only - -...
For example:
192.168.15.13 - contac\xtz0001
Alexandre augusto wrote:
Hi guys,
In the access.log Squid shows TCP_DENIED entries for some parts of websites.
I'm authenticating my users using NTLM, and all entries in access.log that
DENIED part of the site do not show the standard domain\username in the log,
only - -...
This is the
Leonardo Rodrigues Magalhães wrote:
Alexandre augusto wrote:
Hi guys,
In the access.log Squid shows TCP_DENIED entries for some parts of
websites.
I'm authenticating my users using NTLM, and all entries in access.log
that DENIED part of the site do not show the standard domain\username in
Hi Leonardo,
The problem is that the website only shows me part of the website's information.
The pictures (in most cases Flash) are denied.
Do you have any idea?
Thank you
Alexandre
--- On Mon, 7/7/08, Leonardo Rodrigues Magalhães [EMAIL PROTECTED] wrote:
From: Leonardo Rodrigues Magalhães
Alexandre augusto wrote:
Hi Leonardo,
The problem is that the website only shows me part of the website's information.
The pictures (in most cases Flash) are denied.
Do you have any idea?
Sure!!! First idea: look for the 403 DENIED entries and not the 407 ones. Those
407 ones are part of the NTLM
On mån, 2008-07-07 at 15:10 -0500, Thomas E. Maleshafske wrote:
IN squid.conf
It's not needed to list the sites in squid.conf unless you need to send
different sites to different backend web servers.
If you have only one web server (or cluster) then just cache_peer is
sufficient without
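A minimal sketch of that single-backend accelerator setup (the backend address and peer name here are hypothetical placeholders):

```
http_port 80 accel vhost
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=backend
cache_peer_access backend allow all
```

With vhost, Squid passes the Host header through, so one cache_peer covers every subdomain the backend itself can serve.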
On mån, 2008-07-07 at 20:26 +0100, Frog wrote:
Hi All,
I have a machine here that is running 3.0.STABLE4 and I wish to upgrade
it to STABLE7. I compiled and installed STABLE4 with no problems.
However while attempting to compile the latest release I am getting lots
of errors during the
Guido Serassio wrote:
Hi,
At 21.26 07/07/2008, Frog wrote:
Hi All,
I have a machine here that is running 3.0.STABLE4 and I wish to upgrade
it to STABLE7. I compiled and installed STABLE4 with no problems.
However while attempting to compile the latest release I am getting lots
of errors
On tis, 2008-07-08 at 00:31 +0200, Guido Serassio wrote:
So the patch should be applied to Squid3 STABLE; the failure during
build is not correct :-)
I am thinking we probably should stop using the getopt build
environments by default.
I.e. the result which is now (from tonight) seen if
Guido Serassio wrote:
Hi,
So the patch should be applied to Squid3 STABLE; the failure during build
is not correct :-)
Please check with 'file' whether your binary is 32-bit or 64-bit; I'm suspecting
that it's a 32-bit binary.
Regards
Guido
Hi Guido,
Indeed the file is a 32-bit binary, which I
Henrik Nordstrom wrote:
On mån, 2008-07-07 at 15:10 -0500, Thomas E. Maleshafske wrote:
IN squid.conf
It's not needed to list the sites in squid.conf unless you need to send
different sites to different backend web servers.
If you have only one web server (or cluster) then just
Hello,
I am new to Squid and I'd like to ask a question about its internal
workings when operating as a transparent proxy.
I saw that one configures the host kernel with an iptables rule in the
nat table with the REDIRECT target to match packets destined to some
port (e.g. 80) and redirect
On mån, 2008-07-07 at 18:05 -0500, Thomas E. Maleshafske wrote:
I managed to figure it out on a hunch.
http_port 80 accel vhost
forwarded_for on
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0
On mån, 2008-07-07 at 19:46 -0400, Peter Djalaliev wrote:
(e.g. 3128). From what I understand, when iptables matches a packet
against this rule, it overwrites the packet's destination IP address and
TCP port with, respectively, the local IP address and 3128.
How does Squid (e.g. in the case
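On Linux, the mechanism behind that question is the netfilter SO_ORIGINAL_DST socket option: after accepting a REDIRECTed connection, the proxy asks the kernel for the pre-NAT destination address. A sketch of the mechanism in Python (illustrative, not Squid's actual code; the option value comes from linux/netfilter_ipv4.h):

```python
import socket
import struct

SO_ORIGINAL_DST = 80  # from linux/netfilter_ipv4.h

def original_destination(sock):
    """Ask netfilter for the pre-REDIRECT destination of an accepted
    connection (roughly what an intercepting proxy does internally)."""
    raw = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)

def parse_sockaddr_in(raw):
    # struct sockaddr_in layout: u16 family, u16 port (network byte
    # order), u32 address (network byte order), 8 bytes of padding.
    port = struct.unpack_from("!H", raw, 2)[0]
    addr = socket.inet_ntoa(raw[4:8])
    return addr, port
```

The kernel keeps the original destination in its connection-tracking table, which is why the proxy can recover it even though the packet's headers were rewritten before delivery.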
On mån, 2008-07-07 at 05:49 -0700, Shaine wrote:
I did it the same way, but the client IP doesn't come in the second position;
it's in the third.
It's the second..
http://www.squid-cache.org/ 127.0.0.1/localhost.localdomain - GET -
myip=127.0.0.1 myport=3128
unless you have enabled
Henrik Nordstrom wrote:
On mån, 2008-07-07 at 18:05 -0500, Thomas E. Maleshafske wrote:
I managed to figure it out on a hunch.
http_port 80 accel vhost
forwarded_for on
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern