Is there any way to map a single URL to multiple store URLs based on a cookie?
Let's say I have a user cookie and I want to implement caching for logged-in
users.
Is there any way, in Squid, to append the cookie to the cached URL (in Squid,
not in the client-side URL)?
I've looked at doing
Hello,
When an object stays in the cache for some time and then expires, will Squid
delete the object at that moment?
Thanks.
Hello again,
I watched cache.log and found these info:
2009/06/26 14:04:36| TCP connection to 192.168.1.101/80 failed
2009/06/26 14:05:07| TCP connection to 192.168.1.101/80 failed
2009/06/26 14:05:18| TCP connection to 192.168.1.101/80 failed
2009/06/26 14:05:48| TCP connection to
Amos wrote:
Why does it take so long?
Because it's 10 times the request timeout.
Dear Amos,
which directive in squid.conf should I use to decrease this timeout
value?
Thanks.
Dear Gurus,
I have configured squid-3.0.stable15 with these rules for an
accelerator:
cache_peer 192.168.1.100 parent 80 0 no-query front-end-https=auto
originserver name=portsw_1_1 round-robin
cache_peer 192.168.1.101 parent 80 0 no-query front-end-https=auto
originserver name=portsw_1_2
request_body_max_size 0 KB
Does that mean no maximum size, or something else?
It means the size is not limited. See squid.conf.
Dear All,
Can you please assist me in configuring Squid as a reverse proxy? Where should
I start, or what documents should I read?
http://wiki.squid-cache.org/SquidFaq/ReverseProxy
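The FAQ page covers the details; as a starting point, a minimal accelerator setup might look like the sketch below. The domain and origin IP are placeholders, not from this thread, and the directive names follow the Squid 2.6/3.x accelerator syntax:

```
# Minimal reverse-proxy (accelerator) sketch -- example names only
http_port 80 accel defaultsite=www.example.com

# forward requests to the origin web server
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=myAccel

# only let requests for our own site through the accelerator
acl our_sites dstdomain www.example.com
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
```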
Hi ,
I have an interesting problem while logging in to a web site. I am
using Squid 2.6 with OpenBSD 4.3-stable. The web site opens without any
problem, but when I enter the username and password it waits for some
seconds and then I get a timeout error from the remote server. I am looking at
Squid
Can uploads be limited from Squid?
Limited in size, or something else?
Squid supports the POST method well.
Hi All,
Are there any links on how to configure load balancing with Squid?
See the default squid.conf. :)
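Concretely, load balancing across origin servers is usually done with multiple cache_peer lines; a minimal round-robin sketch (the IPs are examples only):

```
# two origin servers, picked in turn via round-robin
cache_peer 192.0.2.1 parent 80 0 no-query originserver round-robin name=srv1
cache_peer 192.0.2.2 parent 80 0 no-query originserver round-robin name=srv2
```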
Hi,
I have two image servers behind a Squid.
My issue is that my image servers are not sending any Expires headers, but
I would like to attach one from Squid,
so that by the time the image reaches the browser it has an Expires header
in it.
if there is neither Expires nor max-age
Hi there.
I'm planning to build a new dedicated Squid box, with amd64 and 4 GB
of RAM, with two cache_dirs on two separate hard disks and Squid-3
doing application-level striping, all servicing around 6k users. Will two
recent 7200 rpm IDE disks suffice, or am I better off getting two
Squid is adding the max-age header but not the Expires header, so it
caches
them.
Are you sure? I remember Squid adds an Age header, not a max-age header,
but maybe I'm wrong.
Yes, you are right, it's the Age header... :)
But I did some tests and it does cache them...
That's because the images have a Last-Modified header; Squid calculates
freshness based on that.
You can't force Squid to insert max-age or Expires headers in the
response.
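What you can control on the Squid side is how long such responses are considered fresh, via refresh_pattern; this does not add any header to the response, it only changes Squid's own freshness calculation. An illustrative sketch (the pattern and numbers are made up):

```
# treat images with no explicit expiry info as fresh for up to a day:
# min 0 minutes, 50% of the object's age since Last-Modified, max 1440 minutes
refresh_pattern -i \.(gif|jpe?g|png)$ 0 50% 1440
```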
I saw this in squid.conf:
# TAG: read_timeout time-units
# The read_timeout is applied on server-side connections. After
# each successful read(), the timeout will be extended by this
# amount. If no data is read again after this amount of time,
# the request is
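The truncated comment above describes the server-side read timeout. To answer the earlier question about shrinking such timeouts, values like these could be set in squid.conf; the numbers are only examples, so check the defaults documented in your version's squid.conf:

```
# give up on an unresponsive server-side connection sooner
read_timeout 2 minutes
# fail TCP connects to dead peers faster
connect_timeout 30 seconds
```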
Hello,
I want to choose a reverse proxy for one of our dynamic applications. That
application is powered by mod_perl, running on a host under heavy load.
Is Squid suitable for this purely dynamic application? Will it improve
performance?
Thanks.
Has anyone gathered Squid's best performance data on a server box with common
hardware (e.g. a Dell 1950)? The data would include:
1) concurrent connections;
2) flow capacity (throughput);
3) TPS (HTTP transactions per second).
Thanks.
bits or bytes? Thanks.
Ken.
--- On Thu, 11/27/08, Adrian Chadd [EMAIL PROTECTED] wrote:
From: Adrian Chadd [EMAIL PROTECTED]
Subject: Re: [squid-users] improve flow capacity for Squid
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Date: Thursday, November 27, 2008, 11:09 PM
Is that per-flow, or in total?
epoll/select/poll, not the
threads/multi-processes. Thanks.
Ken
--- On Sat, 11/29/08, Adrian Chadd [EMAIL PROTECTED] wrote:
From: Adrian Chadd [EMAIL PROTECTED]
Subject: Re: [squid-users] improve flow capacity for Squid
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Date: Saturday, November 29, 2008, 11:23 AM
Heh. The best way under unix is a
We have some web servers for video playback (the FLV format, like YouTube).
Could we deploy Squid to act as a reverse proxy for this application?
What is the recommended configuration for Squid? Thanks.
Ken.
Hello,
I have found that the flow capacity (throughput) of Squid is too limited.
It's hard even to reach an upper limit of 150 Mbit/s.
How can I improve the flow capacity of Squid in reverse-proxy mode?
Thanks in advance.
Ken
Hello,
My origin server includes Expires headers in its responses.
When an object cached on Squid has expired, does Squid revalidate it with the
origin server on every subsequent request for that object?
If so, does this hurt Squid's performance?
Thanks for any helps.
Hello,
Does Squid support FTP connections?
Will Squid cache FTP objects the same way it caches HTTP ones?
Is there a sample config for FTP?
Thanks in advance.
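For the record, Squid fetches ftp:// URLs itself (acting as an FTP client) and caches the results like other objects; there is no separate FTP config file. A couple of FTP-related directives, with example values:

```
# password Squid presents for anonymous FTP logins (example address)
ftp_user Squid@example.com
# prefer passive-mode data connections (friendlier to firewalls)
ftp_passive on
```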
Thanks. I have two 1000 Mbit cards.
Does it support all web applications, like video, web IM, etc.?
2008/6/16 Indunil Jayasooriya [EMAIL PROTECTED]:
I will run Squid on a Linux OS, in transparent mode.
Should I use iptables to do the HTTP interception?
What is the iptables syntax? Please help, thank you.
Hello,
I now plan to configure a Squid box for my office for web browsing.
I will run Squid on a Linux OS, in transparent mode.
Should I use iptables to do the HTTP interception?
What is the iptables syntax? Please help, thank you.
Regards,
Ken W.
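A commonly used iptables rule for this is a REDIRECT in the nat table, assuming Squid listens on port 3128 on the same box and eth0 is the LAN-facing interface (both are assumptions here):

```
# redirect LAN web traffic arriving on eth0 to the local Squid port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```

In squid.conf this pairs with a listening port marked for interception, e.g. `http_port 3128 transparent`.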
Thank you.
2008/6/13 Ken W. [EMAIL PROTECTED]:
Thanks Henrik.
Can the source-hash algorithm in cache_peer handle this case,
since source-hash redirects based on users' IP addresses?
Thanks again.
2008/6/12 Henrik Nordstrom [EMAIL PROTECTED]:
On tor, 2008-06-12 at 18:13 +0800, Ken W. wrote:
I
Hello,
I have two origin servers behind Squid boxes.
How can I keep requests from the same client always going to the same
origin server?
Because our web applications have to use sessions, assigning
requests to origin servers at random is not acceptable.
Thanks.
--Ken
? The two lines have the same values for
'name='; is that right?
Thank you.
--Ken
2008/6/10 Ben Hollingsworth [EMAIL PROTECTED]:
In my testing, I found that the names had to be slightly different. For
instance:
cache_peer INTERNALIP1 parent 80 0 no-query originserver login=PASS
name=INTERNALNAME1-peer sourcehash
cache_peer INTERNALIP2 parent 80 0 no-query originserver
Hello,
When running squidclient mgr:info, there is an item called Request
failure ratio.
What does this mean? Does it include requests that were rejected
by ACL rules?
Thanks.
2008/6/7 Henrik Nordstrom [EMAIL PROTECTED]:
Thanks Henrik.
With my settings, can Squid work correctly for this flow?
clients --https-- squid --http-- webserver
webserver --http-- squid --https-- clients
Again, yes, provided your web server application has support for being
used
'
'--with-filedescriptors=51200' '--enable-ssl'
I'm running it under Red Hat Linux AS5.
Please help, thanks.
--Ken
2008/6/7 Henrik Nordstrom [EMAIL PROTECTED]:
On lör, 2008-06-07 at 09:58 +0800, Ken W. wrote:
2008/6/7 Henrik Nordstrom [EMAIL PROTECTED]:
But you are quite likely to run
(I'm sorry, this is my third message to the list with the same question,
because the previous two messages, sent from Yahoo, got lost...)
Hello members,
I want to set up Squid so that it accepts HTTPS from clients, then forwards
the requests to the origin server using the HTTP protocol.
This is the setting I considered:
https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key
protocol=http
cache_peer 10.0.0.1 parent 80 0
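Filled out a little, the idea might look like the following sketch; the certificate paths and origin IP come from the message, while the peer name and the front-end-https option are additions worth checking against your version's documentation:

```
# terminate SSL from clients, serve the result of plain-HTTP fetches
https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key

# forward decrypted requests to the origin server over HTTP;
# front-end-https tells the backend it sits behind an SSL frontend
cache_peer 10.0.0.1 parent 80 0 no-query originserver name=origin front-end-https=on
```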
2008/6/7 Henrik Nordstrom [EMAIL PROTECTED]:
But you are quite likely to run into issues with the server sending out
http:// URLs in its responses unless the server has support for running
behind an SSL frontend. See for example the front-end-https cache_peer
option.
Thanks Henrik.
Under
Dear Sir,
I use the delay pools option to limit bandwidth.
All Internet traffic in my network goes through this server.
I want to know whether it can limit all of that
traffic (for example MSN, ICQ, BitTorrent, etc.), and how can
I tell whether it is working?
Thank you very much.
Best Regards,
Ken Lei
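A caveat worth stating: delay pools only shape traffic that actually flows through Squid as HTTP requests, so native MSN/ICQ/BitTorrent connections that bypass the proxy are untouched; pool state can be inspected via the cache manager. A minimal class-1 (single aggregate bucket) pool, with made-up numbers:

```
# one shared bucket for the whole LAN: refill 64 KB/s, burst 256 KB
acl lan src 192.168.0.0/24
delay_pools 1
delay_class 1 1
delay_parameters 1 65536/262144
delay_access 1 allow lan
delay_access 1 deny all
```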
to set
the --enable-async-io parameter when I compile?
Thank you very much!!!
Ken Lei
rather than send an error page.
Ken
I have seen this question asked before, but I have
been unable to find the answer.
Using squid-2.5.STABLE9 as a reverse proxy, I try to
defend my server against assorted nasties using
lots of 'src' and 'browser' ACLs.
But in access.log, when a 403
is logged there is no indication of which rule was invoked to
deny access.
What, if any, solutions exist?
TIA,
Ken
mydomain2.com. The hostname will be
mydomain2.myisp.com. I have been informed by the ISP
that requests for mydomain2 will be forwarded to port
3128 through some magic outside my jailhost.
Do I need a second Squid? What is the recommended way
of doing this?
Thanks,
Ken
Henrik,
HTTP does not allow proxies to retry POST requests, and requires proxies
to
retry failed GET requests.
Thanks for your info. I should have looked into the protocol myself...
shame on me!
Regards,
Ken Sugawara [EMAIL PROTECTED]
Linux @ IBM http://www-6.ibm.com/jp/linux/
---|
| |---GET---| 1st retry
| |---FIN---|
| |---GET---| 2nd retry
| |---FIN---|
|---503---| |
|---POST--| |
| |---POST--|
| |---FIN---|
|---503---| |
Is this distinction by design?
Thanks and regards,
Ken
fresh files and never allow a CLIENT_REFRESH_MISS
for any files called 'rss.xml'.
I would appreciate any help!
Thanks, Ken
--- Henrik Nordstrom [EMAIL PROTECTED] wrote:
On Thu, 30 Dec 2004, Ken Ara wrote:
Could Zope somehow tell Squid to perform the
refresh?
Or could an acl be used
Thanks Kinkie!
Since I am setting an Expires header, I did not
think this would work.
According to _Squid: The Definitive Guide_,
The refresh_pattern rules apply only to responses
without an explicit expiration time.
Perhaps I should try anyway and see what happens...
Ken
--- Kinkie
-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT
5.1; .NET CLR 1.1.4322)\r\nHost:
www.xx-websystem.xx-intra.net\r\nContent-Length:
143\r\nProxy-Connection: Keep-Alive\r\nPragma: no-cache\r\nCookie:
SIDE-A=xx; LtpaToken=...
Here, the second entry records a 503 error.
Regards.
Ken
).
As for the information you requested, we are trying to obtain it.
Thanks and regards,
Ken Sugawara [EMAIL PROTECTED]
Linux @ IBM http://www.ibm.com/linux/
in a condition like the one described above, or are we
going to have to move away from using persistent connections between
Squid and the web server?
Regards,
Ken Sugawara [EMAIL PROTECTED]
Linux at IBM http://www.ibm.com/linux/
I am using Squid in accelerator mode to cache complex
pages generated by Zope.
I use a combination of Python and DTML to update
individual database records. This method ends with a
redirect to the publicly viewable page that has just
been modified; however, to view my changes I need to
force a
)?
Thanks,
Ken Ara
accelerator.
Here's the access log entry for it:
1093519569.133 75316 127.0.0.1 TCP_MISS/301 569 GET
http://192.168.1.1/mrtg - TIMEOUT_DIRECT/192.168.1.1
text/html
A thousand apologies if this message has already been posted!
Regards
Ken
copies of ephemeral content, that would be easier
on my database...
I tried this:
acl aggressive src xx.xxx.xx.xx
never_direct allow aggressive
but this only got them an error, since I have no neighbors or parents
configured. How should this be done?
TIA,
Ken
on how I can fix or further diagnose
this problem?
Regards,
Ken.
Rebuilt the swap.state file and all is well. Must have got corrupted
somehow.
BTW, the old swap.state file was 4317888 bytes in size; the new one is
119042256 bytes. Why the BIG increase in file size?
Cheers,
Ken.
? Can I assign different
delay pools to different authenticated users? Can I assign different delay pools
using a src ACL?
Thanks for any info.
Regards,
Ken.
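Yes to both: delay_access selects which pool a request joins, and it takes ordinary ACLs, including src and proxy_auth ones. A two-pool sketch with invented names and rates:

```
delay_pools 2
delay_class 1 1                    # pool 1: class 1 (single aggregate bucket)
delay_class 2 1                    # pool 2: class 1 as well
delay_parameters 1 32768/32768     # pool 1: ~32 KB/s
delay_parameters 2 131072/131072   # pool 2: ~128 KB/s
acl slow_users src 10.0.1.0/24
acl fast_users src 10.0.2.0/24
delay_access 1 allow slow_users
delay_access 1 deny all
delay_access 2 allow fast_users
delay_access 2 deny all
```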
validation was attempted'.
If I understand correctly from the log extracts below (access.log, store.log and
Zope's Z2.log), the object has been dropped and re-entered into the cache. Why? How
can I prevent this?
Thanks in advance!
Ken
--
202.202.202.202 - - [14/Jan/2004:01:47:45 +0100] GET
http
httpd_accel_uses_host_header on
---
I hope the above will provide some clue about what I am doing wrong!
TIA,
Ken
Thank you Henrik! We're caching!
milles mercis,
Ken
Have you read the Caching Tutorial for Web Authors and Webmasters?
url:http://www.mnot.net/cache_docs/
The Cacheability Engine url:http://www.mnot.net/cacheability/ is also
useful when looking into these kinds of problems.
that, I had a beer... maybe that's what you meant by 'fix'?
Ken wrote:
Thank you Henrik! We're caching!
milles mercis,
Ken
What was your particular fix? I think one of my proxies is doing the
same thing...
Have you read the Caching Tutorial for Web Authors and Webmasters
tutorial
There are some other things happening that I don't understand, but I'll save those for
later...
Can someone help me unravel this mystery?
TIA,
Ken
bypass_delay proxy_auth -i ABC\\username username username$
acl delayed_users proxy_auth -i ABC\\username2 username2 username2$
This doesn't work either.
In all cases it is delay pool 5 that gets applied.
Any ideas on what I could do to get to my intended outcome?
Regards,
Ken.
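One thing to check: Squid evaluates the pools in order, and a request joins the first pool whose delay_access rules allow it, so a pool whose rules allow everyone will capture all requests before later pools are considered. To exempt a group entirely, every pool must deny it. A generic sketch using the ACL names from the message above:

```
# a request joins the FIRST pool whose delay_access allows it;
# to leave a group unthrottled, deny it from every pool
delay_access 1 deny bypass_delay
delay_access 1 allow delayed_users
delay_access 1 deny all
```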
option
So do I need to compile some code to enable this option, as it
sounds?
Many thanks indeed, and if there is a free C compiler for Windows, could you
recommend one? (Otherwise I will have to find a Unix/Linux box.)
Many thx
Ken
Thanks Robert.
Would someone be able to supply me with the binary, as I am not a
developer? I am getting confused by all the compilers and sources I would
need.
Many thx indeed,
Ken
-Original Message-
From: Robert Collins [mailto:[EMAIL PROTECTED]
Sent: 21 September 2003 22:59
To: [EMAIL
Mozilla 1.4 claims to support NTLM authentication.
-Original Message-
From: Adam Aube [mailto:[EMAIL PROTECTED]
Sent: Friday, 11 July 2003 11:41
To: [EMAIL PROTECTED]
Subject: Re: [squid-users] Re: ntlm won't prompt
Please excuse my ignorance. Would passwords be passed in clear text
\
--with-winbind-auth-challenge
There doesn't seem to be a way to disable SWAT - so I just deleted it.
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Thursday, 3 July 2003 5:25 AM
To: Ken Thomson; [EMAIL PROTECTED]
Subject: Re: [squid-users] NTLM authentication with winbind
Hi Adam,
Yes, that was the problem. I didn't realise that the Red Hat RPMs had
not enabled --with-winbind-auth-challenge. I installed Samba from source
and it worked fine :-)
Cheers,
Ken.
-Original Message-
From: Adam Aube [mailto:[EMAIL PROTECTED]
Sent: Thursday, 3 July 2003 5:43 AM
to compile with more than 1024 file descriptors (I think it
is a ulimit thing).
Regards,
Ken.
PS. I tried going through the mailing list archive - but without a full text search it
just takes too long trying to guess subjects.
Apologies - forgot to mention running on Linux (Redhat 7.3).
-Original Message-
From: Ken Thomson
Sent: Monday, 30 June 2003 17:04
To: [EMAIL PROTECTED]
Subject: [squid-users] Large squid cache configuration
A long time ago I found a website that went through the process of setting up
a setup using NTLM authentication and SMB.
this log problem?
2) Is the problem with the client or with the squid setup?
3) Is there a way to fix it?
4) What is the winbind NTLM helper? How does it differ from SMB?
Thanks in advance to any help or discussion people can provide.
Regards,
Ken.