Hi Solomon,
Solomon Asare wrote:
Hi,
--- Manoj_Rajkarnikar [EMAIL PROTECTED] wrote:
Great job, Solomon. Many of us have been trying to
achieve something similar with YouTube and Google
videos; this will help a great deal. How big a
cache_dir do you keep for YouTube videos? It should be
quite big to be
Dear Tek,
Thanks for your intention to help.
I have updated Squid to 2.6.
Squid runs on a Fedora 6 server which has two LAN cards (web /
internal), and at present the sole purpose of this server is Internet
sharing, which it is doing perfectly. But Outlook ...?
IPTABLES is not running on the box
Hi,
--- Tek Bahadur Limbu [EMAIL PROTECTED] wrote:
Are you intending to run a single cache of 500 GB in
size or a couple of
proxy caches amounting to 500 GB in size?
It's because running a 500 GB cache on a single
machine is going to cause you problems along the way.
Such a large cache
On Mon, Oct 08, 2007, Solomon Asare wrote:
If for some reason your cache gets corrupted, it
might take a very long time to fix, and I am sure
that Squid's median response time will get higher.
Anyway, it's just my suggestion.
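For what it's worth, splitting a big cache along those lines is just a matter of multiple cache_dir lines in squid.conf; a sketch (the paths, sizes, and the aufs store type are illustrative, not from the original posts):

```
# Illustrative sketch: five ~100 GB stores instead of one 500 GB store.
# Sizes are in MB; 64 and 256 are the L1/L2 subdirectory counts.
cache_dir aufs /cache1 100000 64 256
cache_dir aufs /cache2 100000 64 256
cache_dir aufs /cache3 100000 64 256
cache_dir aufs /cache4 100000 64 256
cache_dir aufs /cache5 100000 64 256
```

Putting each directory on its own disk also spreads the I/O load and limits the damage if one store gets corrupted.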
Now that you have mentioned it I will do some
Hi Todd,
Todd Harris wrote:
Hi Sylvain -
I'm working to do the exact same thing that you are, although I'm
jumping right from 2.4 to 3.0PRE which has some nice features for
load-balancing.
I'm also interested in using ACLs over an external redirector. But I
see from your current
Hi,
--- Adrian Chadd [EMAIL PROTECTED] wrote:
That said, you're currently caching the videos in
the apache proxy and not in the
Squid proxy.
Adrian
Thanks for the info on large file sizes and memory
requirements. I store the objects in squid, not in
apache. The apache is a
On Mon, Oct 08, 2007, Solomon Asare wrote:
Thanks for the info on large file sizes and memory
requirements. I store the objects in Squid, not in
Apache. The Apache instance is a non-caching proxy.
Essentially, it rewrites the headers to clear the
no-store, expiration, etc. limitations. That's all.
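As an aside, newer Squids can do some of this header-overriding themselves via refresh_pattern options, though option availability varies by version (ignore-no-store, for instance, needs 2.7 or later); a hypothetical sketch for video objects:

```
# Hypothetical: force caching of .flv objects despite origin headers.
# override-expire and ignore-reload exist in 2.6; the other ignore-*
# options depend on the Squid version.
refresh_pattern -i \.flv$ 10080 90% 525600 override-expire ignore-reload
```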
Hi,
--- Adrian Chadd [EMAIL PROTECTED] wrote:
I'll have to look some more into it later on. Squid
could probably
be patched to do what you're using Apache for..
Adrian
That will be great, but what I don't understand is why
they don't want to make such static objects
cacheable.
On Mon, Oct 08, 2007, Solomon Asare wrote:
That will be great, but what I don't understand is why
they don't want to make such static objects
cacheable. Microsoft, Google, and those whose traffic
takes up most of the Internet's capacity should make
their content cacheable so as to help all
Hi all, just wanted to know if it's possible to create access groups for
different levels of access control by using NTLM authentication.
Abd-Ur-Razzaq Al-Haddad
IT Analyst
9 Queen
Hello H. Nordstrom,
I had already read that but unfortunately it didn't work. For some
reason, when I negate ICAP for some ACL, it bypasses cache_peer too.
Could a debug_options ALL,9 trace help us?
Regards,
Thiago Cruz
On 10/6/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
On fre, 2007-10-05 at 19:05 -0300,
Thiago Cruz wrote:
Hello H. Nordstrom,
I had already read that but unfortunately it didn't work. For some
reason, when I negate ICAP for some ACL, it bypasses cache_peer too.
Most weird. Would you mind posting the related config both negated and
non-negated for comparison?
Debug
all 9 could
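For comparison's sake, a negated ICAP bypass in Squid 3.0 style usually looks something like the following (the service name, port, and ACL are hypothetical, and the exact directives vary between Squid versions):

```
# Hypothetical: send all responses to the ICAP service except one domain.
icap_enable on
icap_service svc_resp respmod_precache 0 icap://127.0.0.1:1344/avscan
icap_class class_resp svc_resp
acl no_icap dstdomain .example.com
icap_access class_resp allow !no_icap
```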
On Mon, Oct 08, 2007, Abd-Ur-Razzaq Al-Haddad wrote:
Hi all, just wanted to know if it's possible to create access groups for
different levels of access control by using NTLM authentication.
Yes - there's a winbind group external ACL helper in the Squid distribution.
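A minimal sketch of that helper in squid.conf (the helper path and the group names are illustrative):

```
# Illustrative: map NTLM-authenticated users to access levels via AD groups.
external_acl_type nt_group ttl=300 %LOGIN /usr/lib/squid/wbinfo_group.pl
acl FullNet external nt_group InternetFull
acl LimitedNet external nt_group InternetLimited
acl workhours time MTWHF 08:00-17:00
http_access allow FullNet
http_access allow LimitedNet workhours
http_access deny all
```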
--
- Xenion -
Good morning!
One of my two instances of squid is failing today. It appears to start
normally, but then it will stop within a couple of seconds.
I am running 2.5.STABLE3.
Any guesses? I am at a loss...
-Steven E.
On Mon, Oct 08, 2007, Steven Engebretson wrote:
Good morning!
One of my two instances of squid is failing today. It appears to start
normally, but then it will stop within a couple of seconds.
I am running 2.5.STABLE3.
It's an old version of Squid! Run away! Run away! :)
Any guesses?
Hi,
I am running a cache cluster, and the controller is running NTLM
authentication. The problem comes in when the user (me) loads any SSL
site: it takes ages before it starts loading. Once it's loaded, if you
refresh or use it, it goes quickly; it's just that initial load that
takes very long. I
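For reference, NTLM auth in setups like this is typically wired to Samba's ntlm_auth helper, and a too-small helper pool is a common cause of stalls on the first (authenticating) request, so raising the child count is a cheap thing to try (the path and count below are illustrative):

```
# Illustrative: Samba's ntlm_auth helper with a larger helper pool.
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
```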
On Mon, 8 Oct 2007 12:17:08 +0530
Arun Shrimali [EMAIL PROTECTED] wrote:
Dear Tek,
Thanks for your intention to help.
I have updated Squid to 2.6.
Squid runs on a Fedora 6 server which has two LAN cards (web /
internal), and at present the sole purpose of this server is Internet
sharing,
Hi All,
Please, is there a native filter infrastructure for
Squid? I have seen such a patch on the net by Olaf
Titz. Has such a system been integrated into any of
the recent releases, e.g. Squid3?
Thanks,
solomon.
Of course not, here it is:
+++
http_port 8080
icp_port 0
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .
Hi All,
Please, should anyone want to try the scripts I posted, do
not use /tmp.
I irretrievably trashed my 300+ rules, which had been
built up over a week. I now have to crawl up all over
again. I wonder why I didn't see this coming. You may
use /var/log/squid/ instead.
Thanks,
solomon.
Hi All,
Please, is there a native filter infrastructure for
Squid? I have seen such a patch on the net by Olaf
Titz. Has such a system been integrated into any of
the recent releases, e.g. Squid3?
Depends on your definition of filter. It could mean ACLs or a re-writer.
Assuming this is a
Of course not, here it is:
Thank you. Everything looks normal to me.
What do you do to negate ICAP for some ACL?
Amos
+++
http_port 8080
icp_port 0
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:
On tor, 2007-10-04 at 18:51 +0200, Philipp Rusch wrote:
How would I define the correct ACL and/or http_access rule
to allow access to external hosts that are to be reached through an
https admin interface using port 8080?
I tried to add 8080 to the list of SSL_ports like
acl SSL_ports 443
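For reference, the stock squid.conf pattern extends the port ACLs and gates CONNECT on them; a sketch with 8080 added:

```
# Sketch: allow CONNECT (HTTPS tunnels) to 8080 as well as 443.
acl SSL_ports port 443 8080
acl Safe_ports port 80 443 8080
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
http_access deny !Safe_ports
```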
On fre, 2007-10-05 at 15:50 +0200, polloxx wrote:
http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication#head-1d6e24e071a1a5e65f112d9a96cdf1320684a8f2
For Samba 3.x, the winbind helpers which were shipped with Squid should
not be used (and won't work if you attempt to do so); instead
On tor, 2007-10-04 at 11:14 +0100, Robert French wrote:
That's true, it's just a normal HTTP and HTTPS proxy
I have managed a workaround by forcing connections to the problem site
through a different Squid proxy which works fine
I'm just a little confused on what could be causing the issue
Please file a bug report
http://www.squid-cache.org/bugs/
On fre, 2007-10-05 at 11:40 +0200, Ali resting wrote:
Hi Adrian,
I get the following stack traces. I had changed the refresh_pattern a while
back, so my current refresh_pattern is:
refresh_pattern ^ftp: 1440 20%
On mån, 2007-10-08 at 10:49 -0700, Solomon Asare wrote:
Hi All,
Please, is there a native filter infrastructure for
Squid? I have seen such a patch on the net by Olaf
Titz. Has such a system been integrated into any of
the recent releases, e.g. Squid3?
Kind of. Squid-3 supports ICAP which means
On tis, 2007-10-02 at 15:54 +0200, Reinhard Haller wrote:
urlgroup is not yet ported to 3.0pre6/7
Seems so.
Regards
Henrik
On Tue, 2007-10-09 at 12:21 +1300, Amos Jeffries wrote:
Hi All,
Please, is there a native filter infrastructure for
Squid? I have seen such a patch on the net by Olaf
Titz. Has such a system been integrated into any of
the recent releases, e.g. Squid3?
Depends on your definition of
Hello,
As best as I can explain it, many sites, typically newspaper or
media outlets, auto-refresh after a certain time. I presume that the
tag <meta http-equiv="Refresh" content="0300" />, which I've taken from a
particular site, is the entry that causes it to happen.
This means that users
Ah.. a Gotcha !!
Noted.
Thanks
Manoj
On Mon, 8 Oct 2007, Solomon Asare wrote:
Hi All,
Please, should anyone want to try the scripts I posted, do
not use /tmp.
I irretrievably trashed my 300+ rules, which had been
built up over a week. I now have to crawl up all over
again. I wonder why I didn't