Hi all
I was trying to configure the ssl-bump feature. I forgot to allow the
initial CONNECT (or the fake CONNECT, in case of intercepting proxy). This
led me to some strange results which I'd like to point out. I am using
CentOS 8 with squid 6.13 recompiled from the Fedora RPM.
First case
page? I just want to allow only
the internal IPs and cut everyone else off.
I've tried taking out the deny_info, but that sends the user and tool to a
squid error page which basically fails the test as well since it's on the same
site.
I've also tried doing a TCP_RESET instead, but for some
.4
and .5 as well.
How do I ensure that www.example.com/tst/map1/ and /tst/map2/ only go to .4 and
.5 while still being consistent with the domain handling you suggested?
Thanks.
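One hedged way to express that in squid.conf, assuming the origin servers are 10.0.0.4 and 10.0.0.5 and the paths are /tst/map1/ and /tst/map2/ (all names and addresses below are placeholders, not a confirmed config):

```
# ACLs for the site and the two paths
acl example_site dstdomain www.example.com
acl tst_maps urlpath_regex ^/tst/map(1|2)/

# The two origin servers, balanced round-robin
cache_peer 10.0.0.4 parent 80 0 no-query originserver round-robin name=map_a
cache_peer 10.0.0.5 parent 80 0 no-query originserver round-robin name=map_b

# Only requests matching both the domain and the paths may use these peers
cache_peer_access map_a allow example_site tst_maps
cache_peer_access map_a deny all
cache_peer_access map_b allow example_site tst_maps
cache_peer_access map_b deny all
```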
On Fri, Aug 30, 2019, at 11:41 AM, Alex Rousskov wrote:
> On 8/30/19 11:44 AM, cred...@eml.cc wrote:
Server: squid
Mime-Version: 1.0
Date: Wed, 27 Mar 2019 20:36:20 GMT
Content-Type: text/html
Content-Length: 5
X-Squid-Error: TCP_RESET 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from www.example.com
X-Cache-Lookup: NONE from www.example.com:80
Via: 1.0 www.example.com (squid
Slightly off topic but am I correct in thinking TLS supersedes SSL?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
> So, Squid is installed on an Ubuntu VM, which runs on your laptop?
Correct
> So, the phone is either - direct connection via mobile Internet access, or
> via Squid and your home Internet connection - no way for the phone to use the
> Internet connection without going via Squid?
Yes
Hi,
Re network diagram - Mish Mash / blended / spaghetti I think :p
Squid is installed on the Ubuntu virtual machine. Sorry, I forgot to draw that on.
The phone connects to mobile internet when out of the house, then reverts back
to going via squid proxy when my laptop wifi is turned
actual sites visited?
Can anyone point out any flaws or issues?
Thanks
I asked this some time ago and am bringing it up again to see if there are any
suggestions since we haven't been able to fix it.
We are using squid as reverse proxy and we have disabled SSLv3 :
https_port XXX.XXX.XXX.XXX:443 accel defaultsite=www.example.com vhost
cert=/etc/cert.pem key
Hello squid users,
I'm trying to understand a strange problem with requests to edge.apple.com,
which I think may be related to IPv6 DNS resolution.
To set the scene - we operate a large (1,000+) fleet of Squid 3.5.25 caches.
Each runs on a separate LAN, connected to the internet via another
We are using squid as reverse proxy and we have disabled SSLv3 :
https_port XXX.XXX.XXX.XXX:443 accel defaultsite=www.example.com vhost
cert=/etc/cert.pem key=/etc/privkey.pem
options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,CIPHER_SERVER_PREFERENCE
cipher=ECDHE-ECDSA ... dhparams=/etc
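For reference, a complete single-line version of the port directive being quoted might look like this (the address, certificate paths, cipher string and dhparams path are placeholders, not the poster's real values):

```
https_port 203.0.113.10:443 accel defaultsite=www.example.com vhost cert=/etc/cert.pem key=/etc/privkey.pem options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,CIPHER_SERVER_PREFERENCE cipher=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384 dhparams=/etc/dhparam.pem
```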
Hi, I have just installed squid on windows 10, open the port 3128 in the
firewall and configured FF to use the proxy on localhost:3128 for all the
requests, but every request ends with the following page
Hi Squid users,
I'm having some trouble understanding Squid's peer selection algorithms, in
a configuration where multiple cache_peer lines reference the same host.
The background to this is that we wish to present cache service using
multiple accounts at an upstream provider, with account
ting the same issue a few times a day. I suspect it's
mainly due to clients accessing Windows Updates, but difficult to tell.
I am automatically restarting squid, but the delays for other users
while all this is happening can generate a poor browsing experience.
Thanks
Mark
Thanks Garry and Amos! My problem is solved.
Can anyone point out what I'm doing wrong in my config?
Squid config:
https://bpaste.net/show/796dda70860d
I'm trying to use ACLs to direct incoming traffic on assigned ports to
assigned outgoing addresses. But, squid uses the first IP address
assigned to the interface not listed in the config
request_header_replace User Agent "Firefox x" ipv4-1
tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-1
acl ipv4-2 myportname 3129 src xxx.xxx.xxx.xxx/24
http_access allow ipv4-2
request_header_replace User Agent "Internet Explorer x" ipv4-2
tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-2
Thanks!
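The port-to-address mapping being attempted could be sketched in plain squid.conf roughly like this (addresses are placeholders; note that myportname matches the name= label of an http_port line, so naming the ports explicitly avoids ambiguity):

```
http_port 3128 name=3128
http_port 3129 name=3129

acl ipv4-1 myportname 3128
acl ipv4-2 myportname 3129

http_access allow ipv4-1
http_access allow ipv4-2

# Each listening port gets its own outgoing source address
tcp_outgoing_address 192.0.2.10 ipv4-1
tcp_outgoing_address 192.0.2.11 ipv4-2
```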
______
certainly
> creates problems for those who want to [ab]use http_reply_access as a
> delay hook. FWIW, Squid had this exception since 2007:
Thanks, makes sense. It would be great if there was a way to slow down 407
responses; at the moment the only workaround I can think of is to write a
log-wa
lly generated responses get http_reply_access applied to them.
> Yet no sign of that in your log.
>
> Is this a very old Squid version?
It's a recent Squid version - 3.5.20 on CentOS 6, built from the SRPM kindly
provided by Eliezer.
> Or are the "checking http_reply_access" lines
a::match: checking 'http://www.theage.com.au/'
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(62) match:
aclRegexData::match: looking for '(^cache_object://)'
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(62) match:
aclRegexData::match: looking for '(^https?://[^/]+/squid-internal-mgr/)'
2
onse. Running strace across the
pid of each child helper doesn't show any activity across those processes
either.
I also tried the approach suggested by Amos:
> The outcome of that was a 'ext_delayer_acl helper in Squid-3.5
>
> <http://www.squid-cache.org/Versions/v3/3.5/manuals/ex
>
>
> > If you input http://www.yahoo.com/page.html, this will be transformed
> > to http://192.168.1.1/www.google.com/page.html.
>
> I got the impression that the OP wanted the rewrite to work the other way
> around.
My apologies, that does seem to be t
w.yahoo.com
> the proxy can pick up the host "http://www.yahoo.com" from the URI, and
> retrieve the info for me,
> so it needs to get the new $host from $location, and remove the $host from the
> $location before proxy-passing it.
> Is it doable via squid?
Yes it is doable (but
Hi Squid users,
Seeking advice on how to slow down 407 responses to broken Apple & MS
clients, which seem to retry at very short intervals and quickly fill the
access.log with garbage. The problem is very similar to this:
http://www.squid-cache.org/mail-archive/squid-users/201404/0326.
missing? I'm sort of out of my
league here
so I may just quit and wait for v4. ;)
Thanks,
Jamie
>Sadly, that is kind of expected at present for any single client
>connection. We have some evidence that Squid is artificially lowering
>packet sizes in a few annoying ways. Used to m
My squid server has 1Gbps connectivity to the internet and it routinely gets
600 Mbps up/down to speedtest.net.
When a client computer on the same network has a direct connection to the
internet it, too, gets 600 Mbps up/down.
However, when that client computer connects through the squid
Hi,
Trying to use Squid 3.5 to filter a white list on wifi hotspot. Got
http support without issue.
Tried lots of things to get https to work but always kills http, all
the http requests time out.
So I am starting to think that maybe
http 3128 transparent
is not compatible with ssl_bump
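For what it's worth, a minimal Squid 3.5 sketch of the usual shape (the CA path and port numbers are assumptions): interception and bumping use separate ports, and ssl-bump belongs on an https_port carrying the intercept flag, not on the plain http_port ("transparent" is the old spelling of "intercept").

```
http_port 3128                      # normal forward-proxy port
http_port 3129 intercept            # NAT-redirected port 80 traffic
https_port 3130 intercept ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all                 # pass the TLS through untouched
```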
> Thanks. The current maximum_object_size_in_memory is 19 MB.
>
>>
>> In summary, dealing with in-RAM objects significantly larger than 1MB
>> bigger the object, the longer Squid takes to scan its nodes.
>>
>> Short term, try limit
stent_connections off
This option didn't fix the problem. The CPU usage went wild again after
about a day.
I've changed the maximum_object_size_in_memory setting as suggested by
Alex, and I'll report back on that.
Mark
_______
On 2016-03-31 18:44, Alex Rousskov wrote:
>
> My working theory is that the longer you let your Squid run, the bigger
> objects it might store in RAM, increasing the severity of the linear
> search delays mentioned below. A similar pattern may also be caused by
> larger object
On 2016-03-31 16:07, Yuri Voinov wrote:
>
> Looks like permanently running clients, which have exhausted network
> resources and are then initiating connection aborts.
>
> Try to add
>
> client_persistent_connections off
>
> to squid.conf.
>
> Then observe.
Tha
Hi,
I'm running:
Squid Cache: Version 3.5.15 (including patches up to revision 14000)
on FreeBSD 9.3-STABLE (recently updated)
Every week or so I run into a problem where squid's CPU usage starts
growing slowly, reaching 100% over the course of a day or so. When
running normally its CPU
On 2016-03-15 09:40, sq...@peralex.com wrote:
> On 2016-03-15 09:05, Amos Jeffries wrote:
>> On 15/03/2016 7:34 p.m., squid wrote:
>>
>> This is bug 4447. Please update to a build from the 3.5 snapshot.
>>
>
> Thanks. I'll give that a try.
>
Looks like it's
Hi,
I've installed a Squid reverse proxy for a MS-Exchange Test-Installation to
reach OWA from the outside.
My current environment is as follows:
Squid Version 3.4.8 with ssl on a Debian Jessie (self compiled)
The Squid and the exchange system are in the internal network with private
ip
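The usual shape of such a reverse-proxy setup, sketched under assumptions (the hostnames, certificate paths and internal address below are placeholders, not the poster's actual values):

```
https_port 443 accel cert=/etc/squid/owa.pem key=/etc/squid/owa.key defaultsite=mail.example.com
cache_peer 192.168.1.20 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=owa
acl owa_site dstdomain mail.example.com
cache_peer_access owa allow owa_site
http_access allow owa_site
http_access deny all
```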
I'm running FreeBSD 9.3-STABLE and Squid 3.5.15 and I'm getting regular
core dumps with the following stack. Note that I have disabled caching.
Any suggestions? I've logged a bug (4467):
#0 0x000801b8c96c in thr_kill () from /lib/libc.so.7
#1 0x000801c55fcb in abort () from /lib
Hi Squid users,
I'm seeking some guidance regarding the best way to debug the http_access
and http_reply_access configuration statements on a moderately busy Squid
3.5 cache. In cases where a number (say, 5 or more) of http_access lines
are present, the goal is to find which configuration
Hello everyone,
I am using Squid 3.5.12 with Kerberos Authentication only and ClamAV
on Debian Jessie.
My Proxy is working very nice, but now I've found an issue with just
one SSL Website.
It would be nice to know if others can reproduce this Issue.
Target website is: https://www.shop
Dear Alex,
unfortunately not really fixed.
The upload speed using squid 4.0.1 with this patch has improved significantly
but is still far from squid 3.4.x performance.
The used test client can reach a maximum upload speed of 115 MBIT if the
apache server is directly reachable.
If a SQUID 3.4.X
Dear Alex,
using squid 3.5.10 with patch the upload speed problem seems to be fixed.
Now I get 112Mbit upload speed from a possible maximum of 115Mbit.
Squid 4.0.1 still has a performance problem on unencrypted POST upload ...
BR, Toni
(TSO off)
12:10:16.343559 IP 10.1.1.210.49388
Dear squid team,
first of all thanks for developing such a great product!
Unfortunately, on uploading a big test file (unencrypted POST) to an
apache webserver using a squid proxy (v3.5.10 or 4.0.1), the upstream
packets get sliced into thousands of small 39-byte packets.
Excerpt from
Thanks for your valuable information Amos.
Regards,
Nithi
On Friday 26 June 2015 10:48 AM, Amos Jeffries wrote:
On 26/06/2015 4:36 p.m., Squid List wrote:
Hi,
Can Squid cache Microsoft Updates and iOS Updates?
If it can, please help me out with caching Chrome OS updates
Hi,
Can Squid cache Microsoft Updates and iOS Updates?
If it can, please help me out with caching Chrome OS updates in the
latest squid version, installed on CentOS 6.6.
Thanks Regards,
Nithi
___
in a particular column in the DB, you can store it
in a separate txt file and control the users' site access.
Squid supports user-defined helpers. If it is necessary to verify sites
against the DB, you can create your own helper per your requirements and
use it. If you need any customization assistance
http_access deny google
but I suspect maybe you might not actually like the results of what
you are asking for.
What's the best directive to use to make sure that google doesn't go
through the proxy at all?
acl google dstdom_regex -i google
?
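Strictly speaking, Squid cannot stop traffic from reaching itself; a true bypass has to be configured on the clients (PAC/WPAD). What Squid can do is always fetch Google directly and never cache it, sketched here with dstdomain rather than a broad regex (the ACL name is kept from the question):

```
acl google dstdomain .google.com
always_direct allow google
cache deny google
```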
___
.
It's only since the upgrade of squid so must be something in the config.
___
at all?
acl google dstdom_regex -i google
?
___
as an issue.
auth_param basic realm AAA proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
Hi Amos,
The configuration I posted last time still cannot accomplish the task. So you
mean the CONNECT ACL must be paired with a normal GET command ACL to be
evaluated by squid?
Best,
Kelvin Yip
-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org
information is not displayed.
It seems squid does not use the authentication information when matching this
rule: http_access allow CONNECT google.
The CONNECT method succeeds. Then squid continues with no
authentication information to process the GET command, causing
Hi,
Check the below ACL rules in your squid configuration file to block the
particular domain's URLs and also to block keywords.
# ACL block sites
acl blocksites dstdomain .youtube.com
# ACL block keywords
acl blockkeywords url_regex -i .youtube.com
# Deny access to the block keywords ACL
http_access deny blockkeywords
Hi, Yes, we can redirect the ports to squid through our firewall
rules. Check the below lines to redirect the ports. We have a few different
methods. 1. First method: on the machine that squid will be running
on, you do not need iptables or any special kernel
Hi,
The http://www.squid-cache.org/ domain web site is working fine.
We accessed the site a minute ago.
Regards,
ViSolve Squid
On 9/30/2014 1:47 PM, Neddy, NH. Nam wrote:
Hi,
I accidentally accessed squid-cache.org and got a 403 Forbidden error,
and am wondering why it does NOT redirect to WWW.squid
Hi Fred,
Sounds good. We already have some proxy servers (like squid with
dansguardian) and tools to block nudity sites (including the images,
contents, videos etc.).
Is there any specific reason for going with this API (
nudityimagesfilterforsquid)?
Thanks,
Visolve Squid
On 8/23/2014
will automatically block
the sites.
And also we are not sure about very newly released domains.
Thanks Regards,
Visolve Squid
On 8/23/2014 2:47 PM, Vdoctor wrote:
Hello Visolve,
Is your DansGuardian able to block all porn/sexy websites/images, including
very new domains just released?
How do
Hello Fred,
Thanks for your suggestion. Surely we will look for your API.
Regards,
Visolve Squid
On 8/23/2014 5:57 PM, Vdoctor wrote:
Hi Visolve,
Sure, you could do it with DansGuardian; personally I prefer and advise
UfdbGuard, which is, from my point of view, much more powerful in term
Hi Jason Haar,
Trend micro (Stop inbound threats Secure outbound data) is one of the
best Inter Scan Web Security Virtual Appliance.
And also have listed other AV vendor:
Samba-vscan-ICAP isilonicap AV scan (EC2) , etc..
Regards,
Visolve Squid
On 8/18/2014 3:00 PM, Jason Haar wrote:
Hi
Hello Stepanenko,
The store.log is a record of Squid's decisions to store and remove
objects from the cache. Squid creates an entry for each object it stores
in the cache, each uncacheable object, and each object that is removed
by the replacement policy.
The log file covers both in-memory
why are you using unbound for this at all?
Well, we use a geo location service much like a VPN or a proxy.
For transparent proxies, it works fine, squid passes through the SSL
request and back to the client.
For VPN, everything is passed through.
But with unbound, we only want to pass through
which one?
It's client -- unbound -- if IP listed in unbound.conf -- forwarded
to proxy -- page or stream returned to client
For others it's client -- unbound -- direct to internet with normal DNS
Take a look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP
Your squid.conf seems to be too incomplete to allow SSL-Bump to work.
Eliezer
I recompiled to 3.4.6 and ran everything in your page there.
squid started correctly.
However, it is the same problem. Any https page
What are the iptables rules for that?
Also look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP
I recompiled to 3.4.6
and ran everything in your page there.
squid started correctly.
However, it is the same problem. Any https page that I had configured
does not resolve
squid it asks for the certificate password and starts
OK, but it still won't resolve the SSL websites.
I also added an iptables forward directive:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j
REDIRECT --to-port 3130
CONF:
acl manager proto cache_object
acl localhost src
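As a point of comparison, a hedged sketch of the full redirect side (the interface name and port numbers are assumptions): both 80 and 443 need REDIRECT rules, and the receiving squid.conf ports must carry the matching intercept/ssl-bump flags, or squid will not pick the traffic up.

```
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-port 3129
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3130
```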
to enable ssl
redirects in unbound or squid?
Handle port 443 traffic and the encrypted traffic there.
You are only receiving port 80 traffic in this config file.
I am already redirecting 443 traffic but the proxy won't pick it up.
There is a SSL ports directive in the squid.conf so it should accept
mask.
If the number of cache-engines is low, one could think on having a mask
of just 1 or 2 bits for instance, so that the processing time at the
router is minimized.
What do you think?
Thanks.
On 08/06/2014 11:16 AM, Amos Jeffries wrote:
On 5/08/2014 12:27 a.m., Squid user wrote:
Hi
Hi.
Could you provide any help on the below?
Basically, what I need to know is whether Squid has a directive to be
used when mask assignment is used, allowing it to send the WCCP client
the mask that should be used.
I have seen none so far.
It is possible to set the assignment
Hi Amos.
Could you please be more specific?
I cannot find any wccp-related directive in Squid named IIRC or similar.
Yes, it can be set in the router, but, according to the WCCP Internet Draft:
It is the responsibility of the Service Group’s designated web-cache to
assign each router’s mask
Hi Amos.
When you say it is the same flags,
do you mean the same flags hash assignment uses?
Thanks.
On 08/04/2014 02:42 PM, Squid user wrote:
Hi Amos.
Could you please be more specific?
I cannot find any wccp-related directive in Squid named IIRC or similar.
Yes, it can be set
in unbound or squid?
squid.conf
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from
Hi.
I'm trying to configure my squid as a WCCPv2 cache engine, according to
the following requirements:
- Assignment method: Mask assignment
- Mask based on source ip (for one service group)
- Mask based on destination ip (for another service group)
The problem is I do not know how to specify
That's very odd. I'd try calling them... There are quite a few folks
blocking proxies these days. What I do is remove the via and
forwarded for headers with the following command:
check_hostnames off
forwarded_for delete
via off
The same configuration in an earlier version of squid doesn't
Check whether your browser goes through squid or not.
You can find this out by using the URL: http://cbe.visolve.com/
If your browser goes through squid, then the above URL shows it in the
proxy-detected column. If even then your access log does not show
anything, let us know your squid.conf file
How about contacting google for advice?
They are the ones forcing the issue on you.
They don't like it that you have 1k clients behind your IP address.
They should tell you what to do.
You can tell them that you are using squid as a forward proxy to
enforce usage ACLs on users inside
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port
press play, they just hang
continuously on downloading.
auth_param basic realm proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src all
Hi! This is the ezmlm program. I'm managing the
squid-users@squid-cache.org mailing list.
I'm working for my owner, who can be reached
at squid-users-ow...@squid-cache.org.
This is an automated response from the squid-cache.org list server
to confirm the requested action.
If you did not send
Hi! This is the ezmlm program. I'm managing the
squid-users@squid-cache.org mailing list.
I'm working for my owner, who can be reached
at squid-users-ow...@squid-cache.org.
The address
arch...@mail-archive.com
was already on the squid-users mailing list when I received
your request
I configured squid to cache large files i.e. 100MB
but it does not cache these files.
any idea?
--
Aris System Squid Development
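A hedged checklist in squid.conf form for this symptom: large objects are commonly skipped because maximum_object_size is left at its default (4 MB), or is set after cache_dir (ordering matters in older Squids). The values below are illustrative only.

```
# Must appear before cache_dir in older Squids to take effect
maximum_object_size 512 MB
cache_dir ufs /var/spool/squid 20000 16 256
# The in-memory cache has its own, separate limit:
maximum_object_size_in_memory 1 MB
```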
On 1/5/2014 4:45 PM, Kinkie wrote:
On Sun, Jan 5, 2014 at 1:06 PM, Aris Squid Team
squid@arissystem.com wrote:
I configured squid to cache large files i.e. 100MB
but it does not cache these files.
any idea?
Have you checked whether these files are cacheable, e.g. with redbot ?
(http
Hi,
Is there any way to apply refresh patterns to objects of a specific size
range? I want to apply refresh patterns to objects which are bigger than
a specific size.
thanks
--
Aris System Squid Development
On 12/26/2013 9:31 AM, m.shahve...@ece.ut.ac.ir wrote:
Not possible, because there is no option that recognizes the request
protocol. What happens is the admin configures squid.conf ports manually,
one per protocol type to be received. Squid only supports HTTP, HTTPS,
ICP, HTCP, and SNMP incoming traffic
On 12/12/2013 1:50 PM, 0bj3ct wrote:
Hi, I am using Squid 3.3.8. I want to prevent the Squid server from changing
clients' IP addresses. How can I do that? How do I disable IP replacing in Squid?
Hello community,
I know that the question of emptying the Squid cache has already been
asked very often.
Is there now a function to empty the cache without deleting the directory
structure? A customer asks for it, because he wants to use Squid as a reverse
proxy in a high availability
, but it seems like
these unmatched lengths are preventing the caching.
FWIW I suspect that is a bug in the header vs object calculations.
A tcpdump of the corresponding connections shows that squid is closing
the connection to the server before the download is finished. I'll
continue looking
In the sizes fields of store.log, what do negative sizes mean? For
instance, I'm getting this, and I'm interested in knowing the meaning of
the -312:
... -1 application/octet-stream 96508744/-312 GET
http://au.v4.download.windowsupdate.com/msdownload/update/software
Thanks
Mark
Hello,
We have a problem with squid when accessing a servlet page through
the squid proxy.
It is a report page where the inputs are taken from the user and the
servlet generates the report and presents it in the page.
Normally it takes around 45-60 seconds to generate the report. So we
Hi,
Any advice on the unit of measure for st? Thank you.
From: squid...@hotmail.com
To: squid-users@squid-cache.org
Date: Thu, 11 Oct 2012 23:26:56 +0800
Subject: [squid-users] Unit of measure for st
Hi,
There is a parameter in the logformat and the format code is st which is
Sent
Hi,
There is a parameter in the logformat whose format code is st, which is the
sent reply size including HTTP headers.
I would like to know whether the unit of measure for this parameter is bits
or bytes.
Thank you.
On Apr 29, 2012, at 10:36 PM, Amos Jeffries wrote:
On 28/04/2012 10:37 a.m., Squid Tiz wrote:
I am kinda new to squid. Been looking over the documentation and I just
wanted a sanity check on what I am trying to do.
I have a web client that hits my squid server. The squid connects
Hi,
I have a server running Squid 2.7 stable 15 and I am facing client spamming.
The problem happens when a client presses and holds the F5 button on the PC,
which generates a few hundred requests to my squid proxy.
Please advise how I can prevent or drop the client traffic when
I am kinda new to squid. Been looking over the documentation and I just wanted
a sanity check on what I am trying to do.
I have a web client that hits my squid server. The squid connects to an apache
server via ssl.
Here are the lines of interest from my squid.conf for version 3.1.8
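Those lines typically take roughly this shape in 3.1 (the hostname and flags are assumptions, since the actual config was cut off in the archive):

```
cache_peer apache.example.com parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=apache_ssl
cache_peer_access apache_ssl allow all
```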
I limit the maximum file size an employee can download to our network using
reply_body_max_size 100 MB proxy_user1. If this limit is hit, Squid returns a
403.
My problem is that I would like to differentiate between the status code 403
that comes from a target website that does not allow
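For context, a minimal sketch of the setup being described (the user name is a placeholder). When the limit trips, Squid serves its own ERR_TOO_BIG error page, and editing that template in the errors directory is one way to let a tool distinguish it from an origin server's 403:

```
acl limited_users proxy_auth proxy_user1
reply_body_max_size 100 MB limited_users
```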
Dear all,
I'm using accel (reverse proxy) with vhost in squid, but it does not work
when it receives an https request. I know I can set the https_port and add
the cert to my squid. But I just want to bypass my squid cache server
and let the request redirect straight to the web server. How to do
(Help)
I'm new to squid.
I will build a squid server with a youtube cache.
Can any squid master here help me enable youtube caching with squid
or third-party software (like youtube_cache)?
Thanks in advance
--
Best regards,
rioda78.squid mailto:rioda7878.sq...@gmail.com
Good day,
Thanks all for concern. The network topology is as follow:
Workstations are installed with Windows 7 Pro with Spyware Terminator and
integrated ClamAV, all linked to a Cisco 2950 switch and a multihomed server
with Windows 7 Ultimate with ESET AV, and Squid has one NIC connected to
the Cisco
Good day,
Sometimes when I check my ESET Antivirus log file, it shows that some
activities of clients in my network are attacking my network, especially the
squid port (3128), with TCP flooding or DNS poisoning. I checked the
internet for their meaning and found out that they are not good activities
/11 12:35:50| storeAufsOpenDone: (13) Permission denied
2011/04/11 12:35:50|d:/squid/var/cache/00/0D/0DE0
2011/04/11 12:35:50| storeSwapOutFileClosed: dirno 1, swapfile 0DE0,
errflag=-1
(13) Permission denied
2011/04/11 12:35:50| httpReadReply: Excess data from GET
http