Since they are using the same DNS server there is no need to run any
trials.
The only test worth doing in any case is to check how long the IP list
in the DNS response for the domain name is.
Eliezer
On 08/10/2015 12:12, Roel van Meer wrote:
Eliezer Croitoru writes:
Are the users
Hey,
Are the users and the proxy using different DNS servers?
Can you run dig from the proxy on this domain and dump the output to
verify that the IP is indeed there?
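For reference, a sketch of such a check in shell; the resolver addresses and domain are placeholders, and two captured answer lists stand in for live dig output so the comparison logic is visible:

```shell
# Live checks would look like (server addresses are placeholders):
#   dig +short www.example.com A @127.0.0.1  > /tmp/proxy-answers.txt
#   dig +short www.example.com A @192.0.2.53 > /tmp/client-answers.txt
# Sample answer lists stand in for real dig output below.
printf '203.0.113.1\n203.0.113.2\n' | sort > /tmp/proxy-answers.txt
printf '203.0.113.2\n203.0.113.1\n' | sort > /tmp/client-answers.txt
# Sorting first makes the lists comparable regardless of rotation order.
if diff -q /tmp/proxy-answers.txt /tmp/client-answers.txt >/dev/null; then
  echo "same answer set"
else
  echo "answer sets differ"
fi
```

If the two sets differ, the proxy and the clients are effectively resolving the domain to different CDN nodes.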
Eliezer
On 06/10/2015 14:55, Roel van Meer wrote:
Hi everyone,
I have a Squid setup on a linux box with transparent
Just wondering if you can contribute to the StoreID DB at:
http://wiki.squid-cache.org/Features/StoreID/#A_CDN_Pattern_Database
Eliezer
On 07/10/2015 12:10, Yuri Voinov wrote:
Sure.
Look at the typical fb URL:
http://i.imgur.com/3xQxD1z.png
It uses Akamai CDN and, without store-id, you will
Hey Robert,
If you have an access_denied then something should show up in the
access.log.
It is pretty hard to tell where it comes from if the settings are unknown.
If you have about 900 users and it's static, then using conf files
is fine.
But if it's a dynamic application, you should
On 02/10/2015 15:47, Александр Демченко wrote:
https_port squid_ip:3129 intercept ssl-bump \
key=/etc/squid/certs/squid.pem \
cert=/etc/squid/certs/squid.pem \
generate-host-certificates=off \
dynamic_cert_mem_cache_size=0MB \
sslflags=NO_DEFAULT_CA
Why no mem cache exactly? This might be a
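For comparison, a sketch of the same listener with the dynamic certificate cache actually enabled; the 4MB size is only an illustration, and the paths are taken from the quoted config:

```
https_port squid_ip:3129 intercept ssl-bump \
  key=/etc/squid/certs/squid.pem \
  cert=/etc/squid/certs/squid.pem \
  generate-host-certificates=on \
  dynamic_cert_mem_cache_size=4MB \
  sslflags=NO_DEFAULT_CA
```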
Hey Paul,
From what I have seen until now I believe that the ICAP service
response is for a CONNECT request.
For security reasons, browsers do not allow, or rather do not
implement, support for a direct HTTP response to a CONNECT (tunnel) request.
This is why you see this reaction from
I already had a plan to write something like that in the past and I had
some time so I wrote this store.log tool:
http://paste.ngtech.co.il/pr3kbbf4q
The tool is written in Ruby, and what it does is "estimate" what is in
the cache_dir now, based on reading the store.log.
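A minimal sketch of that estimation idea in shell, assuming the usual store.log layout where the second field is the action (SWAPOUT/RELEASE) and the fifth is the object key; the sample log lines are made up:

```shell
# Replay the log, keeping the last action seen per key, then count the
# keys whose final state is SWAPOUT (i.e. presumably still on disk).
cat > /tmp/store.log <<'EOF'
100.1 SWAPOUT 00 0000 KEY1 200 ...
100.2 SWAPOUT 00 0001 KEY2 200 ...
100.3 RELEASE 00 0000 KEY1 200 ...
EOF
awk '{ state[$5] = $2 }
     END { n = 0; for (k in state) if (state[k] == "SWAPOUT") n++; print n }' \
    /tmp/store.log
```

On the sample above this prints 1, since KEY1 was later released.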
Since I have not
using audit2allow.
So supply the exact OS and, if possible, the squid.conf (with passwords,
blank lines, comments, etc. removed).
Eliezer
On 29/09/2015 16:34, Veiko Kukk wrote:
On 24/09/15 03:00, Eliezer Croitoru wrote:
Since it's a security release I will not write an article this time.
But I am happy
On 29/09/2015 20:51, Leonardo Rodrigues wrote:
That's what I was afraid of: there are no tools to analyze the data.
Anyway, thanks for the answer.
These can be written.
First there is a need to actually define the goal of the tool.
Then learn the structure of the log, then write a small app.
I can
Thanks for the insight.
You are right, it is not well defined.
I will try to rephrase or clarify a couple of things.
Mainly content filtering is for offensive content.
This, by definition, is not the goal of a security-related product that
would not like to reveal the client's attempts to reach the site
Hey Manuel,
The reason the client receives the destination IP or other details is
due to the structure of the ERROR page.
Depending on your OS, you can find the ERROR page file and modify it so
the format meets your requirements.
You can take a look at the wiki about custom error pages:
Not exactly related to the bug, but an updated version is present as far
as I know, and I will update to 3.5.9 in the next week or two.
Eliezer
On 23/09/2015 09:16, Степаненко Сергей wrote:
OS - Centos6.7, squid - 3.5.7 from www1.ngtech.co.il repo
PS
Sorry for bad English.
On 23/09/2015 16:55, FredB wrote:
I don't know about freebsd, diskd is a separate process with a light consumption
Top with 3000 simultaneous users (2 x caches 250 Go full)
Just as a side note:
I have tested and compared RAM-only squid on FreeBSD vs. Linux, and it
seems like the FreeBSD test results
An update.
I closed the API since there was almost no use of it.
But I have been working on some ways to handle youtube using ECAP.
It's not yet clear when but it will happen some day.
Eliezer
On 29/08/2014 14:12, Eliezer Croitoru wrote:
Inspired by unveiltech.com I have tried to write
Since it's a security release I will not write an article this time.
But I am happy to release the new RPMs for squid cache 3.5.9.
In this release the major item is a security update, while I have added
eCAP support to the CentOS 7 RPMs.
It is now a requirement for squid on CentOS 7 to have libecap
Hey Mumin,
What do you need from the db?
If you need a blacklist I can offer SquidBlocker, which I wrote:
http://ngtech.co.il/squidblocker/
The DB is not fully documented but it works under a very heavy load and
seems to give good results.
Eliezer
On 15/09/2015 12:23, Mumin Coder
Hey,
Why it is exiting is one thing.
But with your settings you can disable it and be fine with that.
Add "pinger_enable off" to your squid.conf.
Take a look at:
http://www.squid-cache.org/Doc/config/pinger_enable/
Eliezer
On 19/09/2015 10:58, TarotApprentice wrote:
Running 3.5.7 under Debian.
Hey Yuri,
I have compiled the services for Solaris and Windows; they can be
downloaded at:
http://ngtech.co.il/squidblocker/downloads/
Also I am publishing the client source code at:
https://github.com/elico/squidblocker-client
This is one piece of the puzzle that takes a very high load.
One
Hey Marcio,
It is unclear what exactly you are trying to do and with what.
You might need to add the ports to the SSL_ports ACL and not only to the
Safe_ports ACL.
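A sketch of what that could look like in squid.conf, assuming 5222 is one of the blocked ports and the default ACL names are in use:

```
acl SSL_ports port 5222
acl Safe_ports port 5222
```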
All The Bests,
Eliezer
On 12/09/2015 18:51, Marcio Demetrio Bacci wrote:
Hi,
I need to allow free access to WhatsApp through squid. The ports (5222,
Hey List,
I have compiled SquidBlocker for Windows and was wondering if there is
someone who would want to use it.
If you do, please contact me.
SquidBlocker is an alternative to squidGuard, built in Go, and
supports more than 2k requests per second per process with an HTTP
interface which
In case you want to change the size, you could just patch the sources
directly instead of configuring it.
Eliezer
On 08/09/2015 05:11, Jason Enzer wrote:
trying to build in larger maxtcplistenports into 3.5.7 for centos 6
what would i need out of here to get a build working? i mean
On 10/09/2015 01:12, Ralf Hildebrandt wrote:
Do I need to set any library in apache2 ?
No.
Not a library, but maybe a file type.
Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
a special action from fail2ban in the
mangle table of iptables.
An example fail2ban action file: “action.d/iptables-redirect.conf”
# Fail2Ban configuration file
#
# Author: Cyril Jaquier
# Modified by Yaroslav Halchenko for multiport banning
# Modified by Eliezer Croitoru for DNAT into a ban page
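A sketch of what the DNAT part of such an action file could look like; the thread mentions the mangle table, but DNAT rules normally live in the nat table, so that is what this sketch assumes, and the ban-page address 192.0.2.10 is a placeholder:

```
[Definition]
actionstart = iptables -t nat -N f2b-<name>
              iptables -t nat -I PREROUTING -p tcp -m multiport --dports <port> -j f2b-<name>
actionban   = iptables -t nat -A f2b-<name> -s <ip> -j DNAT --to-destination 192.0.2.10:80
actionunban = iptables -t nat -D f2b-<name> -s <ip> -j DNAT --to-destination 192.0.2.10:80
actionstop  = iptables -t nat -D PREROUTING -p tcp -m multiport --dports <port> -j f2b-<name>
              iptables -t nat -F f2b-<name>
              iptables -t nat -X f2b-<name>
```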
a little
more closely.
Cheers,
Howard
On Mon, Sep 7, 2015 at 1:58 PM, Eliezer Croitoru <elie...@ngtech.co.il>
wrote:
Hey Howard,
On 07/09/2015 21:32, Howard Waterfall wrote:
1) Earlier in the thread, Amos suggested I run:
apt-get build-dep squid
You would need to us
Hey Howard,
On 07/09/2015 21:32, Howard Waterfall wrote:
1) Earlier in the thread, Amos suggested I run:
apt-get build-dep squid
You would need to use "squid3" and not "squid", since this is the
package name Ubuntu builds squid under.
So the command should be:
apt-get build-dep squid3
I have
Well, I cannot change the current SNMP client, but I can create a bridge
from the squidclient interface to SNMP.
Eliezer
On 06/09/2015 14:57, FredT wrote:
Amos,
OK, I can understand that definition, but why does squid display a wrong
number? In concept, that is the question!
If you must
On 02/09/2015 12:46, Yuri Voinov wrote:
all, but I assume that you do not want innocent victims, like the few
gifs that actually have a different image depending on the parameter.
Maybe, maybe not. Most often I deal with unscrupulous webmasters who
deliberately serve the same unfriendly content
On 02/09/2015 13:00, Yuri Voinov wrote:
I'm getting a very high hit ratio in my cache, and I do not intend to
lower it myself. It is enough that on the opposite side thousands
of webmasters counteract the caching of their content on their own
grounds, beginning with YouTube.
Well, most sane
Hey Kinkie,
If you want to publish this specific version as an RPM I would be happy
to build a couple of them with this patch.
Eliezer
On 01/09/2015 11:26, Kinkie wrote:
Hi all,
I am currently working on some performance improvements for the
next version of squid; I need some help from
Works for me:
#curl -Iv wiki.squid-cache.org
* Rebuilt URL to: wiki.squid-cache.org/
* Hostname was NOT found in DNS cache
* Trying 2001:4b78:2003::1...
* Connected to wiki.squid-cache.org (2001:4b78:2003::1) port 80 (#0)
> HEAD / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: wiki.squid-cache.org
Hey Joe,
Can you give more details? I didn't understand what the issue was.
Eliezer
On 29/08/2015 12:31, joe wrote:
OK guys, I solved it. It was one of the command features inside the conf; it's a bug,
but in the next couple of days I will re-test them again because my client is upset.
I took off a couple of the options and my CPU
What speeds are you talking about?
Eliezer
On 30/08/2015 01:04, joe wrote:
Hi Eliezer, one of the options in squid.conf has a bug, or I don't know what
to say. I am going to see which one; I took off a couple of them and solved the
issue. I do not know which one yet, but I have a backup of the squid.conf and am going to reuse it to
I have been gathering information on different routing options for squid tproxy
mode for quite some time.
I have working settings for:
- Cisco
- Linux
- FreeBSD
- OpenBSD
- Mikrotik
The topology I have tested it on until now is at:
http://ngtech.co.il/squidblocker/topology1.png
The Edge router
Solve what exactly??
If the site is broken I think that the only solution is for the site
admin to fix the issue...
Eliezer
On 27/08/2015 22:51, Jorgeley Junior wrote:
You're the man Amos!!! You're the man!!! Thanks!!! Thanks so so much!!!
that's solved the problem, but I'm thinking if it
02:32, Eliezer Croitoru wrote:
After remembering this thread:
http://www.squid-cache.org/mail-archive/squid-users/201102/0236.html
I had some time to run tests here and there; I am now testing FreeBSD
traffic diversion with PF and seem to not understand something.
The topology is:
client
which was beta
tested for weeks now. It doesn't contain all that I want, but it does
give more than many other tools.
I am not opening the sources for this tool yet; it will probably
happen later in the future.
All The Bests,
Eliezer Croitoru
On 25/08/2015 18:14, Yuri Voinov wrote:
Eliezer,
how can I take a look at the sources?
The sources are not publicly available for now.
It is however written in GoLang and the algorithms are described in the
software page.
It should not be very hard to write a similar application just by
Two things:
- take a look at this helper to see something that works:
http://bazaar.launchpad.net/~squid/squid/trunk/view/head:/helpers/storeid_rewrite/file/storeid_file_rewrite.pl.in
- newlines are important in the communication between squid and the
helper.
Perl's print does not send a new
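A sketch of why that newline matters, with a shell loop standing in for the Perl helper (concurrency 0 assumed; the store-id value is illustrative): squid reads exactly one newline-terminated reply per request, so a reply without the trailing newline leaves it waiting.

```shell
# One request line in, one newline-terminated "OK store-id=..." line out.
helper() {
  while read -r url rest; do
    # printf with an explicit \n; without it the reply never completes.
    printf 'OK store-id=http://example.squid.internal/%s\n' "${url##*/}"
  done
}
printf 'http://cdn1.example.com/video123\n' | helper
```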
After remembering this thread:
http://www.squid-cache.org/mail-archive/squid-users/201102/0236.html
I had some time to run tests here and there; I am now testing FreeBSD
traffic diversion with PF and seem to not understand something.
The topology is:
client(192.168.12.150/24) --
Great to hear it works!
And since you are using CentOS I would just say: take a look at the
wiki at:
http://wiki.squid-cache.org/KnowledgeBase/CentOS#Squid-3.5
in case you want a squid 3.5 version.
Eliezer
On 19/08/2015 19:26, adricustodio wrote:
thanks dude!
I fixed!
I was
Hey,
Currently I do not know of such a helper, but it is possible to write one.
If you have a DB machine I can run tests against, I might be able to
write a small helper for basic authentication, or a session helper, based
on OracleDB.
For a captive portal you would need two separate systems:
-
is written in perl :\
* This is what happens when working for an hour or two and diving into
the other world of snmp :D
Eliezer
On 18/08/2015 16:08, Amos Jeffries wrote:
On 19/08/2015 12:15 a.m., Eliezer Croitoru wrote:
Hey,
Currently I do not know of such a helper but it is possible to write one
Hey Berni,
I was wondering to myself: why do you need to rewrite the URL?
Can't you just use a cache_peer and a couple of ACLs?
Eliezer
On 18/08/2015 16:43, Hicham Berni wrote:
Hi,
We have a squid reverse configuration, and we need to change backend
webserver with a new webserver with new IP
Hey,
It is possible to export and import, but you need to ask yourself a question:
if you need the MySQL DB to always be up to date, you should run it every minute or
so (dump and update; it is being done in production systems in many places).
The other question is how to do it; I do not know yet.
I still see no problem if the same content under HTTP/HTTPS will
be deduplicated as one record.
12.08.15 20:06, Eliezer Croitoru writes:
On 12/08/2015 16:44, Yuri Voinov wrote:
Hmm. You want to say it would be better to have duplicate HTTP/HTTPS rules
for the same content? This can lead to problems
On 12/08/2015 16:12, Yuri Voinov wrote:
Thank you, Amos, for explanation.
It is an exhaustive answer to my doubts.:)
So, finally, I can write Store-ID map rules without any protocol prefix,
or use any, no matter?
I.e., ^https?:\/\/(.*?)\/(.*?)\;(?:.*?)$anysite$1.SQUIDINTERNAL/$2
?
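As a sketch, one rules-file line of that shape; the CDN pattern and the internal domain are purely illustrative, with the two whitespace-separated columns being the regex and the replacement store-id:

```
# pattern                                      replacement
^https?:\/\/[^\/]+\.cdn\.example\.com\/(.*)    http://cdn.example.com.squid.internal/$1
```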
Hey
I have a suggestion!!
These:
https://addons.mozilla.org/he/firefox/search/?q=video
https://addons.mozilla.org/he/firefox/addon/adblock-plus/?src=search
Those should help you to figure out a couple of things.
If you don't figure it out by yourself, send another email to this
thread and I will be able to
Does the client have the option to access some internal webserver which
can reflect the client's IP address?
If so, you can redirect from an error page to it.
Eliezer
On 07/08/2015 12:53, Kazuhiro Asakura wrote:
Thank you Amos, again.
I will investigate solution of javascript again.
I also now found that the example for ldap search in squidguard is
similar to my conclusion.
http://www.squidguard.org/Doc/authentication.html
##START
ldapbinddn cn=root, dc=example, dc=com
ldapbindpass myultrasecretpassword
# ldap cache time in seconds
ldapcachetime 300
src
Hey Dan,
It's pretty simple to write this rule, since it's a count-plus-pattern match
and nothing more.
If it fits your needs you can add a send-mail target instead of a ban one.
Eliezer
On 03/08/2015 10:25, Dan Charlesworth wrote:
Thanks Antony.
Fail2ban looks like a viable option
I managed to make it work!
I am using ubuntu 14.04.2 with openLDAP and phpldapadmin.
I have changed my server to look like yours and it still didn't work.
So what I did was this: I changed the command to:
/usr/lib/squid3/ext_ldap_group_acl -d -b dc=ngtech,dc=local -D
cn=admin,dc=ngtech,dc=local
On 31/07/2015 15:37, brendan kearney wrote:
Pretty sure memberOf is an overlay you have to enable in openldap
I have tried to use this:
http://www.schenkels.nl/2013/03/how-to-setup-openldap-with-memberof-overlay-ubuntu-12-04/
But it doesn't mention that you need to put the file in the schema
I wanted to test the ext_ldap_group_acl so I created a ldap domain.
The command I am testing is:
/usr/lib/squid3/ext_ldap_group_acl -b DC=ngtech,DC=local -D
CN=admin,DC=ngtech,DC=local -w password -f
((objectclass=person)(sAMAccountName=%v)(memberof=CN=%a,DC=ngtech,DC=local))
-h 127.0.0.1
Just wondering how new is this option?
Eliezer
On 29/07/2015 03:50, Amos Jeffries wrote:
That can be resolved somewhat by turning the logging on dynamically with
squid -k debug shortly before and after a test is run.
Amos
It's pretty famous.
I have even used it for some time in the past, and of the many firewall
distros it was one of the good ones.
Eliezer
On 26/07/2015 18:26, Stanford Prescott wrote:
The OS is Smoothwall Express v3.1. A linux firewall distro not really based
on any other of the major distros.
On 26/07/2015 03:33, Stanford Prescott wrote:
I did a new install of Squid 3.5.6 and it seems to be working now.
On what OS?
Eliezer
Hey Joe,
I understand the need for caching YouTube, but it might not be as
possible as it was in the past.
There was someone here on the list who offers a product that helps to
cache YouTube videos, but I do not know the secret behind it.
The partial content has a special key in it, and YouTube kind
Can you share the relevant squid.conf settings? Just to reproduce it.
I have a dedicated testing server here which I can test the issue on.
An 8GB archive, which might be an ISO, can be cached on AUFS/UFS and
large rock cache types.
I am pretty sure that the maximum cache object size is one
: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid3: 100% CPU load during object caching
Can you share the relevant squid.conf settings? Just to reproduce..
I have a dedicated testing server here which I can test the issue on.
8GB archive
On 22/07/2015 21:59, Eliezer Croitoru wrote:
Hey Jens,
I have tested the issue with large rock, not AUFS or UFS.
With or without squid, my connection to the server is about 2.5 MBps (20 Mbps).
Squid is sitting on an Intel Atom with an SSD drive, and in a HIT case the
download speed is more
On 21/07/2015 10:59, Jens Offenbach wrote:
Is there someting wrong with my config? I have already used Squid 3.3.14. I get
the same result. Unfortunately, I was not able to build Squid 3.5.5 and 3.5.6.
What was the issue?
I am using 3.5.6 on 14.04.2 64 bit.
Eliezer
On 19/07/2015 13:23, HackXBack wrote:
yes am using AUFS cache_dir directive
With how many workers?
Eliezer
Just adding something to the subject:
HDD vs. SSD speeds are quite something.
I have tried to test the benefits of an SSD in the past, and in many cases
it was a great addition of speed.
Eliezer
On 15/07/2015 15:27, Stakres wrote:
Amos,
We're using the latest 3.5.6 build, and we have not yet
Hey,
It is available for both CentOS 6 and 7.
Eliezer
On 13/07/2015 09:38, Amos Jeffries wrote:
Eliezer mentioned having 3.5.6 RPMs available last night. I'm not sure
if CentOS 6 was included in that first bunch, but it wouldn't hurt to
check with him.
Amos
:59, Eliezer Croitoru elie...@ngtech.co.il wrote:
Hey list,
I have created the new RPMs for CentOS 6 and 7, while not mentioning that I also
created the package for OracleLinux (it was very annoying to find out that
the download file from Oracle was not an ISO but something else).
The 3.5.5
Hey list,
I have created the new RPMs for CentOS 6 and 7, while not mentioning that I
also created the package for OracleLinux (it was very annoying to find
out that the download file from Oracle was not an ISO but
something else).
The 3.5.5 and 3.5.4 were published here:
Is there any particular reason you are using a ufs/aufs cache_dir? I
assume this system runs with multiple cores, and there is a chance that
the ufs cache_dir is being managed by two squids.
This is something I am speculating about, and I might be wrong.
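If the box does run SMP workers, one hedged way to keep ufs/aufs out of trouble is a cache_dir per worker via the ${process_number} macro; paths and sizes here are illustrative:

```
workers 2
cache_dir aufs /var/spool/squid/${process_number} 10000 16 256
```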
Eliezer
On 29/06/2015 00:11, Mohammad
First things first...
Upgrade to the 3.5 or 3.4 branch.
Then try to use top or htop to get a snapshot of the virtual memory and
resident memory that squid uses.
Eliezer
On 19/06/2015 13:19, Alex Samad wrote:
this is on centos 6.6
still using the redhat build squid !
rpm -q squid
May I ask about the setup?
Are these 20 proxies running in intercept/transparent mode?
Eliezer
On 18/06/2015 06:28, Michael Pelletier wrote:
Which one would be good for capacity\load? I have a very, very large
environment. I have 220,000 users on 8 Gig to the INTERNET. I am running a
Hey Brian,
Can you test this issue with the 3.5.x or 3.4.x RPMs I released?
I have a couple of production servers running 3.4 and 3.5 with the
truncate option to allow the backend servers to see the client IP.
Eliezer
* http://wiki.squid-cache.org/KnowledgeBase/CentOS
On 11/06/2015 16:38,
What is the issue?
Did you try the latest RPMs?
http://wiki.squid-cache.org/KnowledgeBase/CentOS
Eliezer
On 11/06/2015 21:29, Tory M Blue wrote:
I've got logs and now finally a core (the whole 'squid' isn't signed with
proper key) thang took a bit to get around.
Rather not post the core
Hey Amos,
I didn't have the chance to follow the PROXY protocol advancements.
Was there any fix for the PROXY protocol issue that I can test?
Thanks,
Eliezer
On 09/06/2015 02:06, Amos Jeffries wrote:
We somewhat recently added basic support for the PROXY protocol to
Squid. So HAProxy can
Hey Marcel,
First things first... update to the latest 3.5.5.
After the update we might be able to see the full picture.
Eliezer
On 31/05/2015 14:24, Marcel wrote:
Hi All
let see if some of you can help me troubleshoot the issue I have with
squid-3.5.0.4
on centos 6.6 configure with tproxy
in fact
Hey Eugene,
Since I do not have the full details about the issue and related areas, I
cannot answer, and I think others will later answer this better than me.
But as for the last question, about squid being a DB:
Squid in a way is also a DB, like any OS is a DB.
Due to the fact that squid is kind
The Bests,
Eliezer Croitoru
and we are
here to help them, and all the other humans that are on the planet, in
the case that a mistake is happening.
Eliezer Croitoru
On 24/03/2015 23:46, Yuri Voinov wrote:
So far, this has not been done. You can be the first!;)
Alberto,
What are the details of the machine?
Can you run the next script on the machine?
http://ngtech.co.il/squid/basic_data.sh
Eliezer
On 20/03/2015 05:37, Alberto Perez wrote:
Another one here not using SMP, and using aufs.
I stopped seeing this issue frequently when I reduced my cache
Hey Samuel,
Not related to your post at squid-cache: I have tried to access your
site from my testing grounds, and I do not seem to be able to access it.
Not even an ICMP echo ping.
It may be something in the route between my client and your server, but
I was wondering if I should contact my
Hey Dan and John,
If indeed this bug is only for a UFS/AUFS cache_dir, then I would try to
make sure that large rock does not suffer from the same issue.
I have not seen in any of the bug reports anything that would reproduce
the issue.
To make sure the issue is understood and can or cannot be
Hey List,
Sorry, but it takes time (for me) to test squid 3.5.
I have built a testing beta of 3.5 for CentOS 7 but have yet to publish it
officially.
Since you have asked: the main issue is that the RH RPM auto-building
tools help to find dependencies, and therefore most of the helper
infrastructure is designed and implemented, and which I know
nobody planned to show me.
All The Bests,
Eliezer Croitoru
On 03/10/2013 13:26, Babelo Gmvsdm wrote:
Hi,
First of all, thanks Amos for your enlightenment, even if I have to admit that it's
not yet all clear for me; my knowledge of proxies is very light
On 13/03/2015 05:22, Daniel Greenwald wrote:
Ah, that would be a clever way to implement PKI authentication, but I was
thinking of something more that browsers natively support.
Hey Daniel,
What is the direction of what you are thinking about?
I do not know about a browser natively supporting
Hey Hack,
I was talking about a RADIUS server like FreeRADIUS,
which, by the way, dmasoftlab uses in their products.
Eliezer
On 12/03/2015 07:14, HackXBack wrote:
are you talking about radius server like free radius ?
or like dmasoftlab.com ?
Hey,
I was left in the dark and am still unsure what the situation is.
Did you make it work fine?
Eliezer
On 11/03/2015 11:09, johnzeng wrote:
Hello Amos:
Ok, I see
Thanks again.
Have a good day with
Thanks Amos,
So NTLM has a two-step authentication, which means that there is a basic
negotiation over the HTTP connection to the proxy, which makes it less
secure than Kerberos.
(Speculating.)
The main reason it's less secure than Kerberos is that every part of the
password negotiation steps
wrong passwords should be considered a cracking attempt?
If you have more ideas about the subject I would be happy to see them here.
Thanks In Advance,
Eliezer Croitoru
Hey Fred,
It is unclear what doesn't work for you.
What would you expect to work, and how does it work or not work from a
user's perspective rather than an admin's?
Is there any trouble on the user's side with this issue?
Eliezer
On 04/03/2015 00:14, Stakres wrote:
Hi All,
Does someone know
Hey Yuri,
On 01/03/2015 20:17, Yuri Voinov wrote:
Normally you never use CONNECT method over HTTP ports. This is
prohibited by squid basic security requirements.
The above statement is true only if the proxy admin prohibits this.
A CONNECT method can be allowed and can be used for any purpose
confidential information)
All The Bests,
Eliezer Croitoru
On 28/02/2015 05:18, johnzeng wrote:
Hi all :
I have met a problem: Squid cannot currently deal with such connections
(non-HTTP connections) on port 80, and we get the error
"Unsupported Request Method and Protocol" for https URLs
Hey Donny,
What OS are you using?
Eliezer
On 27/02/2015 06:41, Donny Vibianto wrote:
Is there any change in 3.5.2 regarding basic LDAP auth? I can't find the
LDAP helper in my helper list.
Squid Cache: Version 3.5.2
Service Name: squid
configure options: '--enable-basic=LDAP'
On 25/02/2015 06:18, Alex Samad wrote:
Hi
I am running squid on Centos 6.5
squid-3.1.10-29.el6.x86_64
Hey Mike,
Can you share your squid.conf?
It's unlikely that you will have the feature you might want in 3.1.10.
Are you trying to intercept SSL traffic or just use it as a reverse proxy?
On 26/02/2015 20:43, Yuri Voinov wrote:
Directly, Eliezer:)
His installation doesn't work. Somebody has forgotten about NAT ;)
It has happened to me many times, and it still happens to me here and
there when the memory is getting old.
Eliezer
On 26/02/2015 19:12, Monah Baki wrote:
Hi all,
I have client who has his Policy Based Routing as:
interface GigabitEthernet0/0/1.1 (route policy on the LAN interface)
ip policy route-map CFLOW
Hey Monah,
How is it all related to squid?
What OS are you using for squid?
Eliezer
On 26/02/2015 20:53, Yuri Voinov wrote:
Parity Check?;) You need better RAM with ECC;)
I used ECC for a couple of months (7-8), but it used too many watts.
Thanks,
Eliezer
On 24/02/2015 00:53, HackXBack wrote:
There is a way without using ssl_bump
and without forwarding HTTPS,
but this will work with browsers and not with the YouTube mobile app.
It's done with header replacement.
Hey HackXBack,
I am not too familiar with all of the mobile apps, but if the client needs
filtering he
of the squid release.
All The Bests,
Eliezer Croitoru
On 22/02/2015 13:56, Amos Jeffries wrote:
The google page about forcing safesearch currently recommends hijacking
DNS. Which may also work for YouTube but its not clear.
I must mention also:
If only youtube is the issue, there is an idea to pre-identify these dns
requests and only ssl-bump
Hey Alan,
I am unsure, but are this SSL library's header files compatible with
OpenSSL, or would it require changes to some existing OpenSSL API calls?
Eliezer
On 21/02/2015 17:00, Alan Palmer wrote:
[apalmer]:/data/src/squid-3.5.2# openssl version
LibreSSL 2.0
Alan Palmer
DO NOT SPAM
On 22/02/2015 02:47, maxt wrote:
Each tenant has a unique domain that has a trust relationship with our
management domain. They also have a unique IP address range so there is no
need for VLANS.
Hey Max,
You can use deny_info with a specific IP range or IP list, and somehow
make ACLs that
On 19/02/2015 11:49, Odhiambo Washington wrote:
I have been hoping that 3.5.2 would possibly help address my problems with
ACLs, but alas!
Sorry for hijacking the thread, but the wiki FreeBSD build-farm node
install page:
http://wiki.squid-cache.org/BuildFarm/FreeBsdInstall
doesn't include