Ricardo,
You cannot do it with a transparent proxy.
If you want Squid to handle https traffic, you must
use Squid in a non-transparent setup.
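A minimal sketch of what that means in practice (the port and hostname are just assumptions):

http_port 3128

and point the browsers at proxy.example.com:3128 for both HTTP and HTTPS, via explicit proxy settings or a PAC file. HTTPS requests then arrive as CONNECT requests that your ACLs and redirector can actually see.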
-Marcus
Ricardo Augusto de Souza wrote:
I am still not able to block https sites.
I tested all you suggested here.
I am using a transparent proxy. I am
I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free and can be used with both free and commercial databases.
-Marcus
a bv wrote:
Hi,
What is/are the popular/commonly used open source (and maybe also
the other free ones) URL/content filtering solutions/software? And
The story about Squid and HTTP 1.1 is long...
To get your LiveUpdate working ASAP you might want to
fiddle with the firewall rules and to NOT redirect
port 80 traffic of Symantec servers to Squid, but
simply let the traffic pass.
Nathan Eady wrote:
Okay, we've got port 80 traffic going
The ACL blocks URLs that end with .com
i.e. it blocks a URL which is www.example.com while it does not block
www.example.com/index.html
If you change the patterns to include a slash you are fine.
The slash prevents domains ending in .com from being matched.
e.g.
.*\.com$ becomes
to evaluate technical professionals based on their own lack of knowledge
--- On Sat, 3/28/09, Marcus Kool marcus.k...@urlfilterdb.com wrote:
From: Marcus Kool marcus.k...@urlfilterdb.com
Subject: Re: [squid-users] .com extension blocking
Try ufdbGuard. It has the script that you asked for and
built-in enforcement of Google SafeSearch
HTTPS tunnel detection
enforcement of safer HTTPS traffic
Marcus
Amos Jeffries wrote:
Thys de Beer wrote:
HI All,
I am terrible with cgi scripts, in fact null and void ... where do I get
a
When squid receives a signal for reconfiguration
it restarts all ufdbGuard processes and it seems that the
newly started ufdbGuard processes rebuild the database.
I am the (biased) author of ufdbGuard.
ufdbGuard is faster, has more features and also
does not have the problem that is described
Hi Lee,
I am the author of ufdbGuard.
ufdbGuard is based upon squidGuard 1.2.x and is heavily
modified, is a lot faster and has new features that
squidGuard does not have.
I suggest that you try it. It is free.
-Marcus
Lee Higginbotham wrote:
Good afternoon,
We currently have squid 3.0
Hi Sean,
You cannot have 2 or more ACLs matching the same source.
The first ACL for source 'client' is matched for a PC with
IP address range 10.0.0.0 - 10.0.255.255 and then
the 'pass rule' is used to make a decision on whether
to block or not.
The second ACL for 'client' is never used.
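A rough squidGuard-style sketch of that situation (the IP range comes from the description above; the category and file names are assumptions):

src client {
    ip 10.0.0.0-10.0.255.255
}
dest Pornography {
    domainlist Pornography/domains
}
acl {
    client {
        pass !Pornography all    # this first block for 'client' is the one that is used
    }
    client {
        pass all                 # this second block for 'client' is never reached
    }
    default {
        pass all
    }
}

Merging the two 'client' blocks into one (or giving the second source a different name) lets both sets of rules take effect.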
The
Hi Hims,
I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free software which does 5 URL lookups/sec
on a recent CPU and has no problems with large databases.
-Marcus
hims92 wrote:
hello,
I performed the tests (to block sites using squidguard) with some less
my 2 cents:
someone needs to explain how to set a breakpoint
because when the assertion fails, the program exits
(see previous emails: Program exited with code 01)
The question is where to set the breakpoint
but probably Amos knows where to set it.
Marcus
Silamael wrote:
What are the values for the parameters cache_swap_low and cache_swap_high ?
For a large cache it is recommended to have them close to each other. E.g.
cache_swap_low 90
cache_swap_high 91
You can also add
refresh_pattern (cgi-bin|\?) 0 0% 0
since dynamic pages should not be
Luis Daniel Lucio Quiroz wrote:
On Wednesday 30 September 2009 11:14:43, Marcus Kool wrote:
What are the values for the parameters cache_swap_low and cache_swap_high ?
For a large cache it is recommended to have them close to each other. E.g.
cache_swap_low 90
cache_swap_high 91
You can
in case it is not clear: the 'aufs' option for cache_dir is much faster
than the 'ufs' which you are using now.
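For example (the path and sizes are placeholders), a line like

cache_dir ufs /var/spool/squid 20000 16 256

becomes

cache_dir aufs /var/spool/squid 20000 16 256

provided Squid was built with --enable-storeio=aufs and --with-pthreads, so disk I/O is done by helper threads instead of blocking the main process.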
Marcus
George Herbert wrote:
Multiple hard disks, and spreading out Squid's logs and cache dirs
onto separate disks, helps a lot.
The big prod squid environment I was running for a
Matt,
Setting read_timeout to 1min and connect_timeout to 20sec should do the trick.
And I recommend looking for users who download large files or
watch CNN video news all day long.
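In squid.conf that would look something like:

read_timeout 1 minute
connect_timeout 20 seconds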
Marcus
Matthew Young wrote:
Hello all
I have a group of proxy users who are not technical at all, and it is
Everybody is entitled to their own opinion and I respect that.
I agree that a company should have an internet usage policy and
communicate it clearly to all staff.
Nevertheless, there are many persons who simply do not obey such a
policy, and tracking those persons consumes too much time
There are over 75,000 proxy sites and every day new ones appear.
There are numerous Yahoo groups, Google groups and mailing lists
that distribute new proxy sites every day.
Sure, a network admin can make it a full-time job to
run the race against the clock: block used proxy sites and block
Henrik Nordstrom wrote:
On Wed 2009-11-04 at 09:59 -0200, Marcus Kool wrote:
A URL filter is definitely a good option and a doomed success.
Sorry if you got the impression that I think URL filters are a bad idea.
I do not. Just that implementing URL filters alone without also having
Ultrasurf can be blocked by ufdbGuard, a free URL rewriter for Squid.
ufdbGuard uses various techniques to block Ultrasurf:
- verifying the HTTPS connections by opening a new HTTPS connection
and checking whether the other side speaks SSL+HTTP
- blocking HTTPS to sites without a FQDN in the URL
-
Robert Collins wrote:
On Mon, 2009-11-23 at 21:40 -0500, Linda Messerschmidt wrote:
Maybe. We would like to diagnose this problem and fix it properly,
but if
it's too much hassle you can go that way.
It would definitely be my preference to diagnose and fix the problem
and I can live with a
Linda started this thread with huge performance problems
when a Squid process with a size of 12 GB forks 15 times.
Linda emailed me that she is doing a test with
vm.pmap.pg_ps_enabled set to 1 (the kernel will
transparently transform 4K pages into superpages)
which gives a big relief for TLB management
likely very simple but it is unknown how much it helps.
option 4 is simple, but depending on the functionality
of the rewriter, it is or is not acceptable. You need to experiment
to see if it helps.
Marcus
Linda Messerschmidt wrote:
On Tue, Nov 24, 2009 at 9:52 PM, Marcus Kool
marcus.k
Linda Messerschmidt wrote:
On Wed, Nov 25, 2009 at 7:43 AM, Marcus Kool
marcus.k...@urlfilterdb.com wrote:
The result of the test with vm.pmap.pg_ps_enabled set to 1
is ... different than what I expected.
The values of vm.pmap.pde.p_failures and vm.pmap.pde.demotions
indicate that the page
Stripes need to be larger than the average object size to have
concurrent access to more than one object at the same time.
The *average* object size is 13 KB, so to be on the safe side
I would use a stripe size of 32K or more.
The optimal size also depends on the file system type that you use.
is the simplest rescue until
the Squid developers come with a solution.
Marcus
Linda Messerschmidt wrote:
On Wed, Nov 25, 2009 at 11:18 AM, Marcus Kool
marcus.k...@urlfilterdb.com wrote:
The FreeBSD list may have an explanation why there are
superpage demotions before we expect them (when
John Doe wrote:
From: Matus UHLAR - fantomas uh...@fantomas.sk
On 08.12.09 02:41, John Doe wrote:
Yes but, as long as squid does not handle disk crashes gracefully, I am
stuck with RAID...
What kind of RAID? For mirrors, you don't need a stripe size. Stripes aren't
safer than single disks.
It depends on the number of disks that you use for the cache.
As a rule of thumb: 10 I/Os per disk is fine, so 10 threads per disk.
Only if you use very high-performance disk arrays should you
increase the number of threads per (logical) disk.
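As an illustration (the disk count is an assumption): with 3 aufs cache disks and roughly 10 threads per disk, the build could be configured with

./configure --enable-storeio=aufs --with-pthreads --enable-async-io=30

where the number given to --enable-async-io is the total number of I/O threads.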
Marcus
J. Webster wrote:
Would this
Landy,
If you are desperate for bandwidth I suggest blocking
ads (e.g. a.rad.msn.com) and 'user behaviour analysis'
(e.g. scorecardresearch.com).
Furthermore, you may consider blocking mp3 files.
Depending on what type of users you have, this can save
a lot of bandwidth.
Marcus
Landy Landy
Landy Landy wrote:
If you are desperate for bandwidth I suggest blocking
ads (e.g. a.rad.msn.com) and 'user behaviour analysis'
(e.g. scorecardresearch.com).
Furthermore, you may consider blocking mp3 files.
Depending on what type of users you have, this can save
a lot of bandwidth.
acl blockanalysis01 dstdomain .scorecardresearch.com .google-analytics.com
acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads02 dstdomain .adserver.yahoo.com pagead2.googlesyndication.com
http_access deny blockanalysis01
http_access deny blockads01
http_access deny blockads02
Kinkie wrote:
On Thu, Feb 25, 2010 at 5:19 PM, Denys Fedorysychenko
nuclear...@nuclearcat.com wrote:
On Thursday 25 February 2010 13:42:52 Amos Jeffries wrote:
My opinion of RAID behind Squid is very poor. Avoid if at all possible.
HW RAID is claimed to be workable though, particularly as
Michel,
Proxies are the URL filter circumventors, so if you like
to use a URL filter, you should always block proxies.
Henrik stated in a separate response that some browsers have
problems with HTTP 302 redirect responses. I have no access
to all types of web browsers, and Microsoft Internet
mic...@casa.co.cu wrote:
Marcus Kool marcus.k...@urlfilterdb.com wrote:
Michel,
Proxies are the URL filter circumventors, so if you like
to use a URL filter, you should always block proxies.
Henrik stated in a separate response that some browsers have
problems with HTTP 302 redirect
Jaap,
URLfilterDB has over 95,000 proxy servers in its commercial URL database.
Each day there are many new ones.
If you are serious about blocking access to them you need a
good URL filter.
I represent URLfilterDB but with some googling you will find
lots of others.
Best regards,
Marcus Kool
Or use an alternative: ufdbGuard.
ufdbGuard is a URL filter for Squid that has a much easier
configuration file than the Squid ACLs and additional
configuration files.
ufdbGuard is also multithreaded and very fast.
And a tip: if you are really serious about blocking
anything, you should also
I use squid
Squid Cache: Version 3.0.STABLE20
configure options: '--prefix=/local/squid' '--with-default-user=squid'
'--with-filedescriptors=2400' '--enable-icap-client' '--enable-storeio=aufs,ufs,null'
'--with-pthreads' '--enable-async-io=8' '--enable-removal-policies=lru'
Henrik Nordström wrote:
On Mon 2010-03-29 at 13:58 -0300, Marcus Kool wrote:
0.33 epoll_wait(6, {{EPOLLIN, {u32=23, u64=8800387989527}}}, 2400,
10) = 1
0.32 gettimeofday({1269878848, 223083}, NULL) = 0
0.31 read(27, 0xffd3de98, 256) = -1 EAGAIN (Resource
Henrik Nordström wrote:
On Fri 2010-04-02 at 15:41 -0300, Marcus Kool wrote:
strange indeed, but this is strace output with which I am not very familiar.
Should strace print the whole array that it uses as the argument to
epoll_wait, or does it just print the first element? (and the 2nd argument
Martin,
Valgrind is a memory leak detection tool.
You need some developer skills to run it.
If you have a test environment with low load you may want
to give it a try.
- download the squid sources
- run configure with CFLAGS="-g -O2" (a sketch of these steps follows below)
- run squid with valgrind
- wait
- kill squid with a TERM
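A rough shell sketch of those steps (paths and valgrind options are assumptions):

./configure CFLAGS="-g -O2" --prefix=/usr/local/squid
make && make install
# run squid in the foreground (no daemon mode) under valgrind
valgrind --leak-check=full --log-file=/tmp/squid.valgrind /usr/local/squid/sbin/squid -N
# after some time, send SIGTERM to the squid process so valgrind writes its report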
Ricardo,
ufdbGuard is a URL redirector for Squid.
Its main purpose is URL filtering and it is also capable
of filtering Skype the way that you want.
Skype uses direct communication (blocked by your firewall),
HTTP [proxy] (blocked by Squid since Skype does not obey the HTTP protocol)
and HTTPS
.
Does someone have an idea where the problem could be?
Martin
Marcus Kool marcus.k...@urlfilterdb.com wrote on 17.06.2010 16:15:09:
Martin,
Valgrind is a memory leak detection tool.
You need some developer skills to run it.
If you have a test environment with low load you may want
to give
yes.
1) the index is in memory and needs 10-20 MB of memory for each GB of cache on disk
2) the housekeeping of the index costs more CPU cycles for a larger cache
3) the housekeeping of the cached objects on disk costs time and grows when the cache is
larger. This can be minimised by having
If you want to block HTTPS for Google you need to block it for all domains
including google.co.uk, google.com.br, google.co.nz, google.com.au and
130 more.
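A hedged squid.conf sketch (the domain list here is deliberately incomplete and would need all the country domains):

acl google_sites dstdomain .google.com .google.co.uk .google.com.br .google.co.nz .google.com.au
acl CONNECT method CONNECT
http_access deny CONNECT google_sites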
Henrik Nordström wrote:
On Thu 2010-05-27 at 15:35 -0400, Dave Burkholder wrote:
Is there some way to specify via a Squid ACL that
Isaac Witmer wrote:
On Wed, Jul 21, 2010 at 4:57 PM, Marcus Kool
marcus.k...@urlfilterdb.com wrote:
yes.
1) the index is in memory and needs 10-20 MB of memory for each GB of cache on disk
I was under the impression (from the O'Reilly Squid manual) that recent
versions do not use up extra RAM
Francesco,
Here is a biased answer: check out http://www.urlfilterdb.com
Marcus @ URLfilterDB
Francesco Collini wrote:
Hello,
actually we use urlblacklist.com, we are registered users for providers.
It seems the Blacklist is not well maintained: updates are often
missing many censored
Nyamul Hassan wrote:
Hi,
I would build with the following in mind (a config sketch follows the list):
1. Better to have a separate disk for the cache stores.
2. Have a COSS store for objects less than 256k. And let AUFS handle
larger objects.
3. Don't have more than 75% of your disk allocated.
4. Only one AUFS store per disk.
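A rough squid.conf sketch of such a split (paths, sizes and the 256 KB boundary are assumptions, and the build needs --enable-storeio=coss,aufs):

cache_dir coss /cache1/coss 20000 max-size=262144
cache_dir aufs /cache2/aufs 80000 16 256 min-size=262144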
Ralf Hildebrandt wrote:
* Marcus Kool marcus.k...@urlfilterdb.com:
Nyamul Hassan wrote:
Hi,
I would build with the following in mind:
1. Better to have a separate disk for the cache stores.
2. Have a COSS store for objects less than 256k. And let AUFS handle
larger objects.
3. Don't have
Heinz Diehl wrote:
On 08.08.2010, Marcus Kool wrote:
vm.swappiness=20
vm.vfs_cache_pressure=50
Do you have some numbers that actually show a significant improvement?
No. I have experience. It seems that Amos has the same.
I think at least swappiness should better be 100 here, to free
Heinz Diehl wrote:
On 09.08.2010, Marcus Kool wrote:
I think that at least swappiness should be 100 here, to free as much
memory as possible. Unused applications hanging around for a long
time can conserve quite a lot of pagecache which otherwise could be used
actively.
Do you have any
Jose Ildefonso Camargo Tolosa wrote:
Hi!
On Tue, Aug 24, 2010 at 12:59 AM, Hamza Sani Abubakar Usman
hamza_s...@hotmail.com wrote:
Hi,
Can you please tell me how much RAM will be required if we use a
100 GB partition for Squid caching?
I don't remember. A quick google search
The old setting for cache_swap_high was 95.
A background process monitors the cache usage and
purges old objects. If you retrieve new large files
faster than the background process purges old ones,
you are in trouble.
Marcus
Rich Rauenzahn wrote:
[resending, I accidentally left off the list
The code example that you sent earlier shows it clearly:
there is an overflow bug.
It is extremely easy to fix too.
Marcus
Rich Rauenzahn wrote:
On Mon, Oct 4, 2010 at 2:56 AM, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
On 29.09.10 17:42, Rich Rauenzahn wrote:
This code strikes me as
Carefully read the code and its output that was sent on
09/29/2010 09:42 PM.
There is an overflow error.
Matus UHLAR - fantomas wrote:
On 05.10.10 09:14, Matus UHLAR - fantomas wrote:
well the same applies here means (or at least it should) that you must
make your program capable, by using
There are over 10 proxy sites and you need a blacklist
if you do not want to end up googling all day.
There is also software for VPNs and SSH tunnels that you will
never block with a blacklist. You need a professional
URL filter.
Marcus
John Dakos wrote:
Kromonos thank you for your
Gerson fserve Barreiros wrote:
Can I have your 90k URL database?
Sorry, like I said in my previous posting:
solutions that block 99% are all paid.
Blocking UltraSurf is easy btw.
one thing that I learned in all that time as a sysadmin is that it is
painful to a user when the site is not
short.cut...@yahoo.com.cn wrote:
--- On Thu, 7/10/10, Marcus Kool marcus.k...@urlfilterdb.com wrote:
From: Marcus Kool marcus.k...@urlfilterdb.com
Subject: Re: [squid-users] How to Block ByPass proxy Sites..
To: Gerson fserve Barreiros fse...@gmail.com
Cc: squid-users@squid-cache.org
I am author of ufdbGuard, a free URL filter for Squid.
You may want to check it out: ufdbGuard is multithreaded and supports
POSIX regular expressions.
If you do not want to use ufdbGuard, here is a tip:
ufdbGuard composes large REs from a set of simple REs:
largeRE = (RE1)|(RE2)|...|(REn)
DNS lookups are done by the resolver.
Options on Linux can be set in /etc/resolv.conf (see also man resolv.conf).
The default timeout is only 5 seconds and any program, including Squid,
that does a nameserver query should get an answer (including an error)
in 5 seconds.
In my case I have 3
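For example, an /etc/resolv.conf along these lines (the server addresses are placeholders):

options timeout:2 attempts:2
nameserver 192.168.0.1
nameserver 192.168.0.2
nameserver 192.168.0.3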
It is technically impossible to hide your WAN IP.
There are low level OS calls to retrieve the address of
the other party.
Marcus
Tony wrote:
I was told that this is all I need to get this to work. I'm using
the latest version of squid, 3.1.9. My browser proxy setting is set
to localhost 3128
Optimum Wireless Services wrote:
On Thu, 2010-12-16 at 21:21 +1300, Amos Jeffries wrote:
On 16/12/10 15:03, Optimum Wireless Services wrote:
Hello.
I don't know if this is the right place to ask about this issue; if it is
not, then I apologize.
I have a small WISP in my town and I would
It is technically feasible to share one or more fiber-attached
disks between multiple hosts when used with a disk array (a lot
more expensive and a lot faster than a single host-attached disk).
The more difficult part is to keep this shared disk synchronised
between hosts and to make sure that
Bob,
blocking proxies is challenging...
the-cloak.com can easily be blocked by the free ufdbGuard (it
is a Squid redirector) because it has an invalid SSL certificate
and ufdbGuard has an option to block that.
But there are many more HTTP-based proxy sites and to block those
you need a URL
Amos Jeffries wrote:
Ralf Hildebrandt wrote:
* Christos Tsantilas [EMAIL PROTECTED]:
But since I had heard that Squid 2.6 had better performance
than Squid 3.0, I would like to try that also as a backup.
Squid 3 is fast enough for most cases. You will not see any
difference in
Gary,
ufdbGuard is free. You can download it from
http://sourceforge.net/projects/ufdbguard
and you can use it with free URL databases.
You only need a database license if you use it with the
commercial URL database from URLfilterDB.
-Marcus
Gary wrote:
On Jan 23, Marcus Kool wrote
Joe,
you are not allowed to use echo statements that write to stdout because
Squid expects exactly ONE output line per input line that the script reads.
In case of an error Squid gets a second line from the script and issues an
'I do not expect this' error.
The exit is also not very nice.
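A minimal sketch of a well-behaved redirector loop (pass-through only, assuming no rewriting is needed; the single echo below is the one answer line that Squid expects):

#!/bin/sh
# read one request line from Squid, answer with exactly one line
while read url rest; do
    echo ""       # empty line = do not rewrite this URL
done

Any extra output for debugging must go to stderr or a log file, never to stdout.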
Matus UHLAR - fantomas wrote:
On 17.02.08 18:10, Sam Przyswa wrote:
We use Squid and SquidGuard to control webmail access; that works fine,
but for those who use the HTTPS protocol Squid/SquidGuard doesn't operate.
Is there a way to control HTTPS as well as HTTP traffic?
No. The HTTPS traffic
This topic should be discussed in the squidguard mailing list...
You might want to try ufdbguard.
-Marcus
Steve B wrote:
Sorry Mark, the problem didn't help and I don't have the email anymore...
Anyways. I am trying to get squidGuard reinstalled on Fedora 8, which
it was installed before. I
Well,
I am interested in speed, features and ICAP.
So I would like -2 and -3 to merge.
It seems to me that for the sake of being polite with each other
we do not want to call the -2 / -3 issue a fork, but effectively
it really is a fork.
So here is my question back to the main maintainers:
do you
Steve,
adzapper uses a very large amount of CPU time compared to other redirectors.
On my box Squid uses 6.5 times more CPU time than the redirector (ufdbGuard).
Marcus
Steve Snyder wrote:
On Thursday 06 March 2008 11:05:24 am Adrian Chadd wrote:
Well, the way I'd approach it is to first get
I wish that the wiki page on RAID were rewritten.
Companies depend on internet access and a working Squid proxy,
and therefore the advocated 'no problem if a single disk fails'
is not today's reality.
One should also consider the difference between
simple RAID and extremely advanced RAID disk
The point of why I started the discussion is that the statement in the wiki,
'Do not use RAID under any circumstances', is at least outdated.
Most companies will trade performance for reliability because they depend
on internet access for their business and cannot afford to have 2-48 hours
of
Kinkie wrote:
On Wed, Mar 26, 2008 at 3:30 PM, Marcus Kool
[EMAIL PROTECTED] wrote:
The point of why I started the discussion is that the statement in the wiki,
'Do not use RAID under any circumstances', is at least outdated.
Well, it says: 'Don't'. Agreed, it's a bit radical. You're welcome
.
-Marcus
Richard Wall wrote:
On Tue, Mar 25, 2008 at 1:23 PM, Marcus Kool
[EMAIL PROTECTED] wrote:
I wish that the wiki page on RAID were rewritten.
Companies depend on internet access and a working Squid proxy,
and therefore the advocated 'no problem if a single disk fails'
is not today's
Dennis,
A negation (!) is needed if you want Pornography NOT to pass.
The pass line should be:
pass !Pornography !Warez all
-Marcus
PS: if you do not block proxies, users still have access to all pornography
Dennis B. Hopp wrote:
Ooops... the acl should be
acl {
default {
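For reference, a complete block of that shape typically looks something like this (the destination names reuse the ones discussed above; the redirect URL is a placeholder):

acl {
    default {
        pass !Pornography !Warez all
        redirect http://www.example.com/blocked.html
    }
}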
If you are serious about blocking proxies and ssh/vpn tunnels,
you have 20 or so options and they are all commercial.
-Marcus
Anil Saini wrote:
How do we stop anonymous browsing?
There is a huge collection of web proxies to bypass the ACL block list.
Is there any solution to block them all without making
Dwayne,
If you do not redirect+filter HTTPS you can never block
HTTPS-based proxies. To be able to filter HTTPS the
browsers must be configured to use Squid for HTTP and HTTPS.
Once Squid also proxies the HTTPS traffic, you may use
ufdbGuard.
ufdbGuard is a free redirector which can block
Shaine,
Squid only expects a rewritten URL back.
You may add information to a URL with parameters e.g.
http://www.example.com/blocked.cgi?naughtyuser=username&ip=10.1.1.1&...
But this is only useful if the script blocked.cgi on the webserver
parses the parameters.
Marcus
Shaine wrote:
Dear
Guillaume BRAUX wrote:
Hello,
I use an HTTP captive portal to authenticate users and give them access to
network resources. It stores Username/MAC/IP in a database when a
user authenticates, and adds the needed filtering rules in Iptables/Netfilter
(based on IP and MAC) to open usual
Kurt,
Since disk access must be minimal, the access_log should be none.
Make sure that the other log files are also none.
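In squid.conf terms that is something like:

access_log none
cache_store_log none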
-Marcus
Kurt Buff wrote:
All,
I'm running squid 2.6.17 on a FreeBSD 6.2-STABLE. This should be
mostly irrelevant, as I am trying to ameliorate a hardware issue while
I
Shaine,
Because you use the 302: prefix, the URL that you pass back from the redirector
to Squid is sent back to the browser. Because of the 302, the browser
sends a new request to Squid, and the new URL is the URL that the redirector sent
in the first place. This URL is passed by Squid to the
Michel wrote:
On Thu, 2008-07-03 at 12:04 +0800, Roy M. wrote:
We are planning to replace this testing server with two or three
cheaper 1U servers (sort of redundancy!)
Intel Dual Core or Quad Core CPU x1 (no SMP)
Squid uses only one core, so better a Dual core than a Quad...
I am not
Shaine,
When a program runs fine from the command line but not from a daemon,
the cause is usually a difference in environment: PATH or another
environment variable, or the user ID.
Remember that squid is started with a very clean environment.
I suggest running the redirect program after switching to the squid user:
# su - squid
Hi Ismail,
I would add a redirect statement to the int_net acl rule.
observation: blocking porn without blocking proxies is the same as blocking
nothing.
You might want to try ufdbGuard: it is faster than squidguard, and has
additional features for enforcing Google SafeSearch and verifying
Ismail,
ufdbGuard is free.
It can be used with a free URL database and
with a commercial database.
-Marcus
İsmail ÖZATAY wrote:
Marcus Kool wrote:
Hi Ismail,
I would add a redirect statement to the int_net acl rule.
observation: blocking porn without blocking proxies is the same
Hi Martin,
Squid is a little awkward:
the URL returned by squidguard must have the same protocol as the original URL.
So for a URL with HTTPS protocol, squidguard must return a URL that uses the
HTTPS protocol.
This is really not nice but the workaround is to use a 302 redirection:
redirect
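A hedged example of what such a squidGuard redirect line usually looks like (the target URL is a placeholder):

redirect 302:http://www.example.com/blocked.html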
Or use a commercial URL filter from URLfilterDB.
Marcus
Amos Jeffries wrote:
Johnson, S wrote:
Anyone have recommendations for a URL filtering list through squid?
Yes. Don't.
Or if you do, use a well-maintained one, such as SURBL.
Amos
ufdbGuard can block Skype.
ufdbGuard is a free URL redirector which works with Squid.
Blocking Skype is based on SSL connection verification
and since Skype uses port 443 but has no SSL handshake,
the connection is blocked when the option
enforce-https-official-certificate is set ON.
Note that
Amos Jeffries wrote:
On 24/01/11 23:09, Michael Hendrie wrote:
On 24/01/2011, at 8:17 PM, Saiful Alam wrote:
OK I have kept your suggestion in my mind, but right now I'm not in
a position to buy two HDD's. Maybe I can afford to buy them 15 days
later. For the time being, my prime problem is
Michelle,
most likely you have Squid 2.6 on your system and now also
installed 3.2 in a different location.
What is the output of
/usr/local/squid/sbin/squid -v
Marcus
Michelle Dawson wrote:
Hi Guys,
I have just compiled the Squid 3.2.0.4 from source. But now that it is
compiled it the
Leonardo,
I suggest looking at ufdbGuard. It is a free URL filter with
additional security features and SafeSearch enforcement for
many search engines.
Marcus
Leonardo wrote:
Dear all,
I have a working install of Squid 3.1.7 with Squirm 1.0-BetaB, which
provides URL rewriting. The Squid
You can use ufdbGuard. It is a URL filter for Squid.
ufdbGuard accepts URLs (domain and path), domains and expressions.
Marcus
Zartash . wrote:
Dear All, We are blocking URLs using the url_regex feature (URLs are stored in a
file), but we are unable to block URLs having special characters (like
ufdbGuard is a URL filter for Squid that does exactly what Zartash needs.
It transforms codes like %xx to their respective characters and does
URL matching based on the normalised/translated URLs.
It also supports regular expressions, Google Safesearch enforcement and more.
Marcus
Amos
There seems to be a misconception about what sslbump can and cannot do.
sslbump can only decrypt SSL connections.
sslbump cannot decrypt all other types of traffic that use the
HTTPS port and CONNECT method.
So, for example, it cannot decrypt Skype traffic and files
containing a virus can still
Zartash,
can you upload the files
cache.log
ufdbguardd.log
ufdbGuard.conf
to http://upload.urlfilterdb.com ?
In case that the files are small you can send them directly to me.
Marcus
Zartash . wrote:
Thanks, I have installed ufdbGuard and defined it in squid but it doesn't
seem to redirect
I heard that development of Dansguardian stopped, so I suggest investigating
other solutions. You could reduce squid+DG+Squid to Squid+ufdbGuard.
ufdbGuard is a free URL filter and works with various URL database providers.
Marcus
bwright wrote:
Any other ideas?
I know there have to be
Osmany,
look in access.log.
It should say what is happening:
I expect this:
... TCP_MISS/301 GET http://kaspersky
... TCP_MISS/200 GET ftp://dnl-kaspersky.quimefa.cu:2122/Updates
and does the client use Squid for the ftp protocol ??
And the RE matches too many strings.
I recommend to
:2122/Updates/index/u0607g.xml.klz
I've changed the script many times so that I can get what I want but I
had no success. Can you please help me?
On Sun, 2011-03-13 at 21:27 -0300, Marcus Kool wrote:
Osmany,
look in access.log.
It should say what is happening:
I expect this:
... TCP_MISS/301
Dejan,
Squid is known to be CPU bound under heavy load and the
Quad core running at 1.6 GHz is not the fastest.
A 3.2 GHz dual core will give you double speed.
The config parameter 'minimum_object_size 10 KB'
prevents objects smaller than 10 KB from being written to disk.
I am curious to know
If your users do not mind, you can block ads and user tracking
sites, many of which produce 1x1 GIFs.
Most ads and tracking codes are not cacheable and may consume a lot of bandwidth.
This all depends on which sites your users visit of course.
Marcus
Amos Jeffries wrote:
On 31/03/11 01:38, Ed W wrote:
Hi,
The cache_mem parameter is 10 MB so the cached objects in memory are 10 MB.
The cache_dir is 10 GB so the cached objects on disk are 10 GB.
Most likely squid is slow because of the I/O.
If you have 16 GB of memory and a 64-bit OS and 64-bit Squid you can set
cache_mem to 4 GB to have a lot more