with an empty cache.
It didn't help. It created a new fake Facebook cert, but the cert
doesn't fully match the characteristics of the real cert:
http://bugs.squid-cache.org/show_bug.cgi?id=4102
Please add weight to the bug report :)
Amm.
at SSL_ports and Safe_ports in your squid.conf (unless you rewrote
it completely)
Amm.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
I had pointed this out a few months back, but I suppose it was either not
corrected or not considered necessary.
Amm.
On 09/30/2014 02:15 PM, Дмитрий Шиленко wrote:
without www.* -- Forbidden You don't have permission to access /
on this server.
Visolve Squid wrote on 30.09.2014 11:42:
Hi,
The http
:
via off
forwarded_for delete
Amm
-For headers.
I was too lazy to find out which header exactly, but I disabled both anyway.
Amm.
regardless of value of mozillapkix
Thanks and regards,
Amm
,
PS: Sorry for being off-topic on squid mailing list.
AMM
On 07/26/2014 02:36 PM, Amos Jeffries wrote:
On 26/07/2014 8:36 p.m., Stakres wrote:
HI Amm,
Everyone is free to modify the script (client side) to send only YouTube URLs;
there is no need to send all the Squid traffic.
...
Bye Fred
It would be better practice to publish a script which is pre
in background.
Amm
On 07/11/2014 09:45 AM, Alex Rousskov wrote:
On 04/11/2014 11:01 PM, Amm wrote:
I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)
Now there is this (BROKEN) bank site:
https://www.mahaconnect.in
This site closes the connection if you try TLS1.2 or TLS1.1.
snip
When
.
Please see the bug report for details.
Thanks and regards,
Amm.
.
Thanks and regards,
Amm.
On 05/05/2014 07:39 PM, Martin Sperl wrote:
Hi Amos!
...
So I wonder if it is really a wise move to potentially cut off
people from security patches because they can no longer
compile squid on the system they want to use it on, just
due to the build-tool dependencies.
192.168.10.8 --dport 443 -j DNAT
--to-destination 192.168.10.254:3127
For port 443 interception use https_port, not http_port.
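A minimal sketch of how the two sides fit together, reusing the addresses from the rule above; the squid cert path is hypothetical:

```
# iptables side (addresses as in the rule above):
iptables -t nat -A PREROUTING -s 192.168.10.8 -p tcp --dport 443 \
  -j DNAT --to-destination 192.168.10.254:3127
# squid.conf side: 3127 must be an https_port in intercept mode,
# with ssl-bump if bumping is wanted (cert path is an assumption):
https_port 3127 intercept ssl-bump cert=/etc/squid/myCA.pem
```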
Amm.
On 04/13/2014 04:27 PM, Amos Jeffries wrote:
On 12/04/2014 5:23 p.m., Amm wrote:
So I ran this command:
openssl s_client -connect 192.168.1.2:8081
where 8081 is https_port on which squid runs. (with sslbump)
And BOOM, squid went into an infinite loop! And started running out of file
descriptors.
should be allowed. But this sslbump
still continues and causes an infinite loop.
Eliezer
Amm.
resolved?
Is it a Firefox bug or a squid bug?
Thanks in advance,
Amm.
On Friday, 11 April 2014 4:46 PM, Amos wrote:
On 11/04/2014 10:16 p.m., Amm wrote:
After this upgrade, i.e. from 1.0.0 to 1.0.1, Firefox started giving a
certificate error stating sec_error_inadequate_key_usage.
This does not happen for all domains but looks like happening ONLY
for google
is
regenerated?
Raf
I had cleared the ssl cert store but the issue still occurred (without the patch).
So finally I gave up trying different things and used the patch.
Here is exact same issue discussed earlier in mailing list:
http://www.squid-cache.org/mail-archive/squid-users/201311/0310.html
Amm
, but then how come Firefox did not throw a warning just yesterday?
The Squid version and configuration were exactly the same yesterday.
Unfortunately I cannot switch OpenSSL back to the older version, else I
would have checked whether squid mimicked key_usage in that version
as well or not.
Amm
not tell bank to upgrade)
Amm.
-first?
Or shouldn't squid check something like this?
If (destIP == selfIP and destPort == selfPort) then break?
I am also not sure if this can be used to DoS. So just reporting,
Amm.
discussion done in list previously as
well (in 2013), here is the link:
http://www.squid-cache.org/mail-archive/squid-users/201303/0421.html
Regards
Amm
On 03/15/2014 05:11 PM, Amos Jeffries wrote:
On 15/03/2014 6:46 p.m., Amm wrote:
I would like to mark outgoing packet (on server side) with SAME MARK as on
incoming (NATed or CONNECTed) packet.
http://www.squid-cache.org/Doc/config/qos_flows/
Squid default action is to pass
On 03/15/2014 08:03 PM, Amm wrote:
Squid
,
Amm.
with a timeout error.
Anyway, my bug report follows below.
-
From: Amm ammdispose-sq...@yahoo.com
To: squid-users@squid-cache.org squid-users@squid-cache.org
Sent: Monday, 26 August 2013 4:18 PM
Subject: squid -z for SMP does not create worker's directories
Hello all,
I have following
the client end. Even with NAT interception transparency, the proxy IP
is hidden from the client. The server knows, but not the client.
Amos
Unless I misunderstood the original question, a combination of curl,
www.whatismyip.com and grep should work, shouldn't it?
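A hedged sketch of that combination; the proxy address and the page's exact wording are assumptions, so the grep pattern may need adjusting:

```shell
# Fetch the page through the proxy (proxy address is hypothetical) and
# extract the first dotted-quad IP that appears in the response.
curl -s -x http://192.168.1.2:3128 http://www.whatismyip.com/ \
  | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1
```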
Amm.
Use http_access for that; always_direct is not for access restriction.
Amm.
- Original Message -
From: csn233 csn...@gmail.com
To: Amm ammdispose-sq...@yahoo.com
Cc:
Sent: Tuesday, 30 July 2013 2:03 PM
Subject: Re: [squid-users] Basic questions on transparent/intercept proxy
Thanks to all who replied. Looks like the ssl_bump none all is
required
anything?
No. But it will log only IPs, not the host name or URL.
Amm
-cache.org mail exchanger = 10 squid-cache.org.
squid-cache.org mail exchanger = 90 mx2.squid-cache.org.
mx2 does not seem to be working.
Regards,
Amm.
- Original Message -
From: Amm ammdispose-sq...@yahoo.com
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent
From: csn233 csn...@gmail.com
Sent: Monday, 29 July 2013 10:40 PM
Subject: Re: [squid-users] Basic questions on transparent/intercept proxy
On Sun, Jul 28, 2013 at 9:11 PM, Amm ammdispose-sq...@yahoo.com wrote:
- Original Message -
From: csn233 csn...@gmail.com
To: squid-users
My previous e-mail bounced back.
squid-users@squid-cache.org: Mail server for squid-cache.org unreachable
for too long
So reposting, sorry if already it had reached the group.
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
On 20/07/2013 2:04 p.m., Amm wrote
The second one will log the FULL query for a particular IP.
Thanks in advance,
Amm
anything in access log.
If I stop squid then clients can't access any website. (This indicates that
packets are indeed routed through squid.)
I gave up later on. I might give it a try again after a few days.
Amm.
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Tuesday, 28 May 2013 4:15 PM
Subject: Re: [squid-users] Re: TPROXY
On 28/05/2013 8:11 p.m., Amm wrote:
From: alvarogp alvarix...@gmail.com
://pkgs.fedoraproject.org/cgit/squid.git/tree/
Amm
Eliezer
- Original Message -
From: Alex Domoradov alex@gmail.com
To: Amm ammdispose-sq...@yahoo.com
Cc: squid-users@squid-cache.org squid-users@squid-cache.org
Sent: Monday, 13 May 2013 6:22 PM
Subject: Re: [squid-users] Looking for squid spec file
On Mon, May 13, 2013 at 3:45 PM
kid2| clientProcessHit: Vary object loop!
Squid works fine though (from just 5-10 minutes of testing).
Any idea what the issue is? Can it make squid unstable? Or is it just a warning
of some sort which can be safely ignored?
Thanks and regards,
Amm.
= HIER_DIRECT;
#endif
++n_tries;
+ request->hier.note(serverConn, request->GetHost());
request->flags.pinned = 1;
if (pinned_connection->pinnedAuth())
request->flags.auth = 1;
Regards
Amm.
From: Amos
with which you can disable DNS checks (which cause
the crash)
Amm.
It worked earlier, but you need to put
http_access deny ads
before
http_access allow LAN
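A hedged sketch of the ordering with hypothetical acl definitions; http_access rules are evaluated top-down and the first match wins, so the deny must come before the allow:

```
acl ads dstdomain .doubleclick.net   # hypothetical ad-domain list
acl LAN src 192.168.0.0/24           # hypothetical LAN range
http_access deny ads
http_access allow LAN
http_access deny all
```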
Amm
core
Now squid forks a 2nd worker and lands on the 1st again?!
Is this possible? (a kind of race condition)
Thanks and regards,
Amm.
- Original Message -
From: Alex Rousskov rouss...@measurement-factory.com
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent: Saturday, 9 March 2013 11:54 AM
Subject: Re: [squid-users] Squid 3.3.2 and SMP
On 03/08/2013 07:40 PM, Amm wrote:
Lets say I have
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc:
Sent: Thursday, 7 March 2013 1:11 PM
Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
On 7/03/2013 7:22 p.m., Amm wrote:
snip
For testing, URL was accessed
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc:
Sent: Friday, 8 March 2013 2:47 AM
Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
On 7/03/2013 10:54 p.m., Amm wrote:
- Original Message -
[%h{Host
-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12620.patch
But I doubt that has any relation to this.
Regards,
Amm.
://mail.google.com/mail/images/c.gif? - PINNED/2404:6800:4009:801::1015
image/gif
(Note: URL may not be same in both cases, these are just example)
I don't have IPv6; why is it showing an IPv6 address in the 2nd case?
Using squid 3.3.2.
Regards
Amm
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc:
Sent: Thursday, 7 March 2013 4:11 AM
Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
On 7/03/2013 2:03 a.m., Amm wrote:
I just tried 443 port interception
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc:
Sent: Thursday, 7 March 2013 11:19 AM
Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
On 7/03/2013 5:30 p.m., Amm wrote:
- Original Message -
From: Amos
- Original Message -
From: Alex Rousskov rouss...@measurement-factory.com
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent: Wednesday, 6 March 2013 6:20 AM
Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
On 03/04/2013 10:11 PM, Amm wrote
establishing a connection with the client. (I have personally not tried this
setup, so I cannot tell for sure.)
Or you need to create rules at the firewall level which will *not* divert
traffic for those sites to squid.
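A minimal iptables sketch of that approach, assuming a REDIRECT-based interception setup; the site IP and squid port are hypothetical. The ACCEPT has to sit before the diverting rule, since the first matching rule wins:

```
iptables -t nat -I PREROUTING -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 3127
```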
Amm.
- Original Message -
From: Sandrini Christian (xsnd) x...@zhaw.ch
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent: Wednesday, 20 February 2013 3:29 PM
Subject: [squid-users] squid running out of filedescriptors
Hi
Today squid was suddenly running at
ulimit -n must be run as the same user that the proxy runs as.
In Debian/Ubuntu that user is proxy, and if you type ulimit as root you
will get a different answer than if you type ulimit logged in as the proxy user.
Be sure to check the ulimit for the right user.
Or you can check
youtube out a
second ISP line. We have two connections and I'd like to push all youtube
out the second connection.
Try this:
acl yt dstdom_regex -i youtube
tcp_outgoing_address 1.2.3.4 yt
1.2.3.4 is the IP address of the 2nd line (it should be on the same machine as squid).
Amm.
Umm your reply confused me further! :)
Please see below inline.
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
On 14/02/2013 10:12 p.m., Amm wrote:
I compiled squid using --with-filedescriptors=16384.
So do I still need
in it.
.include /lib/systemd/system/squid.service
[Service]
LimitNOFILE=16384
3) systemctl daemon-reload
4) systemctl restart squid.service
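The same steps can be done with a systemd drop-in directory instead of editing the unit file; this is a hedged sketch, and the drop-in path is an assumption based on the standard systemd layout:

```shell
# Create a drop-in that raises the file-descriptor limit for squid.service.
mkdir -p /etc/systemd/system/squid.service.d
cat > /etc/systemd/system/squid.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=16384
EOF
# Reload unit files and restart squid so the new limit takes effect.
systemctl daemon-reload
systemctl restart squid.service
```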
Hope it helps
Amm
- Original Message -
From: Amm ammdispose-sq...@yahoo.com
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent
is not deemed ready for production use, we believe it is
ready for wider testing by the community.
Also, I have not seen any official announcement here in the mailing list. Sorry if I
missed it.
So please clarify whether squid 3.3.1 is released as stable for production use, or
not?
Thank you,
Amm.
in 3.2 as I did not notice it happening in 3.2
So please fix it.
Regards,
AMM
- Forwarded Message -
From: Amm ammdispose-sq...@yahoo.com
To: squid-...@squid-cache.org
Cc:
Sent: Thursday, 13 December 2012 1:28 PM
Subject: assertion failed with dstdom_regex with IP based URL at least
and regards,
Amm.
- Original Message -
From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Cc:
Sent: Thursday, 20 December 2012 7:56 AM
Subject: Re: [squid-users] Squid 3.2.5 wants to use IPv6 address?
For the record squid-3.2 tries all the destination IPs
http://www.squid-cache.org/Doc/config/ssl_bump/
- Original Message -
From: Sharon Sahar sharon.sa...@gmail.com
For such connections, is there an option to:
1. Disable SSL Bump for certain domains / IPs?
2. Disable squid for certain domains / IPs?
- Original Message -
From: Alex Rousskov rouss...@measurement-factory.com
Hi Amm,
There is a solution, but it requires switching from a url_rewriter
script to an eCAP adapter. Adapters can set annotations (name:value
tags) that Squid can log via %adapt::last_h logformat code
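A hedged sketch of the logging side, mirroring the %adapt::last_h code mentioned above; the logformat name and log path are hypothetical:

```
logformat withnote %ts.%03tu %>a %adapt::last_h %ru
access_log /var/log/squid/annotated.log withnote
```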
/config/external_acl_type/
which says: tag =Apply a tag to a request (for both ERR and OK results)
So can redirector do the same?
Thanks in advance,
Amm
of the browser. And
people get confused.
And if I recall right, I have also seen some browsers complaining
about XSS or something, because the URL domains do not match.
I suppose there is no solution as of now. But thanks again.
Regards,
Amm
--
On Wed 31 Oct, 2012 9:03 PM IST Heinrich Hirtzel wrote:
http_port 10.0.1.1.:3128 intercept
https_port 10.0.1.1.:443 ssl-bump cert=/user/local/squid3/ssl_cert/myCA.pm
You have forgotten intercept on the https_port line.
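For comparison, a hedged sketch of the corrected pair of lines; the trailing dot in the IP and the .pm and /user/ spellings in the quoted config look like typos, so they are adjusted here as assumptions:

```
http_port 10.0.1.1:3128 intercept
https_port 10.0.1.1:443 intercept ssl-bump cert=/usr/local/squid3/ssl_cert/myCA.pem
```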
Amm
Further to this on running squidclient mgr:info
I always get:
Maximum number of file descriptors: 16384
be it after start or after reload OR even if I mention max_filedescriptor 1024
or 4096.
looks like somewhere this number 16384 is hard-coded in 3.3.0.1
Amm
- Original Message
Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Wednesday, 24 October 2012 6:44 PM
Subject: Re: [squid-users] 3.3.0.1 warning on reload - max_filedescriptors
disabled
On 24/10/2012 8:00 p.m., Amm wrote:
looks like somewhere this number 16384 is hard-coded in 3.3.0.1
crashed and restarted. But there is not much information
on why. Maybe something in forward.cc:217.
So just reporting; please check.
Thank you,
Amm.