headers.
I was too lazy to find which header exactly, but I disabled both anyway.
Amm.
a solution:
via off
forwarded_for delete
Amm
regardless of value of mozillapkix
Thanks and regards,
Amm
On 07/26/2014 02:36 PM, Amos Jeffries wrote:
On 26/07/2014 8:36 p.m., Stakres wrote:
HI Amm,
Everyone is free to modify the script (client side) to send YouTube URLs
only; there is no need to send all the Squid traffic.
...
Bye Fred
It would be better practice to publish a script which is pre
urity risk.
Regards,
PS: Sorry for being off-topic on squid mailing list.
AMM
background.
Amm
On 07/11/2014 09:45 AM, Alex Rousskov wrote:
On 04/11/2014 11:01 PM, Amm wrote:
I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)
Now there is this (BROKEN) bank site:
https://www.mahaconnect.in
This site closes the connection if you try TLS1.2 or TLS1.1.
When I
.
Please see bug report for details.
Thanks and regards,
Amm.
.
Thanks and regards,
Amm.
On 05/05/2014 07:39 PM, Martin Sperl wrote:
Hi Amos!
...
So I wonder if it is really a wise move to potentially cut off
people from security patches because they can no longer
compile squid on the system they want to use it on just
due to the build-tool dependencies
192.168.10.8 --dport 443 -j DNAT
--to-destination 192.168.10.254:3127
For 443 interception, use https_port, not http_port.
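A minimal sketch of the matching squid.conf side, using the addresses from the iptables rule above (the HTTP port number and certificate path are placeholders):

```
# plain HTTP interception
http_port 3128 intercept
# intercepted HTTPS must use https_port with ssl-bump
https_port 3127 intercept ssl-bump cert=/etc/squid/exampleCA.pem
```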
Amm.
should be allowed. But this sslbump
still continues and causes an infinite loop.
Eliezer
Amm.
On 04/13/2014 04:27 PM, Amos Jeffries wrote:
On 12/04/2014 5:23 p.m., Amm wrote:
So I ran this command:
openssl s_client -connect 192.168.1.2:8081
where 8081 is https_port on which squid runs. (with sslbump)
And BOOM, squid went in to infinite loop! And started running out of
file
ing server-first?
Or shouldn't squid check something like this?
If (destIP == selfIP and destPort == selfPort) then break?
I am also not sure if this can be used to DoS. So just reporting,
Amm.
n not tell bank to upgrade)
Amm.
> Amos
Ok, but then how come Firefox did not show the warning just yesterday?
Squid version and configuration were exact same yesterday.
Unfortunately I cannot switch OpenSSL back to the older version, else I
would have checked whether squid "mimicked" key_usage in that version
as well.
Amm
d-users/201311/0310.html
Amm
On Friday, 11 April 2014 4:46 PM, Amos wrote:
> On 11/04/2014 10:16 p.m., Amm wrote:
>> After this upgrade i.e. from 1.0.0 to 1.0.1, Firefox started giving
>> certificate error stating "sec_error_inadequate_key_usage".
>>
>> This does not happen for all
R this has been resolved?
Is it Firefox bug or squid bug?
Thanks in advance,
Amm.
utgoing_mark.
I just noticed that the same discussion took place on the list previously
(in 2013); here is the link:
http://www.squid-cache.org/mail-archive/squid-users/201303/0421.html
Regards
Amm
On 03/15/2014 08:03 PM, Amm wrote:
On 03/15/2014 05:11 PM, Amos Jeffries wrote:
On 15/03/2014 6:46 p.m., Amm wrote:
I would like to mark outgoing packet (on server side) with SAME MARK
as on incoming (NATed or CONNECTed) packet.
http://www.squid-cache.org/Doc/config/qos_flows/
Squid
On 03/15/2014 05:11 PM, Amos Jeffries wrote:
On 15/03/2014 6:46 p.m., Amm wrote:
I would like to mark outgoing packet (on server side) with SAME MARK as on
incoming (NATed or CONNECTed) packet.
http://www.squid-cache.org/Doc/config/qos_flows/
Squid default action is to pass the
,
Amm.
timeout error.
Anyway, my bug report follows below.
-
From: Amm
To: "squid-users@squid-cache.org"
Sent: Monday, 26 August 2013 4:18 PM
Subject: squid -z for SMP does not create worker's directories
Hello all,
I have the following configuration (for SMP):
workers 2
cache_dir
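For reference, a typical SMP setup gives each worker its own cache_dir via the ${process_number} macro; the path and size values here are placeholders:

```
workers 2
# each worker gets its own directory, e.g. /var/spool/squid/1 and /2
cache_dir ufs /var/spool/squid/${process_number} 1000 16 256
```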
-regex-Not-working-td4661633.html
@ranmanh
Use http_access; always_direct is not for access restriction.
Amm.
>Not from the client end. Even NAT interception transparency the proxy IP
>is hidden from the client. The server knows, but not the client.
>
>Amos
Unless I misunderstood the original question, a combination of curl,
www.whatismyip.com and grep should work, shouldn't it?
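A sketch of that check (the proxy address and the exact page layout of www.whatismyip.com are assumptions; adjust to your setup):

```shell
# Fetch a "what is my IP" page through the proxy and grep out the address.
# If the reply shows the proxy's public address rather than the client's,
# the traffic is going via Squid.
curl -s -x http://192.168.1.2:3128 http://www.whatismyip.com/ \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'
```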
Amm.
- Original Message -
> From: csn233
> To: Amm
> Cc:
> Sent: Tuesday, 30 July 2013 2:03 PM
> Subject: Re: [squid-users] Basic questions on transparent/intercept proxy
>Thanks to all who replied. Looks like the "ssl_bump none all" is
> required to sto
> From: csn233
>Sent: Monday, 29 July 2013 10:40 PM
>Subject: Re: [squid-users] Basic questions on transparent/intercept proxy
>On Sun, Jul 28, 2013 at 9:11 PM, Amm wrote:
>> - Original Message -
>>
>>> From: csn233
>>> To: "squid-u
-cache.org mail exchanger = 10 squid-cache.org.
squid-cache.org mail exchanger = 90 mx2.squid-cache.org.
mx2 does not seem to be working.
Regards,
Amm.
- Original Message -
> From: Amm
> To: "squid-users@squid-cache.org"
> Cc:
> Sent: Sunday, 28 July 2013
it will log only IPs, not the host name or URL.
Amm
My previous e-mail bounced back.
: Mail server for "squid-cache.org" unreachable
for too long
So reposting; sorry if it had already reached the group.
- Original Message -
> From: Amos Jeffries
>> On 20/07/2013 2:04 p.m., Amm wrote:
>> Hello,
>>
ing, yahoo
Second one will log the FULL query for a particular IP.
Thanks in advance,
Amm
> From: Amos Jeffries
>To: squid-users@squid-cache.org
>Sent: Tuesday, 28 May 2013 4:15 PM
>Subject: Re: [squid-users] Re: TPROXY
>
>
>On 28/05/2013 8:11 p.m., Amm wrote:
>>
>>> From:
ly
>> and the web pages are showed perfectly. The problem I have is that this
>> accesses are not reflected in the access.log and cache.log, so could be
>> possible that squid is not caching any cacheable content?
I have had the exact same problem when I was trying TPROXY with a similar
configuration.
Squid would route packets but not LOG anything in the access log.
If I stop squid then clients can't access any website. (This indicates that
packets are indeed routing through squid.)
I gave up later on. I might give it a try again after a few days.
Amm.
,%20deflate"'
2013/05/13 20:36:21 kid2| clientProcessHit: Vary object loop!
Squid works fine though (from just 5-10 minutes of testing).
Any idea what the issue is? Can it make squid unstable? Or is it just a warning
of some sort which can be safely ignored?
Thanks and regards,
Amm.
- Original Message -
> From: Alex Domoradov
> To: Amm
> Cc: "squid-users@squid-cache.org"
> Sent: Monday, 13 May 2013 6:22 PM
> Subject: Re: [squid-users] Looking for squid spec file
>
> On Mon, May 13, 2013 at 3:45 PM, Amm wrote:
>>
&
For which version of squid do you need spec file?
> 3.2
> 3.3
> 3.head
>
> any of the above ^^
> I had 3.2 but now 3.3 is stable so I don't really care which one of them
> I will customize it again.
See if this helps in any way; it's from the Fedora tree and for 3.3.4:
http://pkgs.fedoraproject.org/cgit/squid.git/tree/
Amm
>
> Eliezer
ype = HIER_DIRECT;
#endif
++n_tries;
+ request->hier.note(serverConn, request->GetHost());
request->flags.pinned = 1;
if (pinned_connection->pinnedAuth())
request->flags.auth = 1;
Regards
Amm.
>__
che.org/show_bug.cgi?id=3717
Note that the patch does not solve the actual bug. The patch just adds the -n acl
option, with which you can disable the DNS checks (which cause
the crash).
Amm.
gt; acl ads dstdom_regex -i "/etc/squid3/adservers"
> http_access deny ads
Don't know how it worked earlier, but you need to put
http_access deny ads
before
http_access allow LAN
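Put together, the relevant part of squid.conf would read (the LAN acl is assumed to be defined elsewhere in the quoted config):

```
acl ads dstdom_regex -i "/etc/squid3/adservers"
http_access deny ads
http_access allow LAN
```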
Amm
- Original Message -
> From: Alex Rousskov
> To: "squid-users@squid-cache.org"
> Cc:
> Sent: Saturday, 9 March 2013 11:54 AM
> Subject: Re: [squid-users] Squid 3.3.2 and SMP
>
> On 03/08/2013 07:40 PM, Amm wrote:
>
>
>> Lets say I have tw
other process starts (or forks) which lands on the 2nd core.
Now squid forks the 2nd worker and it lands on the 1st again?!
Is this possible? (a kind of race condition)
Thanks and regards,
Amm.
a wrong warning or something is really wrong, because
ignoring 127.0.0.1 from localhost can cause many side effects.
It did not happen in 3.3.1.
Just for info, I am using the patch for the -n acl option (to avoid DoS or
crashes) at:
http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12620.patch
But I doubt that has any relation to this.
Regards,
Amm.
- Original Message -
> From: Amos Jeffries
> To: squid-users@squid-cache.org
> Cc:
> Sent: Friday, 8 March 2013 2:47 AM
> Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
>
> On 7/03/2013 10:54 p.m., Amm wrote:
>> - Original Message -
- Original Message -
> From: Amos Jeffries
> To: squid-users@squid-cache.org
> Cc:
> Sent: Thursday, 7 March 2013 1:11 PM
> Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
>
> On 7/03/2013 7:22 p.m., Amm wrote:
>>
>
>> For testing,
- Original Message -
> From: Amos Jeffries
> To: squid-users@squid-cache.org
> Cc:
> Sent: Thursday, 7 March 2013 11:19 AM
> Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
>
> On 7/03/2013 5:30 p.m., Amm wrote:
>> - Original Message ---
- Original Message -
> From: Amos Jeffries
> To: squid-users@squid-cache.org
> Cc:
> Sent: Thursday, 7 March 2013 4:11 AM
> Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
>
> On 7/03/2013 2:03 a.m., Amm wrote:
>>>
>> I just tried 44
63.101.48 -
if sslbump server-first applied for request then log shows:
1362574001.569 294 192.168.1.1 TCP_MISS/200 515 GET
https://mail.google.com/mail/images/c.gif? - PINNED/2404:6800:4009:801::1015
image/gif
(Note: URL may not be same in both cases, these are just example)
I don't have IPv6, so why is it showing an IPv6 address in the 2nd case?
Using squid 3.3.2.
Regards
Amm
- Original Message -
> From: Alex Rousskov
> To: "squid-users@squid-cache.org"
> Cc:
> Sent: Wednesday, 6 March 2013 6:20 AM
> Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
>
> On 03/04/2013 10:11 PM, Amm wrote:
>
>>> # L
or IP range. Of course this assumes
that the IP will never change for those banks.
I am also assuming that squid checks IP-based ACLs for ssl_bump before
establishing the connection with the client. (I have personally not tried this
setup so cannot tell for sure.)
Or you need to create rules at the firewall level which will *not* divert
traffic for those sites to squid.
Amm.
- Original Message -
> From: Sandrini Christian (xsnd)
> To: "squid-users@squid-cache.org"
> Cc:
> Sent: Wednesday, 20 February 2013 3:29 PM
> Subject: [squid-users] squid running out of filedescriptors
>
> Hi
>
>
> Today squid was suddenly running at 100% CPU and a lot of "runnin
id to redirect youtube out a
> second ISP line. We have two connections and I'd like to push all youtube
> out the second connection.
Try this:
acl yt dstdom_regex -i youtube
tcp_outgoing_address 1.2.3.4 yt
1.2.3.4 is the IP address of the 2nd line (it should be on the same machine as squid).
Amm.
> ulimit -n must be run as the same user that the proxy is running.
>
> In debian/ubuntu that user is proxy, and if you type ulimit as root you
> will get a different answer that if you type ulimit logged in as proxy user.
>
> Be sure to check the ulimit for the right user
Or you can check
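For example, a sketch assuming the Debian/Ubuntu "proxy" user (adjust the user name for your distribution):

```shell
# Show the open-file limit for the user Squid actually runs as,
# rather than for root.
su -s /bin/sh proxy -c 'ulimit -n'
```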
es in it.
.include /lib/systemd/system/squid.service
[Service]
LimitNOFILE=16384
3) systemctl daemon-reload
4) systemctl restart squid.service
Hope it helps
Amm
- Original Message -
> From: Amm
> To: "squid-users@squid-cache.org"
> Cc:
> Sent: Thursday, 14 Febr
Umm, your reply confused me further! :)
Please see below inline.
- Original Message -
> From: Amos Jeffries
> To: squid-users@squid-cache.org
>
> On 14/02/2013 10:12 p.m., Amm wrote:
>>
>> I compiled squid using --with-filedescriptors=16384.
>>
>&g
Amm.
of assertion
fails.
This bug does not exist in 3.2, as I did not notice it happening there.
So please fix it.
Regards,
AMM
----- Forwarded Message -
> From: Amm
> To: ""
> Cc:
> Sent: Thursday, 13 December 2012 1:28 PM
> Subject: assertion failed with dstdom_regex with IP based URL atleast for
> 3.3.0.2
deemed ready for production use, we believe it is
ready for wider testing by the community.
Also, I have not seen any official announcement here on the mailing list; sorry
if I missed it.
So please clarify whether squid 3.3.1 is released as stable for production use,
or not.
Thank you,
Amm.
suppose])
Thanks and regards,
Amm.
- Original Message -
> From: Amos Jeffries
> To: squid-users@squid-cache.org
> Cc:
> Sent: Thursday, 20 December 2012 7:56 AM
> Subject: Re: [squid-users] Squid 3.2.5 wants to use IPv6 address?
>
>
> For the record squid-3.2 trie
http://www.squid-cache.org/Doc/config/ssl_bump/
- Original Message -
> From: Sharon Sahar
> For such connections, is there an option to:
>
> 1. Disable SSL Bump for certain domains / IPs?
> 2. Disable squid for certain domains / IPs?
- Original Message -
> From: Alex Rousskov
> Hi Amm,
>
> There is a solution, but it requires switching from a url_rewriter
> script to an eCAP adapter. Adapters can set annotations (name:value
> "tags") that Squid can log via %adapt::
--
On Wed 31 Oct, 2012 9:03 PM IST Heinrich Hirtzel wrote:
>http_port 10.0.1.1.:3128 intercept
>https_port 10.0.1.1.:443 ssl-bump cert=/user/local/squid3/ssl_cert/myCA.pm
>
You have forgotten 'intercept' on the https_port line.
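A corrected sketch of that line (the certificate path in the quoted config looks like a typo for a .pem file; adjust to your actual path):

```
https_port 10.0.1.1:443 intercept ssl-bump cert=/usr/local/squid3/ssl_cert/myCA.pem
```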
Amm
tains original URL of the page in browser.
A redirect otherwise changes the URL in the location bar of the browser, and
people get confused.
And if I recall right, I have also seen some browsers complain
about XSS or something, because the URL domains do not match.
I suppose as of now there is no solution. But thanks again.
Regards,
Amm
/config/external_acl_type/
which says: tag = Apply a tag to a request (for both ERR and OK results)
So can redirector do the same?
Thanks in advance,
Amm
: Amos Jeffries
>To: squid-users@squid-cache.org
>Sent: Wednesday, 24 October 2012 6:44 PM
>Subject: Re: [squid-users] 3.3.0.1 warning on reload - max_filedescriptors
>disabled
>
>On 24/10/2012 8:00 p.m., Amm wrote:
>> looks like somewhere this number 16384 is hard-
Further to this, on running "squidclient mgr:info"
I always get:
Maximum number of file descriptors: 16384
be it after start or after reload, or even if I mention max_filedescriptor 1024
or 4096.
Looks like this number 16384 is hard-coded somewhere in 3.3.0.1.
Amm
- Origin
4-unknown-linux-gnu...
It appears that squid crashed and restarted, but there is not much information
on why. Maybe something in forward.cc:217?
So just reporting - please check.
Thank you,
Amm.