Re: [squid-users] https://wiki.squid-cache.org provides invalid certificate chain ...

2017-11-17 Thread Kinkie
I have already acted on it but couldn’t communicate in time, sorry. Thanks
for notifying and for looking into it.


On Fri, 17 Nov 2017 at 17:52, Amos Jeffries  wrote:

> On 18/11/17 01:39, Walter H. wrote:
> > for more information see
> > https://www.ssllabs.com/ssltest/analyze.html?d=wiki.squid-cache.org
> >
> > - missing intermediate certificate
> > - ssl3 active, poodle vulnerable ...
> >
>
> None of those issues appear in the test results I get from that URL you
> referenced. SSLv3 is definitely not even supported by our wiki server.
>
> The tester appears to be broken in regards to the chain test. There is
> *no* chain. Our cert is directly signed by the LetsEncrypt CA.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
-- 
@mobile


Re: [squid-users] wiki.squid-cache.org SSL configuration problem ...

2017-08-20 Thread Kinkie
Hi,
  it was fixed last week. Thanks again for the heads-up!


On Tue, Aug 8, 2017 at 9:00 PM, Francesco Chemolli  wrote:
> On 8 Aug 2017, at 19:06, Walter H.  wrote:
>
> Hello,
>
> the intermediate certificate which is provided doesn't go with the end
> entity certificate ...
>
> the intermediate that is provided:  Let's Encrypt Authority X1
> the intermediate that should be provided:  Let's Encrypt Authority X3
>
> for more see:
> https://www.ssllabs.com/ssltest/analyze.html?d=wiki.squid-cache.org=104.130.201.120
>
>
>
>
> Thanks for letting us know.
> We'll look into it ASAP.
>
> Francesco



-- 
Francesco


[squid-users] Wiki outage - solved

2017-03-13 Thread Kinkie
Hi all,
   due to a hard drive failure on one of the servers we run, the wiki has
been unavailable in the past couple of days.
   The volunteers overseeing the project infrastructure have been able to
restore the service by moving it to different hardware, and the wiki
should now become progressively available again as DNS records propagate.

   Please join me in thanking the volunteers who help run the squid
infrastructure for donating their time and expertise.

   It is a good moment to remind everyone that the Squid project and the
Squid Software Foundation rely on everyone's effort and on generous
donations by individuals, companies and organizations to continue
supporting Squid and accompanying services.

The list of main sponsors is at
http://www.squid-cache.org/Support/sponsors.html

Please refer to http://www.squid-cache.org/Foundation/donate.html if you
wish to donate financial or material resources to the Squid project.

-- 
Francesco Chemolli
Vice President, Squid Software Foundation


Re: [squid-users] GoLang Based delayer

2016-10-25 Thread Kinkie
Hi Eliezer,
   Please list it as "related software" on the wiki.

On Tue, Oct 25, 2016 at 3:53 PM, Eliezer Croitoru  wrote:
> Inspired by Francesco Chemolli's delayer at:
> http://bazaar.launchpad.net/~squid/squid/trunk/view/head:/src/acl/external/delayer/ext_delayer_acl.pl.in
>
> I wrote a delayer in golang:
> http://wiki.squid-cache.org/EliezerCroitoru/GoLangDelayer
>
> The binaries for the helper are at:
> http://ngtech.co.il/squid/helpers/delayer/squid-externalacl_delayer.tar.xz
>
> For windows, linux, BSD, Darwin, arm
>
> Eliezer
>
> 
> Eliezer Croitoru 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
>
>



-- 
Francesco


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-20 Thread Kinkie
Hi Fred,
  I assume that by "implicit" you mean "transparent" or
"interception". Short answer, not possible: there is nothing to anchor
cookies to. It could be possible to fake it by having an auxiliary
website doing standard SAML and feeding a database of associations
userid-ip. It will fail to account for cases where multiple users
share the same IP, but that doesn't stop many vendors from claiming
they do "transparent authentication".

On Tue, Sep 20, 2016 at 9:58 AM, FredB  wrote:
> I forgot, if possible a method without active directory



-- 
Francesco


Re: [squid-users] We have a big problems with Squid 3.3.8, it's a bug ?

2016-03-30 Thread Kinkie
Are you using Basic, NTLM or Kerberos?
Do you know that user's password, in order to run some tests?
Do you have some other proxy or box where you can run some tests?
AD is a complex system, so the first thing to do is to understand if the
problem is caused by AD, by the system, by something related to the user,
by the auth helper, or by Squid.
On Mar 30, 2016 9:50 AM, "Olivier CALVANO"  wrote:

> Anyone know this problems ?
>
>
> 2016-03-29 18:22 GMT+02:00 Olivier CALVANO :
>
>> Hi
>>
>> we use on a new server Squid 3.3.8 on CentOS 7 with a Active Directory
>> Authentification (tested in negotiate_wrapper but same
>> problems with ntlm_auth) .
>>
>> That worked very well for a time, but without reason a limited user can't
>> access the internet and I don't know why.
>>
>> In the logs, we have:
>>
>> 1459266547.967 1200888 172.16.6.39 NONE_ABORTED/000 0 GET
>> http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?
>> olivier HIER_NONE/- -
>> 1459266567.771 3538111 172.16.6.14 NONE_ABORTED/000 0 GET
>> http://yahoo.fr/ olivier HIER_NONE/- -
>> 1459267856.877  30609 172.16.6.39 NONE_ABORTED/000 0 GET
>> http://officecdn.microsoft.com/Office/Data/v32.cab olivier HIER_NONE/- -
>> 1459267917.860  60713 172.16.6.39 NONE_ABORTED/000 0 HEAD
>> http://officecdn.microsoft.com/Office/Data/v32.cab olivier HIER_NONE/- -
>>
>>
>> I don't know why but all logs have "NONE_ABORTED/000"
>> anyone know this errors ?
>>
>>
>> If, on the same PC, I change the username, it works! Reconnect with
>> the old username and the problems start again.
>>
>> regards
>> Olivier
>>
>
>
>
>


Re: [squid-users] Squid Crashing

2016-02-09 Thread Kinkie
Hi,
  it's all in the logs you posted:

ipcCreate: fork: (12) Cannot allocate memory
WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
...
FATAL: Failed to create unlinkd subprocess

You've run out of system memory during startup.


On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin  wrote:
> Hello,
>
> I am running squid 3.5.13 and it crashes with these errors:
>
> 2016/02/09 15:43:24 kid1| Set Current Directory to /var/spool/squid3
> 2016/02/09 15:43:24 kid1| Starting Squid Cache version 3.5.13 for
> x86_64-pc-linux-gnu...
> 2016/02/09 15:43:24 kid1| Service Name: squid
> 2016/02/09 15:43:24 kid1| Process ID 7279
> 2016/02/09 15:43:24 kid1| Process Roles: worker
> 2016/02/09 15:43:24 kid1| With 1024 file descriptors available
> 2016/02/09 15:43:24 kid1| Initializing IP Cache...
> 2016/02/09 15:43:24 kid1| DNS Socket created at [::], FD 6
> 2016/02/09 15:43:24 kid1| DNS Socket created at 0.0.0.0, FD 7
> 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.78 from /etc/resolv.conf
> 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.79 from /etc/resolv.conf
> 2016/02/09 15:43:24 kid1| Adding domain nuspire.com from /etc/resolv.conf
> 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 5/10 'ssl_crtd'
> processes
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
> process.
> 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 0/15 'squidGuard'
> processes
> 2016/02/09 15:43:24 kid1| helperOpenServers: No 'squidGuard' processes
> needed.
> 2016/02/09 15:43:24 kid1| Logfile: opening log syslog:local5.info
> 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
> FATAL: Failed to create unlinkd subprocess
> Squid Cache (Version 3.5.13): Terminated abnormally.
> CPU Usage: 20.041 seconds = 19.115 user + 0.926 sys
> Maximum Resident Size: 4019840 KB
> Page faults with physical i/o: 0
>
>
> Anybody have an idea why?
>
>



-- 
Francesco


Re: [squid-users] Squid Crashing

2016-02-09 Thread Kinkie
If you are swapping, performance will suffer terribly. How large are these
files and how much RAM do you have?
On Feb 9, 2016 5:17 PM, "Panda Admin" <pandanonom...@gmail.com> wrote:

> Adding a swap directory fixed it for now.  I think it's because my ACL
> files are so large.
>
> On Tue, Feb 9, 2016 at 11:00 AM, Panda Admin <pandanonom...@gmail.com>
> wrote:
>
>> I see that, but that's not possible. I still have system memory available.
>> I just did a top while running squid, never went over 30% memory usage.
>> It maxed out the CPU but not the memory. So, yeah...still confused.
>>
>> On Tue, Feb 9, 2016 at 10:55 AM, Kinkie <gkin...@gmail.com> wrote:
>>
>>> Hi,
>>>   it's all in the logs you posted:
>>>
>>> ipcCreate: fork: (12) Cannot allocate memory
>>> WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
>>> ...
>>> FATAL: Failed to create unlinkd subprocess
>>>
>>> You've run out of system memory during startup.
>>>
>>>
>>> On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin <pandanonom...@gmail.com>
>>> wrote:
>>> > Hello,
>>> >
>>> > I am running squid 3.5.13 and it crashes with these errors:
>>> >
>>> > 2016/02/09 15:43:24 kid1| Set Current Directory to /var/spool/squid3
>>> > 2016/02/09 15:43:24 kid1| Starting Squid Cache version 3.5.13 for
>>> > x86_64-pc-linux-gnu...
>>> > 2016/02/09 15:43:24 kid1| Service Name: squid
>>> > 2016/02/09 15:43:24 kid1| Process ID 7279
>>> > 2016/02/09 15:43:24 kid1| Process Roles: worker
>>> > 2016/02/09 15:43:24 kid1| With 1024 file descriptors available
>>> > 2016/02/09 15:43:24 kid1| Initializing IP Cache...
>>> > 2016/02/09 15:43:24 kid1| DNS Socket created at [::], FD 6
>>> > 2016/02/09 15:43:24 kid1| DNS Socket created at 0.0.0.0, FD 7
>>> > 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.78 from
>>> /etc/resolv.conf
>>> > 2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.79 from
>>> /etc/resolv.conf
>>> > 2016/02/09 15:43:24 kid1| Adding domain nuspire.com from
>>> /etc/resolv.conf
>>> > 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 5/10 'ssl_crtd'
>>> > processes
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > 2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
>>> > process.
>>> > 2016/02/09 15:43:24 kid1| helperOpenServers: Starting 0/15 'squidGuard'
>>> > processes
>>> > 2016/02/09 15:43:24 kid1| helperOpenServers: No 'squidGuard' processes
>>> > needed.
>>> > 2016/02/09 15:43:24 kid1| Logfile: opening log syslog:local5.info
>>> > 2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>> > FATAL: Failed to create unlinkd subprocess
>>> > Squid Cache (Version 3.5.13): Terminated abnormally.
>>> > CPU Usage: 20.041 seconds = 19.115 user + 0.926 sys
>>> > Maximum Resident Size: 4019840 KB
>>> > Page faults with physical i/o: 0
>>> >
>>> >
>>> > Anybody have an idea why?
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Francesco
>>>
>>
>>
>


Re: [squid-users] Squid-4.0.4 on FreeBSD

2016-01-13 Thread Kinkie
Hi,
   I see that there is no -I/usr/local/include option to the compiler.

Add that to CPPFLAGS when calling configure
(e.g.
CPPFLAGS=-I/usr/local/include ./configure
)
this should fix the build for you.


On Wed, Jan 13, 2016 at 4:25 PM, Odhiambo Washington  wrote:
> I am trying to compile on FreeBSD 10.1-RELEASE-amd64
>
>
> 
> /bin/sh ../libtool  --tag=CC   --mode=compile clang -DHAVE_CONFIG_H   -I..
> -I../include -I../lib -I../src -I../include  -I/usr/include  -I/usr/include
> -I../libltdl -I/usr/include -I/usr/local/include/libxml2  -Werror
> -Qunused-arguments  -D_REENTRANT  -MT md5.lo -MD -MP -MF $depbase.Tpo -c -o
> md5.lo md5.c &&\
> mv -f $depbase.Tpo $depbase.Plo
> libtool: compile:  clang -DHAVE_CONFIG_H -I.. -I../include -I../lib -I../src
> -I../include -I/usr/include -I/usr/include -I../libltdl -I/usr/include
> -I/usr/local/include/libxml2 -Werror -Qunused-arguments -D_REENTRANT -MT
> md5.lo -MD -MP -MF .deps/md5.Tpo -c md5.c  -fPIC -DPIC -o .libs/md5.o
> In file included from md5.c:41:
> ../include/md5.h:13:10: fatal error: 'nettle/md5.h' file not found
> #include <nettle/md5.h>
>  ^
> 1 error generated.
> Makefile:956: recipe for target 'md5.lo' failed
> gmake[2]: *** [md5.lo] Error 1
> gmake[2]: Leaving directory '/usr/home/wash/ILI/Squid/4.x/squid-4.0.4/lib'
> Makefile:1001: recipe for target 'all-recursive' failed
> gmake[1]: *** [all-recursive] Error 1
> gmake[1]: Leaving directory '/usr/home/wash/ILI/Squid/4.x/squid-4.0.4/lib'
> Makefile:579: recipe for target 'all-recursive' failed
> gmake: *** [all-recursive] Error 1
>
> 
>
>
>
> But the file is there ...
>
>
> wash@mail:~/ILI/Squid/4.x/squid-4.0.4$ ls -al
> /usr/local/include/nettle/md5.h
> -rw-r--r--  1 root  wheel  2023 Jan  7  2015 /usr/local/include/nettle/md5.h
>
>
> --
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
>



-- 
Francesco


Re: [squid-users] Squid-4.0.4 beta is available

2016-01-10 Thread Kinkie
Hi Eliezer,
   This looks like a broken or incompletely installed libstdc++.
Could you check that all packages mentioned at
http://wiki.squid-cache.org/BuildFarm/CentosInstall are installed on
your build system?

On Sun, Jan 10, 2016 at 6:02 PM, Eliezer Croitoru  wrote:
> I am having trouble building 4.0.4 on OpenSUSE leap.
> I have tried both manually and using the rpm build tools.
> The error in the rpmbuild logs at:
> http://ngtech.co.il/repo/opensuse/leap/logs/build5-4.0.4.log
> and the build log of the manual compilation are at:
> http://ngtech.co.il/repo/opensuse/leap/logs/conf1-4.0.4.log
> http://ngtech.co.il/repo/opensuse/leap/logs/build1-4.0.4.log
>
> The error output:
> make[3]: Entering directory
> '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers/basic_auth/NCSA'
> depbase=`echo basic_ncsa_auth.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
> /usr/local/bin/g++ -DHAVE_CONFIG_H   -I../../.. -I../../../include
> -I../../../lib -I../../../src -I../../../include-I.  -Wall
> -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
> -Wno-deprecated-register -pipe -D_REENTRANT -g -O2 -march=native -std=c++11
> -MT basic_ncsa_auth.o -MD -MP -MF $depbase.Tpo -c -o basic_ncsa_auth.o
> basic_ncsa_auth.cc &&\
> mv -f $depbase.Tpo $depbase.Po
> basic_ncsa_auth.cc: In function ‘int main(int, char**)’:
> basic_ncsa_auth.cc:104:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("");
>  ^
> basic_ncsa_auth.cc:104:42: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("");
>   ^
> basic_ncsa_auth.cc:108:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("");
>  ^
> basic_ncsa_auth.cc:108:42: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("");
>   ^
> basic_ncsa_auth.cc:115:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("No such user");
>  ^
> basic_ncsa_auth.cc:115:54: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("No such user");
>   ^
> basic_ncsa_auth.cc:128:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:128:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:133:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:133:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:138:13: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("Password too long. Only 8 characters accepted.");
>  ^
> basic_ncsa_auth.cc:138:88: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("Password too long. Only 8 characters accepted.");
>
>  ^
> basic_ncsa_auth.cc:144:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:144:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:148:13: error: ‘cout’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:148:41: error: ‘endl’ is not a member of ‘std’
>  SEND_OK("");
>  ^
> basic_ncsa_auth.cc:151:9: error: ‘cout’ is not a member of ‘std’
>  SEND_ERR("Wrong password");
>  ^
> basic_ncsa_auth.cc:151:52: error: ‘endl’ is not a member of ‘std’
>  SEND_ERR("Wrong password");
> ^
> At global scope:
> cc1plus: error: unrecognized command line option "-Wno-deprecated-register"
> [-Werror]
> cc1plus: all warnings being treated as errors
> Makefile:814: recipe for target 'basic_ncsa_auth.o' failed
> make[3]: *** [basic_ncsa_auth.o] Error 1
> make[3]: Leaving directory
> '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers/basic_auth/NCSA'
> Makefile:517: recipe for target 'all-recursive' failed
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory
> '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers/basic_auth'
> Makefile:517: recipe for target 'all-recursive' failed
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory '/home/rpm/rpmbuild/SOURCES/squid-4.0.4/helpers'
> Makefile:569: recipe for target 'all-recursive' failed
> make: *** [all-recursive] Error 1
> ##END OF OUTPUT
>
> I have tried to understand the issue and I found out that it might be
> because of the usage of gcc and not g++ and I have tried to use CXX=g++ in
> order to test the issue but it doesn't help.
> On the same machine I have built 3.5.13 without any issues.
>
> If I can add more information on the build node just let me know.
>
> Thanks,
> Eliezer
>
> On 10/01/2016 08:15, Amos Jeffries 

Re: [squid-users] Squid is not worked in OpenVZ VPS.

2015-12-30 Thread Kinkie
Well, the IPv6 address could be telling. Maybe OpenVZ is setting up a
V6 network but has no route out of it.
Can you try accessing a known V4 and a known V6 address? It could help
you understand if the issue is there. In that case, you need to fix
the issue at the OpenVZ level.


On Wed, Dec 30, 2015 at 3:14 PM, Billy.Zheng <zw...@163.com> wrote:
> Thanks for you reply.
>
> The failed message is: `Connection to  failed',  is an IPv6
> address somehow.
>
> I found I just couldn't access part of the websites, not all of them.
>
> So I thought this is not a Squid problem; maybe the Chinese GFW prevents
> this, or the OpenVZ provider's machine room has some problem.
>
> Thanks.
>
> Francesco Chemolli writes:
>
>>> On 30 Dec 2015, at 11:39, Billy.Zheng(zw963) <zw...@163.com> wrote:
>>>
>>> Hi, I have two VPS in same location(HONG KONG)
>>>
>>> the two VPS belong to two service providers, one OpenVZ, one XEN.
>>>
>>> I chose the same CentOS version (6.7), and the same config script for
>>> a FORWARD proxy to access the free world.
>>>
>>> The XEN one has always worked for me, but the OpenVZ one does not.
>>>
>>
>>> the second logs is so strange,  www.vpsnine.com is my OpenVZ VPS
>>> provider domain name, I never access it from my local browser.
>>> and not like another XEN VPS, those log output is very very slow.
>>>
>>> Could you give me some clue for resolve this? Thanks.
>>
>> when you try accessing some destination with the proxy that is not working, 
>> what does the error page say?
>>
>>   Kinkie
>
> --
> Geek, Rubyist, Emacser
> Homepage: http://zw963.github.io
>



-- 
Francesco


Re: [squid-users] Slow App through Proxy

2015-12-18 Thread Kinkie
Hi,
  Do you see anything denied in the squid logs? From what you say it could
be related to a failing attempt to validate a certificate.
On Dec 18, 2015 17:25, "Patrick Flaherty"  wrote:

> Hello,
>
>
>
> We have an app configured to use Squid Proxy (3.5.11). The client machine
> does not have access to the internet except for the whitelisted domains in
> Squid. The app launches painfully slow. It seems to be SSL Certificate
> related. I found a way to fix it but don’t know why it fixes it. Let me
> explain.
>
>
>
> If I go into IE and configure it to use the Squid Proxy and I go to our
> website (SSL Based), the page comes up fine with a nice lock symbol
> signifying SSL. I then turn off the proxy config in IE to stop using the
> Squid Proxy. I relaunch our app and it launches fast forever more!!! I
> thought that it might be downloading a certificate but I look at all the
> Windows certificates either through IE or CertMgr.msc and it appears that
> no new certificates are in there after this exercise. Something in the
> Windows config changed and I don’t know what it is. I would love to know
> because I would like to see if there is an easier method to fix this as
> opposed to the one I just outlined.
>
>
>
> Any input would be greatly appreciated.
>
>
>
> Patrick
>
>
>


Re: [squid-users] squid 3.4, dstdomain

2015-12-10 Thread Kinkie
Hi,
  it works exactly as you expect. "dstdomain addons.mozilla.org" does
not block subdomains.

On Thu, Dec 10, 2015 at 11:02 AM,   wrote:
> 2015/12/10 10:33:49| ERROR: '.addons.mozilla.org' is a subdomain of
> 'addons.mozilla.org'
>
>
> I thought
> addons.mozilla.org  blocks only these hostname
>
> .addons.mozilla.org blocks all the sub-domains, like
> www.addons.mozilla.org etc.addons.mozilla.org
>
>
> Which are the parsing rules of squid 3.4 ?
>
> Does the first case block also the sub-domains ?
>
>
> best regards, Sala
>



-- 
Francesco


Re: [squid-users] squid 3.4, dstdomain

2015-12-10 Thread Kinkie
On Thu, Dec 10, 2015 at 11:43 AM,  <massimo.s...@asl.bergamo.it> wrote:
> Massimo
>> 2015/12/10 10:33:49| ERROR: '.addons.mozilla.org' is a subdomain of
>> 'addons.mozilla.org'
>
>
> Kinkie :
>>  it works exactly as you expect. "dstdomain addons.mozilla.org" does
>> not block subdomains.
>
>
>
> So why doesn't squid accept both rules? A parsing bug?


No bug, it is really intentional: ".addons.mozilla.org" also matches
"addons.mozilla.org" (without the dot). Therefore the latter is
rejected to keep the internal data structures consistent.
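As an illustrative squid.conf sketch (the ACL name is hypothetical; the dotted form alone is sufficient, since it covers the parent domain too):

```
# ".addons.mozilla.org" matches addons.mozilla.org itself AND all of its
# subdomains, so adding the bare "addons.mozilla.org" entry as well is
# redundant and gets rejected at parse time.
acl mozaddons dstdomain .addons.mozilla.org
http_access deny mozaddons
```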


-- 
Francesco


Re: [squid-users] Problems with ldap authentication

2015-12-08 Thread Kinkie
On Tue, Dec 8, 2015 at 6:14 PM, Marcio Demetrio Bacci
 wrote:
> Hi
>
> In the Squid Server, I want only basic authentication.
>
> The command:
>
> /usr/lib/squid3/basic_ldap_auth \
>-b cn=users,dc=empresa,dc=com,dc=br \
>-D cn=proxy,cn=users,dc=empresa,dc=com,dc=br -w test_12345 \
>-h 192.168.0.25 -p 389 -s sub -v 3 -f "sAMAccountName=%s"
>
> shows "Success" to authenticate only the users in Organization Unity  (OU)
> "Users", but in my domain I have many OU that has users as TI, Financial,
> Sales..
>
> How I get authenticate the users in others OU?

Since you are using "sub" as search scope, you simply have to move up
one level in the base-DN tree.
Change the parameter
-b cn=users,dc=empresa,dc=com,dc=br
to
-b dc=empresa,dc=com,dc=br
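Applied to the command quoted above, the full invocation would then read (host, bind DN and password are the poster's own example values):

```
/usr/lib/squid3/basic_ldap_auth \
   -b dc=empresa,dc=com,dc=br \
   -D cn=proxy,cn=users,dc=empresa,dc=com,dc=br -w test_12345 \
   -h 192.168.0.25 -p 389 -s sub -v 3 -f "sAMAccountName=%s"
```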

   Francesco Chemolli


Re: [squid-users] Disabling IP6 in 3.5.x

2015-12-02 Thread Kinkie
Hi Patrick,
   ./configure --disable-ipv6 

will do the trick.

On Thu, Dec 3, 2015 at 12:43 AM, Patrick Flaherty  wrote:
> Hello,
>
>
>
> Is there a way to disable IP6 in the 3.5.x Squid builds?
>
>
>
> Thanks
>
> Patrick
>
>
>



-- 
Francesco


Re: [squid-users] ntlm_auth defaulting to succeed

2015-12-02 Thread Kinkie
Hi,
  you can check the ntlm_fake_auth helper; it'll blindly trust
anything the user says.
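For illustration, a minimal squid.conf sketch of that approach (the helper path is an assumption and varies by distribution; check where your packages install ntlm_fake_auth):

```
# Accept whatever credentials the client offers: the username is still
# captured for logging and ACL processing, but passwords are never verified.
auth_param ntlm program /usr/lib/squid/ntlm_fake_auth
auth_param ntlm children 10
acl authed proxy_auth REQUIRED
http_access allow authed
```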

On Wed, Dec 2, 2015 at 10:10 PM, Noel Kelly  wrote:
> Hello All
>
> We have been using Squid and ntlm_auth for many years with mainly success.
> However we have always had a few annoyances like continual authentication
> pop-ups if a user has changed their password and not restarted their session
> or, as now, persistent popups which seem related to a browser update (Google
> Chrome is the suspect currently).
>
> It occurred to me that these days we don't use ntlm_auth to block Internet
> access per se, but rather to capture the username so we can manage access
> using ACLs.
>
> So I was wondering if anyone had any ideas for a Squid config where the
> ntlm_auth helper always succeeded regardless of the password, so the user
> gets waved through and Squid has the username needed to process the ACLs?
>
> Thanks
> Noel
>



-- 
Francesco


Re: [squid-users] Accessing squid from a url rather than proxy settings

2015-10-27 Thread Kinkie
Hi,
  a proxy is different from a webserver. The protocol they speak is
slightly different in order to support their different use-cases (nothing
would technically prevent webservers from using a language more similar
to proxies); what you are trying to do is to use Squid (the proxy) as
if it were a webserver, which it isn't.

On Tue, Oct 27, 2015 at 5:51 AM, Phil Allred  wrote:
> I want to have users access squid directly from a URL like this:
>
> http://my.squidserver.org:3128/testurl
>
> Rather than by setting a proxy in their browser.  Then I want squid to
> rewrite the URL “my.squidserver.org”  to the site I want users to access.
> The reason I want to do this is in order to access ONLY a certain research
> database through the proxy server, not all HTTP requests.
>
> When I set up squid to do url rewriting, everything works, if I configure my
> browser to use my proxy server.  However, when I try to access squid
> directly  like mentioned above, it refuses to try to rewrite the URL.  Squid
> just sends back an error like this:
>
> The following error was encountered while trying to retrieve the URL:
> /testurl
>
> Invalid URL
>
> Some aspect of the requested URL is incorrect.
>
>
>
> Is what I’m trying to do even possible?  If so, how do I fix my problem?
>
> Thanks in advance,
>
> Phil
>
>



-- 
Francesco


Re: [squid-users] [feature request] Websocket Support for Intercept Mode

2015-10-27 Thread Kinkie
Hi Tarik,
   as far as I know, no developer is working on that.
The best way to get that feature would be to sponsor a developer to
implement it.

On Tue, Oct 27, 2015 at 11:54 AM, Tarik Demirci  wrote:
> Hello,
> Is there any plan to add support for websocket when using intercept mode?
>
> Currently, I use the SslPeekAndSplice feature but this breaks many
> websites using websockets (one example is web.whatsapp.com). As a
> workaround, after peeking at step 1, splicing problematic sites and
> bumping the rest works. But maintaining this list is tiring and I
> can't use content filtering for these sites. It would be much better
> if squid had support for websocket.
>
>
> Related issue:
> http://bugs.squid-cache.org/show_bug.cgi?id=4349
> --
> Tarık Demirci



-- 
Francesco


Re: [squid-users] deny rep_mime_type

2015-10-21 Thread Kinkie
Hi,
  I suspect (unverified) that

acl dom dstdomain .example.com
acl type rep_mime_type base/type
http_reply_access deny dom type
http_reply_access allow all

will do what you need

On Wed, Oct 21, 2015 at 9:36 PM, HackXBack  wrote:
> hello ,
> can we deny rep_mime_type for specific domain ?
> if yes then how
> if no then why
> thank you ..
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/deny-rep-mime-type-tp4673816.html
> Sent from the Squid - Users mailing list archive at Nabble.com.



-- 
Francesco


[squid-users] Volunteers sought

2015-09-01 Thread Kinkie
Hi all,
   I am currently working on some performance improvements for the
next version of squid; I need some help from volunteers to verify the
benefit given by a memory pools feature in real-life scenarios, to
better understand how to develop it further.
I need the help of someone who has a somewhat busy deployment, who's
building their own software packages and who's willing to run a
patched version of a reasonably recent squid (it's a 1-line patch with
no user-visible behavior changes) for a few hours, and report whether
there are any observable changes in performance against the
non-patched version.

If you are interested, please get in touch with me for the details.

Thanks!

-- 
Kinkie
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] High-Availability in Squid

2015-08-29 Thread Kinkie
Hi,
  please see http://wiki.squid-cache.org/Features/MonitorUrl.
It's available in squid 2.6, and is one of the last few features that
haven't yet made it to Squid 3.X. If anyone is interested, code and
sponsorships are always welcome :)
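As a sketch of what that 2.6-era feature provides, a cache_peer line can carry a monitor URL that squid polls to decide whether the peer is alive; the hostnames, health-check path, and interval below are placeholders, not a tested configuration:

```
cache_peer backend1.example.com parent 80 0 no-query originserver \
    monitorurl=http://backend1.example.com/alive monitorinterval=30
cache_peer backend2.example.com parent 80 0 no-query originserver
```

With first-alive parent selection, requests would fall back to backend2 while backend1's monitor URL keeps failing.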

On Thu, Aug 27, 2015 at 12:10 PM, Imaginovskiy m...@stellarise.com wrote:

 Hi all,

 Bit of a strange one but I'm wondering if it's possible to have squid
 redirect a site to a secondary backend server if the primary is down. Have
 been looking into this but haven't seen much similar to this. Currently I
 have my setup along the lines of this;

 Client - Squid - Backend1

 but in the event that Backend1 is down, the following should be done;

 Client - Squid - Backend2

 Is squid capable of monitoring connections to peer or redirecting based on
 an ACL looking for some HTTP error code?

 Thanks.





 --
 View this message in context:
 http://squid-web-proxy-cache.1019090.n4.nabble.com/High-Availability-in-Squid-tp4672899.html
 Sent from the Squid - Users mailing list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 16G Virtual Mem

2015-08-28 Thread Kinkie
Hi,
   yes, it could be, depending on your configuration.
Please see http://wiki.squid-cache.org/SquidFaq/SquidMemory


On Fri, Aug 28, 2015 at 4:32 PM, Jorgeley Junior jorge...@gmail.com wrote:

 Guys, is this really normal???


 --



 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Mac OS X Updates

2015-08-24 Thread Kinkie
Hi John,
  according to the article you link to, it's not possible to cache these
updates: Apple has deliberately put effort into making sure of that.

  Updates for older versions of MacOS may be over HTTP; newer ones are over
HTTPS on port 443 and on dynamically-generated ports. HTTP could be
cached; HTTPS cannot without ssl-bump/peek-n-splice (SSL man-in-the-middle).
  The wording of the article suggests that the list of trusted
issuers of certificates for the HTTPS service is not the system's CA root
certificate store but is probably locked to Apple's own. This means that
SSL MITM is also not possible, by design.


On Wed, Aug 19, 2015 at 9:20 PM, John Pearson johnpearson...@gmail.com
wrote:

 Anyone have Mac OS X update caching working, without doing an SSL bump? I
 think they are hosted through https (
 https://support.apple.com/en-us/HT202943 )

 Thanks!

 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] forward proxy - many users with one login/passwd.

2015-07-31 Thread Kinkie
On Thu, Jul 30, 2015 at 11:57 PM, Berkes, David david.j.ber...@pjc.com
wrote:


 Just a basic question.  I have a 3.5.0.4 forward proxy setup with basic
 authentication for my MDM proxy (iphones).  All iphones are set with the
 global proxy and identical user-name/password.  They will be on an LTE
 network and will be switching IP's often.  The forward proxy
 user-name/password will always be the same from each iphone. I have read
 several things about (max_user_ip, authenticate_ip_ttl) and concerned with
 the setup.  I essentially don’t want to limit any number of source
 connections using the same credentials.   Please advise of any pitfalls
 and/or settings for many users switching IP's frequent, using the same
 login/passwd.


Hi,
  there's none that I can think of.

-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] a problem about reverse proxy and $.ajax

2015-07-17 Thread Kinkie
Hi,
  What is in the squid access.log for that request?

On Thu, Jul 16, 2015 at 5:27 PM, johnzeng johnzeng2...@yahoo.com wrote:


 Hello dear All :

 I am writing a download-rate testing program recently,

 and I hope to use a reverse proxy (squid 3.5.x) too,

 but with the reverse proxy in place I found the Ajax request won't succeed
 in downloading: the success: function(html,textStatus) callback fires, but
 its return value (html) is blank.


 If possible, please give me some advice.



 squid config

 http_port 4432 accel vport defaultsite=10.10.130.91
 cache_peer 127.0.0.1 parent 80 0 default name=ubuntu-lmr



 Ajax config

 $.ajax({
     type: "GET",
     url: load_urlv,
     cache: false,
     mimeType: 'text/plain; charset=x-user-defined',

     beforeSend: function(){
         $('#time0').html('<blink>download file...</blink>').show();
     },

     error: function(){
         alert('Error loading XML document');
     },
     success: function(html, textStatus)
     {

     ...

     }


 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] mimeInit: /etc/squid/mime.conf: (13) Permission denied

2015-06-12 Thread Kinkie
Hi all,
  file system corruption at times manifests itself as permission problems.
Can you fsck?

On Fri, Jun 12, 2015 at 12:00 PM, yashvinder hooda
yashvinder@gmail.com wrote:
 Hi Amos

 Squid pkg version is 3.5.2 and it's running on OpenWrt.
 In the logs I can see one more permission-related error, and that is:
 ParseEtcHosts: /etc/hosts ()Permission denied

 Regards,
 Yashvinder
   Original Message
 From: Amos Jeffries
 Sent: Friday 12 June 2015 3:18 PM
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] mimeInit: /etc/squid/mime.conf: (13) Permission 
 denied

 On 12/06/2015 9:35 p.m., yashvinder hooda wrote:
 Hi,
 Fred

 Squid directory permission is 644 with nobody:root and same is for mime.conf 
 and squid.conf

 Regards,
 Yashvinder

 Okay. Weird. It's not even like Squid is trying to open for write or
 anything fancy. It's just reading.

 Are you using the latest available Squid version?

 Amos

 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can files be placed on a RAID volume now?

2015-06-07 Thread Kinkie
Yes, it still applies; it is a FAQ.
See http://wiki.squid-cache.org/SquidFaq/RAID


On Sun, Jun 7, 2015 at 12:35 PM, TarotApprentice
tarotapprent...@yahoo.com wrote:
 I recall from Squid 2.7 days the recommendation not to put the cache files on 
 a RAID volume under Windows. Does that restriction still apply?

 Related does the windows version use the different file system types (ie 
 rock, aufs, ufs) for the disk cache or is it irrelevant under windows.

 Cheers,
 MarkJ
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Kinkie
Hi,
  sorry, we had a severe OOM on the main squid server.
Now rebooted and hopefully better plugged. We will see about upgrading
the server as soon as possible.

On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov yvoi...@gmail.com wrote:
 Amos,

 what's up with bugzilla? It down and not available.

 WBR, Yuri


 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Kinkie
Should be fine now.
Thanks for notifying of the issue.

On Thu, Apr 30, 2015 at 7:42 PM, Yuri Voinov yvoi...@gmail.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Now server produces 500 error.

 30.04.15 23:39, Kinkie wrote:
 Hi,
   sorry, we had a severe OOM on the main squid server.
 Now rebooted and hopefully better plugged. We will see about upgrading
 the server as soon as possible.

 On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov yvoi...@gmail.com wrote:
 Amos,

 what's up with bugzilla? It down and not available.

 WBR, Yuri


 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2

 iQEcBAEBCAAGBQJVQml8AAoJENNXIZxhPexGfvEIAKHXVkgDYuOob2YgmFB0AP1h
 h3jgjoNkGbxRkV+BCjAYpn/qSDHGHMI54T6d9r0If3oFrDLccWM3Bq+eGQK1smTj
 ZbRcvt37QtjYcuRMXqU42m/mQDZ5UvEOireGwn9DR9TKsbHHn0EKynDdsFaLK3A/
 8AbSoRIxMLH9vPbhBGd0O5gFsgBit68v/8nt3P+GMbHhS/WIamG0FvlAQDqEnIir
 K2avn4C/PL4ZcKErKtCPMRYAl9KyO9HdAhXMKKAq3k0iKCknMd+NTKUtXBmDyH5Z
 F+bhdddG81OioGJ1LwMX8xIM4CT6JHEyO+dMa1n5/eydiWg6Fi0qaUYvFZytnLQ=
 =iwk9
 -END PGP SIGNATURE-




-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Client delay pools ...doesn't work

2015-04-12 Thread Kinkie
Hi Fiorenza,
  does your browser display the same error when you remove that config
line and reconfigure squid?
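For comparison, a minimal client delay pool setup looks roughly like this; the rates are arbitrary examples and this is a sketch, not a tested configuration (localnet stands for whatever src ACL matches your clients):

```
client_delay_pools 1
client_delay_initial_bucket_level 50
client_delay_access 1 allow localnet
client_delay_access 1 deny all
client_delay_parameters 1 64000 128000
```

If squid then refuses to start, "squid -k parse" should point at the offending line.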

On Fri, Apr 10, 2015 at 3:51 PM, Fiorenza Meini fme...@esseweb.eu wrote:
 Hi,
 I'm testing the client_delay_pool functionality on a 3.4 squid release.
 It seems that it isn't working: in my browser I receive the error that the
 proxy isn't reachable, and in the log file I can't see anything useful.

 Has anyone configured this functionality successfully ?

 Regards

 Fiorenza Meini
 --
 Spazio Web S.r.l.
 V. Dante, 10
 13900 Biella
 Tel.: +39 015 2431982
 Fax.: +39 015 2522600
 Numero d'Iscrizione al Registro Imprese presso CCIAA Biella, Cod.Fisc.e
 P.Iva: 02414430021
 Iscriz. REA: BI - 188936 Cap. Soc.: EURO. 30.000 i.v.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange message when doing a squid -k parse or reconfigure

2015-04-07 Thread Kinkie
Hi,
  I've searched for these strings in squid, couldn't find them.
Maybe this is emitted by some library?

On Tue, Apr 7, 2015 at 5:16 PM, dweimer dwei...@dweimer.net wrote:
 My Squid Process seems to be working fine, but I noticed an unusual message
 when testing the squid configuration

 squid: environment corrupt; missing value for https_pr

 Any Ideas? Its a forward only proxy not doing reverse proxy or anything. Its
 running on FreeBSD 10.1-RELEASE-p8, installed from ports Squid version is
 3.4.12. I don't have any problems accessing http or https sites through it.

 --
 Thanks,
Dean E. Weimer
http://www.dweimer.net/
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fwd: Squid 3.5.2 Compile Error

2015-03-07 Thread Kinkie
Yes, and I thought that as well. My reaction would be to make sure
that the second variable is guaranteed to be defined as 0.

On Sun, Mar 8, 2015 at 6:26 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 8/03/2015 7:33 a.m., Kinkie wrote:
 Could you please attach your config.log file?

 FYI: This appears to be the precompiler not treating undefined macros as
 0/false. Which is kind of weird in Wheezy since that compiler version
 was in use during the test development.

 Amos


 Thanks

 On Sat, Mar 7, 2015 at 5:27 AM, Michel Peterson wrote:
 Hi friends,

 I'm trying to compile squid 3.5.2 on debian wheezy and I am getting
 the following error after running the command make all:

 Making all in compat
 make[1]: Entrando no diretório `/root/squid-3.5.2/compat'
 depbase=`echo assert.lo | sed 's|[^/]*$|.deps/|;s|\.lo$||'`;\
 /bin/bash ../libtool  --tag=CXX   --mode=compile g++
 -DHAVE_CONFIG_H   -I.. -I../include -I../lib -I../src -I../include
 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
 -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64  -g
 -O2 -march=native -std=c++11 -MT assert.lo -MD -MP -MF $depbase.Tpo -c
 -o assert.lo assert.cc \
 mv -f $depbase.Tpo $depbase.Plo
 libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib
 -I../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments
 -Wshadow -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE
 -D_FILE_OFFSET_BITS=64 -g -O2 -march=native -std=c++11 -MT assert.lo
 -MD -MP -MF .deps/assert.Tpo -c assert.cc  -fPIC -DPIC -o
 .libs/assert.o
 In file included from ../include/squid.h:43:0,
  from assert.cc:9:
 ../compat/compat.h:49:57: error: operator '&&' has no right operand
 make[1]: ** [assert.lo] Erro 1
 make[1]: Saindo do diretório `/root/squid-3.5.2/compat'
 make: ** [all-recursive] Erro 1



 Help me plz.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fwd: Squid 3.5.2 Compile Error

2015-03-07 Thread Kinkie
Could you please attach your config.log file?

Thanks

On Sat, Mar 7, 2015 at 5:27 AM, Michel Peterson
michel.petter...@gmail.com wrote:
 Hi friends,

 I'm trying to compile squid 3.5.2 on debian wheezy and I am getting
 the following error after running the command make all:

 Making all in compat
 make[1]: Entrando no diretório `/root/squid-3.5.2/compat'
 depbase=`echo assert.lo | sed 's|[^/]*$|.deps/|;s|\.lo$||'`;\
 /bin/bash ../libtool  --tag=CXX   --mode=compile g++
 -DHAVE_CONFIG_H   -I.. -I../include -I../lib -I../src -I../include
 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
 -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64  -g
 -O2 -march=native -std=c++11 -MT assert.lo -MD -MP -MF $depbase.Tpo -c
 -o assert.lo assert.cc \
 mv -f $depbase.Tpo $depbase.Plo
 libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib
 -I../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments
 -Wshadow -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE
 -D_FILE_OFFSET_BITS=64 -g -O2 -march=native -std=c++11 -MT assert.lo
 -MD -MP -MF .deps/assert.Tpo -c assert.cc  -fPIC -DPIC -o
 .libs/assert.o
 In file included from ../include/squid.h:43:0,
  from assert.cc:9:
 ../compat/compat.h:49:57: error: operator '&&' has no right operand
 make[1]: ** [assert.lo] Erro 1
 make[1]: Saindo do diretório `/root/squid-3.5.2/compat'
 make: ** [all-recursive] Erro 1



 Help me plz.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.1 install

2015-01-18 Thread Kinkie
Hi Matt,
  could you share the last 20 lines of the build output, and what
environment you are building on?
The developers regularly test builds on 15 or so different OSes (see
http://build.squid-cache.org) but yours may be different.
You may also want to check the squid wiki for details on how to set up
a build environment.

On Sun, Jan 18, 2015 at 7:51 PM, Matt Bowman mattat...@gmail.com wrote:
 Hey guys,

 I just tried compiling the latest version of squid 3.5.1 with OpenSSL enabled 
 and am receiving compile errors.  Has anyone else run into this problem?

 Thanks,

 Matt

 Sent from my iPhone
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] DiskThreadsDiskFile::openDone squid 3.5.0.4

2014-12-26 Thread Kinkie
Nothing to worry about. The files were removed by some outside
software and were not found. Squid will manage the error and carry on.

On Fri, Dec 26, 2014 at 1:22 PM, HackXBack hack.b...@hotmail.com wrote:
 Hello squid ,
 after using 3.5.0.4 on fresh debian system
 i see many errors in cache.log

 2014/12/26 07:21:39 kid1|   /cache03/2/00/31/3123
 2014/12/26 07:21:39 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:39 kid1|   /cache04/1/00/5F/5F16
 2014/12/26 07:21:39 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:39 kid1|   /cache03/3/00/29/291F
 2014/12/26 07:21:39 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:39 kid1|   /cache05/1/00/11/11F6
 2014/12/26 07:21:45 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:45 kid1|   /cache03/3/00/17/176C
 2014/12/26 07:21:46 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:46 kid1|   /cache02/6/00/15/15CE
 2014/12/26 07:21:47 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:47 kid1|   /cache02/6/00/0B/0B07
 2014/12/26 07:21:47 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:21:47 kid1|   /cache02/2/00/02/02B4
 2014/12/26 07:22:09 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:09 kid1|   /cache03/4/00/03/0365
 2014/12/26 07:22:12 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:12 kid1|   /cache03/2/00/1F/1F26
 2014/12/26 07:22:12 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:12 kid1|   /cache03/6/00/1F/1F25
 2014/12/26 07:22:13 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:13 kid1|   /cache04/6/00/1F/1F21
 2014/12/26 07:22:15 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:15 kid1|   /cache05/2/00/1F/1F30
 2014/12/26 07:22:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:21 kid1|   /cache02/6/00/1D/1D5A
 2014/12/26 07:22:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:21 kid1|   /cache02/2/00/0C/0CB5
 2014/12/26 07:22:21 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:21 kid1|   /cache03/5/00/01/0144
 2014/12/26 07:22:31 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:31 kid1|   /cache02/2/00/25/2504
 2014/12/26 07:22:31 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2014/12/26 07:22:31 kid1|   /cache04/5/00/24/244D
 2014/12/26 07:22:31 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory




 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/DiskThreadsDiskFile-openDone-squid-3-5-0-4-tp4668840.html
 Sent from the Squid - Users mailing list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.4.10 incorrectly configured on Solaris 10

2014-12-18 Thread Kinkie
Hello Yuri,
  this is probably a system header dependency.
Could you check if the manuals mention anything about ipfmutex_t? If
they do, at the beginning of the page they should include a list of
#include ... lines. Could you copy-paste these lines here?

Thanks

On Thu, Dec 18, 2014 at 3:01 PM, Yuri Voinov yvoi...@gmail.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi there,

 yesterday (and during last four day) I've try to build transparent
 caching proxy on Solaris 10 (x86_64) testing environment.

 Configuration options are:

 # Without SSL 64 bit GCC
 ./configure '--prefix=/usr/local/squid' '--enable-translation'
 '--enable-external-acl-helpers=file_userip,unix_group'
 '--enable-icap-client' '--enable-ipf-transparent'
 '--enable-storeio=diskd' '--enable-removal-policies=lru,heap'
 '--enable-devpoll' '--disable-wccp' '--enable-wccpv2'
 '--enable-http-violations' '--enable-follow-x-forwarded-for'
 '--enable-arp-acl' '--enable-htcp' '--enable-cache-digests' '--with-dl'
 '--enable-auth-negotiate=none' '--disable-auth-digest'
 '--disable-auth-ntlm' '--disable-auth-basic'
 '--enable-storeid-rewrite-helpers=file'
 '--enable-log-daemon-helpers=file' '--with-filedescriptors=131072'
 '--with-build-environment=POSIX_V6_LP64_OFF64' 'CFLAGS=-O3 -m64 -fPIE
 -fstack-protector -mtune=core2 --param=ssp-buffer-size=4 -pipe'
 'CXXFLAGS=-O3 -m64 -fPIE -fstack-protector -mtune=core2
 --param=ssp-buffer-size=4 -pipe' 'CPPFLAGS=-I/usr/include
 -I/opt/csw/include' 'LDFLAGS=-fPIE -pie -Wl,-z,now'

 But binaries built without interceptor support.

 Some investigation:

 Config.log has errors with ip_nat.h compilation:

 configure:27435: checking for netinet/ip_nat.h
 configure:27435: g++ -c -m64 -O3 -m64 -fPIE -fstack-protector
 -mtune=core2 --param=ssp-buffer-size=4 -pipe -march=native -std=c++11
 -I/usr/include -I/opt/csw/include -I/usr/include/gssapi
 -I/usr/include/kerberosv5 conftest.cpp 5
 In file included from conftest.cpp:266:0:
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:98:2:
 error: 'ipfmutex_t' does not name a type
   ipfmutex_t nat_lock;
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:108:2:
 error: 'frentry_t' does not name a type
   frentry_t *nat_fr; /* filter rule ptr if appropriate */
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:112:2:
 error: 'ipftqent_t' does not name a type
   ipftqent_t nat_tqe;
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:113:2:
 error: 'u_32_t' does not name a type
   u_32_t  nat_flags;
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:114:2:
 error: 'u_32_t' does not name a type
   u_32_t  nat_sumd[2]; /* ip checksum delta for data segment */
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:115:2:
 error: 'u_32_t' does not name a type
   u_32_t  nat_ipsumd; /* ip checksum delta for ip header */
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:116:2:
 error: 'u_32_t' does not name a type
   u_32_t  nat_mssclamp; /* if != zero clamp MSS to this */
   ^
 /opt/csw/lib/gcc/i386-pc-solaris2.10/4.9.2/include-fixed/netinet/ip_nat.h:117:2:
 error: 'i6addr_t' does not name a type
   i6addr_t nat_inip6;

 and so, configure does not see IP Filter finally, ergo cannot build
 interceptor.

 Yes, IP Filter installed in system. Yes, I've try to build 32 bit also.
 Yes, I've try to build on another system. Yes, I've try to play with
 configure option. Yes, I've try also development version 3.5.x - with
 the same result.

 Amos, need your help.

 Thanks in advance,

 WBR, Yuri

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2

 iQEcBAEBAgAGBQJUkt4vAAoJENNXIZxhPexGn9EH/3CUqof3f4xHNBuZIhC35Zup
 EgTYQGwUck0hq98GP+USC7C186qW3pscafTO82olbb55xb7Bpmw6b0YVgsVK9AJy
 u2IFnc6MQe1rhYl8NM5L9B5XC6K5gKb8P4UQYAirYPvu0XDxWJYd0N8HqL+8uI6+
 3OtvrGnQZyCOHTuQ8Ubu2y3yDpjdUhjX7sCRER8QiLR/IMTyXAu2pmIpMISLTMK+
 wmI1xVfrafpg5TO+RzkwQFbWQhNUq1JqY6kttHb9D/Qg5eTw2ceFLYsrkTiuwpYv
 czjRk2J4F7WYmbFJ0sTwRqyAZtM8xC8b9dk4SjkqOEpgIE/wdnqCJp/yQbfo/kk=
 =LWVp
 -END PGP SIGNATURE-


 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Packet Marking for Traffic Shaping

2014-12-06 Thread Kinkie
Hello Osmany,
  have you tried http://wiki.squid-cache.org/Features/QualityOfService ?
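Among other things, that page documents tcp_outgoing_tos, which can mark each client group's outgoing traffic with a different ToS/DSCP value for the firewall to match on; the networks and values below are examples only, not a recommendation:

```
acl branch_a src 192.168.10.0/24
acl branch_b src 192.168.20.0/24
tcp_outgoing_tos 0x20 branch_a
tcp_outgoing_tos 0x40 branch_b
# On Linux builds with netfilter support, tcp_outgoing_mark sets an
# nfmark instead of a ToS value.
```

With 50+ client groups this becomes a long but mechanical list of acl/ToS pairs.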


Kinkie

On Fri, Dec 5, 2014 at 4:15 PM, Osmany Goderich ogoder...@gmail.com wrote:
 Hi everyone,

 I was googling and I couldn't find anything clear about the subject, but I
 am trying to make Squid mark the packets in order to differentiate traffic
 so that I can do Traffic Shaping on my hardware firewalls. This should help
 me do the job easier in my firewalls since the requests that go to internet
 come from only one IP (the proxy-cache) and I really need to identify
 different clients so that I can apply different traffic shaping rules. My
 firewall supports DSCP or ToS. I was looking up ToS but I am having a hard
 time to understand how can I apply different values of ToS to all my
 clients(I'm talking about more that 50 clients with different bandwidth to
 be assigned).
 Can anyone please help me with this?

 Thanks

 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users




-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Authentication\Authorization using a PAC file?

2014-11-24 Thread Kinkie
Hi Eliezer,
  I don't think so. PACfiles have no access to the DOM or facilities
like AJAX, and are very limited in what they can return or affect as
side-effects. In theory it could be possible to do something, but in
practice it would be only advisory and not secure: a pacfile must by
definition be at a publicly-accessible URL, so anyone can read and
interpret it.

On Mon, Nov 24, 2014 at 11:25 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 I do know that pac files contains some form of JS and in the past I
 have seen couple complex PAC files but unsure about the options.
 I want to know if a PAC file can be used for
 Authentication\Authorization, maybe even working against another
 external system to get a token?

 Thanks,
 Eliezer
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJUcweKAAoJENxnfXtQ8ZQUy7oH/ieegXDfKslc8NPYgzkRfpRW
 JVYcRB9gqVEQSEpphznVz3s4PTuspYYKmNnr1uWMnUQRC906GPaa326j+EMtQ9Eq
 mcPc2dBU7jyMkj5V4EUAJlMZ+29YzDFKSAAJkf4/cYX5ik1JKOMyIljaKF5O4PQU
 HNhSUVrQ+/9nkDE8puzALYYFygKn+u8exN2pr9ikobAgsGhoMMsULJxQi90st67S
 W9/Be12+2KiBxGWBwnTCNTZjRs5xAg/8xsLTOuMMzKPF0ihpDRcDFQFYZYF22uKM
 BQAZCG1VJWz8wwDrDN8Pmy7AbII2ygFvKu/8s6S7ZAdq7mragGVsyhJzVoQzqJc=
 =l9Ue
 -END PGP SIGNATURE-
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Centralized Squid - design and implementation

2014-11-19 Thread Kinkie
One word of caution: pactester uses the Firefox JavaScript engine, which is
more forgiving than MSIE's. So while it is a very useful tool, it may let
some errors slip through.
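For example, a minimal PAC file of the kind such testers exercise; the hostnames are placeholders, and a plain string check stands in for PAC helpers like dnsDomainIs() so the sketch also runs outside a browser:

```javascript
// Minimal PAC sketch: internal hosts go DIRECT, everything else via a proxy.
// "proxy.example.com:3128" and ".corp.example.com" are made-up names.
function FindProxyForURL(url, host) {
    if (host === "localhost" || host.endsWith(".corp.example.com")) {
        return "DIRECT";
    }
    // Fall back to DIRECT if the proxy cannot be reached.
    return "PROXY proxy.example.com:3128; DIRECT";
}
```

A tester can then be fed URL/host pairs and the returned proxy strings compared against expectations before the file is rolled out.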
On Nov 18, 2014 9:45 PM, Jason Haar jason_h...@trimble.com wrote:

 On 19/11/14 01:39, Brendan Kearney wrote:
  i would suggest that if you use a pac/wpad solution, you look into
  pactester, which is a google summer of code project that executes pac
  files and provides output indicating what actions would be returned to
  the browser, given a URL.
 couldn't agree more. We have it built into our QA to run before we ever
 roll out any change to our WPAD php script (a bug in there means
 everyone loses Internet access - so we have to be careful).

 Auto-generating a PAC script per client allows us to change behaviour
 based on User-Agent, client IP, proxy and destination - and allows us to
 control what web services should be DIRECT and what should be proxied.
 There is no other way of achieving those outcomes.

 Oh yes, and now that both Chrome and Firefox support proxies over HTTPS,
 I'm starting to ponder putting up some form of proxy on the Internet for
 our staff to use (authenticated of course!) - WPAD makes that something
 we could implement with no client changes - pretty cool :-)

 --
 Cheers

 Jason Haar
 Corporate Information Security Manager, Trimble Navigation Ltd.
 Phone: +1 408 481 8171
 PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-dev] When i redirected to more traffic to squid box for testing goal via web-polygraph . find error info OS probably ran out of ephemeral ports at 192.168.2.1:0

2014-11-09 Thread Kinkie
You want to ask the polygraph authors this question; the squid users
and developers lists are not really the right place for it.
I don't know about the details, but that error message probably means
that your bots are overwhelming the TCP stack, which can't free
ephemeral ports quickly enough due to TCP timeouts.

On Thu, Nov 6, 2014 at 8:36 AM, johnzeng johnzeng2...@yahoo.com wrote:

 Hello :

 I hit a problem when I redirected more traffic to the squid box for
 load testing via web-polygraph.

 squidbox ip is 192.168.2.2 web-polygraph_box ip is 192.168.2.3

 /polygraph-client --config
 /accerater/testtool/share/polygraph/workloads/simple.pg --proxy
 192.168.2.2:80 --verb_lvl 10

 ./polygraph-server --config
 /accerater/testtool/share/polygraph/workloads/simple.pg --verb_lvl 10

 When the test traffic reaches 1500 requests/sec, I see many errors,

 but my os setting is net.ipv4.ip_local_port_range = 1024 65535
 open files (-n) 65536
 max user processes (-u) 1
 /proc/sys/fs/file-max 6815744


 and I still see these errors. What should I do? If possible, please give
 me some advice.

 **
 error info
 **

 EphPortMgr.cc:23: error: 34920/69877 (s98) Address already in use
 005.28| OS probably ran out of ephemeral ports at 192.168.2.3:0
 005.28| Client.cc:347: error: 34920/69878 (c63) failed to establish a
 connection
 005.28| 192.168.2.3 failed to connect to 192.168.2.2:80
 005.31| i-dflt 104811 0.00 -1 -1.00 3904 32336
 005.59| PolyApp.cc:189: error: 39/75599 (c58) internal timers may be
 getting beh

 005.90| EphPortMgr.cc:23: error: 64/129 (s98) Address already in use
 005.90| OS probably ran out of ephemeral ports at 192.168.2.1:0
 005.90| Client.cc:347: error: 64/130 (c63) failed to establish a connection
 005.90| 192.168.2.1 failed to connect to 192.168.2.2:80
 005.90| PolyApp.cc:189: error: 4/180 (c58) internal timers may be
 getting behind
 005.90| record level of timing drift: 179msec; last check was 3msec ago
 005.90| EphPortMgr.cc:23: error: 128/260 (s98) Address already in use
 005.90| OS probably ran out of ephemeral ports at 192.168.2.1:0
 005.90| Client.cc:347: error: 128/261 (c63) failed to establish a connection
 005.90| 192.168.2.1 failed to connect to 192.168.2.2:80
 005.91| PolyApp.cc:189: error: 8/460 (c58) internal timers may be
 getting behind
 005.91| record level of timing drift: 383msec; last check was 3msec ago









 ***
 Web-polygraph configration
 ***


 Content SimpleContent = {
 size = exp(13KB); // response sizes distributed exponentially
 cachable = 80%; // 20% of content is uncachable
 };

 // a primitive server cleverly labeled S101
 // normally, you would specify more properties,
 // but we will mostly rely on defaults for now
 Server S = {
 kind = S101;
 contents = [ SimpleContent ];
 direct_access = contents;

 addresses = ['192.168.2.1:9090']; // where to create these server agents
 };

 // a primitive robot
 Robot R = {
 kind = R101;
 pop_model = { pop_distr = popUnif(); };
 recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
 req_rate = 1600/sec;


 origins = S.addresses; // where the origin servers are
 addresses = ['192.168.2.1']; // where these robot agents will be created
 };




 **
 sysctl.conf
 **


 fs.file-max = 6815744
 fs.aio-max-nr = 1048576
 kernel.shmmax = 4294967295
 kernel.threads-max = 212992
 kernel.sem = 250 256000 100 1024
 net.core.rmem_max=5165824
 net.core.wmem_max=262144
 net.ipv4.tcp_rmem=5165824 5165824 5165824
 net.ipv4.tcp_wmem=262144 262144 262144
 net.core.rmem_default = 5165824
 net.core.wmem_default = 262144
 net.core.optmem_max = 25165824
 net.ipv4.ip_local_port_range = 1024 65535
 net.nf_conntrack_max = 6553600
 net.netfilter.nf_conntrack_tcp_timeout_established = 1200


 net.ipv4.tcp_tw_recycle = 1
 net.ipv4.tcp_tw_reuse = 1
 net.ipv4.tcp_fin_timeout = 10
 net.ipv4.tcp_orphan_retries = 3
 net.ipv4.tcp_retries2 = 5
 net.ipv4.tcp_keepalive_intvl = 15
 net.ipv4.tcp_syn_retries = 5
 net.ipv4.tcp_synack_retries = 2
 net.ipv4.tcp_keepalive_time = 1800
 net.core.netdev_max_backlog = 300
 net.ipv4.tcp_max_syn_backlog = 262144000
 net.ipv4.tcp_max_tw_buckets = 5
 net.core.somaxconn = 262144000
 net.ipv4.tcp_sack = 1
 net.ipv4.tcp_fack = 1

 net.ipv4.tcp_timestamps = 0
 net.ipv4.tcp_window_scaling = 1
 net.ipv4.tcp_syncookies = 1
 net.ipv4.tcp_no_metrics_save=1
 net.ipv4.tcp_max_orphans = 26214400
 net.ipv4.tcp_synack_retries = 2
 net.ipv4.tcp_low_latency = 1
 net.ipv4.tcp_rfc1337 = 1

 ___
 squid-dev mailing list
 squid-...@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-dev



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org

Re: [squid-users] squid authentication failing

2014-08-12 Thread Kinkie
On Mon, Aug 11, 2014 at 7:59 PM, Sarah Baker sba...@brightedge.com wrote:
 Background:
 Squid: squid-3.1.23-2.el6.x86_64
 OS: CentOS 6.5 - Linux 2.6.32-431.23.3.el6.x86_64 #1 SMP Thu Jul 31 17:20:51 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

 Issue:
 I have two boxes, same OS, same squid binary, same config file, same
 squid-passwd file.
 Configuration is set up for ncsa_auth.  Squid runs as user squid.

 Both systems return OK when ncsa_auth is run from the command line as the
 squid user with the login and password in the squid-passwd file.

 Using squid via curl through one of the proxy IPs/ports, however, one
 system gives 403 forbidden while the other works just fine.
 Tried removing authentication entirely, a fully open squid.  It fails -
 same message.

403 forbidden means that the authenticator doesn't even get to kick
in; it's a final deny.
Are you really sure that the 403 is generated by Squid, and not by the
origin server? You can tell by looking at the error page.

 Also looked at thusfar:
 rpm -q query_options --requires squid-3.1.23-2.el6.x86_64
 the same on both boxes.
 Ran yum update on both to insure everything was up to latest - no change.

The issue is either not in squid or it's related to the http_access
configuration.
Would you mind sharing an excerpt of your squid.conf including that part?

 Any ideas what I should look for?


-- 
Francesco


Re: [squid-users] find the cached pages by squid?

2014-08-09 Thread Kinkie
Hello Mark,
  access.log contains the list of URLs requested by any client to the
cache (if enabled, of course).
If you wish, you can then verify whether they have been cached (and
whether the cached entry is still considered valid) by requesting them
(or at least their headers via the HEAD http verb) with the
Cache-Control: only-if-cached HTTP header - you can do that with any
command-line HTTP client such as curl or wget.
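A concrete sketch of that check with curl (the proxy address and the URL are placeholders for your own setup):

```shell
# Probe a (hypothetical) Squid at 127.0.0.1:3128: "only-if-cached" makes
# Squid answer a fresh cache hit normally (200) and reply 504 Gateway
# Timeout on a miss, without contacting the origin server.
PROXY=http://127.0.0.1:3128   # placeholder: your Squid's host:port
URL=http://www.example.com/   # placeholder: a URL seen in access.log
code=$(curl -s -o /dev/null -w '%{http_code}' \
       -x "$PROXY" -H 'Cache-Control: only-if-cached' "$URL") || true
case "$code" in
  200) echo "cached and fresh: $URL" ;;
  504) echo "not cached (or stale): $URL" ;;
  *)   echo "no usable answer from proxy (code=$code)" ;;
esac
```

Run against each URL taken from access.log; only 200 responses correspond to objects Squid would actually serve from cache.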

On Sat, Aug 9, 2014 at 2:39 PM, Mark jensen ngiw2...@hotmail.com wrote:

 We know that squid is a cache engine (it caches the requested pages in a
 cache memory)

 I have tried to see the cached pages from cache.log file, but I didn't find 
 any page.

 and from squid wiki:

 The cache.log file contains the debug and error messages that Squid 
 generates.(not the cached pages).

 So where can I find the cached pages (url at least)?






-- 
Francesco


Re: [squid-users] delay pool negativ value

2014-07-01 Thread Kinkie
On Tue, Jul 1, 2014 at 12:47 PM, Grooz, Marc (regio iT)
marc.gr...@regioit.de wrote:
 Hi,

 If I watch the delay pool with squidclient mgr:delay, what does a negative
 value in the current field mean?

Small or smallish values mean that the pool is depleted. Until the
values get positive and big enough again, no more data will be sent.

 Is there a description of the values in that output?

It should be more or less self-explanatory. What aspects are you
mostly concerned about?


-- 
Francesco


Re: [squid-users] What is a reasonable size for squid.conf?

2014-06-29 Thread Kinkie
On Sun, Jun 29, 2014 at 3:57 AM, Owen Crow owen.c...@gmail.com wrote:
 Consider this a reply to Kinkie and Eliezer.

 Yes, I expect my setup is unusual, but that's why I'm trying to get
 advice from others who might have a similar setup.

 I run the proxy as the main destination for a wildcard DNS. This is how
 our many tenants use URLs in the wildcard domain (let's call it
 *.wild.com) and the proxy connects them to the various backend
 services based on the hostname such as:

 acme-www.wild.com connects to the WWW server for Acme customer
 beta-www.wild.com connects to a similar but different WWW server for
 Beta customer.

 For each customer there are 5-10 unique hostnames to keep the services
 separate. We do this as it is much simpler than URL-rewriting (or at
 least it seemed so to me at the beginning).

 In addition, our proxy listens on about 8 different ports
 (80/443/8080, etc) for different services. The different ports require
 7 ACLs that exclude the other ports that are not for that one
 service/port combination.

 I can get more specific if anyone is interested.
 I use make+M4 macros to generate the squid.conf file from a source
 file and then separate all the customers into individual configuration
 files based on a conf.d directory.

Hi,
  yes, it could be interesting. Not the full configuration, which is
most likely confidential.
But the template you are using for a single entry may be interesting
and maybe give us enough information to understand if there are
opportunities for optimization.

  Kinkie


Re: [squid-users] What is a reasonable size for squid.conf?

2014-06-28 Thread Kinkie
On Fri, Jun 27, 2014 at 9:51 PM, Owen Crow owen.c...@gmail.com wrote:
 I am running a non-caching reverse proxy using version 3.3.10.

 My squid.conf is currently clocking in at 60k lines (not including
 comments or blank lines). Combined with the conf files in my conf.d
 directory, I have a total of 89k lines in configuration.

Hi Owen,
  I suspect you have embedded in your squid.conf some very long ACL,
haven't you?
If so, what type is it, and how many lines?
As a general advice, you may want to consider moving these ACLs to
external files and reference them from the config-file.
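As a sketch of that approach (the file path and ACL name here are illustrative, not from the original configuration):

```
# squid.conf fragment: load a large domain list from an external file,
# one domain per line, instead of inlining thousands of acl lines
acl tenant_hosts dstdomain "/etc/squid/tenant_hosts.txt"
http_access allow tenant_hosts
```

The external file can then be regenerated per tenant without touching squid.conf itself, which keeps the main file small and the reconfigure diff readable.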

 I have definitely noticed -k reconfigure calls taking on the order
 of 20 seconds to run when it used to be less than a couple seconds.
 (Same results with -k test).

20 seconds is quite a bit. What has changed in the configuration file
since then?

 I've tried searching for anything related to max lines and similar,
 but it usually talks about squid.conf configuration options and not
 the file itself.
 If this is not documented per se, are there any anecdotal examples
 that have this many lines or more? I only see this growing over time.

There is no hard limit to the configuration file that I know of. Are
you experiencing any performance issues other than during
reconfiguration?

-- 
Kinkie


Re: [squid-users] Even/Odd SRC ACL

2014-06-27 Thread Kinkie
Hi Sharma,
   would using a random ACL for outgoing IP selection be good enough?

Francesco

On Fri, Jun 27, 2014 at 9:18 AM, Nishant Sharma codemarau...@gmail.com wrote:

 On Friday 27 June 2014 12:34 PM, Amos Jeffries wrote:
 Ah, Squid-3 is using CIDR masking. Sorry should have remembered earlier
 how strict this is.

 The two /25 subnets (or groups of /26 etc) is the way to go.

 Thanks for the clarification. So, would it be possible in future?

 I don't know how complicated it would be to implement.

 Thanks again.

 Regards,
 Nishant



-- 
Francesco


Re: [squid-users] squid error

2014-06-18 Thread Kinkie
On Tue, Jun 17, 2014 at 6:12 PM, Kohichi Kamijoh kami...@jp.ibm.com wrote:
 Hi Kinkie,

 Thank you for your response. Sorry but I cannot share the hostname because
 of some security reason, but the name is long (more than 10 letters).

Length is not the issue. Contents may be, if they violate the DNS RFC.

 Anyway, I found the following in squid.conf.default.

 #  TAG: check_hostnames
 #   For security and stability reasons Squid by default checks
 #   hostnames for Internet standard RFC compliance. If you do not want
 #   Squid to perform these checks then turn this directive off.
 #
 #Default:
 # check_hostnames on

 Will the Squid Malformed Host Name Error be eliminated if we set
 check_hostnames off in squid.conf?

Maybe; I can't tell with no info. Trying it is the best I can offer.

   Kinkie


Re: [squid-users] squid error

2014-06-17 Thread Kinkie
On Tue, Jun 17, 2014 at 4:17 PM, Kohichi Kamijoh kami...@jp.ibm.com wrote:

 Hello,

 I'm Koichi Kamijo from IBM. I have a question regarding squid.

 I had a following error while using squid.

 Squid Malformed Host Name Error

 We are using this just test purpose and OS is windows 7. Squid version is
 2.7 STABLE 8.

Hi Koichi,
  what is the URL  that you were trying to access when you got that message?
Are you also aware that Squid 2.7 is way obsolete by now, with the
currently supported version being 3.4?

Ciao,

-- 
Kinkie


Re: [squid-users] squid error

2014-06-17 Thread Kinkie
On Tue, Jun 17, 2014 at 4:47 PM, Kohichi Kamijoh kami...@jp.ibm.com wrote:
 Hi Kinkie,

 Thank you for your quick response.

 This message was created by a unique system which we own.

 Is version 3.4 supported by Windows 7? Also, is this Squid Malformed Host
 Name Error fixed by this version?

Aha!
No, unfortunately Squid 3 is not yet working on Windows. Contributions
in terms of code or sponsored development are of course welcome
towards that goal.
I was asking the question because maybe the host name _is_ malformed
after all; in that case you may want to see the check_hostnames
configuration directive
(http://www.squid-cache.org/Doc/config/check_hostnames/).

I cannot tell if the issue is fixed, because without knowing the
hostname I can't tell if this is a problem in squid or in the system
you refer to.

   Kinkie


Re: [squid-users] squid caches gmail login/account

2014-06-17 Thread Kinkie
On Tue, Jun 17, 2014 at 3:48 PM,  deb...@boku.ac.at wrote:
 Hi,

 i've a realy strange problem:

 With host1 I log in to gmail.com - now I open a browser (Iceweasel) on
 host2, and go to gmail.com - I'm automatically logged in with the account
 from host1. - wtf?
 The clients, in my case are standalone debian hosts.

This is way strange, especially since gmail is over https, which means
that even if it wanted to, squid could not see the traffic nor,
obviously, cache it.
What do you see in access.log?


-- 
kinkie


Re: [squid-users] Segment violation in 3.4.5

2014-05-06 Thread Kinkie
Unfortunately this report is still short on details.
Could you try to follow the procedure explained in
http://wiki.squid-cache.org/SquidFaq/BugReporting#Using_gdb_debugger_on_Squid

and obtain a backtrace?


Thanks!

On Tue, May 6, 2014 at 3:12 AM, Dan Charlesworth d...@getbusi.com wrote:
 Hi folks

 We just compiled an EL6 x86_64 RPM for the 3.4.5 update and are now receiving 
 the Segment violation...dying error for every request.

 We're using the same configuration and config options we have been for every 
 3.4 release so far, with the only change being the addition of  
 --with-included-ltdl.

 We use SSL bump and have a fair few external ACLs so I'm sure there's plenty 
 of places for it break but this error message isn't very specific. The debug 
 logs suggest it happens immediately after a transaction is complete:

 2014/05/06 10:48:17.771 kid1| Checklist.cc(55) markFinished: 
 0x7fff494fe6b0 answer ALLOWED for match
 FATAL: Received Segment Violation...dying.

 Here are our config opts:

 %configure \
--exec_prefix=/usr \
--libexecdir=%{_libdir}/squid \
--localstatedir=/var \
--datadir=%{_datadir}/squid \
--sysconfdir=%{_sysconfdir}/squid \
--with-logdir='$(localstatedir)/log/squid' \
--with-pidfile='$(localstatedir)/run/squid.pid' \
--disable-dependency-tracking \
--enable-follow-x-forwarded-for \
--enable-auth \

 --enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam
  \
--enable-auth-ntlm=smb_lm,fake \
--enable-auth-digest=file,LDAP,eDirectory \
--enable-auth-negotiate=kerberos,wrapper \

 --enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group,session
  \
--enable-cache-digests \
--enable-cachemgr-hostname=localhost \
--enable-delay-pools \
--enable-epoll \
--enable-icap-client \
--enable-ident-lookups \
%ifnarch ppc64 ia64 x86_64 s390x
--with-large-files \
%endif
--enable-linux-netfilter \
--enable-referer-log \
--enable-removal-policies=heap,lru \
--enable-snmp \
--enable-ssl \
--enable-ssl-crtd \
--enable-storeio=aufs,diskd,ufs \
--enable-useragent-log \
--enable-wccpv2 \
--enable-esi \
--with-aio \
--with-default-user=squid \
--with-filedescriptors=16384 \
--with-maxfd=65535 \
--with-dl \
--with-openssl \
--with-pthreads \
--with-included-ltdl




-- 
Francesco


Re: [squid-users] How to install der(or other) root CA certificate on an android device?

2014-04-27 Thread Kinkie
Hi,
  just opening the new root cert in a browser and accepting the
following warnings seemed to do the trick for me, however I haven't
double-checked.

On Sun, Apr 27, 2014 at 12:46 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 I have tried couple times in the past to install a rootCA self-signed
 certificate on an android device but I am yet to succeed.

 If anyone did managed to do that on any android device please help me with
 it.

 Thanks,
 Eliezer



-- 
Francesco


Re: [squid-users] strange running out of filedescriptors problem on MacOSX

2014-04-26 Thread Kinkie
On Sat, Apr 26, 2014 at 7:58 AM, Ambrose LI ambrose...@gmail.com wrote:
 Hi,

 does anyone know what would cause squid 3.3.12 to run out of
 filedescriptors on MacOS X?

Hi Ambrose,
  I guess you are leaking filedescriptors.
A clue why it happens may be in the cachemgr.
Does your build include the squidclient binary? It can be used to
directly access the cache manager interface:

  squidclient -U username -P <the password set in the cachemgr_passwd option> mgr:filedescriptors

should do the trick: the report includes a description of what each
filedescriptor is being used for.

 This seems to be a MacOSX-specific problem. I have never seen this
 happen on my other squid, which runs on Linux.

I am currently trying to build squid for MacOS; I am not able to
successfully build the rock store. Are you able to? Would you mind
sharing what options and environment you are using? Thanks!

 My other observation would be that when I am not at home this does not
 seem to happen. So one possible cause might be that a netdb exchange
 failure (endianness mismatch) would cause squid to run out of file
 descriptors. Does anyone know if this might be possible, or recognize
 what would cause squid to run out of file descriptors shortly after
 startup? Thanks very much.

Seems unlikely. Let's first get the facts out of cachemgr, then we can
speculate on the causes :)


-- 
Kinkie


Re: [squid-users] WARNING: Forwarding loop detected for:

2014-04-08 Thread Kinkie
This looks like a legitimate forwarding loop. What is your request
routing configuration?
cache_peer parent and never_direct are the most interesting lines on
top of my head.

On Tue, Apr 8, 2014 at 8:15 AM, Dipjyoti Bharali dipjy...@skanray.com wrote:
 Hi,

 I'm facing this peculiar issue with certain specific clients. When these
 clients connect to the proxy server, it goes for a toss until I reload the
 service. When I examine the log file, I get this same message
 every time.

/2014/04/02 09:00:17| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Content-Type: text/xml; charset=Utf-16
UNICODE: YES
Content-Length: 0
Host: 192.168.1.1:3128
Via: 1.0 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
(squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
(squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
(squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
(squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
(squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
.
.
.
.
.
.

X-Forwarded-For: 192.168.1.74, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1,
192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, 192.168.1.1, .
.
.
.
.
.
.

Cache-Control: max-age=5999400
Connection: keep-alive/


 Please help. I have to reload every now and then, otherwise. For now i have
 disconnected those clients from the network


 *Dipjyoti Bharali*

 Skanray Technologies Pvt Ltd,
 Plot No. 15-17, Hebbal Industrial Area,
 Mysore - 570018
 Cell Phone : +919243552011
 Phone/Fax: +91 821 2415559/2403344 Extn: 310

 www.skanray.com

 *Please consider the environment before printing this email. *


 ---
 avast! Antivirus: Outbound message clean.
 Virus Database (VPS): 140407-0, 07-04-2014
 Tested on: 08-04-2014 11:45:53
 avast! - copyright (c) 1988-2014 AVAST Software.
 http://www.avast.com






-- 
Francesco


Re: [squid-users] Looking for a cache-friendly CMS.

2014-03-31 Thread Kinkie
Hi,
  I am having good success with Wordpress installed with mod_redirect
integration and the WP Super Cache plugin.

On Mon, Mar 31, 2014 at 1:03 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey I want to use a CMS for my site but I have tried wordpress which is not
 cache friendly by default.
 I want to maybe try to use another CMS but I have no direction.
 I am not a web developer so I need a very simple system.

 I know this is not the right place to look for a CMS but I look if to be
 cache friendly so..

 Eliezer



-- 
Francesco


Re: [squid-users] Inaccessible google.com (squid-3.4)

2014-03-28 Thread Kinkie
Hi Marcello,
  have you tried accessing google FROM the squid box itself?


On Fri, Mar 28, 2014 at 11:56 AM, Marcello Romani
mrom...@ottotecnica.com wrote:
 Hi,
I'm struggling with a recurring problem, namely the very long time it
 takes (and often the impossibility) to reach www.google.com from my LAN,
 which sits behind a custom compiled squid-3.4.

 When this happens, other websites are not affected.
 Also, if I change the browser setting to no proxy, the problem goes away.

 As a workaround, issuing /usr/local/squid/sbin/squid -k reconfigure
 temporarily fixes the problem: after it has completed, pressing F5 or trying
 to directly access www.google.com again succeds within the usual time frame.

 My squid.conf has

 dns_v4_first on

 Also, /proc/sys/net/ipv6/conf/default/disable_ipv6 == 1

 squid -v reads as follows:

 # /usr/local/squid/sbin/squid  -v
 Squid Cache: Version 3.4.4
 configure options:
 '--prefix=/usr/local/squid'
 '--enable-xmalloc-statistics'
 '--enable-storeio=aufs diskd rock ufs'
 '-enable-removal-policies=heap lru'
 '--enable-icmp'
 '--enable-delay-pools'
 '--enable-ssl'
 '--disable-auth'
 --enable-ltdl-convenience

 The box is a Primergy TX200 server running debian 6.0.9, with
 2.6.32-5-686-bigmem kernel.

 Any hints as to where to look to pinpoint the issue would be greatly
 appreciated.

 Thank you in advance.

 --
 Marcello Romani



-- 
Francesco


Re: [squid-users] negative values in mgr:info

2014-02-17 Thread Kinkie
On Mon, Feb 17, 2014 at 11:15 AM, Niki Gorchilov n...@gorchilov.com wrote:
 Hello,

 While using Squid 3.4.3 on 64bit Ubuntu 12.04.3 with 64GB of cache_mem,
 I see negative values in some memory-related statistics:

 ===[cut]===
 Memory usage for squid via mallinfo():
 Total space in arena:  -972092 KB
 Ordinary blocks:   -974454 KB   4472 blks
 Small blocks:   0 KB  0 blks
 Holding blocks:740328 KB 19 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:2362 KB
 Total in use:2362 KB -1%
 Total free:  2362 KB -1%
 Total size:-231764 KB
 Memory accounted for:
 Total accounted:   1834725 KB -792%
 memPool accounted: 77332197 KB -33367%
 memPool unaccounted:   -77563961 KB  -0%
 memPoolAlloc calls: 13874752170
 memPoolFree calls:  13959152640
 ===[cut]===

 Are these the result of integer overflow or of using signed integers?

The former.
The OS API we rely on to collect those uses 32-bit signed integers.
There's nothing we can do about it, I'm sorry :(
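The wraparound is easy to reproduce by hand (Python is used here only as a calculator; the 3 GiB figure is an arbitrary example):

```shell
# mallinfo() reports byte counts in signed 32-bit ints, so any value
# past 2 GiB wraps around to negative -- e.g. 3 GiB:
python3 -c '
v = 3 * 1024**3                      # 3 GiB in bytes
w = v & 0xffffffff                   # truncate to 32 bits
print(w - (1 << 32) if w >= 1 << 31 else w)'
# prints: -1073741824
```

With 64GB of cache_mem the counters wrap many times over, which is exactly why the mgr:info figures above look nonsensical.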


-- 
Kinkie


Re: [squid-users] squid 3.4.3 on Solaris Sparc

2014-02-17 Thread Kinkie
That should be enough.
Check (you can use the nm -s tool) that libdb.so contains the
symbols db_create and db_env_create. It may be that the file is
corrupted, a wrong version or a stub.
Alternatively, if you don't need the session helper, use squid's
configure flags to skip building it.
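A quick check along those lines (the library path is an example; on Solaris, nm may live in /usr/ccs/bin, and GNU nm's -D flag is shown here rather than the -s mentioned above):

```shell
# Check that the DB library actually exports the two symbols the
# session helper links against.
LIB=/usr/lib/libdb.so   # example path: use the library your linker finds
for sym in db_create db_env_create; do
  if nm -D "$LIB" 2>/dev/null | grep -qw "$sym"; then
    echo "$sym: present in $LIB"
  else
    echo "$sym: NOT found in $LIB"
  fi
done
```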

On Mon, Feb 17, 2014 at 4:23 PM, Monah Baki monahb...@gmail.com wrote:
 Hi,


 I did find /usr/lib/libdb.so but no results for libdb.a


 Thanks

 On Mon, Feb 17, 2014 at 12:42 AM, Francesco Chemolli gkin...@gmail.com 
 wrote:

 On 17 Feb 2014, at 01:15, Monah Baki monahb...@gmail.com wrote:

 uname -a
 SunOS proxy 5.11 11.1 sun4v sparc SUNW,SPARC-Enterprise-T5220

 Here are the steps before it fails

 ./configure --prefix=/usr/local/squid --enable-async-io
 --enable-cache-digests --enable-underscores --enable-pthreads
 --enable-storeio=ufs,aufs --enable-removal-policies=lru,
 heap

 make

 c -I../../../include   -I/usr/include/gssapi -I/usr/include/kerberosv5
   -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x -MT ext_session_acl.o -MD -MP
 -MF .deps/ext_session_acl.Tpo -c -o ext_session_acl.o
 ext_session_acl.cc
 mv -f .deps/ext_session_acl.Tpo .deps/ext_session_acl.Po
 /bin/sh ../../../libtool --tag=CXX--mode=link g++ -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x   -g -o ext_session_acl
 ext_session_acl.o ../../../compat/libcompat-squid.la
 libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
 -Wshadow -Werror -pipe -D_REENTRANT -pthreads -g -O2 -std=c++0x -g -o
 ext_session_acl ext_session_acl.o
 ../../../compat/.libs/libcompat-squid.a -pthreads
 Undefined   first referenced
 symbol in file
 db_create   ext_session_acl.o
 db_env_create   ext_session_acl.o

 The build system is not being able to find the Berkeley db library files 
 (but for some reason it can find the header).
 Please check that libdb.a or libdb.so are available and found on the paths 
 searched for libraries by your build system.

 Kinkie



-- 
Francesco


Re: [squid-users] Debuging ERR_CONNECT_FAIL with SYSERR=110

2014-02-12 Thread Kinkie
On Wed, Feb 12, 2014 at 12:40 PM, Pawel Mojski paw...@pawcio.net wrote:
 Hi All;

 I have pretty loaded squid server working in interception mode.
 In about 0.5% of total http request I have an ERR_CONNECT_FAIL with
 additional error SYSERR=110.
 How can I debug a reason of those errors?

 The thing which consider me a lot is the URL and remote server of those
 requests.
 For example, I found three same requests for the same URL hosted on the
 same IP request.
 The first one finished with response 200, the second with 503 and
 ERR_CONNECT_FAIL(SYSERR=110) and the third with 200 again.

 Also, my customers complains that sometimes they have problems surfing
 the web.

 What can I do to debug the problem?

Hi Pawel,
 SYSERR 110 on Linux is a connection timeout (ETIMEDOUT).
It would seem to indicate network issues somewhere, or a severely
overloaded server (which has used up its SYN backlog)
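The errno-to-name mapping can be confirmed from the system's own tables (Python is used here just as a lookup tool; the output shown is the Linux value):

```shell
# errno 110 on Linux is ETIMEDOUT -- "Connection timed out"
python3 -c 'import errno, os; print(errno.ETIMEDOUT, os.strerror(errno.ETIMEDOUT))'
# on Linux prints: 110 Connection timed out
```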


-- 
  Kinkie


Re: [squid-users] Debuging ERR_CONNECT_FAIL with SYSERR=110

2014-02-12 Thread Kinkie
On Wed, Feb 12, 2014 at 1:49 PM, Pawel Mojski paw...@pawcio.net wrote:
 W dniu 2014-02-12 13:30, Kinkie pisze:
 On Wed, Feb 12, 2014 at 12:40 PM, Pawel Mojski paw...@pawcio.net wrote:
 Hi All;

 I have pretty loaded squid server working in interception mode.
 In about 0.5% of total http request I have an ERR_CONNECT_FAIL with
 additional error SYSERR=110.
 How can I debug a reason of those errors?

 The thing which consider me a lot is the URL and remote server of those
 requests.
 For example, I found three same requests for the same URL hosted on the
 same IP request.
 The first one finished with response 200, the second with 503 and
 ERR_CONNECT_FAIL(SYSERR=110) and the third with 200 again.

 Also, my customers complains that sometimes they have problems surfing
 the web.

 What can I do to debug the problem?
 Hi Pawel,
  SYSERR 110 on Linux is connection timeout (ETIMEOUT).
 It would seem to indicate network issues somewhere, or a severely
 overloaded server (which has used all its syn backlog)


 Hi Kinkie;

 I thought the same, but, I have huge net.core.netdev_max_backlog and
 net.ipv4.tcp_max_syn_backlog and there are no network related problems
 at all.

The error is reported by squid, but the issue, if any, is on the origin server.

 At the same time when squid reports a problem I can connect manually
 from squid box to the same ip address (through telnet, wget, etc) and
 nothing wrong occurs.
 I can even believe some kind of timeout happened somewhere, but how can I
 find out what type of timeout it is? syn/ack, wait, whatever?

It should be waiting for syn/ack.



-- 
Francesco


Re: [squid-users] Debuging ERR_CONNECT_FAIL with SYSERR=110

2014-02-12 Thread Kinkie
 At the same time when squid reports a problem I can connect manually
 from squid box to the same ip address (through telnet, wget, etc) and
 nothing wrong occurs.
 I can even believe some kind of timeout happened somewhere, but how can I
 find out what type of timeout it is? syn/ack, wait, whatever?
 It should be waiting for syn/ack.
 So. Just to clarify.
 From tcp point of view: squid is not able to establish connection with
 origin server. Thats all.
 After sending the SYN packet, no ACK comes back. SYSERR=110 cannot mean
 that squid was able to connect but got no response within some
 period of time. Am I right?

That's what the error message seems to indicate, yes.
This is not the root cause, however: it's the symptom of something
else happening somewhere at the network level.


-- 
Kinkie


Re: [squid-users] A very low level question regarding performance of helpers.

2014-02-10 Thread Kinkie
Hi Eliezer,
  I am sorry but from your mail I don't really understand what the
problem is. What are you trying to do, and in what way you are being
prevented from doing that?

On Sun, Feb 9, 2014 at 2:48 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 I have tried for a very long time to understand what the limits of the
 interface between squid and the helpers are.
 I have tested it with perl, python, ruby, java and other languages.
 I am not yet sure if STDIN\OUT is the relevant culprit in a couple of issues.
 I have helpers in all sorts of languages and it seems to me that there is
 a limit inherent in the interface between squid and the helpers by the
 nature of the code itself.

 I have reached a limit of about 50 requests per second per helper on a
 very slow Intel Atom CPU, which does not slow down the whole squid
 instance except at code startup.

 If you have any statistics, please feel free to share them.

 * (I have managed to build a I686 squid on CentOS 6.5)

 Eliezer



-- 
Francesco


Re: [squid-users] Re: Squid 3.1.12 hangs with HEAD request in LOG

2014-01-27 Thread Kinkie
No, I hadn't.

Good :)

On Mon, Jan 27, 2014 at 8:24 AM, 4eversr thomas.boelsc...@assmann.de wrote:
 @Kinkie

 I guess you have not seen my last edit, where I described that 3.4.2 fixes
 this issue.



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-12-hangs-with-HEAD-request-in-LOG-tp4664219p4664461.html
 Sent from the Squid - Users mailing list archive at Nabble.com.



-- 
/kinkie


Re: [squid-users] Hypothetically comparing SATA\SAS to NAS\SAN for squid.

2014-01-19 Thread Kinkie
On Sun, Jan 19, 2014 at 7:42 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 While working here and there I have seen that ZFS is a very robust FS.
 I will not compare it to any others because there is no need for that.

 OK so zfs, ext3, ext4 and others are FS which sits on SPINNING disks or
 flash drives.
 The SATA and SAS interfaces are limited by the standard to serial rates
 of 3 or 6 Gbps.
 So SATA is max 6Gbps while a NIC can have a bandwidth of 10Gbps.
 Are there any real reasons to not use a 10Gbps line?

At least two come to mind: first, behind that 10Gbps line there are
still spinning disks and/or SSDs; the transmission line is simply
going to add some latency. Maybe not much (10% overhead over spinning
disks due to processing, propagation, transmission, error correction
etc). Bandwidth is important, but latency is even more so.
Second, packet loss: SATA, SAS and FC guarantee 0% packet loss. If
there is any, it is immediately detected, and the data is
retransmitted. On Ethernet, you're not so sure. On IP-over-ethernet,
even less. I was told that a 1% packet loss is enough to completely
kill transmission performance in a FCoE environment, and that's the
reason why people who do FCoE use special (converged) adapters,
which look more like FC adapters than Ethernet adapters.

 For example, if I have 10 SAS or SATA disks (SSD or spinning) under a
 machine with 128GB of RAM, the combined data flow can be faster than a
 single drive, even an SSD.

 A dual 10Gbps machine can potentially be faster in many aspects than a
 local disk.

 I do not have the answer but a NAS might be sometimes the right choice as a
 cache storage.

Benchmarks are welcome :)

 Indeed there are overheads for each and every TCP connection and indeed
 there are many aspects that need to be tested and verified, but I still
 suspect that there are some assumptions that need to be verified to make
 sure whether a SAN\NAS might be worth more than it is assumed to be.

The main advantage I can think of for using a NAS is that these
usually have huge RAM caches, which can help by keeping the directory
structure in RAM thus making small file retrieval faster than doing
multiple roundtrips to disk.

-- 
/kinkie


Re: [squid-users] squid 3.3.11 on SLES11 SP3 and couldn't squid -k reconfigure ?

2014-01-16 Thread Kinkie
On Thu, Jan 16, 2014 at 8:36 AM, Josef Karliak karl...@ajetaci.cz wrote:
   Good morning,
   I've squid 3.3.11.xx on SLES11 SP3, all OK, but I often used squid -k
 reconfigure after changing some configuration (adding forbiden domains or
 so). But there is some problem - the command mentioned above tell me some
 errors:
 proxy:/etc/squid # squid -k reconfigure
 squid: ERROR: No running copy

   But proxy is running, we browse on the internet. What went wrong ?
   Thanks for kicking to the right way and best regards
   J.Karliak.

squid -k reconfigure needs to obtain the process-id of the running
copy; it gets that from the pid file (the config option is
pid_filename).
So either the running Squid can't write the pid file, or the file
can't be read by the -k reconfigure command; this can be due to
filesystem permissions, or to something else deleting the pid file in
the meantime.
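The mechanics can be demonstrated without Squid; "-k reconfigure" essentially performs the read-and-signal half of this sketch (the /tmp path is illustrative, the real location comes from pid_filename):

```shell
# What the running daemon does at startup: record its own PID
echo $$ > /tmp/demo-squid.pid

# What "squid -k reconfigure" does: read the PID back and signal it
# (reconfigure sends SIGHUP; "kill -0" here only checks the process exists)
pid=$(cat /tmp/demo-squid.pid) && kill -0 "$pid" && echo "found running copy, pid $pid"
```

If either step fails on your box (unreadable file, stale PID), you get exactly the "No running copy" symptom.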


-- 
/kinkie


[squid-users] Re: helper program

2014-01-10 Thread Kinkie
3.4 introduced some changes (and some regressions, fixed in 3.4.2) to
the helper protocol, aiming to make it more consistent and robust. It
is possible that your helper produced invalid output which was
tolerated by older versions for some reason, but is no longer
accepted.
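For reference, a concurrency-aware helper under the 3.4-style protocol must answer one line per request and echo back the channel-ID it was given; this is a minimal sketch (the "OK" reply keyword is illustrative, the exact response format depends on the helper's role):

```shell
# Minimal helper loop: first token is the channel-ID, the rest is the lookup
helper() {
    while read channel rest; do
        # Always answer on the same channel-ID; newer Squid rejects
        # malformed replies that older versions may have tolerated
        echo "$channel OK"
    done
}

# Feed two sample lookups, as Squid would write them to the helper's stdin
printf '0 http://example.com/a\n1 http://example.com/b\n' | helper
# prints:
# 0 OK
# 1 OK
```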

On Wed, Jan 8, 2014 at 6:58 PM, mailsysadmin mailsysad...@e-lcom.sy wrote:

 Sure, I tried squid.3-HEAD , squid.3.4.1, squid.3.4.2
 I confused why Perl helpers didn't work and get the same error result, but
 it works under squid.2.7
 did I miss something ?




 On Wed, 2014-01-08 at 17:33 +0100, Kinkie wrote:

 On Wed, Jan 8, 2014 at 12:11 PM, mailsysadmin mailsysad...@e-lcom.sy
 wrote:
 Dear Kinkie,

 Thx for your replay,
 I really confused about this case and I try many Perl helpers on squid 3.4
 and I couldn't see and HIT in my access log, in the other hand when I
 try
 to work with squid 2.7 every things goes OK and the dynamic content cached
 perfectly.

 so I'm wonder why squid 3.4 doesn't work with Perl helper ??
 and I follow this article but it doesn't work for me

 http://www.youtube.com/watch?v=nCgUWNXN25I

 Please keep the list in Cc, otherwise others can't benefit from the
 discussion.
 Squid 3.4.1 included a regression in the communication with some
 helpers, which should have been fixed in 3.4.2; have you tried 3.4.2
 too?

 --
 /kinkie





-- 
/kinkie


[squid-users] Re: helper program

2014-01-08 Thread Kinkie
On Wed, Jan 8, 2014 at 12:11 PM, mailsysadmin mailsysad...@e-lcom.sy wrote:
 Dear Kinkie,

 Thx for your replay,
 I really confused about this case and I try many Perl helpers on squid 3.4
 and I couldn't see and HIT in my access log, in the other hand when I try
 to work with squid 2.7 every things goes OK and the dynamic content cached
 perfectly.

 so I'm wonder why squid 3.4 doesn't work with Perl helper ??
 and I follow this article but it doesn't work for me

 http://www.youtube.com/watch?v=nCgUWNXN25I

Please keep the list in Cc, otherwise others can't benefit from the discussion.
Squid 3.4.1 included a regression in the communication with some
helpers, which should have been fixed in 3.4.2; have you tried 3.4.2
too?

-- 
/kinkie


Re: [squid-users] Squid is not caching large objects!

2014-01-05 Thread Kinkie
On Sun, Jan 5, 2014 at 1:06 PM, Aris Squid Team
squid@arissystem.com wrote:
 I configured squid to cache large files i.e. 100MB
 but it does not cache these files.
 any idea?

Have you checked whether these files are cacheable, e.g. with redbot
(http://redbot.org/)?
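Besides redbot, the headers that govern cacheability can be inspected directly, e.g. with `curl -sI http://server/file.bin | grep -Ei 'cache-control|expires|last-modified|etag'`. On a canned example (the headers below are made up for illustration):

```shell
# A response carrying headers like these is cacheable (max-age) and
# revalidatable (Last-Modified); without them Squid has little to work with
grep -Ei 'cache-control|expires|last-modified|etag' <<'EOF'
HTTP/1.1 200 OK
Content-Length: 104857600
Cache-Control: public, max-age=3600
Last-Modified: Wed, 01 Jan 2014 00:00:00 GMT
EOF
# prints the Cache-Control and Last-Modified lines
```

Large objects also need maximum_object_size in squid.conf to be at least as big as the file, or Squid will refuse to store them regardless of the headers.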



-- 
/kinkie


Re: [squid-users] linux oom situtation

2013-12-30 Thread Kinkie
On Mon, Dec 30, 2013 at 9:05 AM, Oguz Yilmaz oguzyilmazl...@gmail.com wrote:
 Hello,

 I have continous oom  panic situation unresolved, origined from
 squid. I am not sure system fills up all the ram (36GB). Why this
 system triggered this oom situation? Is it about some other memory?
 highmem? lowmem? stack size?

Hi Oguz,
  when squid uses that much memory, it's usually because the admin asks it to.
Especially relevant are the cache_mem and cache_dir settings, which
unfortunately are missing in your request for help.
See http://wiki.squid-cache.org/SquidFaq/SquidMemory for details about
how squid uses memory, and how much it is expected to use.
More than that is a bug - usually a memory leak - but 36 GB of
continuously-used RAM is a lot to attribute to a leak.
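For reference, the two directives in question look like this in squid.conf (the sizes are illustrative, and the per-GB index figure is a rough rule of thumb from the FAQ page above, not an exact number):

```
# RAM explicitly dedicated to the in-memory object cache
cache_mem 4096 MB

# 100 GB of disk cache; the in-RAM index for it costs extra memory,
# very roughly 10-15 MB of RAM per GB of cache_dir
cache_dir aufs /var/spool/squid 102400 16 256
```

Total memory use is cache_mem plus the cache_dir index plus per-connection and helper state, so a large cache_dir alone can account for many GB of resident memory.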


-- 
/kinkie


Re: [squid-users] Do windows machines *have* to join a domin to use NTLM?

2013-12-24 Thread Kinkie
On Tue, Dec 24, 2013 at 7:54 PM, Brian J. Murrell br...@interlinx.bc.ca wrote:
 [ Changed the subject to get down to the more basic issue ]

 On Tue, 2013-12-24 at 16:20 +1300, Amos Jeffries wrote:

 This is not an assumption from the documentation. NTLM protocol
 *requires* a DC to operate.

 TL;DR: Do windows machines *have* to join a domain in order to use NTLM
 with Squid?

Hi,
  as far as I know, having a domain is not a strict requirement, at
least if you restrict yourself to the less secure NTLMv1; YMMV with
NTLMv2. But if you do, then you have to manually ensure that the
authenticating system (samba) and the user accounts on the
workstations stay in sync. That's no easy task, even for a small
number of workstations and users.

 Perhaps I mis-stated my desires. I don't mind setting up Samba as a DC.
 But surely Samba users use the same password for Samba as they do for
 other PAM based services (i.e. loging, etc.), which here, actually
 utilizes Kerberos for account access. Does Samba need the clear-text
 value of the password for creating challenges, etc. or can it leverage
 PAM and/or Kerberos?  But I digress.

It is not mandatory that the Samba password and the Unix password be
in sync. Samba doesn't need the cleartext password, but it needs a
hashed password-equivalent which is stored in its own password
database.
Generally Samba *tries* to keep the passwords in sync by replaying
the password change to the Unix password when a user invokes
smbpasswd, but this behaviour can be overridden and avoided.
It is not possible to obtain the SMB password-equivalent from the
encrypted and salted Unix password (to answer your next question).

 My only real hard requirement is to not to require the Windows users to
 join a domain here just to use Squid. I have no desire for the
 network infrastructure here to be the account control for these windows
 laptops as they are not within my administrative domain.  Is that
 impossible?  I simply want users to be able to just bring their own
 Windows machine and be able to use an existing PAM (kerberos actually)
 account and just use my Negotiate protocol offering Squid proxy.
 Good luck. You will need to start with finding a PAM service can
 authenticate NTLMSSP protocol. AFAIK there is no such service.

 I think you are misunderstanding.  What I was saying is that the
 NTLM authentication mechanism documentation from
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm seems to
 assume that one already has a separate AD domain that they can join
 Samba to:

 workgroup = mydomain
 password server = myPDC
 security = domain
 ...
 Join the NT domain as outlined in the winbindd man page for your
 version of samba.

 I don't have an AD/NT domain here. Surely I can use Samba to provide
 NTLM authentication for Squid, using the account information on the
 Samba server without having to create a whole MS-Windows based NT domain
 just for Samba to join to, can't I?  Is there a different NTLM example
 configuration for that?  I don't see one that seems to cover just using
 Samba alone as the NTLM authentication mechanism for Squid.

You don't need a domain. You need the NTLM hashes (each user can
create theirs using smbpasswd).

 If you do manage to find one, you will have to locate or write a NTLM
 authentication helper for Squid to use it. The PAM helper provided
 with Squid only supports Basic authentication.

 And TBH, I would be perfectly happy with Basic for these Windows users.
 Nobody is sniffing the network here.  But it seems that I cannot provide
 Negotiate only to my Linux/Kerberos using users and Basic to the Windows
 users.  The Windows users also end up getting offered Negotiate which is
 what opens up this whole NTLM can of worms.

No, you cannot. You can try setting up a kerberos realm and having
users authenticate against that, but I am not an expert there.


-- 
/kinkie


Re: [squid-users] Squid proxy lag

2013-11-29 Thread Kinkie
On Fri, Nov 29, 2013 at 3:42 PM, alamb200 alamb...@hotmail.com wrote:
 Hi,
 I have just installed a CentOS 6.4 server as a Hyper V guest on my Hyper V
 sever.
 I have given it 2gb of RAM and a Xeon 2.4Ghz processor to run Squid on a 30
 user network is this enough?
 Is there anything else I should be looking at to speed it up?

Hi alamb200,
  the answer is "it depends"; generally speaking, it depends on what
your users are doing.
For normal browsing habits, in absence of any bottlenecks such as
networking issues or other VMs competing for resources on the same
server that sizing could even be enough for 3000 active users.
  Why would you need to speed it up? Do you have any evidence of bad
performance?

-- 
/kinkie


Re: [squid-users] 403 response before 407

2013-11-28 Thread Kinkie
On Thu, Nov 28, 2013 at 2:12 PM, Brian J. Murrell br...@interlinx.bc.ca wrote:
 Is there any way with Squid3 to send back a 407 response when a request
 is unauthenticated and the response fails to match any http_reply_access
 rules?

Hi Brian,
  Squid's authentication process is explained at
http://wiki.squid-cache.org/Features/Authentication#How_does_Proxy_Authentication_work_in_Squid.3F

You can find your answer there.
  Kinkie


Re: [squid-users] is SPDY supported by squid ?

2013-11-26 Thread Kinkie
Hi,
  as I understand from several messages on the squid-dev mailing list,
SPDY is not going to be supported.
The first HTTP/2.0-related code is being debated and worked on right now.
If you are interested, you may want to join the squid-dev mailing
list. Contributions are always welcome :)

On Tue, Nov 26, 2013 at 4:20 PM, Dieter Bloms sq...@bloms.de wrote:
 Hi,

 I found http://wiki.squid-cache.org/Features/HTTP2 and I wonder if it is
 the actual state, that SPDY is planned for squid 3.5, or is it allready
 implemented in the actual version.


 --
 Regards

   Dieter

 --
 I do not get viruses because I do not use MS software.
 If you use Outlook then please do not put my email address in your
 address-book so that WHEN you get a virus it won't use my address in the
 From field.



-- 
/kinkie


Re: [squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Kinkie
On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase' michele.m...@gmail.com wrote:
 Problem: internet navigation is extremely slow.
 I've used squid from 1999 with no problems at all; during last month,
 one proxy gave me a lot of troubles.
 First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
 squid3.4.x with no improvements.
 Second, we have bypassed the Trend Micro Interscan proxy (the parent
 proxy) without success.
 Third: I do not know what to do.
 So what should be done?
 Some configuration improvements (sysctl/squid)?
 Could it be a network related problem? (bandwidth/delay/MTU/other)?

Hi Michele,
  "extremely slow" is quite a poor indication, unfortunately. Can you
measure it? E.g. by using apachebench (ab) or squidclient to time the
download of a large file and of a small file through the proxy, from
a local source and from a remote source, then repeating the same from
the box where Squid runs and then from a different box.

Think about what has remained unchanged since when there was no
performance problems: e.g. network cables, switches, routers etc.

   Kinkie


Re: [squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Kinkie
Hm...
  "Connection timed out" is an OS/TCP-IP issue. Can you try accessing
the same resource from the server itself (e.g. with wget, GET, or by
running firefox on the proxy server)?
It seems that there's something wrong at the network level: a faulty
ethernet card, switch, router, firewall or network line, or high
packet loss somewhere on the path to the affected server(s).


On Mon, Nov 25, 2013 at 3:01 PM, Michele Mase' michele.m...@gmail.com wrote:
 The analysis was made using firebug from different browsers, using
 different proxyes/
 direct access; with the problematic proxy, what I can see is an high
 waiting time, indicating that the header's page response is high.
 Therefore in cache.log i see many
 local=xx: remote=yy:zz FD  flags=1: read/write
 failure: (110) Connection timed out
 local=xx: remote=yy:zz FD  flags=1: read/write
 failure: (32) Broken pipe
 WARNING: Closing client connection due to lifetime timeout


 How could I test network related problems such as:
 Imix traffic limit
 Delay
 Bandwidth
 ?
 Michele Masè



 On Mon, Nov 25, 2013 at 1:57 PM, Kinkie gkin...@gmail.com wrote:
 On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase' michele.m...@gmail.com 
 wrote:
 Problem: internet navigation is extremely slow.
 I've used squid from 1999 with no problems at all; during last month,
 one proxy gave me a lot of troubles.
 First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
 squid3.4.x with no improvements.
 Second, we have bypassed the Trend Micro Interscan proxy (the parent
 proxy) without success.
 Third: I do not know what to do.
 So what should be done?
 Some configuration improvements (sysctl/squid)?
 Could it be a network related problem? (bandwidth/delay/MTU/other)?

 Hi Michele,
   extremely slow is quite a poor indication, unfortunately. Can you
 measure it? e.g. by using apachebench (ab) or squidclient to measure
 the time download a large file and a small file through the proxy from
 a local source and from a remote source. Then repeating the same from
 the box where Squid runs and then from a different box.

 Think about what has remained unchanged since when there was no
 performance problems: e.g. network cables, switches, routers etc.

Kinkie



-- 
/kinkie


Re: [squid-users] #Can't access certain webpages

2013-11-25 Thread Kinkie
On Mon, Nov 25, 2013 at 3:21 PM, Grooz, Marc (regio iT)
marc.gr...@regioit.de wrote:
 Hi,

 Currently I use Squid 3.3.8 and I can't use/access two webservers thru squid. 
 If I bypass squid this websites work great.

 One of this websites is a fileupload/download website with a generated 
 downloadlink. When I upload a file I receive the following Squidlog Entrys:

 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
 .
 .
 TCP_MISS_ABORTED/000 0 GET http:// w.y.x.z/cgi-bin/upload_status.cgi?
 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?

 And the downloadlink never gets generated.


 In the second case you never get a webpage back from squid. If I use lynx 
 from the commandline of the squid system the Webpage gets loaded.
 With a tcpdump I see that if squid makes the request then the Webserver 
 didn't answer.

Well, this is consistent with the behavior in squid's logs.
Have you tried accessing the misbehaving server from a client running
on the squid box, and comparing the differences in the network traces?


-- 
/kinkie


Re: [squid-users] Install Squid 3.3.10 on Slackware 14

2013-11-12 Thread Kinkie
On Tue, Nov 12, 2013 at 5:57 PM, Vukovic Ivan
ivan.vuko...@schlattergroup.com wrote:
 Hello

 Please i need help to ./configure, make and install Squid 3.3.10 on Slackware 
 14.0 I Installed Slackware 14 with this packets:
[...]
 gcc-4.7.1-i486-1
 gcc-g++-4.7.1-i486-1

 I can Boot and the Installation is ok.
 Now i want install Squid 3.3.10 on this Slackware 14 Installation but 
 everytime when i did the ./configure command, this error came:
 gcc error: C Compiler works ..no
 gcc -v command unrecognized
 gcc -qversion command unrecognized

 But!, here is the Point, when i install slackware 14 full (with all packages) 
 then i can ./configure, make and install squid 3.3.10 without any Problem.

 So,
 Which package of slackware 14 is missing to ./configure, make and install 
 Squid 3.3.10 Here's is the list of all slackware 14 included packages:
 http://mirror.netcologne.de/slackware/slackware-14.0/PACKAGES.TXT

 Please help me to get the squid install process working, Thanks!

This is a question for the Slackware developers...
The error message is however quite telling: it seems that your gcc
setup is not working.

I suggest you check config.log (it may contain additional
information) and run "gcc -v" to see what it prints. That may give
you a clue as to what's wrong with your C compiler.

-- 
/kinkie


Re: [squid-users] NTLM Auth helper issue

2013-10-25 Thread Kinkie
Hi Eric,
  you probably want to ask this question on the Samba lists; Squid
only uses the ntlm_auth service.

On Tue, Oct 22, 2013 at 6:52 PM, Eric Vanderveer
e...@ericvanderveer.com wrote:
 Hi everyone when trying to run /usr/bin/ntlm_auth
 --helper-protocol=squid-2.5-basic domain+user password all it does is
 just hang and does not give me an OK response.  I have checked the
 winbind logs and can't see anything.  Any ideas?

 Thanks
 Eric Vanderveer



-- 
/kinkie


Re: [squid-users] Subdirectory in reverse proxy

2013-10-25 Thread Kinkie
On Fri, Oct 25, 2013 at 4:12 PM, dweimer dwei...@dweimer.net wrote:
 On 10/25/2013 7:32 am, Martin Rieß wrote:

 Hi everyone.

 I’m trying to set up squid3 on pfSense to work as reverse proxy.
 I plan to have several servers behind squid/pfsense and I want to set up
 the
 reverse proxy the following way:

 http://FQDN/owa -- http://ms-server/owa
 http://FQDN/webshop - http://ubuntu-server/webshop
 http://FQDN/website - http://ubuntu-server/

 Can someone please tell me how to set the different routes to the
 subdirectories, either in the pfsense webinterface or directly in
 squid.conf?
 I know that with owa I will have to deal with https and other details, but
 first I have to solve the topic of these subdirectories.

 Thank you in advance,

 Martin


 You are needing to look for url_rewrite_program, and several related
 directives within squid, not sure if they have added that into the current
 pfSense package or not.  It could of course be manually done, but I would
 search for rewrite on the web configuration to see if it has been added, as
 it would make sense for a firewall product setup with reverse proxy support
 to include that feature.  The only pfSense systems I have are running on
 Alix boards, which lack the memory and CPU power to handle Squid so I
 haven't tested out the Squid package, on it myself.

If you don't need to rewrite the urlpath, then a carefully crafted
series of cache_peer_access lines should be enough, without the need
to code a custom rewriter. It should also be faster :)
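A sketch of that approach for the /owa and /webshop paths above (the peer names and ports are assumptions; note that the /website -> / mapping does change the urlpath, so that one still needs a rewrite or an equivalent trick):

```
# squid.conf sketch: route by URL path instead of rewriting
cache_peer ms-server     parent 80 0 no-query originserver name=owa_peer
cache_peer ubuntu-server parent 80 0 no-query originserver name=ubuntu_peer

acl path_owa  urlpath_regex ^/owa
acl path_shop urlpath_regex ^/webshop

cache_peer_access owa_peer    allow path_owa
cache_peer_access owa_peer    deny  all
cache_peer_access ubuntu_peer allow path_shop
cache_peer_access ubuntu_peer deny  all
```

Each request is then forwarded only to the peer whose cache_peer_access rules allow it, with no helper process in the path.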


-- 
/kinkie


Re: [squid-users] squid 3.3.9 with centos 6.4 , error in compiling !!!!!

2013-10-24 Thread Kinkie
Hi Ahmad,

 i dont know why it ask me about cppunit and it is downloaed already !!!

 here is rpm -ql reasult :
 [root@drx]# rpm -ql cppunit
[...]

You also need cppunit-devel

-- 
/kinkie


Re: [squid-users] squid with muliwan

2013-10-21 Thread Kinkie
Hi Adamso,
  you can find a few pointers (but not a complete howto) here:
http://wiki.squid-cache.org/SquidFaq/NetworkOptimizations

On Mon, Oct 21, 2013 at 6:57 AM, adamso seduicto...@yahoo.fr wrote:
 Hi guys,

 I am using squid with multi wan. I want to forward traffic in each wan
 according the lan subnet source or distination.
 Eg : traffic for yahoo go to wan1 and others wan2
 I try tcp_outgoing_address with virtual interface without succes

 Please help me

 Thanks



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-with-muliwan-tp4662760.html
 Sent from the Squid - Users mailing list archive at Nabble.com.



-- 
/kinkie


Re: [squid-users] Squid authentication stopped working

2013-09-25 Thread Kinkie
What kind of ntlm auth helper are you using? Samba's?

If so, the simplest reason I can think of without additional info is
that your machine account in AD went stale for some reason; can you
try rejoining the domain?

On Wed, Sep 25, 2013 at 6:27 PM, Eric Vanderveer
e...@ericvanderveer.com wrote:
 Hi,
I have been running squid, dansguardian and ntlm_authentication for
 about 2 months now with no problem.  This morning it stopped working.
 I can no longer surf and I get login pop ups on my window clients.  On
 the squid server I can see the domain and its users so I am connected.
  My cache.log is showing a lot of stuff but most of it is greek to me.
  Here is a snippet

 http://pastebin.com/YryKkC0J

 Any ideas?

 Thanks
 Eric Vanderveer



-- 
/kinkie


Re: [squid-users] Squid authentication stopped working

2013-09-25 Thread Kinkie
so it's kerberos, not ntlm, is it?

On Wed, Sep 25, 2013 at 6:52 PM, Eric Vanderveer
e...@ericvanderveer.com wrote:
 I already rejoined to the domain.  I checked to make sure and I can
 see the certificate when i do a klist.

 On Wed, Sep 25, 2013 at 12:45 PM, Kinkie gkin...@gmail.com wrote:
 What kind of ntlm auth helper are you using? Samba's?

 If so, othe simplest reason I can think of without additional info  is
 that your machine account in AD went stale for some reason.. can you
 try rejoining the domain?

 On Wed, Sep 25, 2013 at 6:27 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 Hi,
I have been running squid, dansguardian and ntlm_authentication for
 about 2 months now with no problem.  This morning it stopped working.
 I can no longer surf and I get login pop ups on my window clients.  On
 the squid server I can see the domain and its users so I am connected.
  My cache.log is showing a lot of stuff but most of it is greek to me.
  Here is a snippet

 http://pastebin.com/YryKkC0J

 Any ideas?

 Thanks
 Eric Vanderveer



 --
 /kinkie



-- 
/kinkie


Re: [squid-users] Squid authentication stopped working

2013-09-25 Thread Kinkie
can you do a ntlm_auth -v?

On Wed, Sep 25, 2013 at 6:54 PM, Eric Vanderveer
e...@ericvanderveer.com wrote:
 I am using /usr/bin/ntlm_auth with squid.

 On Wed, Sep 25, 2013 at 12:53 PM, Kinkie gkin...@gmail.com wrote:
 so it's kerberos, not ntlm, is it?

 On Wed, Sep 25, 2013 at 6:52 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 I already rejoined to the domain.  I checked to make sure and I can
 see the certificate when i do a klist.

 On Wed, Sep 25, 2013 at 12:45 PM, Kinkie gkin...@gmail.com wrote:
 What kind of ntlm auth helper are you using? Samba's?

 If so, othe simplest reason I can think of without additional info  is
 that your machine account in AD went stale for some reason.. can you
 try rejoining the domain?

 On Wed, Sep 25, 2013 at 6:27 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 Hi,
I have been running squid, dansguardian and ntlm_authentication for
 about 2 months now with no problem.  This morning it stopped working.
 I can no longer surf and I get login pop ups on my window clients.  On
 the squid server I can see the domain and its users so I am connected.
  My cache.log is showing a lot of stuff but most of it is greek to me.
  Here is a snippet

 http://pastebin.com/YryKkC0J

 Any ideas?

 Thanks
 Eric Vanderveer



 --
 /kinkie



 --
 /kinkie



-- 
/kinkie


Re: [squid-users] Squid authentication stopped working

2013-09-25 Thread Kinkie
That's the way NTLM is supposed to work: it requires two 407
challenge/denial exchanges for each new TCP connection.

On Wed, Sep 25, 2013 at 7:36 PM, Eric Vanderveer
e...@ericvanderveer.com wrote:
 I see The reply for POST http://somedomain.com is DENIED because it
 matched 'ntlm_auth' but then right after I see the same thing but it
 says is ALLOWED.

 On Wed, Sep 25, 2013 at 1:30 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 Still at a loss on this.  If anyone has an idea let me know.


 On Wed, Sep 25, 2013 at 12:57 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 I am assuming you mean -V and its Version 3.6.3

 On Wed, Sep 25, 2013 at 12:56 PM, Kinkie gkin...@gmail.com wrote:
 can you do a ntlm_auth -v?

 On Wed, Sep 25, 2013 at 6:54 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 I am using /usr/bin/ntlm_auth with squid.

 On Wed, Sep 25, 2013 at 12:53 PM, Kinkie gkin...@gmail.com wrote:
 so it's kerberos, not ntlm, is it?

 On Wed, Sep 25, 2013 at 6:52 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 I already rejoined to the domain.  I checked to make sure and I can
 see the certificate when i do a klist.

 On Wed, Sep 25, 2013 at 12:45 PM, Kinkie gkin...@gmail.com wrote:
 What kind of ntlm auth helper are you using? Samba's?

 If so, othe simplest reason I can think of without additional info  is
 that your machine account in AD went stale for some reason.. can you
 try rejoining the domain?

 On Wed, Sep 25, 2013 at 6:27 PM, Eric Vanderveer
 e...@ericvanderveer.com wrote:
 Hi,
I have been running squid, dansguardian and ntlm_authentication for
 about 2 months now with no problem.  This morning it stopped working.
 I can no longer surf and I get login pop ups on my window clients.  On
 the squid server I can see the domain and its users so I am connected.
  My cache.log is showing a lot of stuff but most of it is greek to me.
  Here is a snippet

 http://pastebin.com/YryKkC0J

 Any ideas?

 Thanks
 Eric Vanderveer



 --
 /kinkie



 --
 /kinkie



 --
 /kinkie



-- 
/kinkie


Re: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-15 Thread Kinkie
On Sun, Sep 15, 2013 at 2:51 PM, Carlos Defoe carlosde...@gmail.com wrote:
 I got the same result as Mohsen. The only thing that worked was adding
 ulimit -n mynumber to the init script.

 It was weird for me, because the script is run by root, not the squid
 user, and i thought ulimit -n applied only to the current logged in
 user. But I think it applies to any session that will start later.

Ulimits are inherited by all child processes; lowering them is always
possible, raising them may be an administrator-only action.
bash's manual (man 1 bash) has an informative chapter on ulimit.
Otherwise you may want to check setrlimit(2).
System-wide settings may be set in /etc/security/limits.conf (or
/etc/limits.conf, depending on your distro). Man 5 limits.conf has the
details (at least on my Ubuntu Raring system).
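A quick illustration of the inheritance rule, plus the system-wide knob (the squid username and the 16384 value below are illustrative):

```shell
# Lowering a limit in a child shell always succeeds, and that child's
# own children inherit the new value; raising it back may require root
( ulimit -n 256; ulimit -n )
# prints: 256

# System-wide defaults live in /etc/security/limits.conf, e.g.:
#   squid  soft  nofile  16384
#   squid  hard  nofile  16384
```

This is why an "ulimit -n" line in the init script works: the raised limit is inherited by the squid process the script starts, regardless of which user squid later runs as.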

   Kinkie


Re: [squid-users] Implementation details of of proxying?

2013-09-13 Thread Kinkie
On Fri, Sep 13, 2013 at 4:15 AM, Jeffrey Walton noloa...@gmail.com wrote:
 I'm looking for the implementation details of how squid proxies its
 connections. That is (in pseudo code):

 socket src = ... // client socket
 socket dest = ... // server socket

 int n = read(src, buffer)
 write(dest, buffer, n)

 I imagine its not that naive, and I'm really interested in the
 techniques squid uses to improve performance around that locus.

Hello.
Squid uses an event-driven approach, based on select(2) and its more
modern and efficient successors, depending on what the OS supports.
You can find most of the relevant code in the src/comm/ subdirectory
of the source tree, as well as in comm.cc.


-- 
/kinkie


Re: [squid-users] Re: Log Squid logs to Syslog server on the Network

2013-09-04 Thread Kinkie
On Wed, Sep 4, 2013 at 7:32 AM, Sachin Gupta ching...@gmail.com wrote:
 Hi,
Hi Sachin,

 Is there a way to log SQUID log messages to a Syslog server listening on the
 network?

Yes: http://wiki.squid-cache.org/Features/LogModules#Module:_System_Log
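The directive itself is one line in squid.conf; getting the records onto the network is then the syslog daemon's job. The facility/priority and the collector address below are illustrative:

```
# squid.conf: log accesses via syslog instead of a local file
access_log syslog:daemon.info squid

# then forward that facility to the remote collector,
# e.g. in rsyslog.conf:
#   daemon.info  @192.168.0.10:514
```

The `syslog:facility.priority` module and the forwarding rule are configured independently, so the same setup also works with syslog-ng or any other syslogd that can relay to a remote host.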


-- 
/kinkie


Re: [squid-users] Well, this is what I concluded about using squid!

2013-09-02 Thread Kinkie
Hi Firan,
   besides what Amos already said, I'd like to add a comment to this
last statement of yours:

 I will give squid a little more time but I think I will give up soon and 
 advise the party I installed squid for to go for another, commercial, cache 
 proxy.

The status of Squid as a Free/Open Source Software project doesn't
mean it can't be commercially supported.
In fact, on the Squid website itself there is a pretty extensive list
of companies providing commercial support for squid
(http://www.squid-cache.org/Support/services.html) .
I also expect that most commercial Linux distributions (e.g. Red Hat
Enterprise Linux, SuSE, Ubuntu..) will support Squid on their
respective distribution as part of their commercial support packages.
If that's still not enough, by now in most countries there is a
well-developed market of local integrators who - for a price - will
support FOSS; in many cases these are companies built by experienced
members of the FOSS community.
Finally, you can try contacting the Squid developers; some in the team
are offering consulting services (try getting direct access to
engineering from any commercial product).

From some of your statements it seems that you want the benefits of a
gratis product, but with the support infrastructure of a commercial
venture.
You want a powerful product in an enterprise- or telco-class
environment, but one which can be set up by an inexperienced admin.

I'm sorry to say this, but both sets of requirements are
contradictory. Setting up a support infrastructure has costs,
unfortunately, which can't be sustained by a gratis project (besides
the great community and developer support Squid is already offering);
likewise, a powerful product in a demanding environment requires an
experienced admin.

This said, thanks for your feedback.
As Amos already said, we try to do our best and to improve Squid and
make it even more powerful, flexible and easy to use than it is now.
I hope that the information he and I provided will help you build a
business case that supports your needs, with Squid or with any other
product on the market.

-- 
/kinkie


Re: [squid-users] Reverse Proxy - Multiple domains - Multiple wildcard certs?

2013-08-19 Thread Kinkie
On Mon, Aug 19, 2013 at 1:53 AM, PSA sima...@operamail.com wrote:
 I can't figure out how to serve multiple domains with a single squid
 server/single IP address.

 I am currently serving:

 api.domain.com
 www.domain.com
 status.domain.com

 with a *.domain.com ssl certificate.

 I now want to add:

 www.domain2.com
 api.domain2.com

 I can't see how to use a different ssl certificate though.

Hi,
  the technique for doing that exists; it's named SNI (Server Name
Indication). Unfortunately, as far as I can tell, Squid doesn't yet
support it. You're currently stuck with one certificate per IP
address; patches, or sponsorship to develop them, are of course more
than welcome :)

-- 
/kinkie


Re: [squid-users] Force to cache a Website for 5 minutes

2013-05-01 Thread Kinkie
On Mon, Apr 29, 2013 at 2:20 PM,  sauro...@gmx.de wrote:
 Hi list,
 im searching for a way to cache a specific site like

 http://testdomain.com/cacheme/

 with all subfolders and documents for a specific time. So don't matter what
 the user tries he won't get a newer version of the site.

 Is there a way?

Hi,
  you can check the refresh_pattern configuration directive and its arguments.
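A sketch for the URL above (min and max are in minutes; note that several of these options deliberately violate HTTP caching rules, which is exactly what "don't matter what the user tries" asks for):

```
# squid.conf: serve anything under /cacheme/ from cache for 5 minutes,
# ignoring client reloads and the server's own freshness hints
refresh_pattern -i ^http://testdomain\.com/cacheme/ 5 100% 5 override-expire override-lastmod ignore-reload
```

Order matters: this line must appear before any broader refresh_pattern rules (such as the default `.` pattern), since Squid uses the first pattern that matches.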

--
/kinkie


Re: [squid-users] HTML Realtime Report SqStat in SQUID

2013-04-11 Thread Kinkie
 When I do I access http://192.168.0.1/ Apache page appears, then I think that 
 is missing in some additional configuration may be due SQUID or modify any 
 part of the SqStat PHP code.

 First described above ask for your kind help to correct my problem.

Hi Daniel,
On the SqStat page, I see:


Copy file config.inc.php.defaults to config.inc.php, edit
config.inc.php to specify your squid proxy server IP and port.


If that doesn't work, you probably want to contact SqStat's author.

--
/kinkie


Re: [squid-users] squid-internal-mgr not found - cannot login to cachemgr

2013-04-11 Thread Kinkie
On Thu, Apr 11, 2013 at 2:28 AM, brendan kearney bpk...@gmail.com wrote:
 resending because i got a mailer-daemon failure for HTML formatting...

 all,

 i am running squid 3.2.5 on fedora 16 64 bit on two separate boxes,
 load balanced with HA Proxy.  i am trying to access cachemgr on either
 one of the squid instances, and both exhibit the behaviour where the
 squid-internal-mgr URI is not found.  attempts to login via the HA
 Proxy VIP as well as with no proxy configured (direct access) have
 been tried.  both ways produce the same error.  below is some header
 info:

 http://192.168.25.1/squid-internal-mgr/

[...]
 # 
 -
 #  TAG: http_port
 http_port 192.168.25.1:3128
[...]

 can anyone tell me why I am not able to log into the cachemgr?
 the page presents, but the login fails.  cachemgr.conf has the IP of
 both proxies listed, and /etc/httpd/conf.d/squid.conf has the right
 access allowed by network.  /usr/lib64/squid/cachemgr.cgi is chmod'd
 755 (rwxr-xr-x) and is chown'd root:root.

Hi Brendan,
 the reason is that Squid is listening on port 3128, while you're connecting
to the Apache server listening on port 80.
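
A minimal illustration, using the address from the post: the manager
interface has to be requested on Squid's own http_port, not Apache's port 80.

```
# wrong (hits Apache):  http://192.168.25.1/squid-internal-mgr/
# right (hits Squid):   http://192.168.25.1:3128/squid-internal-mgr/

# or from the command line:
squidclient -h 192.168.25.1 -p 3128 mgr:info
```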

--
/kinkie


Re: [squid-users] squid 3.x expected max throughput

2013-04-10 Thread Kinkie
On Tue, Apr 9, 2013 at 11:35 PM, Youssef Ghorbal d...@pasteur.fr wrote:
 Hello,

 Are there any recent figures about the max throughput to be expected from 
 a squid 3.x install (on recent hardware) in the scenario of a single stream 
 downloading a large file (> 1GB) (read: not cacheable)?
 I'm aware that's not a performance metric per se, but it's one of the 
 scenarios we have to deal with.

 A few weeks ago, Amos talked about 50Mb/s (client + server) for a squid 
 3.1

Hi,
  50 Mb/s seems a very conservative estimate to me; in that scenario
Squid is essentially acting as a network pipe.
Assuming this is a lab (and we can thus ignore bandwidth and latency
on the internet link), the expectation is that this kind of scenario
for squid will be CPU and network I/O bound, so in order to give any
sensible answer we'd need to know what kind of network interface you
would use (fast-ethernet? Giga-ethernet copper? Giga-ethernet fiber?
Even faster?), what kind of CPU and what kind of system architecture
(server-class? pc-class? virtual?)
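
As a back-of-envelope illustration of why the hardware details matter
(assumed figures, not measurements): if a single NIC carries both the client
and the server side of a non-cacheable stream, every byte crosses the same
wire twice, so the theoretical ceiling is about half the line rate.

```python
# Back-of-envelope sketch with assumed figures: a proxy relaying a
# non-cacheable stream over one shared NIC both receives and resends
# every byte, so the ceiling is roughly half the line rate.
nic_line_rate_mbps = 1000.0               # e.g. one gigabit interface
ceiling_mbps = nic_line_rate_mbps / 2     # same wire for both sides
observed_mbps = 50.0                      # figure quoted in this thread
headroom = ceiling_mbps / observed_mbps   # how far below the ceiling
print(ceiling_mbps, headroom)             # 500.0 10.0
```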


--
/kinkie


Re: [squid-users] squid 3.x expected max throughput

2013-04-10 Thread Kinkie
You probably want to check http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

Unfortunately the benchmarks reported are often expressed as RPS and
not bandwidth.



On Wed, Apr 10, 2013 at 1:21 PM, Youssef Ghorbal d...@pasteur.fr wrote:
 On Apr 10, 2013, at 10:37 AM, Kinkie gkin...@gmail.com wrote:

 On Tue, Apr 9, 2013 at 11:35 PM, Youssef Ghorbal d...@pasteur.fr wrote:
 Hello,

Is there any recent figures about max throughput to be expected from 
 a squid 3.x install (on recent hardware) in the scenario of a single stream 
 downloading a large file ( 1GB) (read not cacheable)
I'm aware that's not a performance metric per se, but it's one of 
 the scenarios we have to deal with.

Few weeks ago, Amos talked about 50Mb/s (client + server) for a 
 squid 3.1

 Hi,
  50mb/s seems a very conservative estimate to me; in that scenario
 Squid is essentially acting as a network pipe.
 Assuming this is a lab (and we can thus ignore bandwidth and latency
 on the internet link), the expectation is that this kind of scenario
 for squid will be CPU and network I/O bound, so in order to give any
 sensible answer we'd need to know what kind of network interface you
 would use (fast-ethernet? Giga-ethernet copper? Giga-ethernet fiber?
 Even faster?), what kind of CPU and what kind of system architecture
 (server-class? pc-class? virtual?)

 I agree, it's a lab and we can ignore the bandwidth and latency of the 
 Internet link. It's exactly as you said: a scenario where Squid is 
 essentially acting as a network pipe.
 Here is a summary of the hardware :
 - 1Gb ethernet NIC (copper) : the same of client and server traffic.
 - 16GB RAM
 - 2 CPUs Quad Core (Xeon E5420, 2.5GHz per core) : I think the many cores 
 are not relevant, since a single stream will eventually be handled by a single 
 core.
 - FreeBSD 8.3 amd64

 What I'm seeing right now is ~50Mb/s on the 3.1 release (as Amos said earlier), 
 which seems a very conservative estimate to me too, and I was seeking info on 
 what can be expected in a perfect world.
 If these are the best figures I can currently get, that's fine and I won't be 
 looking to optimise anything else. If it can do a lot better (let's say 10 
 times better) I'll try to invest time to reach this (upgrade to the latest 
 release, tune the configuration, tune the system, etc.)

 Youssef



-- 
/kinkie


Re: [squid-users] blog entry on core Squid concepts

2013-03-17 Thread Kinkie
Hi Kent,
 thanks :)
I've referenced your article from http://wiki.squid-cache.org/ExternalLinks

On Sun, Mar 17, 2013 at 11:39 AM, Kent Tong kent.tong...@gmail.com wrote:
 Hi,

 I've written a blog entry on Squid:
 http://kenttongmo.blogspot.com/2013/03/concepts-of-squid.html

 If it is considered useful, would an authorized editor like to add a link
 to it from the Squid wiki?

 Thanks!

 --
 Kent Tong
 IT author and consultant, child education coach



--
/kinkie


Re: [squid-users] Squid with external auth problems

2013-03-13 Thread Kinkie
On Thu, Mar 7, 2013 at 6:18 PM, Noc Phibee Telecom
n...@phibee-telecom.net wrote:
 Hi

 I have a problem with my Squid server. Squid works with NTLM
 authentication; it works very well, but accessing a website that also
 requires authentication creates problems.

 If I create an ACL with http_allow, the user gets:
  401 Unauthorized: Access is denied due to invalid credentials
 He doesn't get a login/password prompt.

 If I delete the http_allow, a login/password window opens, but I see the
 Active Directory domain;
 I think it doesn't send the right information to the web server.

 With direct Internet access, without the proxy, there are no problems.


 Anyone can help me ?

Hi Jerome,
more input is needed in order to help you.

Can you please extract at least the auth_param, acl and http_access
lines from your configuration, in the same order as they are written
in the config file?
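
For reference, a minimal sketch of how those directives usually fit together
for NTLM via Samba's ntlm_auth helper; the helper path, child count and ACL
name are assumptions, not your configuration:

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```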

Thank you.

-- 
/kinkie


Re: [squid-users] Is it possible use Squid and virtualization(xen or kvm) have a good performance ?

2013-03-07 Thread Kinkie
 root@backend4a:~# ifstat -b   (Kbps in / Kbps out, three successive samples)
   eth0:  8375.12/52575.00   9268.63/51068.23   7685.08/42964.82
   eth1:     0.44/0.00          0.44/0.00          0.44/0.00
   eth2:     4.31/0.00          3.73/0.00          3.71/0.00
   eth3: 43416.20/7713.50   40364.53/8733.16   45665.90/7254.55
   eth4:     0.00/0.00          0.00/0.00          0.00/0.00
   eth5:    43.38/31.68        13.40/15.26      1884.36/72.96
   eth6:   300.96/26.59         8.79/12.37         9.17/9.19

 You can see the CPU usage is about 40%, but the traffic is just 3~5MB/s.

It depends on your traffic patterns, but this is arguably high. What
was it like on the machines you replaced?

 This makes me doubt whether Squid is suitable for virtualization, or 
 whether the bad performance is caused by our configuration. Can anyone 
 who uses Squid in Xen comment?

Virtualization _does_ add overhead, especially for
data-transmission-intensive tasks such as caching. The average
overhead of virtualization is a 5-7% performance penalty; probably
caches make it worse but even doubling the average numbers we get a
10-15% expected performance drop.

I haven't used virtualization in a performance-sensitive environment.
In general I would recommend against going virtual if absolute
performance is important, also because caches in general (and Squid in
particular) are most at ease when they have very simple paths to the
disks.
If you're running on a VM, a common deployment scenario is that you're
running the VM virtual disks off a RAID or (even worse) off a NAS
(which is then again on RAID) or SAN.
Squid's data access patterns are especially unfriendly to these
scenarios: you get all sorts of misaligned writes, parity
recalculations, extra read/write load.

-- 
/kinkie


Re: [squid-users] Hello, can 'squidclient' check if a file is cached in the squid?

2013-01-22 Thread Kinkie
Hi He,
  Amos already replied to you.

URL=http://what.you.want.to.test/
squidclient $URL | fgrep X-Cache:

will tell you.
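
The check above can also be scripted; here is a small hypothetical helper
(not part of Squid or squidclient) that pulls the verdict out of raw response
headers, using the "HIT from host" / "MISS from host" X-Cache format
discussed in this thread:

```python
# Hypothetical helper: extract the cache verdict from raw response
# headers by looking at the X-Cache header that Squid adds.
def xcache_status(headers: str) -> str:
    for line in headers.splitlines():
        if line.lower().startswith("x-cache:"):
            value = line.split(":", 1)[1].strip()
            return value.split()[0].upper()   # e.g. "HIT" or "MISS"
    return "UNKNOWN"                          # header not present

print(xcache_status("HTTP/1.1 200 OK\nX-Cache: MISS from localhost"))  # MISS
```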

On Tue, Jan 22, 2013 at 8:15 AM, He, Qingsheng 2
qingsheng2...@sonymobile.com wrote:
 Hello,

 Anyone could help me?

 Thanks.

 BRs,
 Qingsheng.

 -Original Message-
 From: He, Qingsheng 2
 Sent: Thursday, January 10, 2013 2:43 PM
 To: 'Amos Jeffries'; squid-users@squid-cache.org
 Subject: RE: [squid-users] Hello, can 'squidclient' check if a file is cached 
 in the squid?

 Hello Amos,

 Sorry, I am a new subscriber for the mailing list.
 I am not sure how to raise a question.

 About using squidclient to check if a file (URL) has been cached by 
 Squid: I just made a test, but it doesn't seem to work as I expected.

 # squidclient -t 1 -h SquidServerDNS -p 80 $url

 It return 'X-Cache: MISS from localhost'.

 But actually the file has been cached, since wget downloads the file 
 very fast.
 If I use ICP to query it, it returns 'UDP_HIT'.

 Do you know why?
 Thanks.

 He Qingsheng


 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Thursday, January 10, 2013 2:20 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Hello, can 'squidclient' check if a file is cached 
 in the squid?

 On 10/01/2013 7:06 p.m., He, Qingsheng 2 wrote:
 Hello all,

 Can 'squidclient' check if a file is cached in the squid?
 Thanks.


 He Qingsheng


 Please do not hijack other peoples threads with unrelated topics.
 Yes it can. squidclient $URL | more and look for X-Cache: header contents.

 Amos



--
/kinkie

