Re: [squid-users] compile error on Squid 3.1.18

2011-12-07 Thread Víctor José Hernández Gómez

Hi everybody,

the problem related to ssl detection still stands in 3.1.18 branch.

/usr/local/src/squid-3.1.18/src/ssl/gadgets.cc:110: undefined reference to `X509_set_issuer_name'
/usr/local/src/squid-3.1.18/src/ssl/gadgets.cc:117: undefined reference to `X509_set_notAfter'
/usr/local/src/squid-3.1.18/src/ssl/gadgets.cc:122: undefined reference to `X509_set_subject_name'
/usr/local/src/squid-3.1.18/src/ssl/gadgets.cc:127: undefined reference to `X509_set_pubkey'


This time I have tried gcc 4.4.6 and OpenSSL 0.9.8r (on a RedHat 6.2 machine).
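Since these are link-time errors, a useful first step is to check whether the libcrypto the linker picks up actually exports the four symbols. A minimal sketch follows; the library path in the comment is an assumption, and the grep is demonstrated against a canned sample of healthy nm output rather than a live library:

```shell
# On a real system you would run something like (path is an assumption):
#   nm -D /usr/local/ssl/lib/libcrypto.so | grep ' T X509_set_'
# Here the same grep is shown against a canned sample of nm output, which is
# what a libcrypto that defines the four missing symbols should report:
sample='0001a2b0 T X509_set_issuer_name
0001a2f0 T X509_set_notAfter
0001a330 T X509_set_pubkey
0001a370 T X509_set_subject_name
0001a3b0 T X509_sign'
printf '%s\n' "$sample" | grep -c ' T X509_set_'   # counts the four exported symbols
```

If the real nm output shows the symbols but linking still fails, the problem is usually which library (or which order) appears on the link command line.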


Let me know if I can help you somehow to bypass this compilation error.

Best regards,
--
Víctor J. Hernández Gómez





My problem (a link problem) was solved when I inverted the order of
SSLLIB and SSL_LIBS inside src/Makefile

squid/squid-3.1.18  diff src/Makefile.org src/Makefile
1893c1893
< $(SSLLIB) $(SSL_LIBS) -lmiscutil $(EPOLL_LIBS) $(MINGW_LIBS) \
---
> $(SSL_LIBS) $(SSLLIB) -lmiscutil $(EPOLL_LIBS) $(MINGW_LIBS) \
squid/squid-3.1.18 

I'm using gcc-3.4.6.
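The swap above can be expressed as a one-line sed edit. Below it is demonstrated on the literal link line rather than on a real src/Makefile; if you apply it to a real tree, back the file up first (e.g. `sed -i.org '…' src/Makefile` with GNU sed):

```shell
# Swap the order of $(SSLLIB) and $(SSL_LIBS) on the squid link line,
# demonstrated on the literal line from the diff above:
line='$(SSLLIB) $(SSL_LIBS) -lmiscutil $(EPOLL_LIBS) $(MINGW_LIBS) \'
printf '%s\n' "$line" | sed 's/$(SSLLIB) $(SSL_LIBS)/$(SSL_LIBS) $(SSLLIB)/'
```

The order matters because the linker resolves symbols left to right: a library providing the X509_* functions must appear after the objects and libraries that reference them.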


Re: [squid-users] compile error on Squid 3.1.18

2011-12-07 Thread Víctor José Hernández Gómez

Hi Jose,


My problem (a link problem) was solved when I inverted the order of
SSLLIB and SSL_LIBS inside src/Makefile

squid/squid-3.1.18  diff src/Makefile.org src/Makefile
1893c1893
< $(SSLLIB) $(SSL_LIBS) -lmiscutil $(EPOLL_LIBS) $(MINGW_LIBS) \
---
> $(SSL_LIBS) $(SSLLIB) -lmiscutil $(EPOLL_LIBS) $(MINGW_LIBS) \
squid/squid-3.1.18 



Thank you very much!!! It works wonders now!!

Regards,
--
Víctor



Re: [squid-users] Compiling squid-3.1.15 + openssl 1.0.0c

2011-09-06 Thread Víctor José Hernández Gómez



I am trying to compile squid from source, using 3.1.15 and openssl
1.0.0c...

Server: RedHat 5.5 x86_64
GCC version is 4.1.2



Small mistake on the fixes for OpenSSL 1.0.0d problems.
Please try this patch:

http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11702.patch


I am getting the same error.

Can I help you any other way?

--
Víctor


Re: [squid-users] Compiling squid-3.1.15 + openssl 1.0.0c

2011-09-06 Thread Víctor José Hernández Gómez


 I am trying to compile squid from source, using 3.1.15 and openssl
 1.0.0c...

 Server: RedHat 5.5 x86_64
 GCC version is 4.1.2

 Small mistake on the fixes for OpenSSL 1.0.0d problems.
 Please try this patch:

 
http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11702.patch


 I am getting the same error.

 Can I help you any other way?

 Doh. I should have checked the filename. (you will need that patch when
 you get to certificates_db.cc anyway).


 Yes, if you can find out whether those functions are actually defined in
 your openssl headers and if so why they are not being included from
 openssl/ssl.h it would help a lot.

 Amos

The error:

/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:108: undefined reference to `X509_set_issuer_name'
/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:115: undefined reference to `X509_set_notAfter'
/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:120: undefined reference to `X509_set_subject_name'
/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:125: undefined reference to `X509_set_pubkey'


The functions are declared in x509.h, located in /usr/local/ssl/include/openssl:

x509.h:int X509_set_issuer_name(X509 *x, X509_NAME *name);
x509.h:int X509_set_notAfter(X509 *x, const ASN1_TIME *tm);
x509.h:int X509_set_subject_name(X509 *x, X509_NAME *name);
x509.h:int X509_set_pubkey(X509 *x, EVP_PKEY *pkey);

The ssl.h file includes openssl/x509.h, but I cannot work out under which conditions the inclusion takes effect. Please let me know how to check it. May I send you ssl.h?
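One way to see whether the inclusion chain works is to preprocess a one-line file and look for x509.h among the headers actually pulled in. The include path below is an assumption, and the live command is left in a comment; the grep itself is shown against a canned line of preprocessor output:

```shell
# Live check (requires gcc and the headers; -I path is an assumption):
#   echo '#include <openssl/ssl.h>' | gcc -E -I/usr/local/ssl/include - | grep -m1 'openssl/x509\.h'
# The preprocessor emits "# <line> <file>" markers for every header it opens;
# here the grep is demonstrated on one such canned marker line:
printf '%s\n' '# 1 "/usr/local/ssl/include/openssl/x509.h" 1' | grep -c 'openssl/x509\.h'
```

Note that even if the right x509.h is included, compilation succeeding and linking failing are consistent: the compiler only needs the declarations, while "undefined reference" comes from the linker when no library on the command line defines the symbols.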


Thanks,
--
Víctor



[squid-users] Compiling squid-3.1.15 + openssl 1.0.0c

2011-09-05 Thread Víctor José Hernández Gómez

Dear squid users,

I am trying to compile squid from source, using 3.1.15 and openssl 
1.0.0c...


Server: RedHat 5.5 x86_64
GCC version is 4.1.2

Error: (make)

/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:108: undefined reference to `X509_set_issuer_name'
/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:115: undefined reference to `X509_set_notAfter'
/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:120: undefined reference to `X509_set_subject_name'
/usr/local/src/squid-3.1.15/src/ssl/gadgets.cc:125: undefined reference to `X509_set_pubkey'

collect2: ld returned 1 exit status
libtool: link: rm -f .libs/squidS.o
make[3]: *** [squid] Error 1
make[3]: Leaving directory `/usr/local/src/squid-3.1.15/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/src/squid-3.1.15/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/squid-3.1.15/src'
make: *** [all-recursive] Error 1

Squid 3.1.10 is working ok in the same box with openssl 1.0.0c, also 
from source.


Any idea would be welcome.

Thank you in advance for your help.
--
Víctor J. Hernández


[squid-users] SSL traffic

2011-04-05 Thread Víctor José Hernández Gómez

Dear squid users,

we once measured the percentage of bandwidth devoted to SSL in our Squid installation, and it was about 10 percent of total traffic.


SSL is not cacheable, and I think its use is increasing. I wonder if there is any experience with Squid using SSL engines (hardware devices) via OpenSSL to get better behaviour (that is, better performance) for SSL traffic.


Any other idea regarding SSL treatment would be very welcome (parameter tuning on the OS, Squid, or OpenSSL, etc.).


We are now using a 3.1.12 installation, with openssl 1.0.0

Regards,
--
Víctor J. Hernández Gómez



Re: [squid-users] SSL traffic

2011-04-05 Thread Víctor José Hernández Gómez

On 05/04/11 10:31, Amos Jeffries wrote:

On 05/04/11 20:01, Víctor José Hernández Gómez wrote:

Dear squid users,

we once measured the percentage of bandwidth devoted to SSL
in our Squid installation, and it was about 10 percent of total traffic.

SSL is not cacheable, and I think its use is increasing. I wonder if
there is any experience with Squid using SSL engines (hardware
devices) via OpenSSL to get better behaviour (that is, better
performance) for SSL traffic.


What do you think Squid would do with such hardware? HTTPS traffic is
encrypted/decrypted by the client and server. Squid just shuffles their
pre-encrypted bytes to and fro.



I thought that --enable-ssl and --with-openssl compilation options would 
provide squid with the ability to use openssl functions to treat SSL 
traffic. In such a case, operating with hardware instead of software 
would accelerate squid. I see that is not the case.




Any other idea regarding SSL treatment would be very welcome (parameter
tuning on the OS, Squid, or OpenSSL, etc.)



If Squid is permitted to see the HTTP requests inside the SSL, they are
usually as cacheable as non-SSL requests.

Please help us encourage the browser developers to make SSL links to a
trusted SSL-enabled proxy and pass the requests to it. Then we can all
benefit from improved HTTPS speeds.


For now, when tunneling, Squid performs as well as non-caching proxies. In
situations where the ssl-bump feature can be used, it works slower but
cache HITs become possible.
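For reference, a minimal ssl-bump configuration for the 3.1 series might look like the sketch below. The option spelling (`sslBump` in 3.1, renamed `ssl-bump` in 3.2) and the certificate/key paths are assumptions; verify them against your version's documentation before use:

```
# squid.conf sketch (3.1-era syntax; paths are placeholders)
http_port 3128 sslBump cert=/usr/local/squid/etc/example-cert.pem key=/usr/local/squid/etc/example-key.pem
ssl_bump allow all
```

Bumping lets Squid see and cache the decrypted HTTP requests, at the cost of extra crypto work and the need for clients to trust the proxy's certificate.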


Thank you for your help.
--
Víctor J. Hernández Gómez


[squid-users] http_pipelining prefetch ..

2011-03-23 Thread Víctor José Hernández Gómez

Dear squid-users,

I have been reading about pipelining
(http://en.wikipedia.org/wiki/HTTP_pipelining), and we are thinking of
changing the pipeline_prefetch directive to on in our Squid-based
installation, but have found some notes which discourage us from trying it


Notes: (Now defaults to off for bandwidth management and access logging 
reasons)


Can someone give a more detailed explanation of this? Is it really worth
trying?
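For reference, enabling it in the 3.1 series is a one-line squid.conf change (in later Squid releases the directive takes a number of requests to prefetch instead of on/off; check your version's documentation):

```
# squid.conf: let Squid start fetching the next pipelined request early.
# Off by default because it can skew per-request bandwidth management and
# the ordering of access.log entries.
pipeline_prefetch on
```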


Thank you in advance for your help,
--
Víctor J. Hernández Gómez
Servicio de Aplicaciones Transversales y Grandes Sistemas
Centro de Informática y Comunicaciones
Universidad Pablo de Olavide, de Sevilla.
e-mail: vjher...@cic.upo.es, Tlfno: 954977903 , Directo: 954349260


[squid-users] work on partition used for cache dir

2011-03-13 Thread Víctor José Hernández Gómez

Hi all,

I had planned some work on the ext3 partition used for cache data in our
Squid 3.1.11 installation; the partition had to be unmounted and remounted
after modifying some filesystem parameters.


So I thought the following could work:

 1. comment cache_dir line on squid.conf
 2. add cache deny all
 3. (reconfigure) squid -k reconfigure

(so users could keep browsing, and ...) If Squid is not using the cache data at
all, then I could modify the ext3 params on the partition...


 4. work on partition
 5. umount partition
 6. mount
 7. undo changes on squid.conf and reconfigure

This was not the case: before I could unmount the partition, Squid was
still holding the swap.state file open (lsof /partition warned me), so I
had to undo the work on the partition and the changes to squid.conf...


Which assumption was wrong? Should Squid free the file descriptor for the
swap.state file? Any other approach to get what we need?
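One way to make the unmount possible is to stop Squid completely for the maintenance window: 3.1 keeps swap.state open across `-k reconfigure`, so only a full stop releases the descriptor. A sketch follows, defined as a shell function rather than executed; the cache path and the tune2fs step are placeholders to adapt:

```shell
# Hypothetical maintenance procedure; /san/cache and the tune2fs option
# are placeholders for your real partition and parameter change.
cache_fs_maintenance() {
    squid -k shutdown                     # full stop releases swap.state
    while lsof /san/cache >/dev/null 2>&1; do
        sleep 1                           # wait until no process holds files there
    done
    umount /san/cache
    tune2fs -o journal_data_writeback /dev/cachevol   # example ext3 tweak
    mount /san/cache
    squid                                 # start Squid again
}
```

The lsof loop is the safeguard that was missing in the reconfigure-based attempt: it confirms nothing still holds files on the partition before umount runs.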


Thank you in advance for your help,
--
Víctor J. Hernández Gómez



Re: [squid-users] Asking for advice on obtaining a better throughput.

2011-03-11 Thread Víctor José Hernández Gómez

Hi again,


I am using one box (4 GB RAM, modern multicore CPU) for a mono-instance
proxy-only (non-caching) Squid 3.1.9, serving about 2500 clients.

CPU is never over 30%, and vmstat output does not show any swapping.

1) The configuration of the instance is very simple indeed:

# --
# Recommended minimum configuration:
# Recommended access permissions, acls, ... etc...


#
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

cache_dir aufs /san/cache 241904 64 512 - NOt used
cache deny all


NP: with a cache_dir configured it *is* used. The cache deny all means
that new data will not be added AND that existing data will be removed
when detected as stale or a variant collision.



# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

# More rules
cache_mem 512 MB
# --

2) TCP is also tuned for performance, timeout optimization, etc.

3) At a high-load moment I invoke the following command:

squidclient -p 8080 mgr:60min

which shows:

sample_start_time = 1294827593.936461 (Wed, 12 Jan 2011 10:19:53 GMT)
sample_end_time = 1294831195.510801 (Wed, 12 Jan 2011 11:19:55 GMT)
client_http.requests = 176.550847/sec
client_http.hits = 0.00/sec
client_http.errors = 0.337075/sec
client_http.kbytes_in = 401.282290/sec
client_http.kbytes_out = 4950.528662/sec
client_http.all_median_svc_time = 0.121063 seconds
client_http.miss_median_svc_time = 0.121063 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.00 seconds
client_http.hit_median_svc_time = 0.00 seconds
server.all.requests = 175.215042/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 4939.614269/sec
server.all.kbytes_out = 226.921319/sec
server.http.requests = 167.784681/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 4441.452123/sec
server.http.kbytes_out = 167.159676/sec
server.ftp.requests = 0.033319/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 24.191365/sec
server.ftp.kbytes_out = 0.001944/sec
server.other.requests = 7.397043/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 473.970780/sec
server.other.kbytes_out = 59.759977/sec
icp.pkts_sent = 0.00/sec
icp.pkts_recv = 0.00/sec
icp.queries_sent = 0.00/sec
icp.replies_sent = 0.00/sec
icp.queries_recv = 0.00/sec
icp.replies_recv = 0.00/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.00/sec
icp.kbytes_sent = 0.00/sec
icp.kbytes_recv = 0.00/sec
icp.q_kbytes_sent = 0.00/sec
icp.r_kbytes_sent = 0.00/sec
icp.q_kbytes_recv = 0.00/sec
icp.r_kbytes_recv = 0.00/sec
icp.query_median_svc_time = 0.00 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 0.030792 seconds
unlink.requests = 0.00/sec
page_faults = 0.000278/sec
select_loops = 7354.368257/sec
select_fds = 6158.774443/sec
average_select_fd_period = 0.00/fd
median_select_fds = 0.00
swap.outs = 0.00/sec
swap.ins = 0.00/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 3.468483/sec
syscalls.disk.opens = 0.187418/sec
syscalls.disk.closes = 0.185474/sec
syscalls.disk.reads = 0.00/sec
syscalls.disk.writes = 0.101622/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 0.101622/sec
syscalls.sock.accepts = 73.385407/sec
syscalls.sock.sockets = 63.842636/sec
syscalls.sock.connects = 63.166821/sec
syscalls.sock.binds = 63.842636/sec
syscalls.sock.closes = 102.078415/sec
syscalls.sock.reads = 3467.104333/sec
syscalls.sock.writes = 2584.626089/sec
syscalls.sock.recvfroms = 20.185062/sec
syscalls.sock.sendtos = 12.178285/sec
cpu_time = 709.070205 seconds
wall_time = 3601.574340 seconds
cpu_usage = 19.687785%

So the system is serving 175 requests per second (60 min. average), if I
am reading the output correctly.


Well ... client_http.requests = 176.550847/sec



4) Looking at the actual connections, I found (extracted from netstat
output):
8 CLOSE_WAIT
3 CLOSING
4216 ESTABLISHED
100 FIN_WAIT1
24 FIN_WAIT2
45 LAST_ACK
5 LISTEN
43 SYN_RECV
46 SYN_SENT
2273 TIME_WAIT

there are many connections, but not so many that they cannot be handled by
the system, I guess.

However, network throughput is about 50 Mbps. Let us look at the iptraf
output (general statistics for my 1 Gb eth0 interface):

Total rates: 92146.8 kbits/sec and 13393.6 packets/sec
Incoming rates: 43702.1 kbits/sec and 7480 packets/sec
Outgoing rates: 48464 kbits/sec and 5913.6 packets/sec

The users, however, do not get a really good browsing experience when
these values are reached (at high-load moments), and I am not able to
discover where the bottleneck is (Squid itself, the OS, the network,
drivers, parameter tuning).
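As a sanity check, the cachemgr and iptraf numbers are at least consistent with each other: on a non-caching proxy, incoming NIC traffic should roughly equal server.all.kbytes_in plus client_http.kbytes_in. A quick arithmetic sketch, using values rounded from the mgr:60min output above:

```shell
# Incoming side of the NIC ~= bytes from origin servers + bytes from clients.
server_in=4939   # server.all.kbytes_in, KB/s (rounded)
client_in=401    # client_http.kbytes_in, KB/s (rounded)
# KB/s -> kbit/s: * 1024 bytes per KB, * 8 bits per byte, / 1000
echo $(( (server_in + client_in) * 1024 * 8 / 1000 ))   # prints 43745
```

That is within 0.1% of iptraf's ~43702 kbit/s incoming rate, so the counters agree and the bottleneck is more likely latency (per-request service time) than raw bandwidth accounting.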

Any idea on how to get better throughput with this equipment? Any idea
about subsystem configuration, or ...

Re: [squid-users] xcalloc Fatal (squid 3.1.9)

2011-03-09 Thread Víctor José Hernández Gómez



I have found a message such as:

FATAL: xcalloc: Unable to allocate 1 blocks of 536870912 bytes!
Looks like a strange big block.

My squid version is 3.1.9. Any suggestion?


Probably bug http://bugs.squid-cache.org/show_bug.cgi?id=3113

Please try an upgrade to 3.1.11 to resolve that and a few other smaller
leaks.


Thank you, Amos; I have already upgraded Squid to 3.1.11.

In 3.1.9 we also got warning messages of this kind:
squidaio_queue_request: WARNING - Queue congestion (they were not very
frequent).


So I raised aufs_threads from 100 to 192 at build time. Even with very
little activity I have received the same warning again. Any idea or
recommendation? Maybe 192 threads is too high a value?
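For reference, the aufs thread count is indeed fixed at build time. The flag name below is taken from squid's `./configure --help`; verify it on your own tree. It is shown as a string rather than executed:

```shell
# Build-time knob (flag per squid's ./configure --help; verify locally).
configure_cmd='./configure --enable-storeio=aufs,ufs --with-aufs-threads=192'
echo "$configure_cmd"
```

Note that the "Queue congestion" warning threshold scales with the queue history, so occasional occurrences after a restart are not necessarily a sign that the thread count is wrong.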


Regards,
--
Víctor



[squid-users] xcalloc Fatal (squid 3.1.9)

2011-03-08 Thread Víctor José Hernández Gómez

Hi all,

I have found a message such as:

FATAL: xcalloc: Unable to allocate 1 blocks of 536870912 bytes!
Looks like a strange big block.

My squid version is 3.1.9. Any suggestion?

Regards,
--
Víctor Hernández
Centro de Informatica y Comunicaciones


[squid-users] warning: http reply without date

2011-02-03 Thread Víctor José Hernández Gómez

Hi all,

after some time with a proxy only configuration:

cache_dir aufs /san/cache 241904 64 512
cache deny all

we have changed to :

cache_dir aufs /san/cache 241904 64 512
#cache deny all

and started to see messages such as:

WARNING: An error inside Squid has caused an HTTP reply without Date

Is it normal? Should I delete swap.state?
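If the objects cached during the "cache deny all" period are suspect, one recovery path is to stop Squid, clear the cache_dir, and let `squid -z` rebuild the directory structure. This is destructive (all cached objects are lost); the sketch below uses an assumed path and is defined as a function rather than executed:

```shell
# Hypothetical, destructive reset of the cache_dir; /san/cache is a placeholder.
rebuild_cache_dir() {
    squid -k shutdown
    rm -rf /san/cache/*        # removes swap.state and all cached objects
    squid -z                   # recreate the cache_dir directory structure
    squid                      # start Squid again
}
```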

Thank you in advance for your help,
--
Víctor J. Hernández Gómez


Re: [squid-users] Asking for advice on obtaining a better throughput.

2011-01-13 Thread Víctor José Hernández Gómez

Hi all,

Thank you for your quick reply..

[ rest of message skipped for clarity ..]

cache_dir aufs /san/cache 241904 64 512 - NOt used
cache deny all


NP: with a cache_dir configured it *is* used. The cache deny all means
that new data will not be added AND that existing data will be removed
when detected as stale or a variant collision.


Oops, what should I do if I do NOT want to use any disk at all?

[ .]

3.1.10 may be worth trialling. We have just had some surprising
benchmarks submitted for it. On a basic config it seems to perform a few
dozen % RPS faster than 3.1.9. Reason unclear, but the big change
between them was better memory management and better validation support.

I say trial because under high load the benchmarker found some CPU
problems we have not yet isolated and fixed. I hope you will experience
a similar improvement without the problems.

[...]

I will be following your advice (trying 3.1.10) in one or two weeks. I 
will keep you informed.


Would a one-frontend/several-backends multi-instance configuration help
in a situation such as the one I explained? What about changing congestion
control schemes? Any experiences on this particular subject?


Thank you again,
--
Víctor





[squid-users] Asking for advice on obtaining a better throughput.

2011-01-12 Thread Víctor José Hernández Gómez

Hi all,

I am using one box (4 GB RAM, modern multicore CPU) for a mono-instance
proxy-only (non-caching) Squid 3.1.9, serving about 2500 clients.


CPU is never over 30%, and vmstat output does not show any swapping.

1) The configuration of the instance is very simple indeed:

# --
# Recommended minimum configuration:
# Recommended access permissions, acls, ... etc...


#
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

cache_dir aufs /san/cache 241904 64 512 - NOt used
cache deny all

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

# More rules
cache_mem 512 MB
# --

2) TCP is also tuned for performance, timeout optimization, etc.

3) At a high-load moment I invoke the following command:

squidclient -p 8080 mgr:60min

which shows:

sample_start_time = 1294827593.936461 (Wed, 12 Jan 2011 10:19:53 GMT)
sample_end_time = 1294831195.510801 (Wed, 12 Jan 2011 11:19:55 GMT)
client_http.requests = 176.550847/sec
client_http.hits = 0.00/sec
client_http.errors = 0.337075/sec
client_http.kbytes_in = 401.282290/sec
client_http.kbytes_out = 4950.528662/sec
client_http.all_median_svc_time = 0.121063 seconds
client_http.miss_median_svc_time = 0.121063 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.00 seconds
client_http.hit_median_svc_time = 0.00 seconds
server.all.requests = 175.215042/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 4939.614269/sec
server.all.kbytes_out = 226.921319/sec
server.http.requests = 167.784681/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 4441.452123/sec
server.http.kbytes_out = 167.159676/sec
server.ftp.requests = 0.033319/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 24.191365/sec
server.ftp.kbytes_out = 0.001944/sec
server.other.requests = 7.397043/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 473.970780/sec
server.other.kbytes_out = 59.759977/sec
icp.pkts_sent = 0.00/sec
icp.pkts_recv = 0.00/sec
icp.queries_sent = 0.00/sec
icp.replies_sent = 0.00/sec
icp.queries_recv = 0.00/sec
icp.replies_recv = 0.00/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.00/sec
icp.kbytes_sent = 0.00/sec
icp.kbytes_recv = 0.00/sec
icp.q_kbytes_sent = 0.00/sec
icp.r_kbytes_sent = 0.00/sec
icp.q_kbytes_recv = 0.00/sec
icp.r_kbytes_recv = 0.00/sec
icp.query_median_svc_time = 0.00 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 0.030792 seconds
unlink.requests = 0.00/sec
page_faults = 0.000278/sec
select_loops = 7354.368257/sec
select_fds = 6158.774443/sec
average_select_fd_period = 0.00/fd
median_select_fds = 0.00
swap.outs = 0.00/sec
swap.ins = 0.00/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 3.468483/sec
syscalls.disk.opens = 0.187418/sec
syscalls.disk.closes = 0.185474/sec
syscalls.disk.reads = 0.00/sec
syscalls.disk.writes = 0.101622/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 0.101622/sec
syscalls.sock.accepts = 73.385407/sec
syscalls.sock.sockets = 63.842636/sec
syscalls.sock.connects = 63.166821/sec
syscalls.sock.binds = 63.842636/sec
syscalls.sock.closes = 102.078415/sec
syscalls.sock.reads = 3467.104333/sec
syscalls.sock.writes = 2584.626089/sec
syscalls.sock.recvfroms = 20.185062/sec
syscalls.sock.sendtos = 12.178285/sec
cpu_time = 709.070205 seconds
wall_time = 3601.574340 seconds
cpu_usage = 19.687785%

So the system is serving 175 requests per second (60 min. average), if I
am reading the output correctly.


4) Looking at the actual connections, I found (extracted from netstat output):
  8 CLOSE_WAIT
  3 CLOSING
   4216 ESTABLISHED
100 FIN_WAIT1
 24 FIN_WAIT2
 45 LAST_ACK
  5 LISTEN
 43 SYN_RECV
 46 SYN_SENT
   2273 TIME_WAIT

there are many connections, but not so many that they cannot be handled by
the system, I guess.


However, network throughput is about 50 Mbps. Let us look at the iptraf
output (general statistics for my 1 Gb eth0 interface):


Total rates: 92146.8 kbits/sec and 13393.6 packets/sec
Incoming rates: 43702.1 kbits/sec and 7480 packets/sec
Outgoing rates: 48464 kbits/sec and 5913.6 packets/sec

The users, however, do not get a really good browsing experience when
these values are reached (at high-load moments), and I am not able to
discover where the bottleneck is (Squid itself, the OS, the network,
drivers, parameter tuning).


Any idea on how to get better throughput with this equipment? Any idea
about subsystem configuration, or the general configuration of the Squid software?


Thank you in advance for your help, and congratulations for your great