Hello Amos

Thank you for your answer. Here is the information.


MTU:

I've checked this following your suggestion, and there appear to be no MTU 
issues with the upload sites: 1472 bytes of ICMP payload plus 28 bytes of 
headers gives full 1500-byte packets, and they go through with DF set, as the 
ping output below shows:

[r...@fw01-sao ~]# ping -c 5 -M do -s 1472 discovirtual.terra.com.br
PING produtos.terra.com.br (200.154.56.65) 1472(1500) bytes of data.
1480 bytes from produtos.terra.com.br (200.154.56.65): icmp_seq=1 ttl=245 
time=75.1 ms
<snip>
1480 bytes from produtos.terra.com.br (200.154.56.65): icmp_seq=5 ttl=245 
time=72.1 ms

--- produtos.terra.com.br ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 72.081/73.193/75.138/1.157 ms
 [r...@fw01-sao ~]# ping -c 5 -M do -s 1472 www.freeaspupload.net
PING www.freeaspupload.net (208.106.217.3) 1472(1500) bytes of data.
1480 bytes from innerstrengthfit.com (208.106.217.3): icmp_seq=1 ttl=113 
time=230 ms
<snip>
1480 bytes from innerstrengthfit.com (208.106.217.3): icmp_seq=5 ttl=114 
time=233 ms

--- www.freeaspupload.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 227.849/230.390/233.794/2.042 ms
 [r...@fw01-sao ~]# ping -c 5 -M want -s 1472 discovirtual.terra.com.br  
PING produtos.terra.com.br (200.154.56.65) 1472(1500) bytes of data.
1480 bytes from produtos.terra.com.br (200.154.56.65): icmp_seq=1 ttl=245 
time=76.1 ms
<snip>
1480 bytes from produtos.terra.com.br (200.154.56.65): icmp_seq=5 ttl=245 
time=71.9 ms

--- produtos.terra.com.br ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 71.634/72.920/76.120/1.655 ms
 [r...@fw01-sao ~]# ping -c 5 -M want -s 1472 www.freeaspupload.net    
PING www.freeaspupload.net (208.106.217.3) 1472(1500) bytes of data.
1480 bytes from webmailasp.net (208.106.217.3): icmp_seq=1 ttl=113 time=233 ms
<snip>
1480 bytes from webmailasp.net (208.106.217.3): icmp_seq=5 ttl=114 time=232 ms

--- www.freeaspupload.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 228.214/231.006/233.755/1.985 ms
[r...@fw01-sao ~]#
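
If it helps, I can also run a hop-by-hop path MTU check with tracepath from 
the same box against both sites, e.g.:

 tracepath discovirtual.terra.com.br
 tracepath www.freeaspupload.net

I have not included that output here, since the ping tests above already went 
through at the full 1500-byte size with DF set.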


Persistent connections:

I'm not sure I understood your suggestion correctly, but isn't 
"server_persistent_connections on" the default already? In any case, forcing 
it explicitly in the configuration had no impact on the problem.
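
For clarity, this is the directive I forced in squid.conf (which, as far as I 
understand, should already be the default in 3.1):

 # explicitly force persistent connections towards origin servers
 # (supposedly the default, set here just to rule it out)
 server_persistent_connections on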

 [r...@fw01-sao ~]# squid -v
Squid Cache: Version 3.1.4
configure options:  '--build=i386-koji-linux-gnu' '--host=i386-koji-linux-gnu' 
'--target=i386-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var' 
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' 'CPPFLAGS= -DOPENSSL_NO_KRB5' 
'--sysconfdir=/etc/squid' '--libexecdir=/usr/libexec/squid' 
'--datadir=/usr/share/squid' '--enable-async-io=64' 
'--enable-storeio=aufs,diskd,ufs' 
'--enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads' 
'--enable-removal-policies=heap,lru' '--enable-icmp' '--enable-delay-pools' 
'--enable-icap-client' '--enable-useragent-log' '--enable-referer-log' 
'--enable-kill-parent-hack' '--enable-arp-acl' '--enable-ssl' 
'--enable-forw-via-db' '--enable-cache-digests' '--disable-http-violations' 
'--enable-linux-netfilter' '--enable-follow-x-forwarded-for' 
'--disable-ident-lookups' '--enable-auth=basic,digest,negotiate,ntlm' 
'--enable-basic-auth-helpers=DB,LDAP,MSNT,NCSA,PAM,SASL,SMB,getpwnam,multi-domain-NTLM,squid_radius_auth'
 '--enable-ntlm-auth-helpers=fakeauth,no_check,smb_lm' 
'--enable-ntlm-fail-open' 
'--enable-digest-auth-helpers=eDirectory,ldap,password' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
 '--enable-stacktraces' '--enable-x-accelerator-vary' '--enable-zph-qos' 
'--with-default-user=squid' '--with-logdir=/var/log/squid' 
'--with-pidfile=/var/run/squid.pid' '--with-pthreads' '--with-aio' '--with-dl' 
'--with-openssl=/usr' '--with-large-files' '--with-filedescriptors=32768' 
'build_alias=i386-koji-linux-gnu' 'host_alias=i386-koji-linux-gnu' 
'target_alias=i386-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
-fasynchronous-unwind-tables' 'CXXFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
-fasynchronous-unwind-tables' 'FFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
-fasynchronous-unwind-tables' --with-squid=/builddir/build/BUILD/squid-3.1.4 
--enable-ltdl-convenience
[r...@fw01-sao ~]#

DNS:

I've compared the local resolution results (on squid's box) with what this 
online nslookup tool (http://www.zoneedit.com/lookup.html) returns, and they 
are all consistent: same records and same addresses. Sorry for not sending all 
the dig/nslookup output, since it would make this message too long.
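
For reference, the local checks were along these lines, against the local 
named on 127.0.0.1 and covering both record types you asked about (output 
omitted as mentioned):

 dig @127.0.0.1 discovirtual.terra.com.br A +short
 dig @127.0.0.1 discovirtual.terra.com.br AAAA +short
 dig @127.0.0.1 www.freeaspupload.net A +short
 dig @127.0.0.1 www.freeaspupload.net AAAA +short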


Rodrigo Ferraz

-----Original Message-----
From: Amos Jeffries [mailto:[email protected]] 
Sent: Friday, August 6, 2010 12:51
To: [email protected]
Subject: Re: [squid-users] Http upload problem with TCP_MISS/000 and ctx: 
enter/exit messages

Rodrigo Ferraz wrote:
> Hello
> 
> We've been struggling for a few days with a constant problem on a newly 
> installed squid 3.1.4 and web form-based uploads, whether they use ASP, 
> JavaScript or any other language behind them.
> Let me assure you, ALL uploads are failing, not just a few specific sites. 
> It is just a matter of clicking an OK button to submit the file, and the 
> browser (IE or Firefox) instantly shows either its own error page (Page 
> could not be opened) in 90% of the tries or squid's error page (Connection 
> Reset by Peer) in the remaining 10%.
> 
> By configuring a remote client to use the proxy server through an external 
> SSH tunnel (i.e. by excluding all the local network devices), we can reduce 
> the error ratio to around 5% of the tries. So, when the upload works, it 
> shows this:
> 
> 1281099317.664 409638 127.0.0.1 TCP_MISS/200 1840 POST 
> http://discovirtual.terra.com.br/vd.cgi administrator 
> DIRECT/200.154.56.65 text/html
> 
> When it doesn't, it shows this:
> 
> 1281102595.774  21086 127.0.0.1 TCP_MISS/000 0 POST 
> http://discovirtual.terra.com.br/vd.cgi administrator 
> DIRECT/200.154.56.65 -
> 

Either the connection to the client or to the server died before the reply 
came back. This is consistent with the squid->server TCP connection not 
getting any replies back.

Check that PMTU discovery works to those sites from the squid box.


> Plus, cache.log has a lot of these messages which I don't understand:
> 
<snip>
> 2010/08/06 10:24:12.867| ctx: enter level  5: 
> 'http://dnl-14.geo.kaspersky.com/bases/av/emu/emu-0607g.xml.dif'
> 2010/08/06 10:24:12.867| ctx: exit level  5
> 

ctx is not something to worry about overly much.
It's just a counter of how many times squid has had to stop and wait for a 
particular request's headers to arrive. 3.1.4 had a small 'leak' that meant 
the counter was not reset properly once the headers were finished.


> Additional info:
> 
> * CentOS release 5.5 (Final), 32 bit
> * squid3-3.1.4-1.el5.pp.i386.rpm (from 
> http://www.pramberger.at/peter/services/repository/)
> * No more than 5 simultaneous users
> * Intel Core 2 Duo E7600, 4 GB RAM, Intel DG31PR motherboard
> * Direct connections, without squid, always work.
> * Resolv.conf points to 127.0.0.1, which is bind-9.3.6-4.P1.el5_4.2
> * Tried with and without "half_closed_clients off".
> * Already deleted and recreated /var/cache/squid.
> * One of the cache.log files seems to be truncated or contains binary 
> characters, preventing it from being read properly from the console.
> * Found two occurrences of "Exception error:found data bewteen chunk end and 
> CRLF" in cache.log.

Not good. That is a sign of the remote end of those links sending corrupted 
data.

> 
> My guesses are:
> 
> - It could be a hardware problem with the server, specifically a faulty 
> NIC, I/O or bad memory, but there are no system-wide errors being logged 
> that would support this, and all other server applications are working 
> fine;
> - It could be a hardware problem with the WAN circuit or provider, but 
> without the proxy server, going directly to the Internet, the problem never 
> happens.
> - It could be a DNS problem. Unlikely, since the problem only relates to 
> upload (POST) operations to the same websites, which were already resolved 
> by its own named.
> - It could be a DoS launched from an infected internal workstation. Unlikely: 
> squid is not crashing and the server load stays at 0.00.
> - It could be a squid bug or a problem in the face of an unknown condition? 
> Unlikely: we have the same software setup (OS, the same rpm and config of 
> squid 3.1.4) in another remote office, which works perfectly with these same 
> upload websites.
> - It could be a problem with all the upload websites tried? REALLY unlikely.
> 
> So I would like to kindly ask for any suggestions on diagnostics and 
> troubleshooting of this problem.

Looks like you have eliminated everything except network lag.
Does enabling persistent connections help (particularly to servers)?

What does squid -v show, please?

And what do the dying sites resolve to from the squid box (both AAAA and A)?

> 
> --------
> 
> squid.conf
> 
> half_closed_clients off
> range_offset_limit -1
> maximum_object_size 200 MB
> quick_abort_min -1
<snip>


Amos
--
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.6
   Beta testers wanted for 3.2.0.1
