[squid-users] refresh_pattern rule

2008-11-12 Thread nitesh naik
Hi All,

Most of the requests served by squid have an expiry time of 1 hour, and
because of this we are not seeing the expected HIT ratio. What
refresh_pattern rule should we apply to get a higher HIT ratio ?

cache_mem is 2 GB and cache_dir is 6 GB.

Currently we are using the following refresh_pattern rule.

refresh_pattern . 0 20% 3600

Regards
Nitesh
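
For responses without an explicit Expires header, squid estimates freshness from the refresh_pattern as roughly clamp(last-modified age x percent, min, max). A simplified sketch of that heuristic (my own illustration, not squid code; the real algorithm has more cases):

```python
def freshness_lifetime(lm_age_s, min_s, percent, max_s):
    """Simplified squid refresh_pattern heuristic for responses without
    an explicit Expires header: the object stays fresh for `percent` of
    its age at download time, clamped between `min_s` and `max_s`."""
    return max(min_s, min(max_s, lm_age_s * percent / 100.0))

# With "refresh_pattern . 0 20% 3600", an object last modified 10 hours
# before download is capped at 3600 s (1 hour) of freshness; raising the
# max (e.g. to 86400) is what would lift the HIT ratio.
print(freshness_lifetime(10 * 3600, 0, 20, 3600))   # 3600.0
print(freshness_lifetime(10 * 3600, 0, 20, 86400))  # 7200.0
```

Note this only applies to objects with no server-supplied expiry; objects that already carry a 1-hour Expires header will not stay fresh longer regardless of the refresh_pattern.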


Re: [squid-users] Squid memory usage

2008-11-10 Thread nitesh naik
Henrik,

I read the FAQ and implemented most of the suggestions to reduce
memory usage. I am not much concerned about memory usage, as there is
plenty of memory available, but the issue is that CPU usage goes up to
100% and slows down squid's response once squid grows beyond the
allocated cache_mem size. Does that mean squid is spending most of its
time releasing objects from the cache ? Most of the objects stored in
the cache have a TTL of 1 hour.



Following are a few lines from the squid.conf file.

http_port 0.0.0.0:80 accel defaultsite=s1.xyz.com vhost protocol=http
cache_peer 10.0.0.175 parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.175:80/healthcheck.gif
cache_peer 10.0.0.177 parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.177:80/healthcheck.gif
cache_peer 10.0.0.179 parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.179:80/healthcheck.gif
cache_peer 10.0.0.181 parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.181:80/healthcheck.gif
dead_peer_timeout 10 seconds
hierarchy_stoplist cgi-bin
hierarchy_stoplist ?
cache_mem 4294967296 bytes
maximum_object_size_in_memory 1048576 bytes
memory_replacement_policy lru
cache_replacement_policy lru
cache_dir null /empty
cache_swap_low 60
cache_swap_high 80
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (cgi-bin|\?) 0 0% 0
refresh_pattern . 1800 20% 3600

Regards
Nitesh



On Sat, Nov 8, 2008 at 2:06 AM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 Have you read the faq section on memory usage?



 On fre, 2008-11-07 at 20:02 +0530, nitesh naik wrote:
 Henrik / Amos,

 Do you all think I should reduce cache_mem to a lesser value ? Squid
 stops responding as squid's memory usage grows up to 12 GB. I have
 allocated 8 GB of cache_mem.

 We are using a 64-bit machine running SuSE 10.1.

 Regards
 Nitesh


 On Thu, Nov 6, 2008 at 11:35 PM, nitesh naik [EMAIL PROTECTED] wrote:
  Thanks everyone for your reply.
 
  I went through all these docs and also compiled squid with the dmalloc
  option and disabled memory_pools. Squid memory usage grows up to 12 GB+
  and squid stops responding when we try to rotate logs using squid -k
  rotate.
 
  I want squid up and running all the time even if its memory usage
  grows to double the allocated cache_mem value.
 
  Regards
  Nitesh
  On Thu, Nov 6, 2008 at 3:58 PM, Adam Carter [EMAIL PROTECTED] wrote:
  Squid memory usage grows beyond the allocated cache_mem size of 8 GB.
 
  http://wiki.squid-cache.org/SquidFaq/SquidMemory
 
 



Re: [squid-users] Squid memory usage

2008-11-07 Thread nitesh naik
Henrik / Amos,

Do you all think I should reduce cache_mem to a lesser value ? Squid
stops responding as squid's memory usage grows up to 12 GB. I have
allocated 8 GB of cache_mem.

We are using a 64-bit machine running SuSE 10.1.

Regards
Nitesh


On Thu, Nov 6, 2008 at 11:35 PM, nitesh naik [EMAIL PROTECTED] wrote:
 Thanks everyone for your reply.

 I went through all these docs and also compiled squid with the dmalloc
 option and disabled memory_pools. Squid memory usage grows up to 12 GB+
 and squid stops responding when we try to rotate logs using squid -k
 rotate.

 I want squid up and running all the time even if its memory usage
 grows to double the allocated cache_mem value.

 Regards
 Nitesh
 On Thu, Nov 6, 2008 at 3:58 PM, Adam Carter [EMAIL PROTECTED] wrote:
 Squid memory usage grows beyond the allocated cache_mem size of 8 GB.

 http://wiki.squid-cache.org/SquidFaq/SquidMemory




[squid-users] Squid memory usage

2008-11-06 Thread nitesh naik
Hi All,

Squid memory usage grows beyond the allocated cache_mem size of 8 GB. The
total physical memory available on the machine is 20 GB.

Does that mean there is a memory leak and I should replace the malloc
library and recompile squid ?

I am using squid 2.6 without using disk for caching.

Memory usage for squid via mallinfo():
Total space in arena:  -52916 KB
Ordinary blocks:   -82192 KB 215707 blks
Small blocks:   0 KB  0 blks
Holding blocks: 11404 KB  3 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   29275 KB
Total in use:  -70788 KB 171%
Total free: 29275 KB -70%
Total size:-41512 KB
Memory accounted for:
Total accounted:   11256849 KB
memPoolAlloc calls: 870846623
memPoolFree calls: 819677123
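
The negative totals above are a display artifact: mallinfo() reports its byte counters in signed 32-bit ints, so once the process grows past ~2 GiB the counters wrap negative. A small sketch (my own illustration, not squid code) of reinterpreting such a figure, assuming the counter wrapped exactly once:

```python
def unwrap_kb(kb_signed, wraps=1):
    """Reinterpret a KB figure derived from a signed 32-bit byte counter
    that overflowed `wraps` times past 2**32 bytes. One wrap adds
    2**32 bytes = 4194304 KB back to the printed value."""
    return kb_signed + wraps * (2 ** 32 // 1024)

# "Total space in arena: -52916 KB" corresponds to about 4141388 KB
# (~3.9 GB) if the counter wrapped exactly once.
print(unwrap_kb(-52916))  # 4141388
```

Since the process here reportedly grew to 12 GB+, the counter may have wrapped more than once, so mallinfo figures from such a process cannot be trusted for accounting.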

Regards
Nitesh


Re: [squid-users] Squid memory usage

2008-11-06 Thread nitesh naik
Thanks everyone for your reply.

I went through all these docs and also compiled squid with the dmalloc
option and disabled memory_pools. Squid memory usage grows up to 12 GB+
and squid stops responding when we try to rotate logs using squid -k
rotate.

I want squid up and running all the time even if its memory usage
grows to double the allocated cache_mem value.

Regards
Nitesh
On Thu, Nov 6, 2008 at 3:58 PM, Adam Carter [EMAIL PROTECTED] wrote:
 Squid memory usage grows beyond the allocated cache_mem size of 8 GB.

 http://wiki.squid-cache.org/SquidFaq/SquidMemory



Re: [squid-users] origin server health detect

2008-11-04 Thread nitesh naik
Amos,

Thanks for your reply.

I tried squid 2.6 and it works as required: squid will stop
sending requests to the origin if the origin returns an HTTP status code
other than 2xx.

cache_peer 10.0.0.175 parent 80 0 no-query originserver round-robin
monitorurl=/healthcheck.gif monitorinterval=1

Not sure how difficult it would be to port to squid3, as it is already
supported in squid 2.6. We will take a look at the code and see if we
can port it to squid3.

Regards
Nitesh


On Tue, Nov 4, 2008 at 6:22 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 nitesh naik wrote:

 Hi,

 Is there a way to stop forwarding requests to the origin if the
 monitoring url returns 404 in squid 3 ? Sometimes a few nodes in our
 origin server cluster are unavailable, and we would like to disable an
 origin which is up but responding with a 404 http status code.

 Also I would like to know if there is an option to check origin server
 health in squid 3.

 Regards
 Nitesh

 Sorry, those particular options are not yet ported to either Squid-3
 release. Such failover detection is automatic behaviour in all current
 Squid, but I suspect the automatics are not fast enough if you need the
 explicit settings.
 The monitor* settings are marked for porting in 3.2 at some point. Please
 help out by sponsoring a developer to pick up the feature if it's very
 important to you.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1



Re: [squid-users] Ignoring query string from url

2008-11-03 Thread nitesh naik
min_http_poll_cnt 8
tcp_recv_bufsize 0 bytes
check_hostnames off
allow_underscore on
dns_retransmit_interval 5 seconds
dns_timeout 120 seconds
dns_defnames off
hosts_file /etc/hosts
ignore_unknown_nameservers on
ipcache_size 1024
ipcache_low 90
ipcache_high 95
fqdncache_size 1024
memory_pools on
memory_pools_limit 5242880 bytes
forwarded_for on
cachemgr_passwd XX shutdown config reconfigure offline_toggle
client_db on
refresh_all_ims off
reload_into_ims off
maximum_single_addr_tries 1
retry_on_error off
as_whois_server whois.ra.net
offline_mode off
uri_whitespace strip
coredump_dir /home/zdn/squid/var/cache
balance_on_multiple_ip on
pipeline_prefetch off
high_response_time_warning 0
high_page_fault_warning 0
high_memory_warning 0 bytes
sleep_after_fork 0
windows_ipaddrchangemonitor on

Regards
Nitesh

On Mon, Nov 3, 2008 at 10:27 AM, nitesh naik [EMAIL PROTECTED] wrote:
 Henrik / Amos,

 Tried using these settings, and I could see delays in serving the
 requests even for cached objects.

 1225687535.330   5459 81.52.249.101 TCP_MEM_HIT/200 1475 GET
 http://abc.xyz.com/3613/172/500/248/211/i5.js?z=9059 - NONE/-
 application/x-javascript
 1225687535.330   5614 81.52.249.100 TCP_MEM_HIT/200 8129 GET
 http://abc.xyz.com/3357/172/211/4/1/i4.js?z=6079 - NONE/-
 application/x-javascript
 1225687539.661  12327 168.143.241.12 TCP_MISS/200 2064 GET
 http://bc.xyz.com/2333/254/496/158/122/i17.js?z=3473 -
 ROUNDROBIN_PARENT/10.0.0.181 application/x-javascript


 Following are timeouts that I have set.

  connect_timeout 10 seconds
  peer_connect_timeout 5 seconds
  read_timeout 2 minutes
  request_timeout 10 seconds
  icp_query_timeout 4000

 and cache peer settings.

 cache_peer 10.0.0.175 parent 80 0 no-query originserver round-robin
 cache_peer 10.0.0.177 parent 80 0 no-query originserver round-robin
 cache_peer 10.0.0.179 parent 80 0 no-query originserver round-robin
 cache_peer 10.0.0.181 parent 80 0 no-query originserver round-robin

 Regards
 Nitesh


 On Sun, Nov 2, 2008 at 1:11 AM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 On tor, 2008-10-30 at 19:50 +0530, nitesh naik wrote:

 The url rewrite helper script works fine for a few requests (100 req/sec)
 but the response slows down as the number of requests increases, and it
 takes 10+ seconds to deliver the objects.

 I've run setups like this at more than a thousand requests/s.

 Is there a way to optimise it further ?

 url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
 url_rewrite_children 2000
 url_rewrite_concurrency 5

 Those two should be the other way around.

 url_rewrite_concurrency 2000
 url_rewrite_children 2

 Regards
 Henrik




Re: [squid-users] Ignoring query string from url

2008-11-03 Thread nitesh naik
Do these redirector statistics mean the url rewrite helper program is
slowing down squid's response ? The avg service time is 1550 msec.

Redirector Statistics:
program: /home/zdn/bin/redirect_parallel.pl
number running: 2 of 2
requests sent: 1069753
replies received: 1069752
queue length: 0
avg service time: 1550 msec


#   FD  PID # Requests  Flags   TimeOffset  Request
1   10  18237   12645   B   0.002   38  (none)
2   15  18238   12335   2.144   0   (none)

Regards
Nitesh

On Mon, Nov 3, 2008 at 2:46 PM, nitesh naik [EMAIL PROTECTED] wrote:
 Not sure if the url rewrite helper is slowing things down, because the
 cache manager interface didn't show any connection backlog. What
 information should I look for in the cache manager to find the cause
 of the slow serving of requests ?

 Redirector Statistics:
 program: /home/zdn/bin/redirect_parallel.pl
 number running: 2 of 2
 requests sent: 155697
 replies received: 155692
 queue length: 0
 avg service time: 0 msec


 #   FD  PID # Requests  Flags   TimeOffset  Request
 1   8   21149   104125
 BW  0.033   38  http://s2.xyz.com/1821/78/570/1789/563/i88.js?z=4258
 81.52.249.106/- - GET myip=10.0.0.165 myport=80\n
 2   9   21150   51572   BW  0.039   0   
 http://s2.xyz.com/1813/2/570/1781/563/i7.js?z=8853
 81.52.249.106/- - GET myip=10.0.0.165 myport=80\n


 Following are my squid settings.

 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1
 acl to_localhost dst 127.0.0.0/255.0.0.0
 acl localnet src 10.0.0.0/255.0.0.0
 acl SSL_ports port 443
 acl Safe_ports port 80 21 443 70 210 1025-65535 280 488 591 777
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow all
 http_access allow localnet
 http_access deny all
 icp_access allow localnet
 icp_access deny all
 htcp_access allow localnet
 htcp_access deny all
 htcp_clr_access deny all
 ident_lookup_access deny all
 http_port 0.0.0.0:80 defaultsite=s1.xyz.com vhost
 cache_peer 10.0.0.175 parent 80 0 no-query round-robin originserver
 cache_peer 10.0.0.177 parent 80 0 no-query round-robin originserver
 cache_peer 10.0.0.179 parent 80 0 no-query round-robin originserver
 cache_peer 10.0.0.181 parent 80 0 no-query round-robin originserver
 dead_peer_timeout 10 seconds
 hierarchy_stoplist cgi-bin
 hierarchy_stoplist ?
 cache_mem 0 bytes
 maximum_object_size_in_memory 1048576 bytes
 memory_replacement_policy lru
 cache_replacement_policy lru
 cache_dir ufs /home/zdn/squid/var/cache 6000 16 256 IOEngine=Blocking
 store_dir_select_algorithm least-load
 max_open_disk_fds 0
 minimum_object_size 0 bytes
 maximum_object_size 4194304 bytes
 cache_swap_low 90
 cache_swap_high 95
 logformat combined %a %ui %un [%tl] "%rm %ru HTTP/%v" %Hs %st
 "%{Referer}h" "%{User-Agent}h" %Ss:%Sh
 access_log /home/zdn/squid/var/logs/access.log squid
 cache_log /home/zdn/squid/var/logs/cache.log
 cache_store_log /home/zdn/squid/var/logs/store.log
 logfile_rotate 10
 emulate_httpd_log off
 log_ip_on_direct on
 mime_table /home/zdn/squid/etc/mime.conf
 log_mime_hdrs off
 pid_filename /home/zdn/squid/var/logs/squid.pid
 debug_options ALL,1
 log_fqdn off
 client_netmask 255.255.255.255
 strip_query_terms off
 buffered_logs off
 url_rewrite_program /home/zdn/bin/redirect_parallel.pl
 url_rewrite_children 2
 url_rewrite_concurrency 2000
 url_rewrite_host_header off
 url_rewrite_bypass off
 refresh_pattern ^ftp: 1440 20% 10080

 refresh_pattern ^gopher: 1440 0% 1440

 refresh_pattern (cgi-bin|\?) 0 0% 0

 refresh_pattern . 0 20% 4320

 quick_abort_min 16 KB
 quick_abort_max 16 KB
 quick_abort_pct 95
 read_ahead_gap 16384 bytes
 negative_ttl 0 seconds
 positive_dns_ttl 21600 seconds
 negative_dns_ttl 60 seconds
 range_offset_limit 0 bytes
 minimum_expiry_time 60 seconds
 store_avg_object_size 13 KB
 store_objects_per_bucket 20
 request_header_max_size 20480 bytes
 reply_header_max_size 20480 bytes
 request_body_max_size 0 bytes
 via off
 ie_refresh off
 vary_ignore_expire off
 request_entities off
 relaxed_header_parser on
 forward_timeout 240 seconds
 connect_timeout 10 seconds
 peer_connect_timeout 5 seconds
 read_timeout 120 seconds
 request_timeout 10 seconds
 persistent_request_timeout 120 seconds
 client_lifetime 86400 seconds
 half_closed_clients off
 pconn_timeout 60 seconds
 ident_timeout 10 seconds
 shutdown_lifetime 30 seconds
 cache_mgr webmaster
 mail_program mail
 cache_effective_user zdn
 httpd_suppress_version_string off
 umask 23
 announce_period 31536000 seconds
 announce_host tracker.ircache.net
 announce_port 3131
 client_persistent_connections off
 server_persistent_connections off
 persistent_connection_after_error off
 detect_broken_pconn off
 snmp_port 0
 snmp_access deny all
 snmp_incoming_address 0.0.0.0
 snmp_outgoing_address 255.255.255.255
 icp_port 3130
 htcp_port 0

Re: [squid-users] Ignoring query string from url

2008-11-03 Thread nitesh naik
Hi All,

Issues was with Disk I/O. I have used null cache dir and squid
response is much faster now.

 cache_dir null /empty

Thanks everyone for your help.

Regards
Nitesh

On Tue, Nov 4, 2008 at 9:40 AM, nitesh naik [EMAIL PROTECTED] wrote:
 Do these redirector statistics mean the url rewrite helper program is
 slowing down squid's response ? The avg service time is 1550 msec.

 Redirector Statistics:
 program: /home/zdn/bin/redirect_parallel.pl
 number running: 2 of 2
 requests sent: 1069753
 replies received: 1069752
 queue length: 0
 avg service time: 1550 msec


 #   FD  PID # Requests  Flags   TimeOffset  Request
 1   10  18237   12645   B   0.002   38  (none)
 2   15  18238   12335   2.144   0   (none)

 Regards
 Nitesh

 On Mon, Nov 3, 2008 at 2:46 PM, nitesh naik [EMAIL PROTECTED] wrote:
 Not sure if the url rewrite helper is slowing things down, because the
 cache manager interface didn't show any connection backlog. What
 information should I look for in the cache manager to find the cause
 of the slow serving of requests ?

 Redirector Statistics:
 program: /home/zdn/bin/redirect_parallel.pl
 number running: 2 of 2
 requests sent: 155697
 replies received: 155692
 queue length: 0
 avg service time: 0 msec


 #   FD  PID # Requests  Flags   TimeOffset  Request
 1   8   21149   104125
 BW  0.033   38  http://s2.xyz.com/1821/78/570/1789/563/i88.js?z=4258
 81.52.249.106/- - GET myip=10.0.0.165 myport=80\n
 2   9   21150   51572   BW  0.039   0   
 http://s2.xyz.com/1813/2/570/1781/563/i7.js?z=8853
 81.52.249.106/- - GET myip=10.0.0.165 myport=80\n


 Following are my squid settings.

 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1
 acl to_localhost dst 127.0.0.0/255.0.0.0
 acl localnet src 10.0.0.0/255.0.0.0
 acl SSL_ports port 443
 acl Safe_ports port 80 21 443 70 210 1025-65535 280 488 591 777
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow all
 http_access allow localnet
 http_access deny all
 icp_access allow localnet
 icp_access deny all
 htcp_access allow localnet
 htcp_access deny all
 htcp_clr_access deny all
 ident_lookup_access deny all
 http_port 0.0.0.0:80 defaultsite=s1.xyz.com vhost
 cache_peer 10.0.0.175 Parent 80 0 no-query round-robin originserver
 cache_peer 10.0.0.177 Parent 80 0 no-query round-robin originserver
 cache_peer 10.0.0.179 Parent 80 0 no-query round-robin originserver
 cache_peer 10.0.0.181 Parent 80 0 no-query round-robin originserver
 dead_peer_timeout 10 seconds
 hierarchy_stoplist cgi-bin
 hierarchy_stoplist ?
 cache_mem 0 bytes
 maximum_object_size_in_memory 1048576 bytes
 memory_replacement_policy lru
 cache_replacement_policy lru
 cache_dir ufs /home/zdn/squid/var/cache 6000 16 256 IOEngine=Blocking
 store_dir_select_algorithm least-load
 max_open_disk_fds 0
 minimum_object_size 0 bytes
 maximum_object_size 4194304 bytes
 cache_swap_low 90
 cache_swap_high 95
 logformat combined %a %ui %un [%tl] "%rm %ru HTTP/%v" %Hs %st
 "%{Referer}h" "%{User-Agent}h" %Ss:%Sh
 access_log /home/zdn/squid/var/logs/access.log squid
 cache_log /home/zdn/squid/var/logs/cache.log
 cache_store_log /home/zdn/squid/var/logs/store.log
 logfile_rotate 10
 emulate_httpd_log off
 log_ip_on_direct on
 mime_table /home/zdn/squid/etc/mime.conf
 log_mime_hdrs off
 pid_filename /home/zdn/squid/var/logs/squid.pid
 debug_options ALL,1
 log_fqdn off
 client_netmask 255.255.255.255
 strip_query_terms off
 buffered_logs off
 url_rewrite_program /home/zdn/bin/redirect_parallel.pl
 url_rewrite_children 2
 url_rewrite_concurrency 2000
 url_rewrite_host_header off
 url_rewrite_bypass off
 refresh_pattern ^ftp: 1440 20% 10080

 refresh_pattern ^gopher: 1440 0% 1440

 refresh_pattern (cgi-bin|\?) 0 0% 0

 refresh_pattern . 0 20% 4320

 quick_abort_min 16 KB
 quick_abort_max 16 KB
 quick_abort_pct 95
 read_ahead_gap 16384 bytes
 negative_ttl 0 seconds
 positive_dns_ttl 21600 seconds
 negative_dns_ttl 60 seconds
 range_offset_limit 0 bytes
 minimum_expiry_time 60 seconds
 store_avg_object_size 13 KB
 store_objects_per_bucket 20
 request_header_max_size 20480 bytes
 reply_header_max_size 20480 bytes
 request_body_max_size 0 bytes
 via off
 ie_refresh off
 vary_ignore_expire off
 request_entities off
 relaxed_header_parser on
 forward_timeout 240 seconds
 connect_timeout 10 seconds
 peer_connect_timeout 5 seconds
 read_timeout 120 seconds
 request_timeout 10 seconds
 persistent_request_timeout 120 seconds
 client_lifetime 86400 seconds
 half_closed_clients off
 pconn_timeout 60 seconds
 ident_timeout 10 seconds
 shutdown_lifetime 30 seconds
 cache_mgr webmaster
 mail_program mail
 cache_effective_user zdn
 httpd_suppress_version_string off
 umask 23
 announce_period 31536000 seconds
 announce_host tracker.ircache.net
 announce_port 3131

[squid-users] origin server health detect

2008-11-03 Thread nitesh naik
Hi,

Is there a way to stop forwarding requests to the origin if the
monitoring url returns 404 in squid 3 ? Sometimes a few nodes in our
origin server cluster are unavailable, and we would like to disable an
origin which is up but responding with a 404 http status code.

Also I would like to know if there is an option to check origin server
health in squid 3.

Regards
Nitesh


Re: [squid-users] Ignoring query string from url

2008-11-02 Thread nitesh naik
Henrik / Amos,

Tried using these settings, and I could see delays in serving the
requests even for cached objects.

1225687535.330   5459 81.52.249.101 TCP_MEM_HIT/200 1475 GET
http://abc.xyz.com/3613/172/500/248/211/i5.js?z=9059 - NONE/-
application/x-javascript
1225687535.330   5614 81.52.249.100 TCP_MEM_HIT/200 8129 GET
http://abc.xyz.com/3357/172/211/4/1/i4.js?z=6079 - NONE/-
application/x-javascript
1225687539.661  12327 168.143.241.12 TCP_MISS/200 2064 GET
http://bc.xyz.com/2333/254/496/158/122/i17.js?z=3473 -
ROUNDROBIN_PARENT/10.0.0.181 application/x-javascript


Following are timeouts that I have set.

 connect_timeout 10 seconds
 peer_connect_timeout 5 seconds
 read_timeout 2 minutes
 request_timeout 10 seconds
 icp_query_timeout 4000

and cache peer settings.

cache_peer 10.0.0.175 parent 80 0 no-query originserver round-robin
cache_peer 10.0.0.177 parent 80 0 no-query originserver round-robin
cache_peer 10.0.0.179 parent 80 0 no-query originserver round-robin
cache_peer 10.0.0.181 parent 80 0 no-query originserver round-robin

Regards
Nitesh


On Sun, Nov 2, 2008 at 1:11 AM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On tor, 2008-10-30 at 19:50 +0530, nitesh naik wrote:

 The url rewrite helper script works fine for a few requests (100 req/sec)
 but the response slows down as the number of requests increases, and it
 takes 10+ seconds to deliver the objects.

 I've run setups like this at more than a thousand requests/s.

 Is there a way to optimise it further ?

 url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
 url_rewrite_children 2000
 url_rewrite_concurrency 5

 Those two should be the other way around.

 url_rewrite_concurrency 2000
 url_rewrite_children 2

 Regards
 Henrik



Re: [squid-users] Ignoring query string from url

2008-10-30 Thread nitesh naik
Henrik,

With this approach I see that only one redirector process is being
used and requests are processed in serial order. This causes delays in
serving the objects, and even the response for cached objects is slower.

I tried changing url_rewrite_concurrency to 1, but with this setting
squid is not caching the object. I guess I need a url rewrite
program which will process requests in parallel to handle the load of
5000 req/sec.

Regards
Nitesh

On Mon, Oct 27, 2008 at 5:18 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 See earlier response.

 On mån, 2008-10-27 at 16:59 +0530, nitesh naik wrote:
 Henrik,

 What if I use the following code ? The logic is the same as in your program ?


 #!/usr/bin/perl
 $|=1;
 while (<>) {
 s|(.*)\?(.*$)|$1|;
 print;
 next;
 }

 Regards
 Nitesh

 On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 
  Sorry, forgot the following important line in both
 
  BEGIN { $|=1; }
 
  should be inserted as the second line in each script (just after the #! 
  line)
 
 
  On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:
 
   Example script removing query strings from any file ending in .ext:
  
   #!/usr/bin/perl -an
   $id = $F[0];
   $url = $F[1];
   if ($url =~ m#\.ext\?#) {
   $url =~ s/\?.*//;
    print "$id $url\n";
   next;
   }
    print "$id\n";
   next;
  
  
   Or if you want to keep it real simple:
  
   #!/usr/bin/perl -p
   s%\.ext\?.*%.ext%;
  
   but doesn't illustrate the principle that well, and causes a bit more
   work for Squid.. (but not much)
  
I am still not clear on how to write a
helper program which will process requests in parallel using perl. Do
you think squirm with 1500 child processes works differently
compared to the solution you are talking about ?
  
   Yes.
  
   Regards
   Henrik



Re: [squid-users] Ignoring query string from url

2008-10-30 Thread nitesh naik
There was a mistake on my part; I should have used the following script
to process concurrent requests. It's working properly now.

#!/usr/bin/perl -an
BEGIN { $|=1; }
$id = $F[0];
$url = $F[1];
$url =~ s/\?.*//;
print "$id $url\n";
next;
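
For comparison, here is the same concurrent-protocol helper sketched in Python (my own rendition, not from the thread). With url_rewrite_concurrency > 0, squid-2.6-era helpers receive a channel ID as the first field of each request line and must echo that ID back with the rewritten URL, replying unbuffered:

```python
import sys

def rewrite(line):
    """One request line in squid's concurrent url_rewrite protocol:
    the first field is the channel ID, the second the URL; reply with
    the same ID and the URL with its query string stripped."""
    fields = line.split()
    chan, url = fields[0], fields[1]
    return "%s %s" % (chan, url.split("?", 1)[0])

def main(inp=sys.stdin, out=sys.stdout):
    for line in inp:
        out.write(rewrite(line) + "\n")
        out.flush()  # unbuffered replies, the equivalent of Perl's $|=1
```

Pointing url_rewrite_program at a script calling main() would behave like the Perl helper above; note that newer squid versions changed the expected reply format, so this mirrors only the squid-2.6-era protocol used in this thread.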

Regards
Nitesh

On Thu, Oct 30, 2008 at 12:15 PM, nitesh naik [EMAIL PROTECTED] wrote:
 Henrik,

 With this approach I see that only one redirector process is being
 used and requests are processed in serial order. This causes delays in
 serving the objects, and even the response for cached objects is slower.

 I tried changing url_rewrite_concurrency to 1, but with this setting
 squid is not caching the object. I guess I need a url rewrite
 program which will process requests in parallel to handle the load of
 5000 req/sec.

 Regards
 Nitesh

 On Mon, Oct 27, 2008 at 5:18 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 See earlier response.

 On mån, 2008-10-27 at 16:59 +0530, nitesh naik wrote:
 Henrik,

 What if I use the following code ? The logic is the same as in your program ?


 #!/usr/bin/perl
 $|=1;
 while (<>) {
 s|(.*)\?(.*$)|$1|;
 print;
 next;
 }

 Regards
 Nitesh

 On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 
  Sorry, forgot the following important line in both
 
  BEGIN { $|=1; }
 
  should be inserted as the second line in each script (just after the #! 
  line)
 
 
  On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:
 
   Example script removing query strings from any file ending in .ext:
  
   #!/usr/bin/perl -an
   $id = $F[0];
   $url = $F[1];
   if ($url =~ m#\.ext\?#) {
   $url =~ s/\?.*//;
    print "$id $url\n";
   next;
   }
    print "$id\n";
   next;
  
  
   Or if you want to keep it real simple:
  
   #!/usr/bin/perl -p
   s%\.ext\?.*%.ext%;
  
   but doesn't illustrate the principle that well, and causes a bit more
   work for Squid.. (but not much)
  
I am still not clear on how to write a
helper program which will process requests in parallel using perl. Do
you think squirm with 1500 child processes works differently
compared to the solution you are talking about ?
  
   Yes.
  
   Regards
   Henrik




Re: [squid-users] Ignoring query string from url

2008-10-30 Thread nitesh naik
Henrik,

The url rewrite helper script works fine for a few requests (100 req/sec)
but the response slows down as the number of requests increases, and it
takes 10+ seconds to deliver the objects.

Is there a way to optimise it further ?

url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
url_rewrite_children 2000
url_rewrite_concurrency 5

Regards
Nitesh

On Thu, Oct 30, 2008 at 3:16 PM, nitesh naik [EMAIL PROTECTED] wrote:
 There was a mistake on my part; I should have used the following script
 to process concurrent requests. It's working properly now.

 #!/usr/bin/perl -an
 BEGIN { $|=1; }
 $id = $F[0];
 $url = $F[1];
 $url =~ s/\?.*//;
 print "$id $url\n";
 next;

 Regards
 Nitesh

 On Thu, Oct 30, 2008 at 12:15 PM, nitesh naik [EMAIL PROTECTED] wrote:
 Henrik,

 With this approach I see that only one redirector process is being
 used and requests are processed in serial order. This causes delays in
 serving the objects, and even the response for cached objects is slower.

 I tried changing url_rewrite_concurrency to 1, but with this setting
 squid is not caching the object. I guess I need a url rewrite
 program which will process requests in parallel to handle the load of
 5000 req/sec.

 Regards
 Nitesh

 On Mon, Oct 27, 2008 at 5:18 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 See earlier response.

 On mån, 2008-10-27 at 16:59 +0530, nitesh naik wrote:
 Henrik,

 What if I use the following code ? The logic is the same as in your program ?


 #!/usr/bin/perl
 $|=1;
 while (<>) {
 s|(.*)\?(.*$)|$1|;
 print;
 next;
 }

 Regards
 Nitesh

 On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 
  Sorry, forgot the following important line in both
 
  BEGIN { $|=1; }
 
  should be inserted as the second line in each script (just after the #! 
  line)
 
 
  On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:
 
   Example script removing query strings from any file ending in .ext:
  
   #!/usr/bin/perl -an
   $id = $F[0];
   $url = $F[1];
   if ($url =~ m#\.ext\?#) {
   $url =~ s/\?.*//;
    print "$id $url\n";
   next;
   }
    print "$id\n";
   next;
  
  
   Or if you want to keep it real simple:
  
   #!/usr/bin/perl -p
   s%\.ext\?.*%.ext%;
  
   but doesn't illustrate the principle that well, and causes a bit more
   work for Squid.. (but not much)
  
I am still not clear on how to write a
helper program which will process requests in parallel using perl. Do
you think squirm with 1500 child processes works differently
compared to the solution you are talking about ?
  
   Yes.
  
   Regards
   Henrik





[squid-users] slow response for cached objects

2008-10-29 Thread nitesh naik
Hi,

Sometimes I see squid taking time to deliver content even when the
object is available in its cache. Any idea what could be the reason ?
I used an external url rewrite program to strip the query string. Is it
slowing down the serving process ?

The first 2 lines show squid took 703 milliseconds to deliver the
contents; the rest of the urls show 0 milliseconds.

1225272393.185    703 81.52.249.107 TCP_MEM_HIT/200 1547 GET
http://s2.xyz.com/1699/563/i0.js?z=5002 - NONE/-
application/x-javascript
1225272393.185    703 168.143.241.52 TCP_MEM_HIT/200 2230 GET
http://s5.xyz.com/496/111/109/i30.js?z=6718 - NONE/-
application/x-javascript
1225272393.375      0 81.52.249.100 TCP_MEM_HIT/200 1418 GET
http://s2.xyz.com/371/9/i10.js?z=148 - NONE/- application/x-javascript
1225272393.375      0 168.143.241.12 TCP_MEM_HIT/200 1361 GET
http://s5.xyz.com/670/28/i6.js?z=5812 - NONE/-
application/x-javascript
1225272393.381      0 81.52.249.101 TCP_MEM_HIT/200 1288 GET
http://s1.xyz.com/558/622/9/i0.js?z=4158 - NONE/-
application/x-javascript

Following is the url rewrite helper program I use, which was sent by
Henrik; I have modified it a bit to strip the query string.

#!/usr/bin/perl -an
BEGIN { $|=1; }
$id = $F[0];
$id =~ s/\?.*//;
print "$id\n";
next;

Regards
Nitesh


Re: [squid-users] slow response for cached objects

2008-10-29 Thread Nitesh Naik
Henrik,

We use the Squid 3 version, and I can see these delays at the client end
also. A direct request to the origin hands out the object much faster
than squid does.

Squid is holding up connections: I can see 3000+ connections
on the load balancer when squid is used, and 500 connections when the
origin is requested directly, bypassing squid.

Regards
Nitesh

On Wed, Oct 29, 2008 at 4:03 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:

 On ons, 2008-10-29 at 15:08 +0530, nitesh naik wrote:
  Hi,
 
  Sometimes I see squid taking time to deliver content even when the
  object is available in its cache. Any idea what could be the reason ?
  I used an external url rewrite program to strip the query string. Is it
  slowing down the serving process ?
 
  The first 2 lines show squid took 703 milliseconds to deliver the
  contents; the rest of the urls show 0 milliseconds.
 
  1225272393.185    703 81.52.249.107 TCP_MEM_HIT/200 1547 GET
  http://s2.xyz.com/1699/563/i0.js?z=5002 - NONE/-
  application/x-javascript

 Just discovered that there is a noticeable measurement error in the
 response time in Squid-2 which may add up to a second... maybe this is it.

 Regards
 Henrik



--
Regards
Nitesh


Re: [squid-users] Ignoring query string from url

2008-10-27 Thread nitesh naik
We use a query string in each url for busting the cache at the client
end (browser), hence it's not important for us and it won't produce any
incorrect results. We already use a similar configuration at the CDN
level.

We are trying to add a squid layer between the origin and the CDN to
reduce the load on our origin servers. The setup works fine for a few
requests, but as traffic grows up to 100 req/sec, squid's response is
slow. Each machine that squid runs on has 20 GB of RAM and a dual-core
processor.

I used squirm for stripping the query string, but I am seeing squid
respond slowly when a url_rewrite_program is introduced in between.

Henrik suggested a clever idea: change the url_rewrite_program to
process requests in parallel, but unfortunately I am not sure how to
incorporate it.

Here are my rewrite program settings.

url_rewrite_program  /home/zdn/squirm/bin/squirm
url_rewrite_children 1500
url_rewrite_concurrency 0
url_rewrite_host_header off
url_rewrite_bypass off
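For context: with `url_rewrite_concurrency 0`, Squid speaks the sequential helper protocol, so each of the 1500 children handles only one request at a time. Raising the concurrency makes Squid prefix each request line with a channel ID that the helper must echo back, letting one helper process keep many requests in flight. A minimal sketch of such a concurrency-aware query-string-stripping helper (shown in Python purely for illustration; the channel-ID handling, not the language, is the point):

```python
import sys

def rewrite(line):
    # In concurrent mode Squid sends: "<channel-ID> <URL> <other fields...>"
    # and expects the reply to begin with the same channel ID.
    fields = line.split(None, 2)
    chan, url = fields[0], fields[1]
    # Strip the query string and echo the channel ID back.
    return "%s %s" % (chan, url.split('?', 1)[0])

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(rewrite(line.rstrip("\n")) + "\n")
        sys.stdout.flush()  # replies must be unbuffered, like $|=1 in Perl
```

Such a helper would be paired with something like `url_rewrite_concurrency 100` and far fewer `url_rewrite_children` than 1500.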

Regards
Nitesh

On Sun, Oct 26, 2008 at 4:51 PM, Matus UHLAR - fantomas
[EMAIL PROTECTED] wrote:
 On 25.10.08 12:40, Nitesh Naik wrote:
 Squid should give out the same object for different query strings.
 Basically it should strip the query string and cache the object, so
 that the same object is delivered to the client browser for different
 query strings.

 Did you understand what I said - that such a misconfiguration can
 produce incorrect results? Your users will hate you for that.

 --
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 WinError #9: Out of error messages.



Re: [squid-users] Ignoring query string from url

2008-10-27 Thread Nitesh Naik
Henrik / Matus ,


For certain requests we don't want the client browser to look for the
object in its cache; everything should be served fresh. The CDN will
determine the expiry time for the object. Some of these objects don't
send a Last-Modified header. In our case it is not important to pass
the query string to the origin, as the query string is a random number
used for busting the client-side cache.


Is there any sample code available for a URL rewriter helper that will
process requests in parallel? I am still not clear on how to write a
helper program that processes requests in parallel using Perl. Do you
think squirm with 1500 child processes works differently compared to
the solution you are talking about?

Regards
Nitesh



On Mon, Oct 27, 2008 at 2:39 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On mån, 2008-10-27 at 12:30 +0530, nitesh naik wrote:
 We use a query string in each URL for busting the cache at the client
 end (browser), hence it's not important for us and it won't produce
 any incorrect results. We already use a similar configuration at the CDN level.

 Why do you do this?


 Henrik suggested some clever idea to make changes to
 url_rewrite_program to process request in parallel but unfortunately i
 am not sure how to incorporate it.

 Write your own url rewriter helper. It's no more than a couple of
 lines of Perl.

 Regards
 Henrik




-- 
Regards
Nitesh


Re: [squid-users] Ignoring query string from url

2008-10-27 Thread nitesh naik
Henrik,

Is this code capable of handling requests in parallel?

#!/usr/bin/perl
$|=1;
while (<>) {
    s|(.*)\?(.*$)|$1|;
    print;
}

Regards
Nitesh



On Mon, Oct 27, 2008 at 4:04 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On mån, 2008-10-27 at 10:11 +0100, Matus UHLAR - fantomas wrote:
  Write your own url rewriter helper. It's no more than a couple of lines
  perl..

 shouldn't that be storeurl rewriter?

 No, since the backend server is not interested in this dummy query
 string, a url rewriter is better.

 Regards
 Henrik




Re: [squid-users] Ignoring query string from url

2008-10-27 Thread nitesh naik
Henrik,

What if I use the following code? Is the logic the same as in your program?


#!/usr/bin/perl
$|=1;
while (<>) {
    s|(.*)\?(.*$)|$1|;
    print;
    next;
}

Regards
Nitesh

On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:

 Sorry, I forgot the following important line in both scripts:

 BEGIN { $|=1; }

 should be inserted as the second line in each script (just after the #! line)


 On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:

  Example script removing query strings from any file ending in .ext:
 
  #!/usr/bin/perl -an
  $id = $F[0];
  $url = $F[1];
  if ($url =~ m#\.ext\?#) {
      $url =~ s/\?.*//;
      print "$id $url\n";
      next;
  }
  print "$id\n";
  next;
 
 
  Or if you want to keep it real simple:
 
  #!/usr/bin/perl -p
  s%\.ext\?.*%.ext%;
 
  but doesn't illustrate the principle that well, and causes a bit more
  work for Squid.. (but not much)
 
   I am still not clear as how to write
   help program which will process requests in parallel using perl ? Do
   you think squirm with 1500 child processes  works differently
   compared to the solution you are talking about ?
 
  Yes.
 
  Regards
  Henrik


Re: [squid-users] Ignoring query string from url

2008-10-25 Thread Nitesh Naik
Squid should give out the same object for different query strings.
Basically it should strip the query string and cache the object so
that the same object is delivered to the client browser for different
query strings.

I used squirm, which is better; I could see some performance
improvement, but I am getting errors on the tier 1 cache such as

TIMEOUT_FIRST_PARENT_MISS/10.0.0.169

I have configured a hierarchical cache and the following are the rules set on the tier 1 cache.

cache_peer 10.0.0.169 parent 80 3130  connect-timeout=2  round-robin
cache_peer 10.0.0.171 parent 80 3130  connect-timeout=2  round-robin


squirm.patterns contains the following rule to strip the query string:

regex  (.*)\?(.*)  \1

Regards
Nitesh

On Fri, Oct 24, 2008 at 6:57 PM, Matus UHLAR - fantomas
[EMAIL PROTECTED] wrote:

 On 24.10.08 13:40, nitesh naik wrote:
  Is there a way to ignore the query string in the URL so that objects
  are cached without the query string? I am using an external Perl
  program to strip the query string from the URL, which is slowing
  down response time. I have started 1500 processes of the redirect
  program.

  If I run squid without the redirect program to strip the query
  string, squid's response is much faster but all the requests go to
  the origin server.

 Pardon? Different query strings can lead to different responses. Do you want
 squid to still produce the same page of results when you google for
 different things?

 --
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!



--
Regards
Nitesh


[squid-users] Ignoring query string from url

2008-10-24 Thread nitesh naik
Hi All,

Is there a way to ignore the query string in the URL so that objects
are cached without the query string? I am using an external Perl
program to strip the query string from the URL, which is slowing down
response time. I have started 1500 processes of the redirect program.

If I run squid without the redirect program to strip the query string,
squid's response is much faster but all the requests go to the origin
server.

The Perl program to strip the query string is:

#!/usr/bin/perl -p
BEGIN { $|=1 }
s|(.*)\?(.*)|$1|;
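For illustration, the substitution above is equivalent to the following Python sketch. One subtlety worth knowing: the greedy `(.*)` means a URL containing more than one `?` is only trimmed at the last one.

```python
import re

def strip_query(url):
    # Same substitution as the Perl one-liner: s|(.*)\?(.*)|$1|
    # The greedy (.*) matches up to the LAST '?', so everything
    # before that is kept and only the final query part is dropped.
    return re.sub(r'(.*)\?(.*)', r'\1', url)
```

With the usual single-`?` URLs this behaves as intended, e.g. `http://s2.xyz.com/i0.js?z=5002` becomes `http://s2.xyz.com/i0.js`.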

Regards
Nitesh


[squid-users] Did Anyone used ESI with squid ?

2005-03-08 Thread Nitesh Naik


Hi,

I am having a problem configuring squid with ESI parsing. Has anyone
implemented it?


Regards
Nitesh Naik






Re: [squid-users] Did Anyone used ESI with squid ?

2005-03-08 Thread Nitesh Naik

Dear Michal,

Thanks for your reply.

Let me send you some more information about settings that I am using.

We are using squid-3.0-PRE3-20041220 for parsing ESI. Squid is
compiled with ESI support (--enable-esi), but for some reason the ESI
is not getting parsed and we get the following error in the browser.

The following error was encountered:

ESI Processing failed.
The ESI processor returned:
esiProcess: Parse error at line 2: junk after document element
This means that:
 The surrogate was not able to process the ESI template. Please report this
error to the webmaster

ESI example used:
<esi:assign name="date_string" value="$strftime($time(), '%a, %d %B %Y %H:%M:%S %Z')"/>
<esi:vars>
$(date_string)
</esi:vars>


 squid.conf settings

httpd_accel_surrogate_id unset-id
http_accel_surrogate_remote on
esi_parser libxml2
cache_peer xyz.com parent 80 0 no-query originserver

Apache configuration at the origin server:
   <Directory /esi/>
     Header add Surrogate-Control "max-age=60, content=\"ESI/1.0\""
     ExpiresActive On
     ExpiresByType text/html "now plus 1 minutes"
   </Directory>


When we hit the origin server, the Surrogate-Control header is added:

HTTP/1.1 200 OK
Date: Fri, 04 Mar 2005 13:30:03 GMT
Surrogate-Control: max-age=60, content="ESI/1.0"
P3P: CP="NOI DSP COR CURa ADMa DEVa PSDa OUR BUS UNI COM NAV OTC", policyref="/w3c/p3p.xml"
Last-Modified: Fri, 04 Mar 2005 12:50:06 GMT
ETag: "13c8a1-133-4228597e"
Accept-Ranges: bytes
Content-Length: 307
Connection: close
Content-Type: text/html

Regards
Nitesh Naik



- Original Message - 
From: Michal Pietrusinski [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Tuesday, March 08, 2005 5:26 PM
Subject: Re: [squid-users] Did Anyone used ESI with squid ?


 Dear Nitesh,

 I'm also trying to use ESI with squid - I installed Squid 3 (remember
 to use --enable-esi with configure) and pages are composed fine (I use
 esi:include), but templates and fragments are not cached.

 Remember that your pages must have the appropriate HTTP headers in
 order to make squid parse them as ESI templates.

 I hope you are luckier and will have your pages cached.

 Regards,
 Michal Pietrusinski



 Nitesh Naik wrote:
 
  Hi,
 
  I am having a problem configuring squid with ESI parsing. Has anyone
  implemented it?
 
 
  Regards
  Nitesh Naik
 
 
 
 




Re: [squid-users] Did Anyone used ESI with squid ?

2005-03-08 Thread Nitesh Naik


Michal,

Thanks for your suggestion.

I changed the parser to custom and used the following sample ESI code.

<esi:assign name="test_string" value="This is test"/>
<esi:vars> $(test_string) </esi:vars>

It's working perfectly fine. Does squid not support all ESI tags?

Regards
Nitesh Naik


- Original Message - 
From: Michal Pietrusinski [EMAIL PROTECTED]
To: Nitesh Naik [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Tuesday, March 08, 2005 5:57 PM
Subject: Re: [squid-users] Did Anyone used ESI with squid ?


 Dear Nitesh,

 It looks like the header is ok, since ESI processing started. I also
 had problems with the 'libxml2' parser - it was constantly reporting
 parse errors even on simple pages which were validated with the W3C
 validator.

 So finally I changed to the 'custom' and 'expat' parsers.

 I suggest you first try some really simple ESI constructs with the
 'custom' parser.

 Regards,
 Michal



 Nitesh Naik wrote:
  Dear Michal,
 
  Thanks for your reply.
 
  Let me send you some more information about settings that I am using.
 
  We are using squid squid-3.0-PRE3-20041220 for parsing ESI.  squid is
  compiled with esi ( --enable-esi ) but for some reason esi is not
getting
  parsed and we get following error in the browser.
 
  The following error was encountered:
 
  ESI Processing failed.
  The ESI processor returned:
  esiProcess: Parse error at line 2: junk after document element
  This means that:
   The surrogate was not able to process the ESI template. Please report
this
  error to the webmaster
 
  ESI example used:
  <esi:assign name="date_string" value="$strftime($time(), '%a, %d %B %Y %H:%M:%S %Z')"/>
  <esi:vars>
  $(date_string)
  </esi:vars>
 
 
   squid.conf settings
 
  httpd_accel_surrogate_id unset-id
  http_accel_surrogate_remote on
  esi_parser libxml2
  cache_peer xyz.com parent 80 0 no-query originserver
 
  Apache configuration at the origin server:
  <Directory /esi/>
    Header add Surrogate-Control "max-age=60, content=\"ESI/1.0\""
    ExpiresActive On
    ExpiresByType text/html "now plus 1 minutes"
  </Directory>
 
 
  When we hit origin server the Surrogate-Control is added to header
 
  HTTP/1.1 200 OK
  Date: Fri, 04 Mar 2005 13:30:03 GMT
  Surrogate-Control: max-age=60, content="ESI/1.0"
  P3P: CP="NOI DSP COR CURa ADMa DEVa PSDa OUR BUS UNI COM NAV OTC", policyref="/w3c/p3p.xml"
  Last-Modified: Fri, 04 Mar 2005 12:50:06 GMT
  ETag: "13c8a1-133-4228597e"
  Accept-Ranges: bytes
  Content-Length: 307
  Connection: close
  Content-Type: text/html
 
  Regards
  Nitesh Naik
 
 
 
  - Original Message - 
  From: Michal Pietrusinski [EMAIL PROTECTED]
  To: squid-users@squid-cache.org
  Sent: Tuesday, March 08, 2005 5:26 PM
  Subject: Re: [squid-users] Did Anyone used ESI with squid ?
 
 
 
 Dear Nitesh,
 
 I'm also trying to use ESI with squid - I installed Squid 3, (remember
 to use --enable-esi with configure) and pages are composed fine (I use
 esi:include), but templates and fragments are not cached.
 
 Remember that your pages must have appropriate HTTP headers in order to
 make squid parsing it as ESI templates.
 
 I hope you are more lucky and will have your pages cached.
 
 Regards,
 Michal Pietrusinski
 
 
 
 Nitesh Naik wrote:
 
 Hi,
 
 I am having problem with configuring squid with ESI parsing. Did anyone
 implemented it ?
 
 
 Regards
 Nitesh Naik
 
 
 
 
 
 




Re: [squid-users] Did Anyone used ESI with squid ?

2005-03-08 Thread Nitesh Naik
Michal,

Here is the ESI code that I used.

<table>
<tr>
<td colspan="2">
<esi:try>
<esi:attempt>
<esi:include src="http://www.yahoo.com/"/>
</esi:attempt>
<esi:except>
<!--esi This spot is reserved for your company's advertising. For more info
<a href="www.yahoo.com"> click here </a> -->
</esi:except>
</esi:try>
</td> </tr>
</table>
<esi:assign name="date_string" value="This is test"/>
<esi:vars> $(date_string) </esi:vars>


In the squid access log I get the following error.

1110289050.099  0 255.255.255.255 TCP_DENIED/403 0 GET
http://www.yahoo.com - NONE/- text/html

After enabling access to all in squid.conf, I am now getting the following error.

1110351386.705    541 255.255.255.255 TCP_MISS/403 0 GET
http://www.yahoo.com - ANY_PARENT/originserver text/html


Does <esi:vars>$set_redirect('http://www.yahoo.com')</esi:vars> work
for you?

Regards
Nitesh Naik




- Original Message - 
From: Michal Pietrusinski [EMAIL PROTECTED]
To: Nitesh Naik [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Tuesday, March 08, 2005 8:34 PM
Subject: Re: [squid-users] Did Anyone used ESI with squid ?


 Hi Nitesh,

 I don't know if squid supports all ESI tags. I try to use only the
 basic <esi:include> tag and have problems.

 Could you please check if <esi:include> works with your installation?

 If it works fine, you should see the page properly composed, and in
 the squid_installation/var/logs/access.log there should be entries
 showing that the template and included pages were taken from the
 cache.

 I would be very grateful if you could do that test.

 Regards,
 Michal

 Nitesh Naik wrote:
 
  Michal,
 
  Thanks for your suggestion.
 
  Changed parser to custom and used following sample ESI code.
 
  <esi:assign name="test_string" value="This is test"/>
  <esi:vars> $(test_string) </esi:vars>
 
  Its Working perfectly fine.  Is squid not supporting all ESI tags ?
 
  Regards
  Nitesh Naik
 
 
  - Original Message - 
  From: Michal Pietrusinski [EMAIL PROTECTED]
  To: Nitesh Naik [EMAIL PROTECTED]
  Cc: squid-users@squid-cache.org
  Sent: Tuesday, March 08, 2005 5:57 PM
  Subject: Re: [squid-users] Did Anyone used ESI with squid ?
 
 
 
 Dear Nitesh,
 
 It looks like the header is ok, since ESI processing started. I also had
   problems with parser 'libxml2' - it was constantly reporting some
 parsing errors even on simple pages which were validated with W3C
 
  validator.
 
 So finally I changed to 'custom' and 'expat' parsers.
 
 I suggest you first try some really simple ESI constructs with 'custom'
 parser.
 
 Regards,
 Michal
 
 
 
 Nitesh Naik wrote:
 
 Dear Michal,
 
 Thanks for your reply.
 
 Let me send you some more information about settings that I am using.
 
 We are using squid squid-3.0-PRE3-20041220 for parsing ESI.  squid is
 compiled with esi ( --enable-esi ) but for some reason esi is not
 
  getting
 
 parsed and we get following error in the browser.
 
 The following error was encountered:
 
 ESI Processing failed.
 The ESI processor returned:
 esiProcess: Parse error at line 2: junk after document element
 This means that:
  The surrogate was not able to process the ESI template. Please report
 
  this
 
 error to the webmaster
 
 ESI example used:
 <esi:assign name="date_string" value="$strftime($time(), '%a, %d %B %Y %H:%M:%S %Z')"/>
 <esi:vars>
 $(date_string)
 </esi:vars>
 
 
  squid.conf settings
 
 httpd_accel_surrogate_id unset-id
 http_accel_surrogate_remote on
 esi_parser libxml2
 cache_peer xyz.com parent 80 0 no-query originserver
 
 Apache configuration at the origin server:
 <Directory /esi/>
   Header add Surrogate-Control "max-age=60, content=\"ESI/1.0\""
   ExpiresActive On
   ExpiresByType text/html "now plus 1 minutes"
 </Directory>
 
 
 When we hit origin server the Surrogate-Control is added to header
 
 HTTP/1.1 200 OK
 Date: Fri, 04 Mar 2005 13:30:03 GMT
 Surrogate-Control: max-age=60, content="ESI/1.0"
 P3P: CP="NOI DSP COR CURa ADMa DEVa PSDa OUR BUS UNI COM NAV OTC", policyref="/w3c/p3p.xml"
 Last-Modified: Fri, 04 Mar 2005 12:50:06 GMT
 ETag: "13c8a1-133-4228597e"
 Accept-Ranges: bytes
 Content-Length: 307
 Connection: close
 Content-Type: text/html
 
 Regards
 Nitesh Naik
 
 
 
 - Original Message - 
 From: Michal Pietrusinski [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Tuesday, March 08, 2005 5:26 PM
 Subject: Re: [squid-users] Did Anyone used ESI with squid ?
 
 
 
 
 Dear Nitesh,
 
 I'm also trying to use ESI with squid - I installed Squid 3, (remember
 to use --enable-esi with configure) and pages are composed fine (I use
 esi:include), but templates and fragments are not cached.
 
 Remember that your pages must have appropriate HTTP headers in order
to
 make squid parsing it as ESI templates.
 
 I hope you are more lucky and will have your pages cached.
 
 Regards,
 Michal Pietrusinski
 
 
 
 Nitesh Naik wrote:
 
 
 Hi,
 
 I am having problem with configuring squid with ESI parsing. Did
anyone
 implemented it ?
 
 
 Regards
 Nitesh Naik
 
 
 
 
 
 




[squid-users] Problem with parsing ESI

2005-03-06 Thread Nitesh Naik

Hi,


We are using squid-3.0-PRE3-20041220 for parsing ESI. Squid is
compiled with ESI support (--enable-esi), but for some reason the ESI
is not getting parsed and we get the following error.
The following error was encountered:
ESI Processing failed.
The ESI processor returned:
esiProcess: Parse error at line 2: junk after document element
This means that:
 The surrogate was not able to process the ESI template. Please report this
error to the webmaster

ESI example used:
<esi:assign name="date_string" value="$strftime($time(), '%a, %d %B %Y %H:%M:%S %Z')"/>
<esi:vars>
$(date_string)
</esi:vars>
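A note on the error itself: "junk after document element" is the complaint an XML parser raises when a document has more than one top-level element, and the template above has two (the esi:assign followed by esi:vars). Since the libxml2-based parser expects well-formed XML, one plausible fix, sketched here as an assumption rather than a confirmed solution, is to wrap the ESI directives in a single root element:

```html
<html>
<body>
<esi:assign name="date_string" value="$strftime($time(), '%a, %d %B %Y %H:%M:%S %Z')"/>
<esi:vars>
$(date_string)
</esi:vars>
</body>
</html>
```

This matches the later observation in the thread that the 'custom' parser, which is less strict than libxml2, accepts the original fragment.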


 squid.conf settings

httpd_accel_surrogate_id unset-id
http_accel_surrogate_remote on
esi_parser libxml2
cache_peer xyz.com parent 80 0 no-query originserver

Apache configuration at the origin server:
   <Directory /esi/>
     Header add Surrogate-Control "max-age=60, content=\"ESI/1.0\""
     ExpiresActive On
     ExpiresByType text/html "now plus 1 minutes"
   </Directory>

When we hit the origin server, the Surrogate-Control header is added:

HTTP/1.1 200 OK
Date: Fri, 04 Mar 2005 13:30:03 GMT
Surrogate-Control: max-age=60, content="ESI/1.0"
P3P: CP="NOI DSP COR CURa ADMa DEVa PSDa OUR BUS UNI COM NAV OTC", policyref="/w3c/p3p.xml"
Last-Modified: Fri, 04 Mar 2005 12:50:06 GMT
ETag: "13c8a1-133-4228597e"
Accept-Ranges: bytes
Content-Length: 307
Connection: close
Content-Type: text/html


Can anyone tell us why the ESI is not getting parsed?

Regards

Nitesh Naik