Re: [PATCH] add support for write I/O timeout
Tsantilas Christos wrote:

This patch adds support for write timeouts. The patch also exists for the squid3.1 branch. The development was sponsored by the Measurement Factory.

Description: The write I/O timeout should trigger if Squid has data to write but the connection is not ready to accept more data for the specified time. If the write times out, the Comm caller's write handler is called with an ETIMEDOUT COMM_ERROR error. Comm may process a single write request in several chunks, without the caller's knowledge. The waiting time is reset internally by Comm after each chunk is written. The default timeout value is 15 minutes.

The implementation requires no changes in Comm callers, but it adds write timeouts to all connections, including connections that have context-specific write timeouts. I think that is fine.

- Christos

+1. ... and applied.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15
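To illustrate the mechanism described above (a minimal sketch only, not the actual Comm code from the patch — names are hypothetical): the essential point is that the deadline tracks write *progress*, not the start of the whole request, so a slow-but-moving write never times out.

```cpp
#include <ctime>

// Hypothetical sketch of a per-connection write deadline. The clock is
// reset each time a chunk is written; the timeout fires only when the
// peer accepts no data at all for writeTimeout seconds, at which point
// the caller's write handler would be invoked with COMM_ERROR and
// errno set to ETIMEDOUT.
struct WriteState {
    time_t lastProgress;  // time of the last successfully written chunk

    explicit WriteState(time_t now) : lastProgress(now) {}

    // Called by the I/O loop after each chunk goes out: reset the clock.
    void noteProgress(time_t now) { lastProgress = now; }

    // True when no data has been accepted for 15 minutes (the default).
    bool timedOut(time_t now) const {
        return now - lastProgress >= 15 * 60;
    }
};
```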
Re: Features/SmpScale
Sachin Malave wrote:
> Dear Amos,
> Thank you for updating wiki.squid-cache.org/Features/SmpScale; now the ideas are clearer. I have already started working on the architecture that is proposed...

Thank you for your thanks. Which part and which layer are you working towards now? (The earlier work you have already done was towards the mid-layer.) If it was the top layer, there are a few speed bumps we've already identified and not yet documented.

Missing pieces (most to least difficult):

* Mechanism to replace SquidConf:cache_dir (proposal: cache_disk /media/path SIZE) - a set of disk entries. The master instance loads and portions off each child instance with one or more disks. The child instance determines what types of storage (old SquidConf:cache_dir types) to store there and what sizes will fit within the disk maximum given.

* Mechanism to handle many SquidConf:http_port and other listening port directives - the master instance doing accept() and passing to children? Or all children listening on all ports on a first-accept-wins basis?

* Mechanism needed to replace the existing method of the master instance monitoring for a dead child instance. Perhaps listening-socket based?

* Mechanism to split cache_mem between all child instances. Or do we leave this as-is and call it a per-instance allocation?

* Mechanism to keep unique_hostname unique per instance? Do we even need to? Is it safer to consider the whole bunch of instances one unique host and leave the existing code preventing complicated loop paths between instances?

Amos
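For illustration, the proposed cache_disk replacement for cache_dir might look like this in squid.conf (hypothetical syntax sketched from the proposal above; the keyword, paths, and size units are not decided):

```
# Master instance: one entry per physical disk the master may hand out.
# (Hypothetical directive; paths and sizes-in-MB are illustrative only.)
cache_disk /media/disk1 51200
cache_disk /media/disk2 51200
```

Each child instance would then decide which storage types (the old cache_dir types) to place on the disks it receives, and what sizes fit within the maximum the master handed it.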
Re: Features/SmpScale
On Mon, Dec 21, 2009 at 6:21 PM, Amos Jeffries squ...@treenet.co.nz wrote:
> Which part and which layer are you working towards now? (The earlier work you have already done was towards the mid-layer.) If it was the top layer, there are a few speed bumps we've already identified and not yet documented:

YES, IT IS.

> Missing pieces (most to least difficult):
> * Mechanism to replace SquidConf:cache_dir ...
> * Mechanism to handle many SquidConf:http_port and other listening port directives ...
> * Mechanism to replace the existing method of the master instance monitoring for a dead child instance ...
> * Mechanism to split cache_mem between all child instances ...
> * Mechanism to keep unique_hostname unique per instance ...

I THINK YOU HAVE GIVEN ME ENOUGH MATERIAL FOR THIS ACADEMIC SEMESTER :)

Before everything I will concentrate on all the above-mentioned points... I need some time for analysis. Will be back soon with solutions...

--
Mr. S. H. Malave
Computer Science Engineering Department,
Walchand College of Engineering, Sangli.
sachinmal...@wce.org.in
Re: Features/SmpScale
Sachin Malave wrote:
> I THINK YOU HAVE GIVEN ME ENOUGH MATERIAL FOR THIS ACADEMIC SEMESTER :)
> Before everything I will concentrate on all the above-mentioned points... I need some time for analysis. Will be back soon with solutions...

:) hehe. Just pick one that appears easy. Or you may find something else entirely we have not thought of.

Any one of those bumps may take a semester or several to get done and working. It will be around a year or two before the top SMP layer is fully working.

Amos
Build failed in Hudson: 3.0-i386-opensolaris #5
See http://build.squid-cache.org/job/3.0-i386-opensolaris/5/changes

Changes:

[Amos Jeffries squ...@treenet.co.nz] Bug 2395: FTP errors not displayed
  * Fix PUT and other errors hanging
  * Fix assertion entry->store_status == STORE_PENDING caused by FTP
  * Several variable-shadowing cases resolved for the fix.

[Amos Jeffries squ...@treenet.co.nz] Author: Jochen Voss v...@seehuhn.de
  Fix failure to reset MD5 context buffer

[Amos Jeffries squ...@treenet.co.nz] Bug 2830: clarify where NULL byte is in headers.
  Debug printing used to naturally stop string output at the null byte. This should show the first segment of headers up to the NULL and the segment of headers after it, so that it is clear to the admin that there are more headers _after_ the portion that used to be logged.

[Automatic source maintenance squid...@squid-cache.org] Bootstrapped

--
[...truncated 890 lines...]
mv -f $depbase.Tpo $depbase.Po
rm -f libdigest.a
/usr/gnu/bin/ar cru libdigest.a digest/auth_digest.o
ranlib libdigest.a
depbase=`echo negotiate/auth_negotiate.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -I. -I../../.././test-suite/../src/auth -I../../include -I. -I../../include -I../../.././test-suite/../include -I../../.././test-suite/../src -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT negotiate/auth_negotiate.o -MD -MP -MF $depbase.Tpo -c -o negotiate/auth_negotiate.o ../../.././test-suite/../src/auth/negotiate/auth_negotiate.cc \
mv -f $depbase.Tpo $depbase.Po
depbase=`echo negotiate/negotiateScheme.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -I. -I../../.././test-suite/../src/auth -I../../include -I.
-I../../include -I../../.././test-suite/../include -I../../.././test-suite/../src -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT negotiate/negotiateScheme.o -MD -MP -MF $depbase.Tpo -c -o negotiate/negotiateScheme.o ../../.././test-suite/../src/auth/negotiate/negotiateScheme.cc \
mv -f $depbase.Tpo $depbase.Po
rm -f libnegotiate.a
/usr/gnu/bin/ar cru libnegotiate.a negotiate/auth_negotiate.o negotiate/negotiateScheme.o
ranlib libnegotiate.a
make[3]: Leaving directory `http://build.squid-cache.org/job/3.0-i386-opensolaris/ws/btlayer-00-default/src/auth'
make[3]: Entering directory `http://build.squid-cache.org/job/3.0-i386-opensolaris/ws/btlayer-00-default/src'
depbase=`echo DiskIO/Blocking/BlockingFile.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/usr/local/squid/etc/squid.conf\" -I. -I../.././test-suite/../src -I../include -I. -I../.././test-suite/../src -I../include -I../.././test-suite/../include -I../.././test-suite/../lib/libTrie/include -I/usr/local/include -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT DiskIO/Blocking/BlockingFile.o -MD -MP -MF $depbase.Tpo -c -o DiskIO/Blocking/BlockingFile.o ../.././test-suite/../src/DiskIO/Blocking/BlockingFile.cc \
mv -f $depbase.Tpo $depbase.Po
depbase=`echo DiskIO/Blocking/BlockingIOStrategy.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/usr/local/squid/etc/squid.conf\" -I. -I../.././test-suite/../src -I../include -I.
-I../.././test-suite/../src -I../include -I../.././test-suite/../include -I../.././test-suite/../lib/libTrie/include -I/usr/local/include -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT DiskIO/Blocking/BlockingIOStrategy.o -MD -MP -MF $depbase.Tpo -c -o DiskIO/Blocking/BlockingIOStrategy.o ../.././test-suite/../src/DiskIO/Blocking/BlockingIOStrategy.cc \
mv -f $depbase.Tpo $depbase.Po
rm -f libBlocking.a
/usr/gnu/bin/ar cru libBlocking.a DiskIO/Blocking/BlockingFile.o DiskIO/Blocking/BlockingIOStrategy.o
ranlib libBlocking.a
depbase=`echo unlinkd_daemon.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/usr/local/squid/etc/squid.conf\" -I. -I../.././test-suite/../src -I../include -I. -I../.././test-suite/../src -I../include -I../.././test-suite/../include -I../.././test-suite/../lib/libTrie/include -I/usr/local/include -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT unlinkd_daemon.o -MD -MP -MF $depbase.Tpo -c -o unlinkd_daemon.o ../.././test-suite/../src/unlinkd_daemon.cc \
mv -f $depbase.Tpo $depbase.Po
depbase=`echo SquidNew.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`;\
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/usr/local/squid/etc/squid.conf\" -I. -I../.././test-suite/../src -I../include -I. -I../.././test-suite/../src -I../include -I../.././test-suite/../include -I../.././test-suite/../lib/libTrie/include -I/usr/local/include -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT SquidNew.o -MD
[RFC] Micro Benchmarking
I've been giving some thought to how we can do this in an automated way. If anyone has a better way, please mention it...

So far what I'm thinking is to leverage cppunit to build benchmark units which output some stats about how long it took to run some operation/function N times. Starting with a basic one, to be designed, which can be used as a baseline for the CPU/machine doing the run.

The scope of these micro operations would be individual function/method call sequences with no async sub-steps. Such as measuring and ranking the raw speed of the fast ACL types, clientDB lookups, memPool allocate/free/garbage-collection, etc.

The tricky bit appears to be recovering the benchmark output and handling it after a run.

Amos
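A benchmark unit of the kind proposed above could be as simple as timing an operation run N times and reporting elapsed seconds, so results can be normalized against a baseline run on the same machine. This is an illustrative sketch only (the helper name is hypothetical, not an existing Squid or cppunit API):

```cpp
#include <chrono>

// Run the given operation `iterations` times and return the total
// elapsed wall-clock time in seconds. steady_clock is used so the
// measurement is immune to system clock adjustments mid-run.
template <class Op>
double benchmarkSeconds(Op op, long iterations)
{
    const auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < iterations; ++i)
        op();  // the micro operation under test
    const std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count();
}
```

A cppunit test method could wrap a call like this around, say, a fast ACL match or a memPool allocate/free pair and print the result; the open question in this thread, collecting that output after a run, remains either way.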
Re: [RFC] Micro Benchmarking
On Tue, 2009-12-22 at 16:05 +1300, Amos Jeffries wrote:
> The tricky bit appears to be recovering the benchmark output and handling it after a run.

If you make each thing you want to get a measurement on a separate test, you could trivially install libcppunit-subunit-dev and use subunit's timing and reporting mechanisms; saving the stream provides a simple persistence mechanism.

-Rob
Hudson build is back to normal: 3.0-i386-opensolaris #6
See http://build.squid-cache.org/job/3.0-i386-opensolaris/6/changes
Re: [RFC] Micro Benchmarking
Robert Collins wrote:
> If you make each thing you want to get a measurement on a separate test, you could trivially install libcppunit-subunit-dev and use subunit's timing and reporting mechanisms; saving the stream provides a simple persistence mechanism.

Yes, thinking of subunit. Could you re-base your LP branch for that update please? I keep getting this:

## bzr co lp:~lifeless/squid/subunit
bzr: ERROR: Not a branch: bzr+ssh://bazaar.launchpad.net/~squid/squid/squid3-trunk/.

I think kinkie's muxer branch may be encountering the same problem, but on push.

Amos
Build failed in Hudson: 3.0-i386-opensolaris #7
See http://build.squid-cache.org/job/3.0-i386-opensolaris/7/

--
Started by upstream project 3.0-amd64-CentOS-5.3 build number 36
Building remotely on osol-x86
http://bzr.squid-cache.org/bzr/squid3/branches/SQUID_3_0 is permanently redirected to http://bzr.squid-cache.org/bzr/squid3/branches/SQUID_3_0/
bzr: ERROR: socket.error: (131, 'Connection reset by peer')

Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/bzrlib/commands.py", line 842, in exception_to_return_code
    return the_callable(*args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/commands.py", line 1037, in run_bzr
    ret = run(*run_argv)
  File "/usr/lib/python2.4/site-packages/bzrlib/commands.py", line 654, in run_argv_aliases
    return self.run(**all_cmd_args)
  File "/usr/lib/python2.4/site-packages/bzrlib/builtins.py", line 1017, in run
    possible_transports=possible_transports, local=local)
  File "/usr/lib/python2.4/site-packages/bzrlib/decorators.py", line 192, in write_locked
    result = unbound(self, *args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/workingtree.py", line 1611, in pull
    local=local)
  File "/usr/lib/python2.4/site-packages/bzrlib/decorators.py", line 192, in write_locked
    result = unbound(self, *args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/branch.py", line 948, in pull
    possible_transports=possible_transports, *args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/branch.py", line 3194, in pull
    _override_hook_target=_override_hook_target)
  File "/usr/lib/python2.4/site-packages/bzrlib/branch.py", line 3071, in pull
    overwrite=overwrite, graph=graph)
  File "/usr/lib/python2.4/site-packages/bzrlib/decorators.py", line 192, in write_locked
    result = unbound(self, *args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/branch.py", line 896, in update_revisions
    overwrite, graph)
  File "/usr/lib/python2.4/site-packages/bzrlib/branch.py", line 3014, in update_revisions
    self.target.fetch(self.source, stop_revision)
  File "/usr/lib/python2.4/site-packages/bzrlib/decorators.py", line 192, in write_locked
    result = unbound(self, *args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/branch.py", line 579, in fetch
    pb=pb)
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 1695, in fetch
    find_ghosts=find_ghosts, fetch_spec=fetch_spec)
  File "/usr/lib/python2.4/site-packages/bzrlib/decorators.py", line 192, in write_locked
    result = unbound(self, *args, **kwargs)
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 3413, in fetch
    pb=pb, find_ghosts=find_ghosts)
  File "/usr/lib/python2.4/site-packages/bzrlib/fetch.py", line 81, in __init__
    self.__fetch()
  File "/usr/lib/python2.4/site-packages/bzrlib/fetch.py", line 107, in __fetch
    self._fetch_everything_for_search(search)
  File "/usr/lib/python2.4/site-packages/bzrlib/fetch.py", line 134, in _fetch_everything_for_search
    resume_tokens, missing_keys = self.sink.insert_stream(
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 4238, in insert_stream
    return self._locked_insert_stream(stream, src_format, is_resume)
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 4267, in _locked_insert_stream
    for substream_type, substream in stream:
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 4448, in get_stream
    for knit_kind, file_id, revisions in data_to_fetch:
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 2312, in item_keys_introduced_by
    for result in self._find_file_keys_to_fetch(revision_ids, _files_pb):
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 2327, in _find_file_keys_to_fetch
    file_ids = self.fileids_altered_by_revision_ids(revision_ids, inv_w)
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 2163, in fileids_altered_by_revision_ids
    selected_keys)
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 2107, in _find_file_ids_from_xml_inventory_lines
    seen = set(self._find_text_key_references_from_xml_inventory_lines(
  File "/usr/lib/python2.4/site-packages/bzrlib/repository.py", line 2031, in _find_text_key_references_from_xml_inventory_lines
    for line, line_key in line_iterator:
  File "/usr/lib/python2.4/site-packages/bzrlib/knit.py", line 1765, in iter_lines_added_or_present_in_keys
    build_details = self._index.get_build_details(keys)
  File "/usr/lib/python2.4/site-packages/bzrlib/knit.py", line 3049, in get_build_details
    for entry in entries:
  File "/usr/lib/python2.4/site-packages/bzrlib/knit.py", line 3077, in _get_entries
    for node in self._graph_index.iter_entries(keys):
  File "/usr/lib/python2.4/site-packages/bzrlib/index.py", line 1290, in iter_entries
    for node in index.iter_entries(keys):
  File "/usr/lib/python2.4/site-packages/bzrlib/index.py", line 640, in iter_entries
    return