Eliezer,
Thanks for all this advice. Your arguments are indeed valid: opening a
socket, sending data, receiving data and closing the socket costs far
more than direct access to a regex or an in-memory entry, especially
when the result has already been computed.
What surprises us the most is that we produced a threaded Python
helper, whose code I include below.
The PHP code is like the example you mentioned (no threads, just a loop
that prints OK).
The results: after about 6k requests the Python helper makes squid
freeze and no surfing is possible, whereas with the PHP code we reach
10k requests and squid is happy.
We really do not understand why Python is so slow.
Here is the Python code using threads:
#!/usr/bin/env python
import os
import sys
import signal
import locale
import select
import threading


class ClientThread(object):
    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, line_to_send):
        try:
            sys.stdout.write(line_to_send)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # EPIPE: squid closed the pipe
                pass
        except Exception:
            # Ignore any other write error
            pass

    def run(self):
        while not self._exiting:
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                if len(line) == 0:
                    # EOF: squid closed our stdin, time to exit
                    self._exiting = True
                    break
                if line[-1] == '\n':
                    line = line[:-1]
                channel = None
                options = line.split()
                try:
                    if options[0].isdigit():
                        channel = options.pop(0)
                except IndexError:
                    self.stdout("0 OK first=ERROR\n")
                    continue
                # Processing here
                try:
                    self.stdout("%s OK\n" % channel)
                except Exception:
                    self.stdout("%s ERROR first=ERROR\n" % channel)


class Main(object):
    def __init__(self):
        self._threads = []
        self._exiting = False
        self._reload = False
        self._config = ""
        for sig, action in (
                (signal.SIGINT, self.shutdown),
                (signal.SIGQUIT, self.shutdown),
                (signal.SIGTERM, self.shutdown),
                (signal.SIGHUP, lambda s, f: setattr(self, '_reload', True)),
                (signal.SIGPIPE, signal.SIG_IGN),
        ):
            try:
                signal.signal(sig, action)
            except AttributeError:
                # Signal not available on this platform
                pass

    def shutdown(self, sig=None, frame=None):
        self._exiting = True
        self.stop_threads()

    def start_threads(self):
        s_thread = ClientThread()
        t = threading.Thread(target=s_thread.run)
        t.start()
        self._threads.append((s_thread, t))

    def stop_threads(self):
        for p, t in self._threads:
            p.exit()
        for p, t in self._threads:
            t.join(timeout=1.0)
        self._threads = []

    def run(self):
        """main loop"""
        ret = 0
        self.start_threads()
        return ret


if __name__ == '__main__':
    # set C locale
    locale.setlocale(locale.LC_ALL, 'C')
    os.environ['LANG'] = 'C'
    ret = 0
    try:
        main = Main()
        ret = main.run()
    except SystemExit:
        pass
    except KeyboardInterrupt:
        ret = 4
    sys.exit(ret)
On 04/02/2022 at 07:06, Eliezer Croitoru wrote:
And about the cache of each helper: the memory cost of a cache in a
single helper is not much compared to some network access.
Again, it's possible to test and verify this on a loaded system to get
results. The delay itself can be seen from the squid side in the cache
manager statistics.
You can also try comparing with the following Ruby helper:
https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper
About a shared "base" which allows helpers to avoid recomputing the
query: it's a good argument, however it depends on the cost of pulling
from the cache compared to calculating the answer.
A very simple string comparison or regex match would probably be faster
than reaching a shared storage in many cases.
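As a rough way to put numbers on that, one can time an in-process regex match against a dict lookup (the cheapest possible "cache hit"); any cross-process or network round-trip would add orders of magnitude on top. A hypothetical micro-benchmark sketch:

```python
# Rough micro-benchmark sketch: in-process regex match vs. dict lookup.
# Both are typically sub-microsecond per call; a socket round-trip to an
# external store is not. Pattern and key are illustrative.
import re
import timeit

pattern = re.compile(r"^(www\.)?example\.(com|net|org)$")
cache = {"www.example.com": True}

regex_time = timeit.timeit(lambda: pattern.match("www.example.com"),
                           number=100_000)
cache_time = timeit.timeit(lambda: cache.get("www.example.com"),
                           number=100_000)
print("regex: %.4fs  dict: %.4fs per 100k calls" % (regex_time, cache_time))
```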
Also take into account the "concurrency" support on the helper side.
A helper that supports parallel processing of requests/lines can do
better than many single helpers in more than one use case.
In any case I would suggest enabling request concurrency on the squid
side, since the STDIN buffer will emulate some level of concurrency by
itself and will allow squid to keep moving forward faster.
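On the squid side, channel-ID concurrency is enabled per helper directive; a sketch of what that might look like for a URL rewriter (the numbers are illustrative, not tuned):

```
# Illustrative squid.conf fragment: up to 20 helper processes, each fed
# up to 10 concurrent channel-tagged requests over its stdin.
url_rewrite_children 20 startup=5 idle=1 concurrency=10
```

With concurrency greater than 1, squid prefixes each request line with a channel number that the helper must echo back in its reply, which is exactly what the Python helper above already does.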
Just to mention that SquidGuard has used a per-helper cache for a very
long time, i.e. every single SquidGuard helper has its own copy of the
whole configuration and database files in memory.
And again, if you have any option to implement a server-service model,
where the helpers contact this main service, you will be able to
implement a much faster internal in-memory cache compared to a
redis/memcache/other external daemon (this needs to be tested).
A good example of this is ufdbguard, which has helpers that are clients
of the main service; the service does the whole heavy lifting and also
holds one copy of the DB.
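A toy sketch of that client/daemon split, with an invented line protocol (one key in, one verdict out) over a local TCP socket; the real ufdbguard protocol is of course different:

```python
# Toy sketch of the "thin helper, fat daemon" model: the daemon holds the
# single in-memory DB, helpers just relay lines. The protocol is invented.
import socket
import threading

def daemon(listener):
    # Stand-in for the main service that holds one shared copy of the DB.
    conn, _ = listener.accept()
    with conn:
        f = conn.makefile("rw")
        while True:
            line = f.readline()
            if not line:
                break
            key = line.strip()
            f.write(("ERR" if key.startswith("blocked.") else "OK") + "\n")
            f.flush()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=daemon, args=(listener,), daemon=True).start()

# The helper side: forward one query and read back the verdict.
client = socket.create_connection(listener.getsockname())
cf = client.makefile("rw")
cf.write("blocked.example.com\n")
cf.flush()
answer = cf.readline().strip()
client.close()
print(answer)
```

The helper processes stay tiny (a socket and a read loop), while the memory-heavy state lives exactly once in the daemon.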
I have implemented SquidBlocker this way and have seen that it
outperforms any other service I have tried until now.
_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users