Hi! I need to add some features to the existing ns_log facility.
We have tons of code emitting ns_log messages into the server logfile. But since everything ends up in one single file, it is sometimes extremely difficult to analyze: you may end up staring at 100'000s of lines.

Our application generates jobs. Each job uses modules, and modules log things (i.e. call ns_log). What we'd need is a way to "teach" the ns_log command that it is emitting log entries in the context of job X, and perhaps route the log data to a specific log file. This way we could group the log entries that belong to one job into a per-job file, which is far easier to analyze and debug, of course.

What I have in mind is a kind of push/pop mechanism where I could do something like:

    ns_logctl push <mylogprocedure1>
    ...
    ns_logctl push <mylogprocedure2>
    ...
    ns_logctl pop
    ...
    ns_logctl pop

This would instruct the ns_log facility to invoke the registered callbacks one by one. Each callback will do something, and everything will eventually end up at the low-level common denominator, the current log file. Every callback could signal one of two states: continue or break. Depending on that, the next registered callback is called (continue) or not (break). The callbacks themselves would be just Tcl code, prepared by the caller and "ns_logctl push"-ed onto the stack of registered handlers (a rough sketch follows in the P.S. below).

In order to do that I will have to modify the ns_logctl command and change the internal operation of ns_log to walk the list of registered handlers in LIFO fashion, invoking each one in succession.

Are there any better ideas how to achieve this? If not, is everybody OK with the proposed change?

Cheers,
Zoran
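P.S. To make the idea a bit more concrete, here is a rough sketch of what a pushed handler could look like under the proposed scheme. Note that everything here is an assumption for illustration, not existing ns_log/ns_logctl behaviour: the callback signature (severity plus message), the convention of returning the string "continue" or "break", the jobLogFilter name and the ::currentJob variable are all made up, and plain Tcl file I/O stands in for whatever the real handler would do.

    # Hypothetical handler for the proposed "ns_logctl push" stack.
    # Assumed signature: the dispatcher passes the severity and the
    # message; the return value ("continue" or "break") tells it
    # whether to call the next registered handler.
    proc jobLogFilter {severity message} {
        if {![info exists ::currentJob]} {
            # No job context: let the next handler (and finally the
            # common server logfile) see the entry.
            return continue
        }
        # Route the entry to a per-job logfile.
        set fd [open "/tmp/job-$::currentJob.log" a]
        puts $fd "[clock format [clock seconds]] $severity: $message"
        close $fd
        # Stop here so the entry does not also land in the server log.
        return break
    }

    # Proposed usage around a piece of job code:
    ns_logctl push jobLogFilter
    ns_log notice "processing job step ..."
    ns_logctl pop

Whether the continue/break signal is a return value, a return code, or something else is of course open; the sketch just shows the intended flow.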