On 06.10.2006, at 21:25, Stephen Deasey wrote:
But what I'm wondering is, why do you need to do this with proxy
slaves? It seems like they don't have the same state problem that a
series of database statements do.
It's possible to send multiple Tcl commands to a single proxy slave at
the same time:
ns_proxy eval {
    set f [do_foo ...]
    set b [do_bar ...]
    return [list $f $b]
}
(You can't do that with databases because they often forbid the
statement-separating semicolon when using prepared statements and/or
bind variables. But Tcl doesn't have that restriction.)
I do not need it. As you say, I can simply send all of them
in one script. That's trivial, of course.
Let me give you another reason in favour of handles: I'd like to
allocate the handle in order to have it available to me for a longer
time. I "reserve" it, so to speak. If I can't do that, then my code
is less predictable, in the sense that I might potentially wait a
long time before I get the chance to do anything, because all the
proxies might be busy doing something else. Understand?
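Roughly what I mean (just a sketch; the get/release subcommand names
here are only my assumption, nothing fixed):

set h [ns_proxy get mypool]     ;# reserve one slave now, wait here if necessary
ns_proxy eval $h {do_foo ...}   ;# no more waiting for a free slave later
ns_proxy eval $h {do_bar ...}   ;# same reserved slave, still mine
ns_proxy release $h             ;# give it back only when I am really done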
For example, any time budget you have for the statements as a whole
must be split between each. So, if you decide each call should take
only 30 secs, and the first call takes 1 sec but the second takes 31
secs, you will get a timeout error with almost half your total time
budget still to spend. But you would have been perfectly happy to wait
58 seconds in total, spread evenly between the two calls.
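Concretely (just a sketch; the -timeout option and the argument order
here are only an assumption about what the syntax might look like):

# Implicit handle management: each call gets its own 30 second budget.
ns_proxy eval -timeout 30 mypool {do_foo ...}   ;# takes 1 sec, fine
ns_proxy eval -timeout 30 mypool {do_bar ...}   ;# takes 31 sec: timeout error,
# even though only 32 of the 60 seconds of total budget have been spent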
Another problem might be the code you run in between the two eval
calls. In the case of explicit handle management or the withhandle
command, the handle is kept open even when it is not being used. If
the code that runs between the two eval calls takes some time --
perhaps because it blocks on IO, which may not be apparent to the
caller -- then other threads may be prevented from using a proxy slave
because the pool is empty. Handles which could have been returned to
the pool and used by other callers are sitting idle in threads busy
doing other things. This is a (temporary, mostly) resource leak.
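Something like this, for example (sketch only; get/release stand in
for whatever the explicit API turns out to be):

set h [ns_proxy get mypool]     ;# one slave taken out of the pool
ns_proxy eval $h {do_foo ...}
ns_sleep 10                     ;# stands in for code that blocks on IO;
# the slave sits idle and no other thread can use it during this time
ns_proxy eval $h {do_bar ...}
ns_proxy release $h             ;# only now is the slave back in the pool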
Yes, this is true. And this is obviously an argument against handles,
as it may lead to starvation.
So, apart from state (if this is needed), the withhandle command is a
speed optimisation. If you know for a fact that you're going to have
to make sequential calls to the proxy system, you can take the pool
lock just once. Otherwise, with implicit handle management, you take
and release the pool lock for each evaluation.
Regardless, the withhandle command allows implicit handle management
in the common case, and a nice, clear syntax for when you explicitly
want to manage the handle for performance or state reasons.
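In other words (a sketch only; neither the withhandle name nor the
argument order is settled):

# Implicit: the pool lock is taken and released for each call.
ns_proxy eval mypool {do_foo ...}
ns_proxy eval mypool {do_bar ...}

# Explicit scope: take the pool lock once, reuse one slave for both
# calls, and hand it back automatically when the script ends.
ns_proxy withhandle mypool h {
    ns_proxy eval $h {do_foo ...}
    ns_proxy eval $h {do_bar ...}
}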
I must think this over...
For b.
I do not care what we call it. We can call it ns_cocacola if you like.
The name contest is open...
It's an annoying thing to have to even bother about... But it is
important. If it's not clear, people will be confused; we've seen a
lot of that. Confused people take longer to get up to speed. People
like new hires, which costs real money.
So, what should we call the baby? Why not simply ns_exec?
Actually, we could build this into the core server and not as a
module... The ns_exec could exec the same executable with
different command-line arguments that would select another main
function instead of Ns_Main, for example. I'm just thinking out loud...
For c.
I'd rather stick to explicit pool naming. I'd leave this "default"
to the programmer. The programmer might do something like
(not 100% right, but it serves the illustration purpose):
ns_proxy config default
rename ::exec ::tcl::exec
proc ::exec args {
    # Run the command in a proxy slave; the slave process still has
    # its own real [exec], so build the call for the remote side.
    ns_proxy eval default [linsert $args 0 exec]
}
This covers overloading of the Tcl exec command. If you can convince
me that there are other benefits of having a default pool, I will
think about them. I just do not see any at this point.
A default only makes sense if it's built in. If it isn't, no one can
rely on it, and all code that uses proxy pools will have to create
its own.
OK.
Even with a built-in default, user code can certainly still create
its own proxy pool(s). Nothing is being taken away.
True.
If it was unlikely that the default would work for much/most code,
then it would be wrong to have one. That would be hiding something
from programmers that they should be paying attention to. It looks to
me though like a default pool would work for most code. But this is
just a convenience.
To manage resources efficiently you need the default pool. I can
imagine 'exec' using the default pool, and 3rd party modules doing
something like:
ns_proxy eval -pool [ns_configvalue ... pool default] {
    ....
}
Which will just work. The site administrator can then allocate
resources in a more fine-grained way, if needed, according to the
local situation.
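For example, something like this in the config file (the module and
parameter names are made up for illustration):

ns_section "ns/server/server1/module/nsimgproc"
ns_param   pool   imagepool   ;# route this module's proxy work to its own
# pool instead of the built-in default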
And if we put ns_exec into the server and make it like this:

ns_exec eval arg ?arg ...?
ns_exec eval -pool thepool arg ?arg ...?
ns_exec config thepool option ?value option value ...?

etc.? Or ns_slave?

ns_slave eval arg
ns_slave config thepool option ?value option value ...?

Hm... that's not bad. What do you think?
I think ns_slave would be "opportune". People will of course
immediately ask: where is ns_master? But I guess you can't dance
at every wedding...
Still, I will have to think about the "handle" issue for some time
(over the weekend) as I'm still not 100% convinced...
Would you integrate that into the core code, or would you leave it
as a module? Actually, I keep asking myself why we still stick to
modules when some functionality is obviously needed all the time
(nslog or nscp, for example)...