I have a little AOLserver data structure modeling problem to solve,
which I suspect may be obvious to those with more AOLserver
experience.

I have an AOLserver loadable module that is issuing API requests to a
remote server, to gather financial data.  Responses come back from the
remote server asynchronously, to another thread I have listening for
them.  Each time I make a request, I need to track some info about it,
like the request ID, the timestamp, and enough other various bits of
information to re-create the request if necessary.  My question is,
what's the best way to track this request info?

Most importantly, I need to track my own requests as they go out so
that when I get a response back from the remote server (possibly a few
minutes later), I'm able to match up the incoming responses with the
requests I sent - that's simple.  But I'll also need to be able to
write Tcl scripts that ask questions like, "how many requests are
outstanding?", or maybe, "how many outstanding requests for security
IBM are older than 2 minutes?", in order to help control my overall
application.  I don't really know what sorts of questions I'll want
to ask down the road, though, so I'm attempting to pick a
general-purpose way to store and access this info.

Now, all this tracking of request info stuff would be very
straightforward to model in RDBMS fashion.  Thing is, this is all
transient data.  I neither need nor particularly want to save it
permanently in an RDBMS.  If my AOLserver dies, all my outstanding
request info becomes junk anyway, so I don't have any real need to
write it to persistent storage.

So the question becomes, which in-memory AOLserver structures are most
appropriate for this task?

Some sort of little in-memory RDBMS inside AOLserver would be cool for
this, just to make the conceptual job of tracking and then using this
request info easier.  But no such tool exists for AOLserver, does it?

I'm kind of tempted to write this request info out to a persistent
RDBMS just so I don't have to deal with the housekeeping inside
AOLserver.  But that really doesn't seem like the right way to solve
this, to me.

At any given time, I'll probably never have more than 10,000 or so
requests outstanding (probably fewer).  Maximum throughput of requests
is rather harder to guess at, but I figure it's probably around 1-10
per second max, probably closer to 1 than 10.  But you never know, so
I'd rather not stick in artificial limits or dependencies (like on an
external persistent RDBMS when I don't really need one) - particularly
if I can avoid these dependencies/limitations without too much
trouble, or if they'd be nasty to correct later.

So, my initial thought is to create an ns_set (with the -persist flag)
for each of my requests, and stuff all the info about each request
into the ns_set.  I delete each ns_set once I no longer care about
that request.  And to track and search my ns_sets, I use an nsv where
the value is the ns_set id with the detailed info for that request,
and the key is a glommed together string of all the info I might want
to search on, so I can use something like [nsv_array names $array
$pattern] to conveniently suck out lists of all the ns_set ids I want
to look at.  (Pretty much the way the "simple database" described on
the Tcl'ers Wiki works - http://mini.net/tcl/1598.html)
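In rough code, the scheme I have in mind might look something like
this.  (Just a sketch: the proc names, the nsv array name
"outstanding", and the comma-glommed key format are things I'm making
up for illustration; the ns_set, nsv_*, and ns_time commands are the
stock AOLserver ones.)

```tcl
# Record an outgoing request: one persistent ns_set holds the full
# details, and one nsv entry makes it searchable.
proc req_track {req_id security timestamp args} {
    set setid [ns_set create -persist "req$req_id"]
    ns_set put $setid req_id    $req_id
    ns_set put $setid security  $security
    ns_set put $setid timestamp $timestamp
    foreach {key val} $args { ns_set put $setid $key $val }

    # Glom the searchable fields into the key, e.g. "IBM,1035402000,42",
    # with the ns_set id as the value.
    nsv_set outstanding "$security,$timestamp,$req_id" $setid
}

# A response came back: drop the request's tracking info.
proc req_complete {security timestamp req_id} {
    set key "$security,$timestamp,$req_id"
    if {[nsv_exists outstanding $key]} {
        set setid [nsv_get outstanding $key]
        nsv_unset outstanding $key
        ns_set free $setid
    }
}

# Example query: how many outstanding requests for $security are
# older than $seconds?  Glob on the key, then split out the fields.
proc req_count_old {security seconds} {
    set cutoff [expr {[ns_time] - $seconds}]
    set count 0
    foreach key [nsv_array names outstanding "$security,*"] {
        # timestamp is the second comma-separated field in the key
        set ts [lindex [split $key ","] 1]
        if {$ts < $cutoff} { incr count }
    }
    return $count
}
```

The glob pattern on nsv_array names only helps for fields at the
front of the glommed key, of course; anything else means splitting
every key and filtering in Tcl, as req_count_old does for the
timestamp.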

Thoughts or advice on this?

--
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com
