Here's the situation.  We process transactions in one database and add them
to our main processing database.  To pull them in batches, we run the pull
process once for each merchant in our main system.  That gives us one big,
slow outer loop over the merchants.  Inside it, a second, short and fast
loop transfers each transaction for that merchant from an Oterro 3.1
connected table into the main DB, does some calculations, and then writes a
record for that batch.
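To make the shape of the thing concrete, here's a minimal sketch of the two-loop pull in Python.  All the names (`get_merchants`, `pull_transactions`, the dict fields) are hypothetical placeholders, not our actual code:

```python
# Sketch of the current two-loop pull process (all names are placeholders).

def get_merchants():
    # Placeholder: would query the main DB for every merchant in the system.
    return ["merchant_a", "merchant_b"]

def pull_transactions(merchant):
    # Placeholder: would read the Oterro-connected table for this merchant.
    return [{"merchant": merchant, "amount": 10.0}]

def run_pull():
    batches = []
    for merchant in get_merchants():              # the big, slow outer loop
        txns = pull_transactions(merchant)        # the short, fast inner transfer
        total = sum(t["amount"] for t in txns)    # the per-batch calculations
        batches.append({"merchant": merchant,
                        "count": len(txns),
                        "total": total})          # the record written per batch
    return batches
```

The point of the sketch is just that everything hangs off one serial loop: if the outer loop dies partway through, every merchant after that point is skipped.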

This worked well with 20 or 30 clients in the system.  Now we have around
100, and it sometimes fails.  On days with a lot of transactions (around
10,000), the merchants at the end of the list aren't getting processed.  We
have to re-run the process, which skips the merchants already pulled and
picks up where the big slow loop apparently died.  That's lucky for us.
From watching this happen, my only guess is that we're running out of some
resource, and my best guess is that the resource is time: the big loop times
out on a merchant with a lot of transactions and just breaks out of the
loop.
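One cheap way to test the "resource is time" guess before buying hardware would be to instrument the outer loop with per-merchant timing, so the log shows exactly where and how late the loop dies.  A sketch (assuming a hypothetical `pull_fn` callback; the time limit is just an illustrative parameter):

```python
import time

def timed_pull(merchants, pull_fn, limit_seconds=3600):
    """Run pull_fn for each merchant, logging elapsed time per merchant.

    Stops cleanly between merchants if the overall time limit is exceeded,
    and records that it did, instead of dying silently mid-list.
    """
    start = time.monotonic()
    log = []
    for merchant in merchants:
        t0 = time.monotonic()
        pull_fn(merchant)                          # the per-merchant pull
        log.append((merchant, time.monotonic() - t0))
        if time.monotonic() - start > limit_seconds:
            # Record the stop point so the re-run knows where to resume.
            log.append(("TIMEOUT", time.monotonic() - start))
            break
    return log
```

If the log consistently shows one slow merchant right before the cutoff, that would confirm the timeout theory without guessing.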

Any other guesses?

Here's my plan of attack if that still makes sense in a couple days.

1) I'm installing a new massive server to do this processing.  I need to
start using it anyway, but this gives me a reason to raise the priority.
It's a dual Xeon 3.0 GHz with 4 GB RAM, 80 GB SCSI HDDs in RAID 1, and
gigabit Ethernet to the DB server.  (I know, I had you at 4 GB of RAM.)

2) We'll rewrite this process to be more of a push.  The Oterro-connected
DB will run a procedure that writes a command file to import all the
transactions for each merchant, and that file will be run against the main
DB to import them.
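A rough sketch of what the push side might look like: each merchant's transactions get written out as a command file of INSERT statements, which the main DB then executes as a batch.  The table and column names here are made up for illustration, and a real version would need proper escaping or parameterized loading rather than string formatting:

```python
# Sketch of the push idea: generate a command file of INSERTs per merchant.
# Table/column names ("transactions", "merchant", "amount") are hypothetical.

def write_command_file(merchant, transactions, path):
    """Write one INSERT statement per transaction for this merchant."""
    with open(path, "w") as f:
        for t in transactions:
            f.write(
                "INSERT INTO transactions (merchant, amount) "
                f"VALUES ('{merchant}', {t['amount']});\n"
            )
    return path
```

The appeal of this shape is that each merchant's file is an independent unit of work: a failure on one merchant doesn't take down the merchants behind it in the list, the way the single serial loop does now.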

I'm not sure that these will solve the big problem though.

Any suggestions?

Ben Fogt
