We have a rather high transaction volume, especially around the market 
open/close times, so yes, we did have duplicates before and sometimes had to 
retry multiple times [bumping the sequence number] to ensure uniqueness.
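The retry scheme described above can be sketched roughly as follows. This is an illustration only, not the OP's actual code: `KeyedStore`, `DuplicateKeyError`, and the tuple key layout are all hypothetical stand-ins for a keyed dataset (such as a VSAM KSDS) that rejects duplicate keys.

```python
class DuplicateKeyError(Exception):
    pass

class KeyedStore:
    """Toy stand-in for a keyed dataset that rejects duplicate keys.
    Hypothetical; a real VSAM KSDS would signal a duplicate via a
    return/feedback code rather than a Python exception."""
    def __init__(self):
        self._records = {}

    def insert(self, key, record):
        if key in self._records:
            raise DuplicateKeyError(key)
        self._records[key] = record

def insert_with_retry(store, timestamp, record, max_tries=16):
    """Build the key from a timestamp plus a sequence number, and
    bump the sequence number and retry whenever the insert hits a
    duplicate -- the pattern described in the paragraph above."""
    for seq in range(max_tries):
        try:
            key = (timestamp, seq)
            store.insert(key, record)
            return key
        except DuplicateKeyError:
            continue  # bumped sequence number on the next iteration
    raise RuntimeError("no unique key found after %d tries" % max_tries)
```

Two records arriving with the same timestamp then get sequence numbers 0 and 1, so the second insert succeeds on its first retry.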

I can't go into specifics, but the whole idea was to spread the input queues, 
containing a dynamic mix of transactions, into logical sub-queues related to 
their "types", all of which are still processed in parallel.  However, 
within a given "type" each distinct transaction "originator" has to be 
processed sequentially, so that a request to "cancel" an order is delayed until 
the original order has actually been seen and processed.
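One common way to get that ordering property is to pin each originator to a fixed worker within its type, e.g. by hashing the originator ID, so that all of one originator's transactions stay in arrival order while different originators run in parallel. A minimal sketch, with all names (`dispatch`, the dict-shaped transactions, `workers_per_type`) being assumptions rather than the OP's design:

```python
from collections import defaultdict, deque

def dispatch(transactions, workers_per_type=4):
    """Split a mixed input stream into per-type sub-queues, then
    assign each originator to one fixed worker within its type.

    Because an originator always hashes to the same worker queue,
    its transactions are consumed in arrival order -- e.g. a
    "cancel" can never overtake the order it cancels -- while
    unrelated originators are still processed in parallel."""
    # queues[(type, worker)] -> FIFO of transactions for that worker
    queues = defaultdict(deque)
    for txn in transactions:
        worker = hash(txn["originator"]) % workers_per_type
        queues[(txn["type"], worker)].append(txn)
    return queues
```

Each `(type, worker)` queue would then be drained by its own sequential consumer; parallelism comes from having many such queues, not from splitting one originator's work.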

-Victor-

On 2015-11-17, at 10:04, Tony Harminc wrote:
> 
> It seems to me that duplicates are extremely unlikely, given the
> relative speed of CPUs and I/O devices these days. Sure, I realize
> that each record doesn't imply an I/O operation; there will be
> blocking going on. But how long does it take to write out a VSAM
> block? Less than a microsecond? The clock values from STCKF on any
> modern machine surely have much greater precision than that.
>  
Birthday Paradox, given that the OP has multiple threads.
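The birthday-paradox point is easy to quantify with the standard approximation P(collision) ≈ 1 - exp(-n(n-1)/2d) for n keys drawn from d possible values; the function below is a generic illustration, not a model of any particular STCKF tick width or thread count:

```python
import math

def collision_prob(n, d):
    """Approximate probability that n independently drawn keys
    from d equally likely values contain at least one duplicate
    (birthday-paradox approximation)."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * d))
```

The classic calibration point is collision_prob(23, 365) ≈ 0.5: even a tiny fraction of the key space in use makes a duplicate likely, which is why multiple threads drawing clock-based keys can collide far sooner than the raw clock precision suggests.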

> And in any case, doesn't it make sense to let VSAM catch the duplicate
> key and obtain a new one only then?
>  
But perhaps the OP envisions a performance advantage in not specifying
the key as unique (does VSAM support that?  SQL does, sorta) and
guaranteeing uniqueness externally.

Are the keys in the DB displayable or in STCK format?  Is the performance
bottleneck in inserting records or extracting them?

-- gil
