----- Original Message -----

> From: "Frank Millman" <fr...@chagford.com>
> To: pgsql-general@postgresql.org
> Sent: Wednesday, October 26, 2016 4:42:29 AM
> Subject: [GENERAL] Locking question

> Hi all
> I am designing an inventory application, and I want to ensure that the stock
> level of any item cannot go negative.
> Instead of maintaining a running balance per item, I store the original
> quantities received in one table (call it ‘inv_rec’), and any amounts
> removed in another table (call it ‘inv_alloc’).
> CREATE TABLE inv_rec
> (row_id SERIAL PRIMARY KEY,
> product_id INT REFERENCES inv_products,
> qty INT);
> CREATE TABLE inv_alloc
> (row_id SERIAL PRIMARY KEY,
> rec_id INT REFERENCES inv_rec,
> qty INT);
> To get the balance of a particular item -
> SELECT SUM(
> a.qty + COALESCE(
> (SELECT SUM(b.qty) FROM inv_alloc b
> WHERE b.rec_id = a.row_id), 0))
> FROM inv_rec a
> WHERE a.product_id = 99;
> To remove a quantity from a particular item -
> INSERT INTO inv_alloc (rec_id, qty)
> VALUES (23, -1);
> I want the application to check that there is sufficient quantity before
> attempting to execute the INSERT command.
> If ok, it will look for a suitable row in ‘inv_rec’ to allocate against.
> The danger of course is that, in a multi-user system, another user might have
> removed an additional quantity from the same item in between the SELECT and
> the INSERT.
> I *think* that the solution is to BEGIN the transaction, then perform SELECT
> ... FOR UPDATE, then proceed with INSERT and COMMIT if ok, else ROLLBACK.
> Is this the correct approach, or am I missing something?
> Thanks
> Frank Millman

Is it necessary to have two separate tables for received and allocated 
quantities? I would record both the receipt and allocation transactions in a 
single table. 
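For example, a single transaction table where receipts carry positive 
quantities and allocations negative ones (a sketch only; the table and column 
names here are made up):

```sql
-- Hypothetical single-table layout: receipts and allocations
-- share one table, distinguished by the sign of qty
-- (or, if you prefer, by an explicit txn_type column).
CREATE TABLE inv_txn
(product_id INT REFERENCES inv_products,
 row_id INT,            -- keyed sequence, per product (see below)
 qty INT,               -- positive = receipt, negative = allocation
 PRIMARY KEY (product_id, row_id));

-- The balance query then collapses to a single aggregate:
SELECT COALESCE(SUM(qty), 0)
FROM inv_txn
WHERE product_id = 99;
```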

Also, if there is no need for the high-concurrency performance of SERIAL 
(there probably is not, I'm guessing, since you are considering locking), I 
would implement a keyed sequence by recording the last-used row_id as a 
column in the inv_products table. 

Then, apply a strategy such as described in 

https://www.postgresql.org/message-id/44e376f6.7010...@seaworthysys.com 

That pattern employs a trigger for convenience, but you could do without one. 
Then, the first step in your BEGIN ... COMMIT block is to update the 
last-used value in the corresponding inv_products row to compute the 
next-to-be-used row_id (i.e., UPDATE first and then SELECT it back out, or 
use UPDATE ... RETURNING). 
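Something along these lines, assuming a hypothetical last_txn_id column on 
inv_products:

```sql
BEGIN;

-- Compute the next per-product row_id. The row lock taken here
-- serializes all inventory activity for this product until
-- COMMIT or ROLLBACK.
UPDATE inv_products
   SET last_txn_id = last_txn_id + 1
 WHERE product_id = 99
RETURNING last_txn_id;
```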

That initial UPDATE will block any other transaction attempting to initiate 
an inventory update on that particular item, effectively serializing your 
concurrent inventory activity per inventory item. 

After you add the inventory transaction, check the net balance and throw an 
exception if it is negative. That rolls everything back to the initial 
row_id update in inv_products for that product. 
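Continuing the sketch above (inv_txn and the quantities are hypothetical; the 
check could equally live in a trigger):

```sql
-- Record the allocation (negative qty), using the row_id
-- returned by the initial UPDATE.
INSERT INTO inv_txn (product_id, row_id, qty)
VALUES (99, 42, -1);

-- Verify the net balance. RAISE EXCEPTION aborts the whole
-- transaction, undoing the INSERT and the inv_products update
-- and releasing the row lock.
DO $$
BEGIN
    IF (SELECT COALESCE(SUM(qty), 0) FROM inv_txn
        WHERE product_id = 99) < 0 THEN
        RAISE EXCEPTION 'insufficient stock for product %', 99;
    END IF;
END
$$;

COMMIT;
```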

Assuming a well-managed inventory organization, the exception throwing should 
be relatively infrequent. 

Personally, depending on application specifics, I might make a deliberate 
denormalization decision and lay out the transaction table to model an 
accounting ledger, with separate columns for inventory additions, 
subtractions, and a running total. 
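Such a layout might look like this (again, purely a sketch with made-up 
names):

```sql
-- Ledger-style layout: each row carries the running balance,
-- deliberately denormalized so reports never need to re-aggregate.
CREATE TABLE inv_ledger
(product_id INT REFERENCES inv_products,
 row_id INT,                -- keyed sequence, per product
 qty_in  INT DEFAULT 0,     -- receipts
 qty_out INT DEFAULT 0,     -- allocations
 balance INT,               -- running total after this row
 PRIMARY KEY (product_id, row_id));
```

The balance check then becomes a lookup of the latest row's balance column 
rather than a SUM over the product's history.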

--B 
