On 4/13/15 7:45 PM, David G. Johnston wrote:
On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov <pawel.vese...@gmail.com> wrote:

    Hi.

    If I have a table created as:

    CREATE TABLE xq_agr (
       id                BIGSERIAL PRIMARY KEY,
       node              text not null
    );

    which multiple applications insert into. The applications never
    explicitly specify a value for 'id'.
    Is it safe to, on a single connection, do:

    - open transaction (default transaction isolation)
    - Open cursor for select * from xq_agr order by id asc
    - do something with current record
    - advance the cursor (and repeat something), but stop at some point
    (id = LAST_ID), and
    - delete from xq_agr where id <= LAST_ID;
    - commit

    "safe to" means - whether the cursor will not miss any records that
    were deleted at the end.

    I'm suspecting that depending on the commit order, I may have
    situations when:
    - TX1 insert ID 1
    - TX2 insert ID 2
    - TX2 commits
    - TX3 scans 2
    - TX1 commits
    - TX3 deletes <= 2
    - record ID1 is deleted, but never processed.
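
(For concreteness, the sequence being described is roughly the following
SQL, with LAST_ID standing in for whatever id the application decides to
stop at:)

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM xq_agr ORDER BY id ASC;
    FETCH NEXT FROM c;              -- process the row, repeat as needed
    -- ... stop once some row's id is chosen as LAST_ID ...
    DELETE FROM xq_agr WHERE id <= LAST_ID;
    COMMIT;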


Going to ignore the MVCC question for the moment and describe a better
"state transition mechanism" to consider.

pending -> active -> completed

If you ensure you never delete (i.e., transition to completed) something
that isn't active then you can never delete an item in pending.

Limit the locking to the state transitions only.

The downside is the need to deal with "active" items that have been
abandoned by whatever process marked them active.
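
Very roughly, and purely as an illustration (the status column, its
values, and the example id used here are invented, not part of the
original table):

    ALTER TABLE xq_agr ADD COLUMN status text NOT NULL DEFAULT 'pending';

    -- pending -> active: a worker marks the rows it intends to process
    UPDATE xq_agr SET status = 'active'
     WHERE id = 42 AND status = 'pending';

    -- active -> completed: only ever delete rows this worker marked active
    DELETE FROM xq_agr
     WHERE id = 42 AND status = 'active';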

Another option is DELETE ... RETURNING. Instead of an initial SELECT to find records to work on, you would do DELETE FROM ... WHERE ... RETURNING * and deal with those records. I don't know whether that's safe to use with a cursor, though; I believe the DELETE fully materializes before records start coming back, so you need to handle all the rows that come back or abort the transaction.
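
A minimal sketch of that approach (draining the whole table here just for
illustration; add a WHERE clause if you only want some of the rows):

    BEGIN;
    DELETE FROM xq_agr RETURNING *;  -- the deleted rows come back to the client
    -- process every returned row here; ROLLBACK instead of COMMIT if
    -- anything fails, so the rows reappear
    COMMIT;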

Back to your question: you should probably not use "<=" in your WHERE
clause.  However, in READ COMMITTED, TX3 cannot see ID1 since the
snapshot it took out was created before TX1 committed.  I am not fluent

Actually, that's not necessarily true. It depends on when TX3 actually takes its snapshot, which is NOT when it runs BEGIN. I believe there are other problems you'd run into as well. Basically, READ COMMITTED does nothing to protect you from "phantom reads".

REPEATABLE READ should protect you from phantom reads, but it won't help you if someone changes the data. If you're going to try to go this route, SERIALIZABLE is your best bet.

enough to work through the entire scenario in my head.  I'd suggest you
actually open up 3 psql sessions and play with them to see how things
really behave.

That's really not safe enough. There are just too many different race conditions you can encounter, and I'd bet that you couldn't even reproduce some of them from a client.
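
If you do go the SERIALIZABLE route mentioned above, note that the client
has to be prepared to retry; a rough sketch:

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- cursor scan and DELETE as in the original plan
    COMMIT;
    -- if any statement (including the COMMIT) fails with SQLSTATE 40001
    -- (serialization_failure), ROLLBACK and re-run the whole transaction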

For me, a simple "SELECT FOR UPDATE / UPDATE WHERE" command in a
function solves the problem at small scale with minimal performance
degradation.  The transition from "pending" to "active" is effectively
serialized, and the transition from "active" to "completed" only occurs
once the work has been performed, so it is not possible to have two
clients simultaneously processing the same work.

Note that that isn't safe from repeatable-read problems either. What you're describing is only safe if the WHERE clause on the UPDATE is guaranteed to always find exactly one row (i.e., in this example, by using xq_agr.id = something). Anything other than that is asking for trouble.
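
To make that concrete, here is a rough sketch of the single-row version
(it reuses the invented status column from the earlier sketch; the
function name and shape are illustrative, not an existing API):

    CREATE OR REPLACE FUNCTION claim_item(p_id bigint) RETURNS boolean AS $$
    BEGIN
        -- lock exactly one row; concurrent callers for the same id block here
        PERFORM 1 FROM xq_agr
         WHERE id = p_id AND status = 'pending'
           FOR UPDATE;
        IF NOT FOUND THEN
            RETURN false;            -- already claimed, or no such row
        END IF;
        UPDATE xq_agr SET status = 'active' WHERE id = p_id;
        RETURN true;
    END;
    $$ LANGUAGE plpgsql;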

BTW, since it looks like you're just looking for a queue, you should take a look at PgQ (https://wiki.postgresql.org/wiki/PGQ_Tutorial). It uses something more efficient than bulk deletes to handle its queue, and you can set it up so it will handle a large number of queued items rather well (increase the number of tables). The one downside you may run into is that you MUST consume every event in a single batch. There used to be support for the concept of "retrying" an event, but that may have been removed. It would be safe for you to put failed events into a second queue.
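
Going from memory of the PgQ tutorial linked above (check it for the exact
function names and signatures; this is only meant to show the shape of the
flow):

    -- producer side
    SELECT pgq.create_queue('xq_events');
    SELECT pgq.insert_event('xq_events', 'work', 'payload goes here');

    -- consumer side
    SELECT pgq.register_consumer('xq_events', 'worker1');
    SELECT pgq.next_batch('xq_events', 'worker1');   -- returns a batch id, or NULL
    SELECT * FROM pgq.get_batch_events(1);           -- 1 = the batch id from above;
                                                     -- every event must be handled
    SELECT pgq.finish_batch(1);
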
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

