On 7 July 2015 at 12:55, Filipe Pina wrote:
> On Tue, Jul 7, 2015 at 1:41, Adrian Klaver wrote:
>> Still not sure what it is you are trying to accomplish. Is it really
>> necessary that every transaction be serialized? Or, to put it another
>> way, why are you running in serializable by default? […]
Exactly, that's the twist I've decided on after some tests yesterday (the
"lock all tables on the last try" approach degraded performance in an
obscene way): giving up on serializable by default.
I wanted to use serializable so developers wouldn't have to worry about
properly using locks, but implementing this […]
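
The alternative being settled on here, explicit locking at the default
isolation level, would look roughly like this in Django. This is a minimal
sketch only, with an invented Account model rather than anything from the
thread:

from django.db import transaction

def add_to_balance(account_id, amount):
    # Account is an invented stand-in model, not one from the thread.
    # Correctness comes from the explicit row lock taken by
    # select_for_update(), instead of from running the whole database
    # at SERIALIZABLE.
    with transaction.atomic():
        acc = Account.objects.select_for_update().get(pk=account_id)
        acc.balance += amount
        acc.save()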
On 07/06/2015 07:10 AM, Filipe Pina wrote:
> It's not necessary to commit at all costs; it can fail, just not due to
> serialization.
> And the transaction can be something as simple as updating a field or
> inserting a record (with foreign keys, which is one of the serialization
> checks).
Not following, […]
It's not necessary to commit at all costs; it can fail, just not due to
serialization.
And the transaction can be something as simple as updating a field or
inserting a record (with foreign keys, which is one of the serialization
checks).
On Sat, Jul 4, 2015 at 7:23, Adrian Klaver wrote: […]
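
For a concrete picture of the failure being discussed: under SERIALIZABLE
(or REPEATABLE READ), a transaction that updates a row another transaction
has changed since its snapshot fails with SQLSTATE 40001. A minimal sketch
with psycopg2, using an invented counter table and connection string:

import psycopg2
from psycopg2 import errorcodes

DSN = "dbname=test"  # placeholder connection string
c1 = psycopg2.connect(DSN)
c2 = psycopg2.connect(DSN)
for c in (c1, c2):
    c.set_session(isolation_level='SERIALIZABLE')

cur1, cur2 = c1.cursor(), c2.cursor()
cur2.execute("SELECT n FROM counter WHERE id = 1")         # T2 takes its snapshot
cur1.execute("UPDATE counter SET n = n + 1 WHERE id = 1")  # T1 updates the row
c1.commit()                                                # T1 commits first
try:
    # T2 now touches the row T1 changed after T2's snapshot
    cur2.execute("UPDATE counter SET n = n + 1 WHERE id = 1")
except psycopg2.OperationalError as e:
    assert e.pgcode == errorcodes.SERIALIZATION_FAILURE  # '40001'
    c2.rollback()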
Yes, I've tried to come up with a guideline to enumerate the tables used in
each process, but it's not simple because it's a Python application calling
pgsql functions that use other functions, so it's tricky for a developer
re-using existing functions to enumerate the tables used by those. Even
if […]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160

Filipe Pina said:
> I really can't find any other solution for what I need (in short: make sure
> no transactions are left out due to serialization failures)

I think you may have been too quick to rule out advisory locks as a solution.
Yes, y[…]
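
The advisory-lock suggestion amounts to serializing only the critical
section in the application, not every transaction. A rough sketch of the
pattern with psycopg2, assuming an open connection conn and an arbitrary
lock key (both invented for illustration):

with conn.cursor() as cur:
    # pg_advisory_xact_lock blocks until the lock is granted and is
    # released automatically at commit/rollback, so there is no unlock
    # call to forget.
    cur.execute("SELECT pg_advisory_xact_lock(%s)", (42,))  # 42: invented key
    cur.execute("UPDATE counter SET n = n + 1 WHERE id = 1")
conn.commit()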
On 7/4/2015 10:49 AM, Filipe Pina wrote:
> Terminating all sessions will break every other transaction (except
> the one doing it). Locking the database will put all the others on
> hold. As we're talking about quick/instant operations, being on hold
> will have an impact on performance but won't cause anything […]
Thanks for the suggestion. I read that some people do use that strategy
for maintenance sometimes, but it's not feasible in this scenario.
I would have to disallow new connections AND kill all existing connections
(as there would be an existing connection pool), but this won't have the
same impact a[…]
Wouldn't it be easier just to prevent connections to the database
while your transaction is executed?
E.g.:

UPDATE pg_database
SET datallowconn = FALSE
WHERE datname = 'your_database';

START TRANSACTION;
-- your transaction here
COMMIT;

UPDATE pg_database
SET datallowconn = TRUE
WHERE datname = 'your_database';

[…]
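
One caveat with this suggestion, as the reply above notes: flipping
datallowconn only blocks new connections; sessions that are already open
(a connection pool, for instance) keep running. A hedged sketch of the
extra step from psycopg2, with an invented database name; it requires
superuser or equivalent rights:

# Assumes an open psycopg2 connection `admin`, connected to a database
# other than the one being locked down.
with admin.cursor() as cur:
    # Block new connections, then terminate every other backend that is
    # still attached to the target database.
    cur.execute(
        "UPDATE pg_database SET datallowconn = FALSE WHERE datname = %s",
        ('your_database',))
    cur.execute(
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity "
        "WHERE datname = %s AND pid <> pg_backend_pid()",
        ('your_database',))
admin.commit()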
So, as database-level locks do not exist (as per
https://wiki.postgresql.org/wiki/Lock_database), I've thought of having
a function that would lock *every* table in the database (far from the
same thing, but it would probably work for me).
Something like:

CREATE OR REPLACE FUNCTION lockdown()
RETURNS […]
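
The function body is cut off in this preview. The same idea can be
sketched from the client side with psycopg2; this is a guess at the
intent, not the poster's actual code: take an ACCESS EXCLUSIVE lock on
every user table inside the transaction that must not fail.

from psycopg2 import sql

def lockdown(conn):
    # Locks are held until the surrounding transaction commits or rolls
    # back, so call this inside the transaction that must not hit a
    # serialization failure.
    with conn.cursor() as cur:
        cur.execute("SELECT schemaname, relname FROM pg_stat_user_tables")
        for schema, table in cur.fetchall():
            cur.execute(
                sql.SQL("LOCK TABLE {}.{} IN ACCESS EXCLUSIVE MODE").format(
                    sql.Identifier(schema), sql.Identifier(table)))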
Exactly, that's why there's a limit on the number of retries. On the last
try I wanted something like a full lockdown, to make sure the transaction
would not fail due to a serialization failure (if no other processes are
touching the database, it can't happen).
So if two transactions were retrying over a[…]
Filipe Pina writes:
> It will try 5 times to execute each instruction (in case of
> OperationalError) and on the last one it will raise the last error it
> received, aborting.
> Now my problem is that aborting on the last try (on a restartable
> error - OperationalError code 40001) is not an option […]
[ original question posted in
http://stackoverflow.com/questions/30789279/django-postgresql-retry-transaction-last-try-must-go-in
for an easier read ]

I have a Django+PostgreSQL application.
For data integrity, PostgreSQL is set up for serializable transactions, so
I'm retrying the transaction (in a generic class) as:
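
The class itself is cut off in this preview. What follows is a minimal
sketch of the retry pattern being described, not the poster's actual code;
names and back-off are invented, and it assumes Django on top of psycopg2:

from time import sleep
from django.db import OperationalError, transaction

def run_with_retries(func, retries=5):
    # Re-run the whole transaction on serialization failures (SQLSTATE
    # 40001); any other error, or the final failed attempt, propagates.
    for attempt in range(1, retries + 1):
        try:
            with transaction.atomic():
                return func()
        except OperationalError as exc:
            cause = getattr(exc, '__cause__', None)  # underlying psycopg2 error
            if attempt == retries or getattr(cause, 'pgcode', None) != '40001':
                raise
            sleep(0.1 * attempt)  # simple linear back-off between tries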