On Thu, Oct 27, 2016 at 1:37 AM, Frank Millman wrote:
> As mentioned previously, I am trying to avoid using PostgreSQL-specific
> techniques, as I need to support sqlite3 and SQL Server as well.
The SERIALIZABLE transaction isolation level is portable. It is
part of the SQL standard (and has be
From: Frank Millman
Sent: Wednesday, October 26, 2016 10:42 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Locking question
> I am designing an inventory application, and I want to ensure that the stock
> level of any item cannot go negative.
Thanks to all for some really
On Wed, Oct 26, 2016 at 3:42 AM, Frank Millman wrote:
> I am designing an inventory application, and I want to ensure
> that the stock level of any item cannot go negative.
One way to do this is to use only transactions at the SERIALIZABLE
transaction isolation level to maintain and query this d
On Wed, 2016-10-26 at 12:18 +0200, Frank Millman wrote:
>
> From: hubert depesz lubaczewski
> Sent: Wednesday, October 26, 2016 10:46 AM
> To: Frank Millman
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Locking question
>
> On Wed, Oct 26, 2016 at 10:42:2
- Original Message -
> From: "Frank Millman"
> To: pgsql-general@postgresql.org
> Sent: Wednesday, October 26, 2016 4:42:29 AM
> Subject: [GENERAL] Locking question
> Hi all
> I am designing an inventory application, and I want to ensure that the stock
> *From:* hubert depesz lubaczewski
> *Sent:* Wednesday, October 26, 2016 10:46 AM
> *To:* Frank Millman
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] Locking question
>
> On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> > Hi all
>
From: hubert depesz lubaczewski
Sent: Wednesday, October 26, 2016 10:46 AM
To: Frank Millman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Locking question
On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> Hi all
> >
> > I am designing an inventory app
On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> Hi all
>
> I am designing an inventory application, and I want to ensure that the stock
> level of any item cannot go negative.
>
> Instead of maintaining a running balance per item, I store the original
> quantities received in o
Hi all
I am designing an inventory application, and I want to ensure that the stock
level of any item cannot go negative.
Instead of maintaining a running balance per item, I store the original
quantities received in one table (call it ‘inv_rec’), and any amounts removed
in another table (call
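The check-inside-the-transaction pattern discussed in the replies can be sketched portably. A minimal sketch using Python's stdlib sqlite3 (which Frank also targets) — table and column names are illustrative, not Frank's actual schema; in sqlite3 `BEGIN IMMEDIATE` serializes writers, whereas PostgreSQL needs SERIALIZABLE isolation plus a retry on serialization failure to get the same guarantee:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.executescript("""
    CREATE TABLE inv_rec   (item TEXT, qty INTEGER);  -- quantities received
    CREATE TABLE inv_alloc (item TEXT, qty INTEGER);  -- quantities removed
""")
con.execute("INSERT INTO inv_rec VALUES ('widget', 10)")

def remove_stock(con, item, qty):
    """Record a removal, rolling back if the balance would go negative."""
    con.execute("BEGIN IMMEDIATE")  # take the write lock up front
    try:
        con.execute("INSERT INTO inv_alloc VALUES (?, ?)", (item, qty))
        (balance,) = con.execute("""
            SELECT COALESCE((SELECT SUM(qty) FROM inv_rec   WHERE item = ?), 0)
                 - COALESCE((SELECT SUM(qty) FROM inv_alloc WHERE item = ?), 0)
        """, (item, item)).fetchone()
        if balance < 0:
            raise ValueError(f"stock for {item} would go negative")
        con.execute("COMMIT")
        return True
    except Exception:
        con.execute("ROLLBACK")
        return False

print(remove_stock(con, "widget", 8))   # True: 10 received - 8 removed = 2 left
print(remove_stock(con, "widget", 5))   # False: would leave -3, rolled back
```

The key point is that the balance is re-derived from the two tables inside the same transaction as the insert, so no separate running-balance column has to be kept consistent.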
On Thu, Jul 28, 2016 at 10:18 AM, Josh Berkus wrote:
> On 07/26/2016 01:16 AM, David Harrison wrote:
> where sps.sessionid = ses and
> sps.rankid = rank and
> ...
> pg_try_advisory_lock(seatid)
> order by s.row_number, s.seat_number_in_row
> limit 1
> for update of sps;
Don'
On 07/26/2016 01:16 AM, David Harrison wrote:
> Hi Josh,
>
> Attached is the function and below the query that calls it, below that the
> result of SELECT version();
>
> SELECT tl_guest_list('13313880', '174880', null, '151094636600', null, null);
>
>
>
>
>
> "PostgreSQL 8.4.9 on x86_64-unk
Torsten Förtsch writes:
> given a query like this:
> select *
> from account a
> cross join lateral (
>select rate
> from exchange
> where target='USD'
> and source=a.currency
> order by date desc
> limit 1) e
> where a.i
Hi,
given a query like this:
select *
from account a
cross join lateral (
select rate
from exchange
where target='USD'
and source=a.currency
order by date desc
limit 1) e
where a.id=19
for update;
If I understand the documentation correctl
On 2/25/15 10:49 AM, Cenkar, Maciej wrote:
> Given PostgreSQL 9.3.5, what is the locking strategy when executing a query
> such as:
>
> UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM
> expensive_query_with_joins).
>
> Is this starting to lock rows after it executed sub-select or is it
Hi All,
I wonder if anyone can explain something I cannot easily find on google.
Given PostgreSQL 9.3.5, what is the locking strategy when executing a query such as:
UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM
expensive_query_with_joins).
Is this starting to lock rows after it ex
Hi,
when a row is updated a RowExclusiveLock is obtained on the updated row
and on every related index.
Is the order in which these locks are acquired defined in some way?
Thanks,
Torsten
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscriptio
Hi everyone,
I am running 'PostgreSQL 9.1.4, compiled by Visual
C++ build 1500, 64-bit' on a Windows Server and I am having some Locking
issues. Maybe anyone can let me know what is wrong with my example:
Imagine that we have two tables (t_users and t_records)
t_users contains 1 row per
On Sunday, May 20, 2012, Martijn van Oosterhout wrote:
> On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
> > I have a situation where an increase in volume of inserts into the
> > main transaction table causes a huge slowdown. The table has lots of
> > indexes and foreign keys and a
On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
> I have a situation where an increase in volume of inserts into the
> main transaction table causes a huge slowdown. The table has lots of
> indexes and foreign keys and a trigger.
>
> Clearly, something is causing a resource contentio
I have a situation where an increase in volume of inserts into the
main transaction table causes a huge slowdown. The table has lots of
indexes and foreign keys and a trigger.
Clearly, something is causing a resource contention issue, but here's
my main question:
I have log_lock_waits = on and
Carlos Mennens wrote:
>> 0 4 * * * /usr/bin/pg_dumpall> pg_dumpall.$DATE.sql
>>
>> that'll run at 4am every day.
> When I run the command in my shell (not in Cron), I'm prompted for my
> login password. Should I change the permissions in pg_hba.conf and
> enable INHERIT grants on my user? Should
On 12/14/2011 12:54 PM, Carlos Mennens wrote:
On Wed, Dec 14, 2011 at 1:38 PM, Andy Colson wrote:
this'll run every hour.
0 * * * * /usr/bin/pg_dumpall>pg_dumpall.$DATE.sql
Thank you!
try:
0 4 * * * /usr/bin/pg_dumpall>pg_dumpall.$DATE.sql
that'll run at 4am every day.
When I
On Wed, Dec 14, 2011 at 1:38 PM, Andy Colson wrote:
> this'll run every hour.
>
>> 0 * * * * /usr/bin/pg_dumpall> pg_dumpall.$DATE.sql
Thank you!
> try:
>
> 0 4 * * * /usr/bin/pg_dumpall> pg_dumpall.$DATE.sql
>
> that'll run at 4am every day.
When I run the command in my shell (not in Cron),
On 12/14/2011 12:26 PM, Carlos Mennens wrote:
On Wed, Dec 14, 2011 at 1:15 PM, Andy Colson wrote:
Yep, you simply cron a pg_dump. (dumpall if you want users/roles and all
databases). No locking needed.
So how would one put this in cron if I wanted to run this everyday?
0 * * * * /usr/bin/p
On Wed, Dec 14, 2011 at 1:15 PM, Andy Colson wrote:
> Yep, you simply cron a pg_dump. (dumpall if you want users/roles and all
> databases). No locking needed.
So how would one put this in cron if I wanted to run this everyday?
0 * * * * /usr/bin/pg_dumpall > pg_dumpall.$DATE.sql
Will that wo
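One detail worth flagging in the crontab lines above: cron does not define `$DATE`, so unless it is set at the top of the crontab the file would literally be named `pg_dumpall..sql`. A hedged sketch of building the date inline instead (path and schedule are examples; note that `%` is special in crontab entries and must be escaped as `\%`):

```shell
# In a crontab entry, % must be escaped as \%:
#   0 4 * * * /usr/bin/pg_dumpall > /backups/pg_dumpall.$(date +\%F).sql
# The same filename construction, runnable directly in a shell:
outfile="pg_dumpall.$(date +%F).sql"
echo "backup target: $outfile"
```

This yields one dated file per day (e.g. `pg_dumpall.2011-12-14.sql`) rather than overwriting a single file.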
On 12/14/2011 11:52 AM, Carlos Mennens wrote:
I wanted to find out why it is recommended or even an option to lock
tables during a backup of a database? I've never experimented with
database backups so I'm only guessing it locks / freezes the data so
no changes can be made while the backup is
I wanted to find out why it is recommended or even an option to lock
tables during a backup of a database? I've never experimented with
database backups so I'm only guessing it locks / freezes the data so
no changes can be made while the backup is in process, correct? Just
curious and wasn't abl
Thank you, Scott. I wondered about it, actually.
I've found perfect (I believe) solution. It uses pg_advisory_lock() as
I guessed before, but I thought wrong about it. For some reason I
thought session is some kind of scope for advisory lock, so one
program can't see locking made by another progra
On Fri, Jul 22, 2011 at 9:45 AM, Ilia Lilov wrote:
> There are two places from which my database can be accessed:
> 1) PHP code, which only read data from db and sends it to users' browsers;
> 2) C++ code, which writes data to db one time per 15 minutes (one huge
> transaction which affects all th
There are two places from which my database can be accessed:
1) PHP code, which only read data from db and sends it to users' browsers;
2) C++ code, which writes data to db one time per 15 minutes (one huge
transaction which affects all the tables in db);
Both pieces of code use local socket to acc
On 15/07/2010 16:21, Shoaib Mir wrote:
> On Thu, Jul 15, 2010 at 10:53 PM, Terry Lee Tucker
> <te...@chosen-ones.org> wrote:
>
> Greetings:
>
> I occasionally find the need to perform some maintenance on one or
> more of
> thirteen different databases. Typically, due to
On Thu, Jul 15, 2010 at 10:53 PM, Terry Lee Tucker wrote:
> Greetings:
>
> I occasionally find the need to perform some maintenance on one or more of
> thirteen different databases. Typically, due to the interaction between the
> databases, I need to lock down the databases for a short period of t
Greetings:
I occasionally find the need to perform some maintenance on one or more of
thirteen different databases. Typically, due to the interaction between the
databases, I need to lock down the databases for a short period of time so
that no updates are being performed anywhere. I do not wan
On Tue, Feb 2, 2010 at 12:19 AM, Tom Lane wrote:
> frank joerdens writes:
>> It seems that whenever I create a new empty table with a foreign key
>> constraint, the transaction will acquire an exclusive lock on the
>> referenced table, locking out other writers (not sure if even readers
>> as wel
frank joerdens writes:
> It seems that whenever I create a new empty table with a foreign key
> constraint, the transaction will acquire an exclusive lock on the
> referenced table, locking out other writers (not sure if even readers
> as well), and I don't quite see why that is necessary
It invo
It seems that whenever I create a new empty table with a foreign key
constraint, the transaction will acquire an exclusive lock on the
referenced table, locking out other writers (not sure if even readers
as well), and I don't quite see why that is necessary if the new
entity does not contain any r
Hi all;
we're seeing a backlog of queries in pg_stat_activity. The system has slowed
big time.
I see many many queries where waiting = 't'
I want to find out for each query which query they are waiting on (who's doing
the blocking).
What's the best way to find this, I looked at pg_locks for r
Just realised I didn't cc: the list on this. Was obviously having a good
evening.
Richard Huxton wrote:
Glen Parker wrote:
> I still don't think it will work. Two concurrent transactions could
still screw the data up.
Before: 9 rows.
Tx 1: See 9 rows, issue insert, see 10 rows.
Tx 2: See 9
--
From: "Richard Huxton"
Of course, if you're going to have a separate table then you might as well
store the count in there and actually update it on every
insert/update/delete. Assuming you might find the count of some use
somewhere. Set the
Richard Huxton wrote:
Glen Parker wrote:
Richard Huxton wrote:
#3 won't work unless the other transactions have all committed by the
time you do the check. It is guaranteed to fail at some point.
If it's in an AFTER INSERT/UPDATE trigger then whatever transaction
takes you beyond 10 rows y
Richard Huxton wrote:
Shakil Shaikh wrote:
Hi,
Consider the following scenario:
CREATE FUNCTION test(name)
select into cnt count(id) from items where owner = name;
--suppose someone inserts during this point? then next check will
succeed when it should not.
if (cnt < 10) then
insert into
Glen Parker wrote:
Richard Huxton wrote:
3. Check after an insert on the items table and raise an exception if
there are 11+ items.
I'd be tempted by #3 - assuming most of the time you won't breach this
limit.
#3 won't work unless the other transactions have all committed by the
time you
Shakil Shaikh wrote:
Hi,
Consider the following scenario:
CREATE FUNCTION test(name)
select into cnt count(id) from items where owner = name;
--suppose someone inserts during this point? then next check will
succeed when it should not.
if (cnt < 10) then
insert into items values ('new item
Hi,
Consider the following scenario:
CREATE FUNCTION test(name)
select into cnt count(id) from items where owner = name;
--suppose someone inserts during this point? then next check will succeed
when it should not.
if (cnt < 10) then
insert into items values ('new item', name);
end;
end;
W
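The race in the function above is the classic check-then-act problem: two concurrent transactions can both see cnt = 9 and both insert, breaching the limit. Any fix must make the count and the insert atomic with respect to other writers. A sketch in Python's sqlite3, where `BEGIN IMMEDIATE` serializes writers (in PostgreSQL you would instead `SELECT ... FOR UPDATE` a per-owner parent row, or run at SERIALIZABLE and retry; names below are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE items (name TEXT, owner TEXT)")

def add_item(con, name, owner, limit=10):
    """Insert an item only if the owner stays at or under `limit` items."""
    con.execute("BEGIN IMMEDIATE")  # block other writers: count+insert is atomic
    try:
        (cnt,) = con.execute(
            "SELECT COUNT(*) FROM items WHERE owner = ?", (owner,)).fetchone()
        if cnt >= limit:
            con.execute("ROLLBACK")
            return False
        con.execute("INSERT INTO items VALUES (?, ?)", (name, owner))
        con.execute("COMMIT")
        return True
    except Exception:
        con.execute("ROLLBACK")
        raise

for i in range(12):                 # try to insert 12 items for one owner
    add_item(con, f"item{i}", "shakil")
(total,) = con.execute("SELECT COUNT(*) FROM items").fetchone()
print(total)   # 10: the limit holds even though 12 inserts were attempted
```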
On Jan 15, 2008 12:03 AM, Adam Rich <[EMAIL PROTECTED]> wrote:
> > Advisory locks would work here (better that than table lock), but I
> > don't think that's the right approach. Transaction 2 should simply do
> > a
> > select * from parent_tbl
> > where id=1 for update;
> >
> > at the start of the
> Advisory locks would work here (better that than table lock), but I
> don't think that's the right approach. Transaction 2 should simply do
> a
> select * from parent_tbl
> where id=1 for update;
>
> at the start of the transaction.
That's actually what I'm doing (just forgot to include it in
On Jan 14, 2008 5:57 PM, Adam Rich <[EMAIL PROTECTED]> wrote:
> >
> > From what I can tell, this kind of roll-your-own application level
> > locking system is exactly what advisory locks are for. Search the
> > archives for the last couple of weeks as I remember someone posting
> > some really he
On Jan 14, 2008 4:31 PM, Adam Rich <[EMAIL PROTECTED]> wrote:
> > You should be able to do "select for update" on both parent and child
> > records and get the effect you desire.
> >
>
> I don't think that will work. Let me demonstrate:
> (this is simplified, but sufficient to make my point)
>
> -
On Jan 14, 2008, at 4:57 PM, Adam Rich wrote:
From what I can tell, this kind of roll-your-own application level
locking system is exactly what advisory locks are for. Search the
archives for the last couple of weeks as I remember someone posting
some really helpful functions to assist in us
>
> From what I can tell, this kind of roll-your-own application level
> locking system is exactly what advisory locks are for. Search the
> archives for the last couple of weeks as I remember someone posting
> some really helpful functions to assist in using advisory locks.
>
> Erik Jones
Yes
On Jan 14, 2008, at 3:54 PM, andy wrote:
In our program we wrote the locking into the program, and created a
modulelock table like:
create table modulelock(
userid int,
module int,
primary key (userid, module)
)
The program then locks things before it uses them... but we also
have pr
Adam Rich wrote:
I have a "parent_tbl" and dozens of data tables, with foreign keys
referencing the PK of "parent_tbl" (one-to-many). There are 100+
users accessing the application, usually (but not always) each user
is working on a different record in parent_tbl. (this would seem like a pret
On Jan 14, 2008 3:31 PM, Adam Rich <[EMAIL PROTECTED]> wrote:
> > You should be able to do "select for update" on both parent and child
> > records and get the effect you desire.
> >
>
> I don't think that will work. Let me demonstrate:
> (this is simplified, but sufficient to make my point)
>
> -
> You should be able to do "select for update" on both parent and child
> records and get the effect you desire.
>
I don't think that will work. Let me demonstrate:
(this is simplified, but sufficient to make my point)
-- Connection 1 --
begin trans;
select * from parent_tbl
where id=1 for upda
On Jan 14, 2008 2:43 PM, Adam Rich <[EMAIL PROTECTED]> wrote:
>
> I have a "parent_tbl" and dozens of data tables, with foreign keys
> referencing the PK of "parent_tbl" (one-to-many). There are 100+
> users accessing the application, usually (but not always) each user
> is working on a different
I have a "parent_tbl" and dozens of data tables, with foreign keys
referencing the PK of "parent_tbl" (one-to-many). There are 100+
users accessing the application, usually (but not always) each user
is working on a different record in parent_tbl. (this would seem like a pretty
standard scena
On Sun, Sep 16, 2007 at 01:46:44PM +0300, Panagiwths Pediadiths wrote:
> Many transactions doing this in parallel end up inserting the value many
> times. Could i be missing something regarding how to set up the isolation
> level?
> Thanks!
No, the only easy way you can guarantee you won't insert
On 9/15/07, Panagiwths Pediadiths <[EMAIL PROTECTED]> wrote:
> Shouldn't the serializable level prevent these duplicates? As I understand
> it serializable
> should give the same result as if the transactions were performed the one
> after the other.
http://www.postgresql.org/docs/8.2/static/tran
On Sat, 15 Sep 2007, Ron Johnson wrote:
>
> On 09/15/07 19:59, Panagiwths Pediadiths wrote:
> >
> > On Sat, 15 Sep 2007, Ron Johnson wrote:
> >
> > On 09/15/07 03:28, Panagiwths Pediadiths wrote:
> That's the fun part, I actually need to allo
On 09/15/07 19:59, Panagiwths Pediadiths wrote:
>
> On Sat, 15 Sep 2007, Ron Johnson wrote:
>
> On 09/15/07 03:28, Panagiwths Pediadiths wrote:
That's the fun part, I actually need to allow duplicates in specific cases
but not in this one :)
On Sat, 15 Sep 2007, Ron Johnson wrote:
>
> On 09/15/07 03:28, Panagiwths Pediadiths wrote:
> > That's the fun part, I actually need to allow duplicates in specific cases
> > but not in this one :)
>
> Same table?
Yup
>
> > Shouldn't the serializ
On 9/15/07, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> On Fri, Sep 14, 2007 at 04:48:08PM +0300, Panagiotis Pediaditis wrote:
> > Is there some way of locking all database tables in a transaction
> > without knowing their names
> > or even better just locking the entire database? I know
On Fri, Sep 14, 2007 at 04:48:08PM +0300, Panagiotis Pediaditis wrote:
> Is there some way of locking all database tables in a transaction
> without knowing their names
> or even better just locking the entire database? I know this is bad
> tactics but there is a specific
> case where i need it.
On 09/15/07 03:28, Panagiwths Pediadiths wrote:
> That's the fun part, I actually need to allow duplicates in specific cases
> but not in this one :)
Same table?
> Shouldn't the serializable level prevent these duplicates? As I understand
> it seriali
That's the fun part, I actually need to allow duplicates in specific cases
but not in this one :)
Shouldn't the serializable level prevent these duplicates? As I understand
it serializable
should give the same result as if the transactions were performed the one
after the other.
Thnx
Panagiotis
On
On 9/14/07, Panagiotis Pediaditis <[EMAIL PROTECTED]> wrote:
> A simpler example,
> In the context of one transaction i do many queries of the form
> INSERT INTO table value WHERE value NOT IN TABLE;
>
> If i have 2 processes running the same 100s of these at the same time i
> end up
A simpler example,
In the context of one transaction I do many queries of the form
INSERT INTO table value WHERE value NOT IN TABLE;
If I have 2 processes running the same 100s of these at the same time I
end up with duplicates.
Even with isolation set to serializable
any ideas?
thn
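Pre-SSI SERIALIZABLE (snapshot isolation, as in the 8.2-era servers in this thread) does not catch this write skew: both transactions see a snapshot without the value, so both insert. The robust fix is to let a UNIQUE constraint enforce the invariant and swallow the duplicate instead. A sketch with Python's sqlite3 using `INSERT OR IGNORE` (in PostgreSQL 9.5+ the equivalent is `INSERT ... ON CONFLICT DO NOTHING`; table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vals (v TEXT UNIQUE)")  # constraint enforces uniqueness

def insert_unique(con, v):
    """Insert v; duplicates are silently skipped by the UNIQUE constraint."""
    cur = con.execute("INSERT OR IGNORE INTO vals VALUES (?)", (v,))
    con.commit()
    return cur.rowcount == 1   # True only if a row was actually inserted

print(insert_unique(con, "a"))  # True
print(insert_unique(con, "a"))  # False: already present, ignored
(n,) = con.execute("SELECT COUNT(*) FROM vals").fetchone()
print(n)                        # 1
```

Unlike the check-then-insert pattern, this pushes the race down into the database's own index, so it holds no matter how many sessions run concurrently.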
Panagiotis Pediaditis, 14.09.2007 16:45:
Well the problem is I am working on an RDF query engine for persistent RDF
data. The data is stored/structured in a specific way in the database.
When I perform updates in parallel, because there are cross table
dependencies, I end up with inconsistencies,
On Fri, Sep 14, 2007 at 05:45:07PM +0300, Panagiotis Pediaditis wrote:
> Well the problem is I am working on an RDF query engine for persistent RDF
> data. The data is stored/structured in a specific way in the database.
> When i perform updates in parallel, because there are cross table
> dependen
On 9/14/07, Panagiotis Pediaditis <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Is there some way of locking all database tables in a transaction
> without knowing their names
> or even better just locking the entire database? I know this is bad
> tactics but there is a specific
> case where i need it. C
Well the problem is I am working on an RDF query engine for persistent RDF
data. The data is stored/structured in a specific way in the database.
When I perform updates in parallel, because there are cross table
dependencies, I end up with inconsistencies. For example, one transaction
reads to see
On 9/14/07, Panagiotis Pediaditis <[EMAIL PROTECTED]> wrote:
> ... there is a specific case where i need it.
Don't really know, but, explain what the case is, and maybe someone
could help you.
---(end of broadcast)---
TIP 4: Have you searched our li
Hello,
Is there some way of locking all database tables in a transaction
without knowing their names
or even better just locking the entire database? I know this is bad
tactics but there is a specific
case where i need it. Can it be done?
Thank you
Panagiotis
Well... if the application crashes then normally the TCP connection
would drop as well.
The problem is that in many environments it can take a *long* time
for the backend to realize that the client went away. The
tcp_keepalives_* settings are intended to try and reduce that time to
a more
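For reference, the settings mentioned above live in postgresql.conf (or can be set per connection); the values below are illustrative, not recommendations:

```
# postgresql.conf -- detect dead clients sooner (seconds; 0 = OS default)
tcp_keepalives_idle = 60       # idle time before the first keepalive probe
tcp_keepalives_interval = 10   # gap between unanswered probes
tcp_keepalives_count = 5       # probes lost before the connection is dropped
```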
There is no such timeout in the database server for idle connections, but
you can always use firewall settings in order to do that and kill idle
connections.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 2/6/07, Gurjeet Singh <[EMAIL PROTECTED]> wrote:
On 1/30/07, Alvaro Herrera <
On 1/30/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Shoaib Mir wrote:
> While debugging an application, I just wanted to confirm from the list
here:
>
> Suppose I have a long running transaction which has a few updates and
> inserts running on some specific tables which means it has acquired
>
Thank you Alvaro :)
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/30/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Shoaib Mir wrote:
> While debugging an application, I just wanted to confirm from the list
here:
>
> Suppose I have a long running transaction which has a
Shoaib Mir wrote:
> While debugging an application, I just wanted to confirm from the list here:
>
> Suppose I have a long running transaction which has a few updates and
> inserts running on some specific tables which means it has acquired
> Exclusive locks too during the transaction on specific
While debugging an application, I just wanted to confirm from the list here:
Suppose I have a long running transaction which has a few updates and
inserts running on some specific tables which means it has acquired
Exclusive locks too during the transaction on specific table but if just
before co
Hi,
Joost Kraaijeveld wrote:
Does PostgreSQL lock the entire row in a table if I update only 1
column?
Yes. In PostgreSQL, an update is much like a delete + insert. A
concurrent transaction will still see the old row. Thus the lock only
prevents other writing transactions, not readers.
Reg
Does PostgreSQL lock the entire row in a table if I update only 1
column?
--
Groeten,
Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
web: www.askesis.nl
Richard Huxton wrote:
>
> Mark Morgan Lloyd wrote:
> >
> > If I might ask a related question- assuming that a client has grabbed a
> > restrictive lock during a transaction that e.g. is create/replacing
> > functions,
> > what happens to other sessions that attempt to run a select or update- will
Mark Morgan Lloyd wrote:
If I might ask a related question- assuming that a client has grabbed a
restrictive lock during a transaction that e.g. is create/replacing functions,
what happens to other sessions that attempt to run a select or update- will they
fail (i.e. an implicit NOWAIT) or will
Richard Huxton wrote:
> Was it "tuple concurrently updated"? You can reproduce this fairly
> simply by issuing BEGIN...CREATE OR REPLACE FUNCTION f... in two
> different psql sessions and delaying COMMIT appropriately. AFAIK it's
> harmless, but does abort your transaction.
I /think/ so, but it w
Mark Morgan Lloyd wrote:
Richard Huxton wrote:
Mark Morgan Lloyd wrote:
If there's a risk that multiple clients will try to execute a 'create or
replace function' simultaneously, what's the recommended practice for
putting it in a transaction and/or locking it? If a lock's involved what
should
Richard Huxton wrote:
>
> Mark Morgan Lloyd wrote:
> > If there's a risk that multiple clients will try to execute a 'create or
> > replace function' simultaneously, what's the recommended practice for
> > putting it in a transaction and/or locking it? If a lock's involved what
> > should this be
Mark Morgan Lloyd wrote:
If there's a risk that multiple clients will try to execute a 'create or replace
function' simultaneously, what's the recommended practice for putting it in a
transaction and/or locking it? If a lock's involved what should this be applied
to- the table that the function i
If there's a risk that multiple clients will try to execute a 'create or replace
function' simultaneously, what's the recommended practice for putting it in a
transaction and/or locking it? If a lock's involved what should this be applied
to- the table that the function is most likely to be involve
David J N Begley <[EMAIL PROTECTED]> writes:
> My apologies for what may be an obvious answer to others but the documentation
> (for v8.1) leaves me concerned (enough not to trust "expected behaviour" of
> some commands).
You're confusing table-level locks with row-level locks. They are
entirely
On Jun 7, 2005, at 1:15 PM, Alvaro Herrera wrote:
I don't think it's a bug, nor a deadlock situation. The problem is the
"idle in transaction" server process, which holds some lock but isn't
doing anything useful with it. Probably work would continue if the
transaction was closed.
This view
On Tue, Jun 07, 2005 at 10:26:00AM -0400, peter royal wrote:
> i am using PostgreSQL 7.4.1 (i am aware of my need to upgrade :)
>
> i have a situation where i *believe* postgresql should be detecting a
> deadlock, but it is not. in the locks view you see below, all of the
> processes waiting o
i am using PostgreSQL 7.4.1 (i am aware of my need to upgrade :)
i have a situation where i *believe* postgresql should be detecting a
deadlock, but it is not. in the locks view you see below, all of the
processes waiting on the 'numberfactory' table are blocked. (which
smells like a deadlo
"gabriele zelasco" <[EMAIL PROTECTED]> writes:
> I would like to start a transaction with a sql function.
> When user press "edit" button on my form, i would lock the current row.
> After user has modified data on form, pressing "save" button I would save
> the modified row by sql update function
Hi. I'm using postgresql 8.0.3 under win2000 and developing with VS2003
(npgsql net provider). I would like to start a transaction with a sql
function. When user press "edit" button on my form, i would lock the current
row. After user has modified data on form, pressing "save" button I would
sa
sid tow wrote:
I am using a 7.2 version of postgreSQL and here if I have to copy
data in a particular table then I cannot specify the columns I want
to update, but rather i will have to copy data into all the columns
present. Now the problem is that I have a auto generated sequence
starting from 1
I am using a 7.2 version of postgreSQL and here if I have to copy data in a
particular table then I cannot specify the columns I want to update, but
rather i will have to copy data into all the columns present. Now the problem
is that I have a auto generated sequence starting from 1 in one of the c
sid tow wrote:
Hi All,
Is there a way to stop a auto generated sequence from incrementing
for some time and then activate it to do the increments.
You might be able to do something by revoking permissions to the
sequence, and wrapping nextval/currval with security=owner functions. If
you can desc
Hi All,
Is there a way to stop a auto generated sequence from incrementing for some
time and then activate it to do the increments.
Regards
Will 7.5 fix the issue of the over-zealous locking with foreign key
constraints - where two separate transactions can't simultaneously insert
records that reference the same parent record?
Wes
I have a question about locks.
I have a stats table that get updated when some other table changes.
Sometimes that other table is updated a 2nd time before the first stats
update is finished which causes an error. I've tried using 'SET
TRANSACTION ISOLATION LEVEL SERIALIZABLE' but get 'could
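A `could not serialize access ...` error under SERIALIZABLE is the expected outcome, not a bug: the application is supposed to retry the whole transaction. A generic retry wrapper, sketched in Python with a stand-in exception class (with a real driver you would catch its serialization-failure error, SQLSTATE 40001, instead):

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for a driver's 'could not serialize access' error."""

def run_with_retry(txn, attempts=5):
    """Run txn(); on serialization failure, back off with jitter and retry."""
    for attempt in range(attempts):
        try:
            return txn()
        except SerializationFailure:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))

# Simulate a stats-update transaction that conflicts twice before succeeding.
failures = iter([True, True, False])
def update_stats():
    if next(failures):
        raise SerializationFailure
    return "committed"

print(run_with_retry(update_stats))  # committed
```

The backoff is jittered so that two transactions that collided once do not simply collide again on the next attempt.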