Re: [GENERAL] [Q] Cluster design for geographically separated dbs

2009-03-08 Thread Scott Marlowe
On Sat, Mar 7, 2009 at 2:03 PM, V S P  wrote:

> And wanted to ask what would be the main challenges I am facing with --
> from the experience of the users on this list.
>
> Especially I am not sure how to for example manage 'overlapping unique
> IDs' data.

I'm no expert on a lot of what you're doing, but the standard trick
here is to partition your bigserials.

The max value for the underlying sequence is 9223372036854775807, which
should give you plenty of space to work in.  So, when creating your
bigserials, you can then alter the sequence underneath them to use a
different range on each machine.

smarlowe=# create table a1 (id bigserial, info text);
NOTICE:  CREATE TABLE will create implicit sequence "a1_id_seq" for
serial column "a1.id"
smarlowe=# create table a2 (id bigserial, info text);
NOTICE:  CREATE TABLE will create implicit sequence "a2_id_seq" for
serial column "a2.id"
smarlowe=# alter sequence a1_id_seq minvalue 10000000000 maxvalue 19999999999
start 10000000000;
ALTER SEQUENCE
smarlowe=# alter sequence a2_id_seq minvalue 20000000000 maxvalue
29999999999 start 20000000000;
ALTER SEQUENCE

Now those two sequences can't run into each other, and if you move a
record from one machine to another it won't bump into what's already
there.  Partitioning by 10 billion gives you 922337203 possible
partitions, so if you need bigger but fewer partitions there's plenty
of wiggle room to play with.
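Putting the pieces together, a minimal sketch of one machine's setup (the table name and the 10-billion-wide range are illustrative assumptions following the example above):

```sql
-- Sketch: give this machine its own non-overlapping bigserial range.
create table a1 (id bigserial primary key, info text);

-- Machine 1 gets the slice [10000000000, 19999999999]:
alter sequence a1_id_seq
    minvalue 10000000000
    maxvalue 19999999999
    restart with 10000000000;

-- Inserts on this machine now draw ids only from that slice:
insert into a1 (info) values ('first row');   -- id = 10000000000
insert into a1 (info) values ('second row');  -- id = 10000000001
```

A second machine would run the same DDL with the next slice (20000000000 to 29999999999), so rows can be copied between machines without id collisions.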

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] open up firewall from "anywhere" to postgres ports?

2009-03-08 Thread Willy-Bas Loos
Hi,

Tom, thanks! I'll take that advice.

> but if you don't allow access to ports 5432 and 5433
> in the firewall the packets will never get to ...
Adrian, I was talking about opening up the firewall for "the world" to
my postgres ports, instead of granting access to individual IP
addresses.

Cheers!

WBL

-- 
"Patriotism is the conviction that your country is superior to all
others because you were born in it." -- George Bernard Shaw



Re: [GENERAL] Enable user access from remote host

2009-03-08 Thread Piotre Ugrumov
On 8 Mar, 02:08, pie...@hogranch.com (John R Pierce) wrote:
> Martin Gainty wrote:
> > postgresql.conf :
> > change listen_address to a real ip
>
> change it to '*' or you won't be able to use localhost...  alternately,
> you could specify ip.of.net.iface,localhost if you wanted to be specific.
>

Hi,
I inserted this line at the end of the pg_hba.conf

host    test    angelo  ""  ident sameuser

I inserted the "" to allow everyone to access the db.
But if I try to access, from another host, I receive an error. What is
the error in that line?
Thanks, bye bye.



Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Carl Sopchak
Well, the upgrade to 8.3 seemed to rid me of the command limit, but now I'm 
running out of memory.  I have 2Gb physical and 8Gb swap (after adding 4Gb).

Is there a way for me to run this outside of one huge transaction?  This 
really shouldn't be using more than a few hundred megs of RAM (assuming 
cursor records are all stored in memory)...

Thanks for the help,

Carl

On Saturday, March 07, 2009, Tom Lane wrote:
> Carl Sopchak  writes:
> > I have written a PL/pgSQL function that performs these calculations by
> > reading the needed data, calculating, and saving the results.  When run
> > over a smaller set of data, it works fine.  But when I tried to run it
> > over this larger set of data, I got the error message "ERROR:  cannot
> > have more than 2^32-1 commands in a transaction".
> >
> > I have looked into trying to control the transaction within my function,
> > but apparently this results in nested transactions, which is not
> > supported by pgsql 8.2 (my current version).
>
> Try updating to 8.3 --- it only counts plpgsql statements as separate
> "commands" if they actually modified something on-disk.  The 2^32 limit
> is still there but it's a lot harder to hit.
>
>   regards, tom lane




Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Gregory Stark
Carl Sopchak  writes:

> Well, the upgrade to 8.3 seemed to rid me of the command limit, but now I'm 
> running out of memory.  I have 2Gb physical and 8Gb swap (after adding 4Gb).

What do you mean you're running out of memory? For most of Postgres
that's only a problem if you've configured it to use more memory than your
system can handle -- such as setting work_mem or shared_buffers too large.

One area that can cause problems is having too many trigger executions queued
up. I don't know if that's what you're running into though.

> Is there a way for me to run this outside of one huge transaction?  This 
> really shouldn't be using more than a few hundred megs of RAM (assuming 
> cursor records are all stored in memory)...

Personally I find it much more flexible to implement these types of jobs as
external scripts connecting as a client. That lets you stop/start transactions
freely. It also allows you to open multiple connections or run the client-side
code on a separate machine which can have different resources available.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!



Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Alvaro Herrera
Carl Sopchak wrote:
> Well, the upgrade to 8.3 seemed to rid me of the command limit, but now I'm 
> running out of memory.  I have 2Gb physical and 8Gb swap (after adding 4Gb).

Do you have AFTER triggers on the involved tables?  They are recorded in
memory and we have no mechanism to spill them to disk, so it's common for
those to cause out-of-memory errors.  If that's the explanation, your
workaround would be to get rid of them.
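If you're not sure what triggers exist, one quick way to check is the standard information_schema view (a sketch; user-defined triggers only -- internal foreign-key constraint triggers may not be listed there):

```sql
-- List user-defined triggers and the tables they fire on:
select trigger_name,
       event_object_table,   -- table the trigger is attached to
       event_manipulation    -- INSERT / UPDATE / DELETE
from information_schema.triggers
order by event_object_table;
```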

> Is there a way for me to run this outside of one huge transaction?  This 
> really shouldn't be using more than a few hundred megs of RAM (assuming 
> cursor records are all stored in memory)...

Hmm, maybe you're holding too many cursors open and not closing them in a
timely fashion?  Did you post your function for review?

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Enable user access from remote host

2009-03-08 Thread Andreas Wenk



Piotre Ugrumov schrieb:
> On 8 Mar, 02:08, pie...@hogranch.com (John R Pierce) wrote:
>> Martin Gainty wrote:
>>> postgresql.conf :
>>> change listen_address to a real ip
>>
>> change it to '*' or you won't be able to use localhost...  alternately,
>> you could specify ip.of.net.iface,localhost if you wanted to be specific.
>
> Hi,
> I inserted this line at the end of the pg_hba.conf
>
> host    test    angelo  ""  ident sameuser
>
> I inserted the "" to allow everyone to access the db.
> But if I try to access, from another host, I receive an error. What is
> the error in that line?


just leave it blank ... no "" signs ...

Cheers

Andy

--
Andreas Wenk
Hamburg - Germany








Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Carl Sopchak
On Sunday, March 08, 2009, Gregory Stark wrote:
> Carl Sopchak  writes:
> > Well, the upgrade to 8.3 seemed to rid me of the command limit, but now
> > I'm running out of memory.  I have 2Gb physical and 8Gb swap (after
> > adding 4Gb).
>
> What do you mean you're running out of memory? For most of Postgres
> that's only a problem if you've configured it to use more memory than your
> system can handle -- such as setting work_mem or shared_buffers too large.
"ERROR: Out of Memory" is what I meant when I said I was running out of 
memory!  :-)  This is returned by psql, but it is the postmaster process that 
is hitting the wall.

I haven't touched the configuration, so these are set to the 8.3 defaults.
I'll look and bring the numbers down if necessary.  Thanks for the pointer.

>
> One area that can cause problems is having too many trigger executions
> queued up. I don't know if that's what you're running into though.
There are no triggers on any of the tables in the database...

>
> > Is there a way for me to run this outside of one huge transaction?  This
> > really shouldn't be using more than a few hundred megs of RAM (assuming
> > cursor records are all stored in memory)...
>
> Personally I find it much more flexible to implement these types of jobs as
> external scripts connecting as a client. That lets you stop/start
> transactions freely. It also allows you to open multiple connections or run
> the client-side code on a separate machine which can have different
> resources available.
I suppose I could go that route...  I was just trying to keep things simple, 
and all in the database.  This was supposed to be a "quick and dirty" way to 
calculate, store and access these numbers...  So much for "quick"... If there 
were stored procedures as in other databases that I've used, or a way to run 
a function outside an implicit transaction, then I'd be all set.  Guess I'll 
have to add a layer to the setup.  Got any suggestions as to a good, fast, 
language to code the external script in?

Thanks for the help,

Carl



Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Tom Lane
Carl Sopchak  writes:
> On Sunday, March 08, 2009, Gregory Stark wrote:
>> What do you mean you're running out of memory?

> "ERROR: Out of Memory" is what I meant when I said I was running out of 
> memory!  :-)  This is returned by psql, but it is the postmaster process that 
> is hitting the wall.

Oh?  Postgres doesn't spell its out-of-memory errors that way.  Please
quote the *exact* message you got, not some approximation.

Also, if this was a server-side out-of-memory problem, there should be a
memory usage dump appearing in the postmaster log, which would help
identify exactly what the problem is.

regards, tom lane



Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Thomas Kellerer

Carl Sopchak wrote on 08.03.2009 17:37:
> or a way to run a function outside an implicit transaction

No sensible DBMS will let you do _anything_ outside a transaction.

Thomas





Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Carl Sopchak
On Sunday, March 08, 2009, Alvaro Herrera wrote:
> Carl Sopchak wrote:
> > Well, the upgrade to 8.3 seemed to rid me of the command limit, but now
> > I'm running out of memory.  I have 2Gb physical and 8Gb swap (after
> > adding 4Gb).
>
> Do you have AFTER triggers on the involved tables?  They are recorded in
> memory and we have no mechanism to spill them to disk, so it's common for
> those to cause out-of-memory errors.  If that's the explanation, your
> workaround would be to get rid of them.
No triggers have been defined on any tables in the database...

>
> > Is there a way for me to run this outside of one huge transaction?  This
> > really shouldn't be using more than a few hundred megs of RAM (assuming
> > cursor records are all stored in memory)...
>
> Hmm, maybe you're holding too many cursors open and not closing them in a
> timely fashion?  Did you post your function for review?
I am only using one cursor, which is opened and closed repeatedly.  It pulls 
the base data from the database for the calculations.

I have not posted the function for review yet because what it does is
proprietary.  However, I have stripped out the proprietary stuff, and include
the code below.  I marked everything stripped out by placing a brief 
description enclosed within {{ and }}.  I left all of the places that the 
database is accessed in the code.  I changed some of the line wrapping to fit 
a reasonable width (which I mention in case you see syntax type errors).  

There are a few lines that save the calculation details based on a flag in the
trial_header table.  This flag is set to N for the run that I am having 
issues with, so these records are not being created.  I left that code in 
below for completeness...

I realize this isn't probably the cleanest code out there (I'm sure using 
prepared statements would help speed), but it was really meant to be a "quick 
and dirty" way to calculate the data I need.  Any comments or suggestions on 
improving the code are welcome.

create or replace function Run_Trial (tid integer) returns void as $proc$

declare
  {{ declarations }}
begin
   -- Set start time...
   program_version := '1.16';
   update trial_header set start_timestamp = clock_timestamp(), 
  run_version = program_version, end_timestamp = null 
  where trial_id = tid;

   -- get rid of prior run, if any:
   delete from trial_results where trial_id = tid;
   delete from trial_calc_detail where trial_id = tid;

   -- Get the trial parameters:
   select * into trial_hdr from trial_header where trial_id = tid;

   {{ Do some calculations  }}

   -- Create temp table of data.  This simplifies the coding below A LOT.
   {{ conditional calc }}

   -- (I can't figure out how to do this with a dynamic select and 
   -- "insert into trial_data select" and I get an error
   -- if I make the dynamic SQL a "select into temp table trial_data"...)
   -- Do it by brute force, I guess...
   drop table if exists trial_data;
   create temp table trial_data(
  {{ fields }}
   );
   for row in execute 
  'select {{ select statement }}' loop
 execute 'insert into trial_data values(' || {{ fields }} || ')';
   end loop;

   create index trial_data_idx on trial_data (data_mm, data_date);
   create index trial_data_idx2 on trial_data (data_date);

   -- Get date range for the data set we're using
   for row in execute 'select min(data_date) as min_date, 
  max(data_date) as max_date from trial_data' loop
  low_data_date := row.min_date;
  high_data_date := row.max_date;
   end loop;

   -- Calculate maximum number of years that data covers
   max_years = floor((high_data_date - low_data_date) / 365.25);

   --  Loop through all possible "x year" periods
   for cur_years in 1 .. max_years loop

  -- start from the first period on file:
  next_iteration_start := low_data_date;  

  num_periods := trial_hdr.periods_per_year * cur_years 
 + trial_hdr.{{ field }};

  for row in execute 'select count(*) as cnt from (
   select data_date from trial_data where data_date >= ' ||
 quote_literal(next_iteration_start) ||
   ' Limit ' || to_char(num_periods, '99') || ') a' loop
 data_periods := row.cnt;
  end loop;

  -- Do each "x year" period in data
  while data_periods = num_periods loop

 -- Initialize calculation 

 -- used to set successive values for next_iteration_start:
 iteration_counter := 0;   

 {{ some calculations }}

 for row in execute 'select max(data_date) as max_date' ||
  ' from ( select data_date from trial_data' ||
  ' where data_date >= ' ||
   quote_literal(next_iteration_start) ||
  ' order by data_date' || 
  ' Limit ' || to_char(num_periods, '9') || ') a' loop
per_end_date := row.max_date;
 end loop;

 -- Get dat

Re: [GENERAL] Performance of subselects

2009-03-08 Thread Christian Schröder

Tom Lane wrote:
> No, they're not the same; NOT IN has different semantics for nulls.

But in this case the column in the subselect has a not-null constraint.
Does the planner recognize this constraint?

> You're probably at the threshold where it doesn't think the hashtable
> would fit in work_mem.

I have read in the docs that the "work_mem" value should be increased
carefully because the total memory used can be many times the value of
"work_mem".  Are there any statistics available about how many concurrent
sort or hash operations are running and how much memory they consume?
This would help to find out if the value can be changed without running
out of memory.


Regards,
   Christian Schröder

--
Deriva GmbH Tel.: +49 551 489500-42
Financial IT and Consulting Fax:  +49 551 489500-91
Hans-Böckler-Straße 2  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer




Re: [GENERAL] Performance of subselects

2009-03-08 Thread Grzegorz Jaśkiewicz
On Sun, Mar 8, 2009 at 6:37 PM, Christian Schröder  wrote:
> Tom Lane wrote:
>>
>> No, they're not the same; NOT IN has different semantics for nulls.
>>
>
> But in this case the column in the subselect has a not-null constraint. Does
> the planner recognize this constraint?
not in this case, afaik


>> You're probably at the threshold where it doesn't think the hashtable
>> would fit in work_mem.

work_mem constrains the amount of memory allocated per connection, hence
you can run out of memory if too many connections try to use too much
of it at the same time; that's why it is advisable to set work_mem per
connection/query, should the connection/query require more.
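For example, work_mem can be raised for a single session, or scoped to one transaction with SET LOCAL so it reverts automatically (a sketch; the values are illustrative):

```sql
-- Raise work_mem for the current session only:
set work_mem = '64MB';

-- Or scope it to a single transaction; the setting reverts at COMMIT/ROLLBACK:
begin;
set local work_mem = '256MB';
-- run the memory-hungry NOT IN / hash query here
commit;
```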


-- 
GJ



Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Carl Sopchak
On Sunday, March 08, 2009, Tom Lane wrote:
> Carl Sopchak  writes:
> > On Sunday, March 08, 2009, Gregory Stark wrote:
> >> What do you mean you're running out of memory?
> >
> > "ERROR: Out of Memory" is what I meant when I said I was running out of
> > memory!  :-)  This is returned by psql, but it is the postmaster process
> > that is hitting the wall.
>
> Oh?  Postgres doesn't spell its out-of-memory errors that way.  Please
> quote the *exact* message you got, not some approximation.
Sorry.  I didn't think capitalization would matter.  It takes hours for this 
thing to run, so I did type the message from memory.  Here's a cut and 
paste: "ERROR:  out of memory" followed by "DETAIL:  Failed on request of 
size 14."  (quotes mine).
>
> Also, if this was a server-side out-of-memory problem, there should be a
> memory usage dump appearing in the postmaster log, which would help
> identify exactly what the problem is.

Here's what's around the error message in the log:

TopMemoryContext: 1007952 total in 14 blocks; 8568 free (10 chunks); 999384 
used
  Local Buffer Lookup Table: 8192 total in 1 blocks; 744 free (0 chunks); 7448 
used
  Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 
used
  Operator lookup cache: 24576 total in 2 blocks; 14072 free (6 chunks); 10504 
used
  PL/PgSQL function context: 122880 total in 4 blocks; 21040 free (45 chunks); 
101840 used
  Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 
used
  CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
  Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 
used
  PLpgSQL function cache: 24328 total in 2 blocks; 5904 free (0 chunks); 18424 
used
  TopTransactionContext: 8192 total in 1 blocks; 680 free (0 chunks); 7512 
used
ExecutorState: 253952 total in 5 blocks; 111296 free (8 chunks); 142656 
used
  ExprContext: 8192 total in 1 blocks; 8112 free (0 chunks); 80 used
SPI Exec: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
SPI Proc: 3154128080 total in 398 blocks; 13664 free (178 chunks); 
3154114416 used
  MessageContext: 8192 total in 1 blocks; 4616 free (1 chunks); 3576 used
  smgr relation table: 8192 total in 1 blocks; 1776 free (0 chunks); 6416 used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16 
used
  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 7904 free (0 chunks); 288 used
PortalHeapMemory: 7168 total in 3 blocks; 2688 free (0 chunks); 4480 used
  ExecutorState: 32832 total in 3 blocks; 15672 free (3 chunks); 17160 
used
TIDBitmap: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
TupleSort: 319512 total in 6 blocks; 47720 free (4 chunks); 271792 
used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
PortalHeapMemory: 1024 total in 1 blocks; 880 free (0 chunks); 144 used
  ExecutorState: 8192 total in 1 blocks; 5984 free (4 chunks); 2208 used
ExprContext: 8192 total in 1 blocks; 8176 free (7 chunks); 16 used
  Relcache by OID: 8192 total in 1 blocks; 2856 free (0 chunks); 5336 used
  CacheMemoryContext: 667472 total in 20 blocks; 71416 free (0 chunks); 596056 
used
CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 3072 total in 2 blocks; 1712 free (0 chunks); 1360 used
CachedPlanSource: 3072 total in 2 blocks; 1056 free (0 chunks); 2016 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 3072 total in 2 blocks; 520 free (0 chunks); 2552 used
CachedPlanSource: 1024 total in 1 blocks; 144 free (0 chunks); 880 used
SPI Plan: 1024 total in 1 blocks; 888 free (0 chunks); 136 used
CachedPlan: 1024 total in 1 blocks; 56 free (0 chunks); 968 used
CachedPlanSource: 3072 total in 2 blocks; 1696 free (0 chunks); 1376 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 1024 total in 1 blocks; 240 free (0 chunks); 784 used
CachedPlanSource: 3072 total in 2 blocks; 1872 free (2 chunks); 1200 used
SPI Plan: 1024 total in 1 blocks; 904 free (0 chunks); 120 used
CachedPlan: 1024 total in 1 blocks; 232 free (0 chunks); 792 used
CachedPlanSource: 3072 total in 2 blocks; 1872 free (2 chunks); 1200 used
SPI Plan: 1024 total in 1 blocks; 904 free (0 chunks); 120 used
CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
   

Re: [GENERAL] [Q] string to int hash function for small range

2009-03-08 Thread V S P
Ok, thank you

Using   \df *hash*   from the psql prompt
I can see that this is how I have to access this function:

select pg_catalog.hashtext('myusername')

I will also play with the other suggestions of get_byte
of the MD5 result cast to a string.
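For the 0-255 case, either approach fits in one expression (sketches; note that hashtext is an undocumented internal function, so its values are not guaranteed stable across major versions, while the MD5 byte is):

```sql
-- Mask the 32-bit hash down to 0..255 (bitwise AND handles negatives too):
select hashtext('myusername') & 255 as db_id;

-- Or take the first byte of the MD5 digest:
select get_byte(decode(md5('myusername'), 'hex'), 0) as db_id;
```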

thanks again for all the replies,
Vlad





On Sat, 07 Mar 2009 21:17 -0700, "Scott Marlowe"
 wrote:
> On Sat, Mar 7, 2009 at 7:33 PM, V S P  wrote:
> > I would like to have a function
> > given a user id varchar(64) to get a hash number between 0 and 255.
> >
> > I first tried md5 but the number it returns is very big and cannot
> > be converted to an INT
> >
> > there is likely a way to simply add ascii values of the userId together
> > to get a small integer, but wanted to know if there are any other
> > 'built-in' ways
> >
> > this is to get the database id based on user id.
> 
> In psql type this:
> 
> \df *hash*
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - The professional email service




Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Tom Lane
Carl Sopchak  writes:
> Here's what's around the error message in the log:

> SPI Proc: 3154128080 total in 398 blocks; 13664 free (178 chunks); 
> 3154114416 used

Hmm, so apparently some internal leak within the plpgsql engine.  I'd be
willing to look into this if you can provide a self-contained test case.
(I don't wish to spend time trying to reverse engineer suitable tables
and data from the fragmentary function you posted, even assuming that it
would show the leak ...)

regards, tom lane



Re: [GENERAL] Random Deadlock on DROP CONSTRAINT

2009-03-08 Thread Mark Mandel
Just a note on this -

I have tried it on both 8.2 and 8.3

Maybe 'deadlock' isn't the right word, so much as 'hang'... in that
once it decides to stop at that point, it goes no further, unless I
restart the app server that has the connection to the database.

It seems to be completely random, and I've got no idea what is going on.

Thanks in advance.

Mark

On Sun, Mar 8, 2009 at 11:39 AM, Mark Mandel  wrote:
> All,
>
> I seem to have a weird one here.
>
> I'm using Hibernate on my application to generate my DDL for Postgres
> whenever it starts up, so when doing automated testing, the database
> tends to get tables and constraints dropped quite regularly, so as to
> have stable test data to work from in each test.
>
> Only occasionally, I get a deadlock on the command:
>    alter table Doctor
>        drop constraint FK7A547D3FE8829FE8
>
> Which means I have to restart my unit tests all over again.
>
> Looking into the postgres statistics tables I can see that:
>
> The statement:
> "alter table Doctor drop constraint FK7A547D3FE8829FE8" is showing 'waiting'
> as true.
>
> (from pg_stat_activity)
>
> Delving deeper, in pg_locks, there are several records for the database,
>
> The ones that jump out at me are:
> On table: "doctor" there is a "AccessExclusiveLock" for "alter table
> Doctor drop constraint", which has not been granted.
>
> Everything else is 'AccessShareLock', or Exclusive locks that don't
> seem to match up to a table.
>
> (Going through locking tables is new to me, so bear with me on this one)
>
> This is all on my local machine, so there is no one else accessing the
> database at the time I'm writing these tests.
>
> Any help on this would be greatly appreciated.
>
> Mark
>
>
> --
> E: mark.man...@gmail.com
> W: www.compoundtheory.com
>



-- 
E: mark.man...@gmail.com
W: www.compoundtheory.com



Re: [GENERAL] [Q] Cluster design for geographically separated dbs

2009-03-08 Thread V S P
Thank you,
Is there a way, along the same idea,
to make postgresql 'skip' say every 100 numbers when generating
a 'next' in bigserial?
(or to ensure that every number generated is evenly divisible by 100,
and then another db would use 99, and so on)

In Oracle, if I remember right, there was something called a 'step'
for the sequence values.



Vlad



On Sun, 08 Mar 2009 01:13 -0700, "Scott Marlowe"
 wrote:
> On Sat, Mar 7, 2009 at 2:03 PM, V S P  wrote:
> 
> > And wanted to ask what would be the main challenges I am facing with --
> > from the experience of the users on this list.
> >
> > Especially I am not sure how to for example manage 'overlapping unique
> > IDs' data.
> 
> I'm no expert on a lot of what you're doing, but the standard trick
> here is to partition your bigserials.
> 
> The max value for the underlying sequence is 9223372036854775807, which
> should give you plenty of space to work in.  So, when creating your
> bigserials, you can then alter the sequence underneath them to use a
> different range on each machine.
> 
> smarlowe=# create table a1 (id bigserial, info text);
> NOTICE:  CREATE TABLE will create implicit sequence "a1_id_seq" for
> serial column "a1.id"
> smarlowe=# create table a2 (id bigserial, info text);
> NOTICE:  CREATE TABLE will create implicit sequence "a2_id_seq" for
> serial column "a2.id"
> smarlowe=# alter sequence a1_id_seq minvalue 10000000000 maxvalue 19999999999
> start 10000000000;
> ALTER SEQUENCE
> smarlowe=# alter sequence a2_id_seq minvalue 20000000000 maxvalue
> 29999999999 start 20000000000;
> ALTER SEQUENCE
> 
> Now those two sequences can't run into each other, and if you move a
> record from one machine to another it won't bump into what's already
> there.  Partitioning by 10 billion gives you 922337203 possible
> partitions, so if you need bigger but fewer partitions there's plenty
> of wiggle room to play with.
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - IMAP accessible web-mail




Re: [GENERAL] [Q] Cluster design for geographically separated dbs

2009-03-08 Thread Scott Marlowe
On Sun, Mar 8, 2009 at 1:01 PM, V S P  wrote:
> Thank you,
> Is there a way, in the same  idea,
> to make postgresql 'skip' say every 100 numbers when generating
> a 'next' in bigserial?
> (or to insure that every number generated is evenly divisible by 100,
> and then another db would be 99 and so on)
>
> In oracle, if I remember right, there was something called a 'Step'
> for the sequence values.

To find this stuff in pgsql you can either wander through the manual,
or fire up psql and ask it:

smarlowe=# \h create sequence
Command: CREATE SEQUENCE
Description: define a new sequence generator
Syntax:
CREATE [ TEMPORARY | TEMP ] SEQUENCE name [ INCREMENT [ BY ] increment ]
[ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
[ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
[ OWNED BY { table.column | NONE } ]

smarlowe=# \h alter sequence
Command: ALTER SEQUENCE
Description: change the definition of a sequence generator
Syntax:
ALTER SEQUENCE name [ INCREMENT [ BY ] increment ]
[ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
[ RESTART [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
[ OWNED BY { table.column | NONE } ]
ALTER SEQUENCE name RENAME TO new_name
ALTER SEQUENCE name SET SCHEMA new_schema

As you can see, there's an INCREMENT setting that does exactly what
you're after.
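For Vlad's case, a sketch of two databases interleaving one id space via INCREMENT (the sequence name, step of 100, and per-node offsets are assumptions for illustration):

```sql
-- Database 1: generates ids 1, 101, 201, ...
create sequence global_id_seq increment by 100 start with 1;

-- Database 2 would run this instead, generating 2, 102, 202, ...:
-- create sequence global_id_seq increment by 100 start with 2;

select nextval('global_id_seq');  -- 1 on database 1
select nextval('global_id_seq');  -- 101 on database 1
```

This interleaves ids rather than partitioning them into contiguous ranges; both schemes keep the nodes collision-free.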



Re: [GENERAL] Random Deadlock on DROP CONSTRAINT

2009-03-08 Thread Scott Marlowe
On Sun, Mar 8, 2009 at 2:54 PM, Mark Mandel  wrote:
> Just a note on this -
>
> I have tried it on both 8.2 and 8.3
>
> Maybe 'deadlock' isn't the right word, so much as 'hang'... in that
> once it decides to stop at that point, it goes no further, unless I
> restart the app server that has the connection to the database.
>
> It seems to be completely random, and I've got no idea what is going on.

Well, pg_locks can tell you, and us, a lot.  Next time it's happening
grab the contents of pg_locks and attach it to an email here.  Most
likely it's some long running transaction or something blocking the
access you need.



Re: [GENERAL] Performance of subselects

2009-03-08 Thread Scott Marlowe
On Sun, Mar 8, 2009 at 12:47 PM, Grzegorz Jaśkiewicz  wrote:
> work_mem constrains the amount of memory allocated per connection, hence

Actually, it's per sort.  And there can be more than one sort per query.

> you can run out of memory if too many connections try to use too much
> of it at the same time, that's why it is advisable to set work_mem per
> connection/query, should the connection/query require more.

Definitely.
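
If one particular query needs a big sort, it's usually better to raise
work_mem for just that session (or transaction, via SET LOCAL) than to
raise the global default.  A sketch (big_table and some_col are made-up
names):

```sql
SET work_mem = '256MB';                      -- this session only
SELECT * FROM big_table ORDER BY some_col;   -- the expensive sort
RESET work_mem;                              -- back to the configured default
```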



Re: [GENERAL] Enable user access from remote host

2009-03-08 Thread John R Pierce

Piotre Ugrumov wrote:

Hi,
I inserted this line at the end of the pg_hba.conf

host  test  angelo  ""  ident sameuser

I inserted the "" to allow everyone to access the db.
But if I try to connect from another host, I receive an error. What is
wrong with that line?
  


   host test angelo 0.0.0.0/0 ident sameuser

if you want to allow angelo to connect to database test from any IP... 
*HOWEVER*, "ident sameuser" should *not* be used for HOST connections; 
it's only reliable for LOCAL connections.   Instead, assign angelo a 
password...


   alter user angelo with encrypted password 'somepassword';

and in pg_hba.conf, specify...

   host test angelo  0.0.0.0/0  md5

note, also, if there are any pg_hba.conf records in FRONT of this one 
which would match on the connection, they will be used INSTEAD...  for 
instance...


   host all all 192.168.1.0/24   md5

that allows any user, any database on the subnet 192.168.1.0/24 to 
connect with md5 password authentication, and any following records 
would be ignored.






Re: [GENERAL] Random Deadlock on DROP CONSTRAINT

2009-03-08 Thread Mark Mandel
Thanks for this -

I've attached the results of the following query -
select pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid,
pg_locks.mode, 
pg_locks.granted,pg_stat_activity.usename,substr(pg_stat_activity.current_query,1,30),
pg_stat_activity.query_start, age(now(),pg_stat_activity.query_start)
as "age", pg_stat_activity.procpid from pg_stat_activity,pg_locks left
outer join pg_class on (pg_locks.relation = pg_class.oid)  where
pg_locks.pid=pg_stat_activity.procpid order by query_start;

It looks to me like I have too many items that are 'idle in transaction'...

I am using JDBC connection pooling, but I was sure I was closing them
when I was done.

Mark

On Mon, Mar 9, 2009 at 8:29 AM, Scott Marlowe  wrote:
> On Sun, Mar 8, 2009 at 2:54 PM, Mark Mandel  wrote:
>> Just a note on this -
>>
>> I have tried it on both 8.2 and 8.3
>>
>> Maybe 'deadlock' isn't the right word, so much as 'hang'... in that
>> once it decides to stop at that point, it goes no further, unless I
>> restart the app server that has the connection to the database.
>>
>> It seems to be completely random, and I've got no idea what is going on.
>
> Well, pg_locks can tell you, and us, a lot.  Next time it's happening
> grab the contents of pg_locks and attach it to an email here.  Most
> likely it's some long running transaction or something blocking the
> access you need.
>



-- 
E: mark.man...@gmail.com
W: www.compoundtheory.com







	
	
		
 datname   | relname                    | transactionid | mode                | granted | usename  | substr                         | query_start                   | age         | procpid
-----------+----------------------------+---------------+---------------------+---------+----------+--------------------------------+-------------------------------+-------------+--------
 tissupath | doctor_pkey                |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath | doctor_internalid_key      |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath |                            |               | ExclusiveLock       | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath | title_pkey                 |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath | doctor_hl7id_key           |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath | users_doctor               |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath | doctor                     |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath | title                      |               | AccessShareLock     | t       | postgres | <IDLE> in transaction          | 2009-03-09 09:46:13.471743+11 | 00:01:03.46 | 8450
 tissupath |                            |               | ExclusiveLock       | t       | postgres | alter table Doctor drop constr | 2009-03-09 09:46:49.063751+11 | 00:00:27.87 | 8456
 tissupath | doctor                     |               | AccessExclusiveLock | f       | postgres | alter table Doctor drop constr | 2009-03-09 09:46:49.063751+11 | 00:00:27.87 | 8456
 tissupath | pg_class_oid_index         |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_database_oid_index      |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_authid_oid_index        |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath |                            |               | ExclusiveLock       | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_locks                   |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_stat_activity           |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_authid                  |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_class_relname_nsp_index |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_database                |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_database_datname_index  |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_class                   |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47:16.93356+11  | 00:00:00    | 7517
 tissupath | pg_authid_rolname_index    |               | AccessShareLock     | t       | postgres | select pg_stat_activity.datnam | 2009-03-09 09:47            |             |
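
For anyone chasing the same symptom, a quick way to list just the offending
backends is to query pg_stat_activity directly (a sketch using the 8.2/8.3
column names procpid and current_query; later releases renamed these):

```sql
-- Sessions sitting idle inside an open transaction, oldest first
SELECT procpid, usename, query_start,
       age(now(), query_start) AS idle_for
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```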

Re: [GENERAL] Random Deadlock on DROP CONSTRAINT

2009-03-08 Thread Mark Mandel
Omg... I think this one was all on me

What I thought was closing the connection at the end of my
processing... doesn't look like it was.

Pretty sure this has fixed it. Will let you know.

Mark

On Mon, Mar 9, 2009 at 9:52 AM, Mark Mandel  wrote:
> Thanks for this -
>
> I've attached the results of the following query -
> select pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid,
> pg_locks.mode, 
> pg_locks.granted,pg_stat_activity.usename,substr(pg_stat_activity.current_query,1,30),
> pg_stat_activity.query_start, age(now(),pg_stat_activity.query_start)
> as "age", pg_stat_activity.procpid from pg_stat_activity,pg_locks left
> outer join pg_class on (pg_locks.relation = pg_class.oid)  where
> pg_locks.pid=pg_stat_activity.procpid order by query_start;
>
> It looks to me like I have too many items that are 'idle in transaction'...
>
> I am using JDBC connection pooling, but I was sure I was closing them
> when I was done.
>
> Mark
>
> On Mon, Mar 9, 2009 at 8:29 AM, Scott Marlowe  wrote:
>> On Sun, Mar 8, 2009 at 2:54 PM, Mark Mandel  wrote:
>>> Just a note on this -
>>>
>>> I have tried it on both 8.2 and 8.3
>>>
>>> Maybe 'deadlock' isn't the right word, so much as 'hang'... in that
>>> once it decides to stop at that point, it goes no further, unless I
>>> restart the app server that has the connection to the database.
>>>
>>> It seems to be completely random, and I've got no idea what is going on.
>>
>> Well, pg_locks can tell you, and us, a lot.  Next time it's happening
>> grab the contents of pg_locks and attach it to an email here.  Most
>> likely it's some long running transaction or something blocking the
>> access you need.
>>
>
>
>
> --
> E: mark.man...@gmail.com
> W: www.compoundtheory.com
>



-- 
E: mark.man...@gmail.com
W: www.compoundtheory.com



Re: [GENERAL] Random Deadlock on DROP CONSTRAINT

2009-03-08 Thread Mark Mandel
Well, there was an issue there, in that I wasn't closing my
connection, but I'm still getting the same log as I did before.

It seems to happen when Hibernate lazy-loads some data.  The lazy loading
seems to happen within the session (connection), but the transaction
doesn't get closed?

Mark

On Mon, Mar 9, 2009 at 10:05 AM, Mark Mandel  wrote:
> Omg... I think this one was all on me
>
> What I thought was closing the connection at the end of my
> processing... doesn't look like it was.
>
> Pretty sure this has fixed it. Will let you know.
>
> Mark
>
> On Mon, Mar 9, 2009 at 9:52 AM, Mark Mandel  wrote:
>> Thanks for this -
>>
>> I've attached the results of the following query -
>> select pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid,
>> pg_locks.mode, 
>> pg_locks.granted,pg_stat_activity.usename,substr(pg_stat_activity.current_query,1,30),
>> pg_stat_activity.query_start, age(now(),pg_stat_activity.query_start)
>> as "age", pg_stat_activity.procpid from pg_stat_activity,pg_locks left
>> outer join pg_class on (pg_locks.relation = pg_class.oid)  where
>> pg_locks.pid=pg_stat_activity.procpid order by query_start;
>>
>> It looks to me like I have too many items that are 'idle in transaction'...
>>
>> I am using JDBC connection pooling, but I was sure I was closing them
>> when I was done.
>>
>> Mark
>>
>> On Mon, Mar 9, 2009 at 8:29 AM, Scott Marlowe  
>> wrote:
>>> On Sun, Mar 8, 2009 at 2:54 PM, Mark Mandel  wrote:
>>>> Just a note on this -
>>>>
>>>> I have tried it on both 8.2 and 8.3
>>>>
>>>> Maybe 'deadlock' isn't the right word, so much as 'hang'... in that
>>>> once it decides to stop at that point, it goes no further, unless I
>>>> restart the app server that has the connection to the database.
>>>>
>>>> It seems to be completely random, and I've got no idea what is going on.
>>>
>>> Well, pg_locks can tell you, and us, a lot.  Next time it's happening
>>> grab the contents of pg_locks and attach it to an email here.  Most
>>> likely it's some long running transaction or something blocking the
>>> access you need.
>>>
>>
>>
>>
>> --
>> E: mark.man...@gmail.com
>> W: www.compoundtheory.com
>>
>
>
>
> --
> E: mark.man...@gmail.com
> W: www.compoundtheory.com
>



-- 
E: mark.man...@gmail.com
W: www.compoundtheory.com



Re: [GENERAL] Enable user access from remote host

2009-03-08 Thread Tom Lane
John R Pierce  writes:
> *HOWEVER*   "ident sameuser" should *not* be used for HOST connections, 
> its only reliable for LOCAL connections.

A more accurate statement is that it's trustworthy to the extent that
you trust the owner of the other machine to be running a non-broken
identd daemon.  Within a LAN it might be perfectly reasonable to use.

regards, tom lane



Re: [GENERAL] Newbie questions relating to transactions

2009-03-08 Thread Carl Sopchak
On Sunday, March 08, 2009, Tom Lane wrote:
> Carl Sopchak  writes:
> > Here's what's around the error message in the log:
> >
> > SPI Proc: 3154128080 total in 398 blocks; 13664 free (178 chunks);
> > 3154114416 used
>
> Hmm, so apparently some internal leak within the plpgsql engine.  I'd be
> willing to look into this if you can provide a self-contained test case.
> (I don't wish to spend time trying to reverse engineer suitable tables
> and data from the fragmentary function you posted, even assuming that it
> would show the leak ...)
>
>   regards, tom lane

Yeah, those numbers looked pretty big to me, too...  (Even though I didn't 
really know what they mean...)

I'll try to put together a test case, but don't hold your breath.  I'm kinda 
under a deadline on the project I'm working on, and this is a side track at 
best.  There are other ways for me to work around this (client side script or 
changing the function to do partial runs at a time) that will get me to where 
I need to be.  On the other hand, I like to help solve these types of things, 
so I should get around to it eventually...

Thanks for your help.

Carl



Re: [GENERAL] Enable user access from remote host

2009-03-08 Thread John R Pierce

Tom Lane wrote:

John R Pierce  writes:
  
*HOWEVER*   "ident sameuser" should *not* be used for HOST connections, 
its only reliable for LOCAL connections.



A more accurate statement is that it's trustworthy to the extent that
you trust the owner of the other machine to be running a non-broken
identd daemon.  Within a LAN it might be perfectly reasonable to use.
  


you would have to extend that trust to any machine connected to any 
network which can be routed to the server in question, since he was 
specifying a wildcard IP, and that includes anything that anyone could 
plug into any network port.


no thanks.





Re: [GENERAL] Log SQL code before executing it

2009-03-08 Thread Chris

Alexander Farber wrote:

Dear PgSQL users,

is there please a way to display the executed SQL code?

I'm using phpBB 3.0.4 / PostgreSQL 8.2.6 / PHP 5.2.5 / OpenBSD 4.3
to develop a card game in Flash / C / Perl (at http://preferans.de )
and would like to log each game round by simply posting and replying
in one of the phpBB forums. The phpBB's code however is very hairy
and I have troubles to follow it and to reconstruct the SQL statements
needed for posting and replying.


If they have an API, use it. If they change things between 
versions (even minor versions), you'll have to start from scratch to 
figure out why your stuff doesn't work any more.



PS: Or maybe it's possible to overload $db->sql_query($sql) in PHP?


see if it's defined in their code.

grep -nri 'function sql_query' /path/to/folder

--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [GENERAL] Enable user access from remote host

2009-03-08 Thread Tom Lane
John R Pierce  writes:
> Tom Lane wrote:
>> A more accurate statement is that it's trustworthy to the extent that
>> you trust the owner of the other machine to be running a non-broken
>> identd daemon.  Within a LAN it might be perfectly reasonable to use.

> you would have to extend that trust to any machine connected to any 
> network which can be routed to the server in question as he was 
> specifying a wildcard IP, and that includes anything that anyone could 
> plug into any network port.

Agreed, it's pretty stupid to use IDENT with a wildcard IP that allows
connections from untrusted networks.  I was just objecting to the
statement that it's unsafe in all cases.

regards, tom lane



Re: [GENERAL] Random Deadlock on DROP CONSTRAINT

2009-03-08 Thread Mark Mandel
Final words -

If you use Postgres and Hibernate, *everything* has to be in a
transaction, ALWAYS.

That and remember to close your sessions.

Thanks for listening to my rambling.

Mark

On Mon, Mar 9, 2009 at 10:31 AM, Mark Mandel  wrote:
> Well, there was an issue there, in that I wasn't closing my
> connection, but I'm still getting the same log as I did before.
>
> It seems to be when Hibernate lazy loads some data.  The lazy loading
> seems to happen within the session (connection), but the transaction
> doesn't get closed?
>
> Mark
>
> On Mon, Mar 9, 2009 at 10:05 AM, Mark Mandel  wrote:
>> Omg... I think this one was all on me
>>
>> What I thought was closing the connection at the end of my
>> processing... doesn't look like it was.
>>
>> Pretty sure this has fixed it. Will let you know.
>>
>> Mark
>>
>> On Mon, Mar 9, 2009 at 9:52 AM, Mark Mandel  wrote:
>>> Thanks for this -
>>>
>>> I've attached the results of the following query -
>>> select pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid,
>>> pg_locks.mode, 
>>> pg_locks.granted,pg_stat_activity.usename,substr(pg_stat_activity.current_query,1,30),
>>> pg_stat_activity.query_start, age(now(),pg_stat_activity.query_start)
>>> as "age", pg_stat_activity.procpid from pg_stat_activity,pg_locks left
>>> outer join pg_class on (pg_locks.relation = pg_class.oid)  where
>>> pg_locks.pid=pg_stat_activity.procpid order by query_start;
>>>
>>> It looks to me like I have too many items that are 'idle in transaction'...
>>>
>>> I am using JDBC connection pooling, but I was sure I was closing them
>>> when I was done.
>>>
>>> Mark
>>>
>>> On Mon, Mar 9, 2009 at 8:29 AM, Scott Marlowe  
>>> wrote:
>>>> On Sun, Mar 8, 2009 at 2:54 PM, Mark Mandel  wrote:
>>>>> Just a note on this -
>>>>>
>>>>> I have tried it on both 8.2 and 8.3
>>>>>
>>>>> Maybe 'deadlock' isn't the right word, so much as 'hang'... in that
>>>>> once it decides to stop at that point, it goes no further, unless I
>>>>> restart the app server that has the connection to the database.
>>>>>
>>>>> It seems to be completely random, and I've got no idea what is going on.
>>>>
>>>> Well, pg_locks can tell you, and us, a lot.  Next time it's happening
>>>> grab the contents of pg_locks and attach it to an email here.  Most
>>>> likely it's some long running transaction or something blocking the
>>>> access you need.
>>>>
>>>
>>>
>>>
>>> --
>>> E: mark.man...@gmail.com
>>> W: www.compoundtheory.com
>>>
>>
>>
>>
>> --
>> E: mark.man...@gmail.com
>> W: www.compoundtheory.com
>>
>
>
>
> --
> E: mark.man...@gmail.com
> W: www.compoundtheory.com
>



-- 
E: mark.man...@gmail.com
W: www.compoundtheory.com
