Re: [GENERAL] how to drop function?

2011-11-16 Thread Ivan Sergio Borgonovo
On Tue, 15 Nov 2011 18:48:00 -0700
J.V. jvsr...@gmail.com wrote:

 the drop function works when running from a pgAdmin III Sql window
 
 but when I try to do from the command line and script it:
  psql -h $PGHOST -p $PGPORT -d $PGDATABASE -U $PGUSER -c
 "drop function ..."
 
 the above fails.

 It does however work with functions with no params or a single
 param. It seems to get hung up on the comma and the extra set of
 parentheses

It would be nice to know how it fails, and if you posted exactly
what you wrote to make it fail and what you wrote to make it succeed.

What does "get hung up on the comma and the extra set of
parentheses" mean?

ivan@dawn:~$ psql -h lan test -c 'drop function process_table
  (action TEXT, v_table_name varchar(100));'
DROP FUNCTION
ivan@dawn:~$

The psql command is all on one line.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to drop function?

2011-11-16 Thread Thomas Kellerer

Ivan Sergio Borgonovo, 16.11.2011 01:01:

test=# begin;
create or replace function process_table (
 action TEXT, v_table_name varchar(100)
) RETURNS BOOLEAN
AS $$
DECLARE

BEGIN
 return true;
END;
$$ LANGUAGE plpgsql;

drop function process_table (
 action TEXT, v_table_name varchar(100)
);
commit;
BEGIN
CREATE FUNCTION
DROP FUNCTION
COMMIT
test=#

Repeat just the input parameters.


You don't have to include the parameter names though
 
   drop function process_table (TEXT, varchar(100));


is just as good and a bit less typing ;)
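
(An aside, not part of the original reply: if you are unsure of the exact
signature, the catalog can print it in the form DROP FUNCTION expects, e.g.:)

  -- prints e.g. process_table(text, character varying)
  SELECT p.oid::regprocedure AS signature
  FROM pg_proc p
  WHERE p.proname = 'process_table';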




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to drop function?

2011-11-16 Thread Ivan Sergio Borgonovo
On Wed, 16 Nov 2011 09:17:45 +0100
Thomas Kellerer spam_ea...@gmx.net wrote:

 Ivan Sergio Borgonovo, 16.11.2011 01:01:
  test=# begin;
  create or replace function process_table (
   action TEXT, v_table_name varchar(100)
  ) RETURNS BOOLEAN
  AS $$
  DECLARE
 
  BEGIN
   return true;
  END;
  $$ LANGUAGE plpgsql;
 
  drop function process_table (
   action TEXT, v_table_name varchar(100)
  );
  commit;
  BEGIN
  CREATE FUNCTION
  DROP FUNCTION
  COMMIT
  test=#
 
  Repeat just the input parameters.
 
 You don't have to include the parameter names though
   
 drop function process_table (TEXT, varchar(100));

 is just as good and bit less typing ;)

In psql/pgAdmin you have tab completion; it completes without the
parameter names.
If you're writing more durable code, you can generally just cut and paste
the creation code.

I admit I haven't spent enough time to see if I can have tab
completion inside my IDE/editor.

When you're refactoring the function most probably you'll have to
refactor the drop code too.

I tend to refactor the number/type of parameters much more frequently
than the names, so skipping the names is anticipating work that I'll
seldom take advantage of.

I admit I drop functions much more frequently in psql rather than in
my IDE/editor, but still I generally have the creation code handy.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] partitioning a dataset + employing hysteresis condition

2011-11-16 Thread Amit Dor-Shifer
On Wed, Nov 16, 2011 at 4:58 PM, David Johnston pol...@yahoo.com wrote:

 On Nov 15, 2011, at 15:28, Gavin Flower gavinflo...@archidevsys.co.nz
 wrote:

 On 14/11/11 18:35, Amit Dor-Shifer wrote:


 On Mon, Nov 14, 2011 at 4:29 PM, Amit Dor-Shifer amit.dor.shi...@gmail.com
 amit.dor.shi...@gmail.com wrote:

 Hi,
 I've got this table:
 create table phone_calls
 (
 start_time timestamp,
 device_id integer,
 term_status integer
 );

 It describes phone call events. A 'term_status' is a sort of exit
 status for the call, whereby a value != 0 indicates some sort of error.
 Given that, I wish to retrieve data on devices with a persisting error on
 them, of a specific type, i.e. whose last term_status was, say, 2. I'd
 like to employ some hysteresis in the query: only consider a device as
 erroring if:
 1. the last good (0) term_status pre-dates a bad (2) term_status.
 2. it has at least N bad term_status events following the last good
 one.
 3. the time span between the first bad term_status event and the last
 one is >= T minutes

 For instance, w/the following data set:


 Alternative thought,

 Have a Boolean field which is set to true for non-zero entries and false
 for zeros.  Upon entering a zero into the table for a given device, set
 all currently true records to false.  Combine with a partial index on the
 true rows and you can quickly get a listing of all devices in error mode and
 all the recent error entries.

 David J.


Really appreciate the attention. Thanks!

Here's what I've come up with so far:

SELECT pc.device_id, MAX(lsc.last_successful) AS last_successful,
MIN(pc.start_time) AS faulting_since
FROM (
SELECT MAX(start_time) AS last_successful, device_id FROM phone_calls
WHERE term_status IS NOT NULL AND term_status = 0
GROUP BY device_id
UNION
SELECT NULL AS last_successful, device_id FROM phone_calls
GROUP BY device_id
HAVING EVERY(term_status = 2) = TRUE
) AS lsc,
phone_calls pc
WHERE
pc.device_id=lsc.device_id
AND pc.term_status=2
AND (lsc.last_successful IS NULL OR pc.start_time > lsc.last_successful)
GROUP BY pc.device_id
HAVING MIN(pc.start_time) < ?
AND COUNT(term_status) >= ?

The two parameters to the query are T and N, in that order, with a slight change:
T is a timestamp, so instead of specifying a time span of 2 minutes, I
pass it as NOW() - INTERVAL '2 minutes'.

With T = NOW() - INTERVAL '2 minutes' I get the following on the aforementioned dataset:
N=4:
40;2010-07-01 00:40:22;2010-07-01 00:41:22

N=3:
40;2010-07-01 00:40:22;2010-07-01 00:41:22
50;2010-07-01 12:00:00;2010-07-01 12:01:00
2;2011-11-16 21:56:59.52107;2011-11-16 21:57:59.52107

N=2:
40;2010-07-01 00:40:22;2010-07-01 00:41:22
2;2011-11-16 21:55:16.88869;2011-11-16 21:56:16.88869
30;2010-07-01 15:14:33;2010-07-01 15:15:33
50;2010-07-01 12:00:00;2010-07-01 12:01:00

* The bit with the union is to accommodate devices which never see a
single successful term_status.
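
(For comparison, not part of the original post: the same idea written with a
LEFT JOIN instead of the UNION, with T and N filled in as literals of 2 minutes
and 3. Note it treats a device with no successful call at all as faulting based
on its status-2 calls alone, which is slightly looser than the
EVERY(term_status = 2) branch above.)

SELECT pc.device_id,
       MAX(lsc.last_successful) AS last_successful,
       MIN(pc.start_time)       AS faulting_since
FROM phone_calls pc
LEFT JOIN (
    SELECT device_id, MAX(start_time) AS last_successful
    FROM phone_calls
    WHERE term_status = 0
    GROUP BY device_id
) lsc ON lsc.device_id = pc.device_id
WHERE pc.term_status = 2
  AND (lsc.last_successful IS NULL OR pc.start_time > lsc.last_successful)
GROUP BY pc.device_id
HAVING MIN(pc.start_time) < NOW() - INTERVAL '2 minutes'  -- T
   AND COUNT(*) >= 3;                                     -- N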

Thanks a lot for the helpful hints :)


[GENERAL] PostgreSQL-Slony error?

2011-11-16 Thread kyp404
Hi all,

We have a master and a slave DB server (both PostgreSQL 8.3, Slony 2.0.1).
Maybe we were hasty, because we deleted ~18 million rows from a table on
the master with one SQL command. Slony logs and transfers the
jobs/transactions to the slave node, but on the slave node Slony can't
complete the job. It starts the transaction, but after ~7 million delete
commands the server closes the connection.

We found this in the Slony log:
2011-11-15 11:39:25 CET DEBUG4 remoteHelperThread_1_1: fetch from cursor
2011-11-15 11:39:25 CET ERROR  remoteHelperThread_1_1: fetch 500 from LOG;
 server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2011-11-15 11:39:25 CET DEBUG4 remoteHelperThread_1_1: return 50 unused
line buffers
2011-11-15 11:39:25 CET ERROR  remoteWorkerThread_1: close LOG; 
PGRES_FATAL_ERROR 2011-11-15 11:39:25 CET ERROR  remoteWorkerThread_1:
rollback transaction; set enable_seqsca
n = default; set enable_indexscan = default;  PGRES_FATAL_ERROR 2011-11-15
11:39:25 CET DEBUG1 remoteHelperThread_1_1: 2850.992 seconds until close
cursor
2011-11-15 11:39:25 CET INFO   remoteHelperThread_1_1: inserts=0 updates=0
deletes=722

We tried tuning PostgreSQL and it is now twice as fast, but every
transaction stops after ~7 million rows. We couldn't find an error message
in the PostgreSQL log.

Why does the transaction stop? What should we do?

Thank you in advance,
Kyp


Re: [GENERAL] syntax highlighting in emacs after \e in psql

2011-11-16 Thread MikeW
 (add-to-list 'auto-mode-alist
              '("/psql.edit.[0-9]+\\'" . sql-mode))

Thanks, Peter!
That's exactly what I was looking for.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-16 Thread Clark C. Evans
After this discussion and the spin-off discussion on the
hackers list, I wanted to summarize my understanding.

So, what I asked for is a relatively inexpensive way to make
copies of an existing database for staging, upgrade tests,
and other activities.  There are two approaches to this
sort of replication (besides dump/restore).

1. CREATE DATABASE ... WITH TEMPLATE does a disk-level
   copy of an existing database in the same cluster
   (a minimal example follows this list).

   This approach is an order of magnitude faster than
   a dump/load cycle.  Not only is there no dump/load step
   or intermediate result, but write-ahead log segments
   are handled efficiently (Tom, thank you for this correction).

   This approach has three downsides: (a) users must be
   booted off the system, (b) you duplicate storage, 
   and (c) shared memory of the cluster is split and 
   cache state has to be re-learned on the replica.

   Simon suggested that it may be possible to find a 
   solution for the exclusive access requirement; a way 
   to quiesce sessions without disconnection.

2. Use WAL shipping to keep a hot backup of the cluster; you
   set up a cluster replica and then detach it.

   This approach solves the quiesce problem via 
   replication, so you don't have to boot users off the 
   system.  It also doesn't muck with the shared memory
   cache state of your production source database since
   you're making a copy to another PostgreSQL instance.

   However, it has a few disadvantages: (a) you have to 
   copy the entire cluster, (b) you must create and 
   maintain another PostgreSQL instance.
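
   (A minimal example of approach 1, added for illustration; database
   names are placeholders, and it requires that no other sessions are
   connected to the source database.)

     CREATE DATABASE staging_copy TEMPLATE production_db;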
  
In a hackers thread, Thom proposed a detach/attach
feature so that you could move a database from one
cluster to another.  This would be particularly useful,
but it looks quite infeasible since you'd have to touch
every disk block to rewrite the transaction IDs.  This
feature was requested by Yang Zhang on April 6th as well.

Best,

Clark

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How could I find the last modified procedure in the database?

2011-11-16 Thread Dhimant Patel
I have Postgres (PostgreSQL) 9.0.3 running.
I also created several procedures/functions and now I don't remember the
last procedure I worked on! I thought I could always get this from
metadata.

Now I'm stuck - I couldn't find these details anywhere in the catalog tables!

Is there any way I can get this information?


Thanks,
DP.


Re: [GENERAL] How could I find the last modified procedure in the database?

2011-11-16 Thread John R Pierce

On 11/16/11 8:00 AM, Dhimant Patel wrote:

I have postgres (PostgreSQL) 9.0.3 running.
I also created several procedures/functions and now I don't remember 
the last procedure I worked on! - I thought I could always get this 
from metadata.


Now I'm stuck - couldn't find this details anywhere in catalog tables!

Is there anyway I can get this information?


as far as I know, nothing like that is stored, unless you log DDL 
operations, and timestamp your logs, in which case it would be in the 
logfiles.
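
(An aside, not part of the original reply: if you want that information
going forward, one option is to turn on DDL logging, e.g.:)

  -- per-database, as a superuser ("mydb" is a placeholder name):
  ALTER DATABASE mydb SET log_statement = 'ddl';
  -- or set log_statement = 'ddl' in postgresql.conf and reload;
  -- make sure log_line_prefix includes %t so entries are timestamped.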




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres windows user

2011-11-16 Thread Christine Penner
We have a couple of clients having problems with the Windows user account
that the Postgres service runs under: its password keeps changing. They
fixed the Windows user's password and it worked for a bit, then got reset
again.


Client computers are all Windows XP SP3 and the server is Windows Server 
2003 standard SP2 (VMware Virtual Software)

The server has Postgres 9.0.3 (I believe).
They are also using a domain, which I believe is causing the problem, but
I don't know enough about that to say for sure.


Has anyone seen this before and knows what can be done?

--
Christine Penner
Ingenious Software
250-352-9495
ch...@fp2.ca



Re: [GENERAL] How could I find the last modified procedure in the database?

2011-11-16 Thread Andreas Kretschmer
Dhimant Patel drp4...@gmail.com wrote:

 I have postgres (PostgreSQL) 9.0.3 running.
 I also created several procedures/functions and now I don't remember the last
 procedure I worked on! - I thought I could always get this from metadata.
 
 Now I'm stuck - couldn't find this details anywhere in catalog tables!
 
 Is there anyway I can get this information?

Maybe with this query:

select proname from pg_proc order by oid desc limit 1;

but I'm not really sure ... tias (try it and see)


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Foreign Tables

2011-11-16 Thread Eliot Gable
I am working on a reporting project where I need to generate a report based
on data from several different network appliances. Each appliance runs a
PostgreSQL database which houses all of the information for the appliance.
Further, there are dozens of triggers in the database which fire when
various tables are touched in various different ways (insert, update,
delete). These triggers currently are used to build materialized views of
much of the data that I want to summarize in the reports.

I have read as much as I can find on 9.1's foreign table support, and it
looks almost ideal for bridging the gap between all the databases and
collecting all the data into a single report. However, I am unclear on a
few points...

1a) Can the foreign tables be written to? For example, I have server1 with
table foo and server2 which does 'create foreign table bar' where bar
references server1.foo. Can server2 write to bar and have it show in
server1.foo?

1b) If it does show in server1.foo, I assume it would also fire any
triggers on server1.foo; correct?

2) Given the example in question #1, can I put a trigger on server2.bar and
have it actually fire when server1.foo has an insert, update, or delete
operation on it?

Thanks in advance for any answers.


-- 
Eliot Gable

We do not inherit the Earth from our ancestors: we borrow it from our
children. ~David Brower

I decided the words were too conservative for me. We're not borrowing from
our children, we're stealing from them--and it's not even considered to be
a crime. ~David Brower

Esse oportet ut vivas, non vivere ut edas. (Thou shouldst eat to live;
not live to eat.) ~Marcus Tullius Cicero


Re: [GENERAL] all non-PK columns from information schema

2011-11-16 Thread Adam Cornett
On Tue, Nov 15, 2011 at 11:29 AM, Richard Broersma 
richard.broer...@gmail.com wrote:

 On Tue, Nov 15, 2011 at 8:00 AM, Tarlika Elisabeth Schmitz
 postgres...@numerixtechnology.de wrote:
  Interesting. I have previously used the information_schema for similar
  queries.
 
  What are the pros and cons for using either pg_catalog or
  information_schema?

 My understanding is that pg_catalog tables and views *can* change
 between major releases while the information_schema is expected to be
 more stable between major releases.

 Applications that depend upon the information_schema rather than
 pg_catalog are less likely to break when the PostgreSQL server is
 upgraded.

 --
 Regards,
 Richard Broersma Jr.

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



The reason for this is that the pg_* tables are obviously specific to
Postgres, and they do change between versions.

The information_schema tables are actually part of the SQL standard, so they
are a portable way to get information about the database structure.
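
(For illustration, not part of the original reply: a portable way to list the
non-PK columns of one table; the schema and table names are placeholders.)

  SELECT c.column_name, c.data_type
  FROM information_schema.columns c
  WHERE c.table_schema = 'public'
    AND c.table_name   = 'my_table'
    AND c.column_name NOT IN (
          SELECT kcu.column_name
          FROM information_schema.table_constraints tc
          JOIN information_schema.key_column_usage kcu
            ON kcu.constraint_name = tc.constraint_name
           AND kcu.table_schema    = tc.table_schema
           AND kcu.table_name      = tc.table_name
          WHERE tc.constraint_type = 'PRIMARY KEY'
            AND tc.table_schema = 'public'
            AND tc.table_name   = 'my_table')
  ORDER BY c.ordinal_position;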

-Adam Cornett


Re: [BULK] Re: [GENERAL] Streaming Replication woes

2011-11-16 Thread Sean Patronis

Konstantin,

Just back from some time off and read this reply.

This was exactly the issue.  The postgres superuser did not have this
role since this was a dump/restore/upgrade from Postgres 8.4.  I just
created a new role and user, and it all worked!


Thanks!

--Sean


On 11/06/2011 06:41 PM, Konstantin Gredeskoul wrote:

Sean,

I saw your email on PostgreSQL general list about replication.

We are using 9.1.1 and were having the same issue: unable to set up
replication even though we followed the wiki precisely.  I was able to
set up replication following the wiki steps using PG 9.0.3, but not 9.1.1.
Then I found this page:
http://www.postgresql.org/docs/9.1/static/role-attributes.html

where it says:

initiating replication
A role must explicitly be given permission to initiate streaming
replication. A role used for streaming replication must always have
LOGIN permission as well. To create such a role, use CREATE ROLE name
REPLICATION LOGIN.

Does that mean the postgres superuser is no longer able to also act as a
replication user in 9.1?  If so, this is not yet updated on the
Replication wiki.

I went to the master and created a new user specifically for replication:

CREATE ROLE REPLICATOR REPLICATION LOGIN PASSWORD '';

Then I changed the master's pg_hba.conf and the slave's recovery.conf to
reference the new replicator user instead of postgres; I no longer get
the same error, and our replication is working.

Hope this helps!

Thanks,

Konstantin Gredeskoul
Principal Ruby Engineer
ModCloth, Inc. San Francisco




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Foreign Tables

2011-11-16 Thread Guillaume Lelarge
On Wed, 2011-11-16 at 13:38 -0500, Eliot Gable wrote:
 I am working on a reporting project where I need to generate a report based
 on data from several different network appliances. Each appliance runs a
 PostgreSQL database which houses all of the information for the appliance.
 Further, there are dozens of triggers in the database which fire when
 various tables are touched in various different ways (insert, update,
 delete). These triggers currently are used to build materialized views of
 much of the data that I want to summarize in the reports.
 
 I have read as much as I can find on 9.1's foreign table support, and it
 looks almost ideal for bridging the gap between all the databases and
 collecting all the data into a single report. However, I am unclear on a
 few points...
 
 1a) Can the foreign tables be written to? For example, I have server1 with
 table foo and server2 which does 'create foreign table bar' where bar
 references server1.foo. Can server2 write to bar and have it show in
 server1.foo?
 

No, you can't (yet?).


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transaction wraparound problem due to wrong datfrozenxid?

2011-11-16 Thread Arctic Toucan

I ran into a rather unusual problem today where  Postgres brought down a 
database to avoid transaction wraparound in a situation where it doesn't appear 
that it should have.

The error in the log is explicit enough...

Nov 16 04:00:03 SRP1 postgres[58101]: [1-1] FATAL:  database is not accepting 
commands to avoid wraparound data loss in database stat
Nov 16 04:00:03 SRP1 postgres[58101]: [1-2] HINT:  Stop the postmaster and use 
a standalone backend to vacuum database stat.

Yet, going back several days in the logs, there were none of the usual WARNING 
messages in the log about this situation occurring in xxx transactions.

When I queried the datfrozenxid values in pg_database, they certainly showed a problem.


backend> SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database;
 1: datname (typeid = 19, len = 64, typmod = -1, byval = f)
 2: datfrozenxid(typeid = 28, len = 4, typmod = -1, byval = t)
 3: age (typeid = 23, len = 4, typmod = -1, byval = t)

 1: datname = postgres(typeid = 19, len = 64, typmod = -1, 
byval = f)
 2: datfrozenxid = 2699851604 (typeid = 28, len = 4, typmod = -1, 
byval = t)
 3: age = 1269165380  (typeid = 23, len = 4, typmod = -1, byval = t)

 1: datname = stat (typeid = 19, len = 64, typmod = -1, byval = f)
 2: datfrozenxid = 1822525199 (typeid = 28, len = 4, typmod = -1, 
byval = t)
*** 3: age = 2146491785  (typeid = 23, len = 4, typmod = -1, byval = 
t)

 1: datname = config   (typeid = 19, len = 64, typmod = -1, byval 
= f)
 2: datfrozenxid = 3869013990 (typeid = 28, len = 4, typmod = -1, 
byval = t)
 3: age = 12994   (typeid = 23, len = 4, typmod = -1, byval = t)

 1: datname = template1   (typeid = 19, len = 64, typmod = -1, 
byval = f)
 2: datfrozenxid = 2000352260 (typeid = 28, len = 4, typmod = -1, 
byval = t)
 3: age = 1968664724  (typeid = 23, len = 4, typmod = -1, byval = t)

 1: datname = template0   (typeid = 19, len = 64, typmod = -1, 
byval = f)
 2: datfrozenxid = 2000357564 (typeid = 28, len = 4, typmod = -1, 
byval = t)
 3: age = 1968659420  (typeid = 23, len = 4, typmod = -1, byval = t)



I then ran a query to check the age of the relfrozenxid on each of the
tables in the DB, and it indicated that they were all OK.


backend> select sum(case when age(relfrozenxid) > 2000000000 then 1 else 0 end)
as gt_2billion,
sum(case when age(relfrozenxid) between 1500000000 and 2000000000 then 1 else 0 end)
as gt_1_5billion,
sum(case when age(relfrozenxid) between 1000000000 and 1500000000 then 1 else 0 end)
as gt_1billion,
sum(case when age(relfrozenxid) between 500000000 and 1000000000 then 1 else 0 end)
as gt_500million,
sum(case when age(relfrozenxid) between 100000000 and 500000000 then 1 else 0 end)
as gt_100million,
sum(case when age(relfrozenxid) < 100000000 then 1 else 0 end)
as lt_100million
from pg_class where relkind = 'r';
 1: gt_2billion (typeid = 20, len = 8, typmod = -1, byval = f)
 2: gt_1_5billion   (typeid = 20, len = 8, typmod = -1, byval = f)
 3: gt_1billion (typeid = 20, len = 8, typmod = -1, byval = f)
 4: gt_500million   (typeid = 20, len = 8, typmod = -1, byval = f)
 5: gt_100million   (typeid = 20, len = 8, typmod = -1, byval = f)
 6: lt_100million   (typeid = 20, len = 8, typmod = -1, byval = f)

 1: gt_2billion = 0   (typeid = 20, len = 8, typmod = -1, byval = f)
 2: gt_1_5billion = 0 (typeid = 20, len = 8, typmod = -1, byval = f)
 3: gt_1billion = 0   (typeid = 20, len = 8, typmod = -1, byval = f)
 4: gt_500million = 628   (typeid = 20, len = 8, typmod = -1, 
byval = f)
 5: gt_100million = 8928  (typeid = 20, len = 8, typmod = -1, 
byval = f)
 6: lt_100million = 0 (typeid = 20, len = 8, typmod = -1, byval = f)



I confirmed this with...

backend> select relname, relfrozenxid, age(relfrozenxid) from pg_class where
relkind in ('r','t') order by 3 desc limit 1;
  1: relname (typeid = 19, len = 64, typmod = -1, byval = f)
  2: relfrozenxid(typeid = 28, len = 4, typmod = -1, byval = t)
  3: age (typeid = 23, len = 4, typmod = -1, byval = t)
 
  1: relname = qoe_flowbwidth_dist_dig1_014(typeid = 19, len = 
64, typmod = -1, byval = f)
  2: relfrozenxid = 2970264132 (typeid = 28, len = 4, typmod = -1, 
byval = t)
  3: age = 998752902   (typeid = 23, len = 4, typmod = -1, byval = t)
 

My understanding has always been that the datfrozenxid should match this 
relfrozenxid, which it evidently doesn't. 
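
(A cross-check added for illustration, run from the affected database;
normally the two horizons should track each other.)

backend> SELECT max(age(relfrozenxid)) AS oldest_table_age
         FROM pg_class WHERE relkind IN ('r', 't');
backend> SELECT age(datfrozenxid) AS database_age
         FROM pg_database WHERE datname = current_database();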

My environment:

FreeBSD 6
PG 8.2.4 (Yes, I intend to upgrade, which would be helped if someone can say
that this problem is fixed in some future patch/release

[GENERAL] insert locking issue for PG 9.0

2011-11-16 Thread Ben Chobot
Our application has a table that looks like:

create table jobs
(
id int,
first boolean
);


What we need is for the app to be able to shove data into jobs with an assigned
id, and guarantee that first is true for only one row per id. In other words, we
could easily enforce what we want by creating a unique index on jobs (id) where
first=true.
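
(For reference, added as an illustration: that constraint would look like the
sketch below; the index name is arbitrary.)

    CREATE UNIQUE INDEX jobs_one_first_per_id ON jobs (id) WHERE first;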

The problem comes in how we deal with exceptions (and there will be many). We'd
like to just have the database say, "oh, hey, there's already a row with this
id; I guess I'll make first=false." If we were to wrap inserts to this table
into a stored proc, it seems like it would be easy enough to handle those
exceptions and retry the insert with first=false, except that this is Rails, and
calling a stored proc instead of doing inserts will be difficult in this case.
So that's pretty much out, if we can avoid it.

We could use rules to call that procedure INSTEAD OF inserts. That seems like 
it should work, but rules. Also, it requires us to keep an index that we 
don't need for anything else.

It would be nice if there was a way to have a before trigger function on jobs 
that would twiddle first as needed, but we've been unable to find a way that 
doesn't involve a race condition or lock escalation deadlocks. Advisory locks 
are released before the new row is visible, and normal locks stronger than 
what INSERT acquires leads to lock escalation deadlocks.

We've considered using rules to acquire the strict lock, then a before trigger 
to do the twiddling, but then we're back to using rules.

Does anybody have any elegant ideas to make this happen?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump does not include triggers - should it?

2011-11-16 Thread Aleksey Tsalolikhin
Hi,


  We are using Postgres 8.4.8 and it is a very fine database.

  We just noticed that our pg_dumps (generated with
pg_dump -Fc -n public $our_db_name) do not
include triggers on tables in database $our_db_name.
Should they?  (We'd like them to; we are counting on pg_dump to
back up all our data, including the triggers.)

  Thanks again for a great product!

Best,
-at

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] insert locking issue for PG 9.0

2011-11-16 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ben Chobot
Sent: Wednesday, November 16, 2011 5:48 PM
To: pgsql-general General
Subject: [GENERAL] insert locking issue for PG 9.0

Our application has a table that looks like:

create table jobs
(
id int,
first boolean
);


What we need is for the app to be able to shove data into jobs with an
assigned id, and guarantee that first is only true for one id. In other
words, we could easily enforce what we want by creating a unique index on
jobs (id) where first=true.

-

Thinking outside the box...

Do records ever get inserted with first = FALSE directly, or is the only
way for a record to have a false first that another record already
exists?
Can your process re-execute the record insertion if a UNIQUE INDEX failure
occurs?

My suggestion is to forget dealing with first and instead create a unique
index on (id, job_time).  The earliest record for a given id will always be
first.  This is likely to be fast enough, but if not you could always set up
the first field and update it later - and then use it as part of a partial
index.

The first question needs to be "No" so that you do not have a situation
where the first inserted record is one that should not be TRUE; though by
using three-valued logic you could work around this requirement.
The second question needs to be "Yes" in the rare circumstance that two IDs
are inserted at exactly the same milli/nano-second.

So

CREATE TABLE jobs (id int, job_time timestamptz);
CREATE UNIQUE INDEX jobs_id_time_idx ON jobs (id, job_time);
-- (may need an explicit time zone, or a conversion to some kind of epoch,
--  to avoid mutability issues if an expression index is used)

CREATE VIEW first_job AS
SELECT j1.id
FROM jobs j1
WHERE j1.job_time = (SELECT j2.job_time FROM jobs j2 WHERE
j2.id = j1.id ORDER BY j2.job_time ASC LIMIT 1);
-- syntax not checked in the original, but this is the idea...

The main thing you need to decide with this approach is how you calculate
the timestamp and, if two timestamps are equal, how to resolve the
difference.  Simply adding a few time units and re-inserting should resolve
the problem in simplistic cases.

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump does not include triggers - should it?

2011-11-16 Thread Tom Lane
Aleksey Tsalolikhin atsaloli.t...@gmail.com writes:
   We just noticed our pg_dumps (generated with
 pg_dump -Fc -n public $our_db_name) does not
 include triggers on tables in database $our_db_name.

-n restricts the dump to objects in the named schema.  It's impossible
to evaluate whether there is an issue here, since you have not told us
whether the trigger functions or the tables you are wondering about
are in that schema.  In general, though, you should expect that
restricting the dump leads to an incomplete dump ...

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How could I find the last modified procedure in the database?

2011-11-16 Thread Tom Lane
Andreas Kretschmer akretsch...@spamfence.net writes:
 Dhimant Patel drp4...@gmail.com wrote:
 I also created several procedures/functions and now I don't remember the last
 procedure I worked on! - I thought I could always get this from metadata.
 Now I'm stuck - couldn't find this details anywhere in catalog tables!
 Is there anyway I can get this information?

 Maybe with this query:
 select proname from pg_proc order by oid desc limit 1;
 but i'm not really sure ... tias (try it and see)

The OIDs would tell you the creation order, but they don't change during
CREATE OR REPLACE FUNCTION; so depending on what the OP means by "worked
on", this query might not be very useful to him.

I'd try looking to see which row in pg_proc has the latest xmin.
Unfortunately you can't ORDER BY xmin ...
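
(One workaround sketch, not from the original reply: xid columns have no
ordering operator, but age(xid) returns an integer, so you can order on that
instead.)

    SELECT proname, xmin
    FROM pg_proc
    ORDER BY age(xmin) ASC   -- smallest age = most recently created/replaced
    LIMIT 5;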

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] can I show number of records returned by a query in a log?

2011-11-16 Thread hyelluas
Hello,

I'm trying to capture the amount of data moving between the client app and
the server.
The client is an executable (C#) GUI on Windows; the server is Linux CentOS,
PostgreSQL 8.4.
We see a serious performance difference between execution via LAN and via VPN.

I enabled logging and used pgFouine to analyze the logs, and it looks
very strange: 2000-3000 queries in 10 minutes.

In one case I see the client (the user is the client dbuser) query "select *
from vw_abc" sent 10 times in 1 second (the timestamp is the same). Is it really
10 times, or was it logged 10 times?

I see 900 queries sent by 1 client in 7 minutes with 1 click on the screen -
does the log show the real thing?


Is it possible to log the number of records returned by that query? 


thank you.
Helen


--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/can-I-show-number-of-records-returned-by-a-query-in-a-log-tp4999630p4999630.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can I show number of records returned by a query in a log?

2011-11-16 Thread Alan Hodgson
On November 16, 2011 05:07:05 PM hyelluas wrote:
 I see 900 queries sent by 1 client in 7 min with 1 click on the screen -
 does the log show the real thing?

The logs show the real thing.  Gotta love ORMs.

 
 
 Is it possible to log the number of records returned by that query?

I doubt the record count or data volume is the problem. It's more likely the 
latency cost of sending 900 queries one at a time and waiting for the replies 
at VPN latencies. 

I don't know how to log the result record count, though, maybe someone else 
does.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can I show number of records returned by a query in a log?

2011-11-16 Thread Tomas Vondra
On 17 Listopad 2011, 2:07, hyelluas wrote:
 Hello,

 I'm trying to capture amount of data moving from client app to the server
 
 back.
 The client is executable (c#) gue on windows,  server is lunix centOS,
 PostgreSQL 8.4.
 We see serious  performance difference between execution via LAN  VPN.

VPNs usually suck when it comes to performance. Do some basic testing
at the network level - ping with/without VPN, throughput - and you'll see
the difference. This is not a problem with the database; it's the
environment.

 I enabled the logging  and used  pgFouine to analyzed the logs and it
 looks
 very strange - 2000-3000 queries in 10 min.

 In one case I see the client (user is the client dbuser) query  select *
 from vw_abc  sent 10 times in 1 sec ( timestamp is the same), is it realy
 10 times or it was logged 10 times ?

 I see 900 queries sent by 1 client in 7 min with 1 click on the screen -
 does the log show the real thing?

Yes, the log shows the real thing. Have you checked the log directly or
just the output of pgFouine? Theoretically there could be a bug in
pgFouine, repeating some of the queries, but I consider that highly
unlikely.

 Is it possible to log the number of records returned by that query?

Currently there's no such option. But it's possible to write a simple
extension that would do that - actually pg_stat_statements does that.
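
(For illustration, not part of the original reply: something along these
lines; the column names are assumed to match the pg_stat_statements view.)

  SELECT query, calls, rows,
         rows / NULLIF(calls, 0) AS avg_rows_per_call
  FROM pg_stat_statements
  ORDER BY calls DESC
  LIMIT 20;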


Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can I show number of records returned by a query in a log?

2011-11-16 Thread hyelluas

Thank you all, 

I did look at the log. I enabled pg_stat_statements; however, it showed
the buffers, scans and other info about the query execution - not the
record count. Any idea how I can get it?


I agree that the problem is the quantity of queries;
I will investigate the client.

thank you.
Helen

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/can-I-show-number-of-records-returned-by-a-query-in-a-log-tp4999630p4999782.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Foreign Tables

2011-11-16 Thread Shigeru Hanada
Hi Eliot,

2011/11/17 Eliot Gable egable+pgsql-gene...@gmail.com:
snip
 1a) Can the foreign tables be written to? For example, I have server1 with
 table foo and server2 which does 'create foreign table bar' where bar
 references server1.foo. Can server2 write to bar and have it show in
 server1.foo?

Foreign tables in 9.1 are read-only, so you can't write to them.  Making
foreign tables writable is a TODO item, but ISTM it will be difficult to
implement even for 9.2.  So the answer to your question 1a) is No.

BTW, I'm very interested in your use case because I'm working on
enhancements to foreign tables for 9.2.  I would appreciate it if you could
tell me some details of your reporting system.  Foreign tables may suit it
well.

a) Where are the materialized views, triggers and source tables?  I guess all
of them are on the appliances, not on the PostgreSQL server used for reporting.
b) Do you need to update data on the appliances while making a report?  If you
do, how do you do it without foreign tables? (from the reporting application,
or using dblink or something?)

If the sources of the report are on the appliances as materialized views (or
ordinary tables), and you don't need to update data on the appliances, I think
you can use foreign tables to gather the information on a PostgreSQL server.
In this case, you need to define a foreign table for each materialized view (or
ordinary table).  Then you can execute SELECT statements against those foreign
tables on the reporting server to gather information from the appliances.
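
(A rough illustrative sketch of that DDL, not part of the original reply;
the server, table and option names are assumptions, and the exact option
keys depend on the wrapper, so check its documentation.)

  CREATE SERVER appliance1
      FOREIGN DATA WRAPPER pgsql_fdw
      OPTIONS (host 'appliance1.example.com', port '5432', dbname 'appliance');

  CREATE USER MAPPING FOR CURRENT_USER
      SERVER appliance1
      OPTIONS (user 'report', password 'secret');

  CREATE FOREIGN TABLE appliance1_call_summary (
      device_id integer,
      calls     bigint,
      errors    bigint
  ) SERVER appliance1
    OPTIONS (relname 'call_summary_matview');  -- option key is an assumption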

An FDW for PostgreSQL 9.1, pgsql_fdw, is provided as a third-party product [1],
though it seems not ready for production use.
# Currently you need to pull pgsql_fdw from its git repository.
Incidentally, pgsql_fdw is being proposed as a contrib module for 9.2 [2].

[1]https://sourceforge.net/projects/interdbconnect/
[2]http://archives.postgresql.org/pgsql-hackers/2011-10/msg01329.php

Regards,
-- 
Shigeru Hanada

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] hi, friends, what's the difference between using PostgreSQL's shared buffer and the OS's file buffer?

2011-11-16 Thread sunpeng
Hi friends,
I have a 64-bit 128GB machine, and I have two choices:
   1. I could set PostgreSQL shared_buffers to a large value, such as
100GB, and let the OS use the remaining 28GB of memory for the file system cache.
   2. I could set PostgreSQL shared_buffers to a small value, such as
10GB, and let the OS use the remaining 118GB of memory for the file system cache.
What's the difference between these two methods in
PostgreSQL?  Which one is preferred?
Thanks!
peng


[GENERAL] Is it ever necessary to vacuum a table that only gets inserts/updates?

2011-11-16 Thread Jason Buberel
Just wondering if there is ever a reason to vacuum a very large table (> 1B
rows) from which rows are never deleted.

Under what circumstances would the table benefit from a vacuum?

-jason

-- 
Jason L. Buberel
CTO, Altos Research
http://www.altosresearch.com/
650.603.0907


Re: [GENERAL] Is it ever necessary to vacuum a table that only gets inserts/updates?

2011-11-16 Thread John R Pierce

On 11/16/11 4:24 PM, Jason Buberel wrote:
Just wondering if there is ever a reason to vacuum a very large table
(> 1B rows) from which rows are never deleted.


Under what circumstance would the table benefit from a vacuum?


no updates either?

you still want to do a vacuum analyze every so often to update the 
statistics used by the planner.




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] how could duplicate pkey exist in psql?

2011-11-16 Thread Yan Chunlu
Recently I have found that several tables contain rows with exactly the same
primary key.  Here is the definition:
diggcontent_data_account_pkey PRIMARY KEY, btree (thing_id, key)


the data is like this:

   159292 | funnypics_link_point | 41 | num
   159292 | funnypics_link_point | 40 | num


I could not even update this record.

Really confused about how this could happen... thanks!


Re: [GENERAL] Is it ever necessary to vacuum a table that only gets inserts/updates?

2011-11-16 Thread Tom Lane
John R Pierce pie...@hogranch.com writes:
 On 11/16/11 4:24 PM, Jason Buberel wrote:
 Just wondering if there is ever a reason to vacuum a very large table
 (> 1B rows) from which rows are never deleted.

 no updates either?

To clarify: in Postgres, an "update" means an insert and a delete.
So unless you mean that this table is insert-only, you certainly
still need vacuum.

 you still want to do a vacuum analyze every so often to update the 
 statistics used by the planner.

If it's purely an insert-only table, such as a logging table, then in
principle you only need periodic ANALYZEs and not any VACUUMs.

VACUUM could still be worthwhile though, because (a) it will set commit
hint bits on all pages and (b) it will set visibility-map bits on all
pages.  An ANALYZE would only do those things for the random sample of
pages that it visits.  While neither of those things are critical, they
do offload work from future queries that would otherwise have to do that
work in-line.  So if you've got a maintenance window where the database
isn't answering queries anyway, it could be worthwhile to run a VACUUM
just to get those bits set.
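
(As an aside, not from the original reply: a quick way to see whether such a
pass is overdue is to look at the table's stats entry, and then run the vacuum
during the window; "big_table" is a placeholder name.)

    SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'big_table';

    VACUUM VERBOSE ANALYZE big_table;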

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to lock and unlock table in postgresql

2011-11-16 Thread Siva Palanisamy
Hi ya,

I've got 2 tables: one being the master (table1) and the other being the slave
(table2). I want to lock table1 until the function below completes, and it
should unlock it at the end.
Below is my function. Please guide me on how to lock table1 and unlock it at
the end. The scenario why I require this is: I want to add 7 records to these
tables. As I want to retrieve the last generated id of table1, which is needed
for a foreign-key column in table2, I used a MAX operation. If anyone tries to
add a record manually, I suspect it might get disturbed. So I wish to lock and
unlock the table for every record insertion, such that other processes wait
until this function completes its task. If there is a better solution, kindly
let me know.

CREATE OR REPLACE FUNCTION Fun()
RETURNS VOID AS '
DECLARE
Id INTEGER;
BEGIN
INSERT INTO table1 VALUES (DEFAULT, ''Sample'', DEFAULT);
SELECT MAX(id) INTO Id FROM table1;
INSERT INTO table2 VALUES(DEFAULT,Id,''sampledata'');
END;
' LANGUAGE 'plpgsql';

Regards,
Siva.




Re: [GENERAL] How to lock and unlock table in postgresql

2011-11-16 Thread John R Pierce

On 11/16/11 10:10 PM, Siva Palanisamy wrote:


If there is a better solution, kindly let me know.



use nextval('seqname') ... full transactional integrity without any 
blocking or locking.




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to lock and unlock table in postgresql

2011-11-16 Thread Siva Palanisamy
Hi John,

Thanks for the solution. If I use currval('seqname') in a loop of 7
records, what will happen if a record is inserted manually? I guess it will
alter the sequence, and wrong values/chains might be introduced in the foreign
tables in the function below. Could you please clarify this for me?

Thanks and Regards,
Siva.

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Thursday, November 17, 2011 12:05 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to lock and unlock table in postgresql

On 11/16/11 10:23 PM, John R Pierce wrote:
 use nextval('seqname')

sorry, I meant, currval('seqname')   ps.



--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to lock and unlock table in postgresql

2011-11-16 Thread John R Pierce

On 11/16/11 11:16 PM, Siva Palanisamy wrote:

Thanks for the solution. If I use currval('sqlname') in a loop of 7 
records, what will happen if a record is inserted manually? I guess it will 
alter the sequences, and wrong values/chain might be introduced in foreign 
tables in the below function. Could you please clarify me on this?


currval is consistent within the current transaction.  Another
transaction will see the different currval that it has generated.




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to lock and unlock table in postgresql

2011-11-16 Thread Alban Hertroys
On 17 Nov 2011, at 7:10, Siva Palanisamy wrote:

 If there is a better solution, kindly let me know.
  
 CREATE OR REPLACE FUNCTION Fun()
 RETURNS VOID AS '
 DECLARE
 Id INTEGER;
 BEGIN
 INSERT INTO table1 VALUES (DEFAULT, ''Sample'', DEFAULT);
 SELECT MAX(id) INTO Id FROM table1;
 INSERT INTO table2 VALUES(DEFAULT,Id,''sampledata'');
 END;
 ' LANGUAGE 'plpgsql';
  
 Regards,
 Siva.

As John says, you're re-inventing the wheel that sequences solve. You could
also get the id using INSERT ... RETURNING.

You have another problem on your hands though: there is a naming conflict
between your variable name and a column name in that second query, since id
and Id are the same identifier.
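
(A minimal sketch of the RETURNING approach, added for illustration; it
assumes table1's generated column is named id and is backed by a
sequence/serial column.)

    CREATE OR REPLACE FUNCTION fun() RETURNS void AS $$
    DECLARE
        new_id integer;
    BEGIN
        INSERT INTO table1 VALUES (DEFAULT, 'Sample', DEFAULT)
            RETURNING id INTO new_id;   -- no race with concurrent inserts
        INSERT INTO table2 VALUES (DEFAULT, new_id, 'sampledata');
    END;
    $$ LANGUAGE plpgsql;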

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general