[GENERAL] Synchronous replication + Fusion-io = waste of money OR significant performance boost? (compared to normal SATA-based SSD-disks)?

2012-03-07 Thread Joel Jacobson
My company is in the process of migrating to a new pair of servers, running 9.1.

The database processes monetary transactions, so we require
synchronous_commit to be on for all transactions.

Fusion-io is being considered, but will it give any significant
performance gain compared to normal SATA-based SSD disks, given that
we must replicate synchronously?

To make it more complicated, what about SLC vs MLC (for synchronous
replication)?

Assume optimal conditions, both servers have less than a meter between
each other, with the best possible network link between them providing
the lowest latency possible, maxed out RAM, maxed out CPUs, etc.

I've already asked this question of one of the core members, but the
answer was basically "you will have to test". I was therefore hoping
someone in the community already had some test results, to avoid
wasting money.

Thank you for any advice!

Best regards,

Joel Jacobson
Trustly Group AB (former Glue Finance AB)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ERROR: could not find tuple for trigger 37463634

2012-03-07 Thread mdione.ext

  Short and simple: I'm trying to do the following:

postgres@master-db01:~$ psql pnssi_profiles_bench
psql (9.0.3)
Type "help" for help.

pnssi_profiles_bench=# drop SCHEMA _pnssi_slony_bench_profiles_01_110 cascade;
ERROR:  could not find tuple for trigger 37463634

  As you can see, this is a slony setup, but I'm not sure that's relevant.
Other info:

pnssi_profiles_bench=# select * from pg_depend where objid=37463634;
 classid |  objid   | objsubid | refclassid | refobjid | refobjsubid | deptype
---------+----------+----------+------------+----------+-------------+---------
    2620 | 37463634 |        0 |       1255 | 37462497 |           0 | n
    2620 | 37463634 |        0 |       1259 | 19914767 |           0 | a
(2 rows)

pnssi_profiles_bench=# select * from pg_class where oid=1255;
-[ RECORD 1 ]---+--------------
relname         | pg_proc
relnamespace    | 11
reltype         | 81
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 0
reltablespace   | 0
relpages        | 304
reltuples       | 3635
reltoastrelid   | 2836
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | r
relnatts        | 25
relchecks       | 0
relhasoids      | t
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 150006110
relacl          | {=r/postgres}
reloptions      | 


pnssi_profiles_bench=# select * from pg_class where oid=2620;
-[ RECORD 1 ]---+--------------
relname         | pg_trigger
relnamespace    | 11
reltype         | 10732
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 11738
reltablespace   | 0
relpages        | 312
reltuples       | 11814
reltoastrelid   | 2336
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | r
relnatts        | 15
relchecks       | 0
relhasoids      | t
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 150006329
relacl          | {=r/postgres}
reloptions      | 

  I don't have the rest of the info (the entry in pg_class with OID 1259)
because right now I'm running a preventive VACUUM, which is taking ages.

  What I understand is that somehow I have ended up with references to objects
that do not exist anymore, so I think I should run the equivalent of fsck on
this database. My speculations aside, how should I proceed to resolve this
situation?

--
Marcos Dione
SysAdmin
Astek Sud-Est
pour FT/TGPF/OPF/PORTAIL/DOP/HEBEX @ Marco Polo
04 97 12 62 45 - mdione@orange.com





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex transactions without using plPgSQL Functions. It is possible?

2012-03-07 Thread Sergey Konoplev
On Tue, Mar 6, 2012 at 11:30 PM, Andre Lopes lopes80an...@gmail.com wrote:
 This is the plPgSQL code that I need to write in Python. It is
 possible to do this without using PlPgSQL?

Do you want it to be done using pure SQL, or do you just not want to wrap it
into a stored function?

In the latter case, look at the DO command:
http://www.postgresql.org/docs/9.1/static/sql-do.html
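For example, a minimal sketch of an anonymous PL/pgSQL block run through DO
(the body below is only an illustration, not taken from the original code):

DO $$
DECLARE
    n integer;
BEGIN
    -- any procedural logic can go here without creating a stored function
    SELECT count(*) INTO n FROM pg_stat_activity;
    IF n > 10 THEN
        RAISE NOTICE 'There are % active backends', n;
    END IF;
END;
$$;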


-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: could not find tuple for trigger 37463634

2012-03-07 Thread mdione.ext
De : pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] De la part de mdione@orange.com
   I don't have the rest of the info (the entry in pg_class with OID 1259)
 because right now I'm running a preventive VACUUM, which is taking ages.

  Ok, the vacuum finished, but I still get the same error. Here's the rest 
of the info:

pnssi_profiles_bench=# select * from pg_class where oid=2620 or oid=1255 or oid=1259;
-[ RECORD 1 ]---+--------------
relname         | pg_proc
relnamespace    | 11
reltype         | 81
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 0
reltablespace   | 0
relpages        | 304
reltuples       | 3635
reltoastrelid   | 2836
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | r
relnatts        | 25
relchecks       | 0
relhasoids      | t
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 258153645
relacl          | {=r/postgres}
reloptions      | 
-[ RECORD 2 ]---+--------------
relname         | pg_class
relnamespace    | 11
reltype         | 83
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 0
reltablespace   | 0
relpages        | 490
reltuples       | 17440
reltoastrelid   | 0
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | r
relnatts        | 27
relchecks       | 0
relhasoids      | t
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 258153672
relacl          | {=r/postgres}
reloptions      | 
-[ RECORD 3 ]---+--------------
relname         | pg_trigger
relnamespace    | 11
reltype         | 10732
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 11738
reltablespace   | 0
relpages        | 312
reltuples       | 11814
reltoastrelid   | 2336
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relistemp       | f
relkind         | r
relnatts        | 15
relchecks       | 0
relhasoids      | t
relhaspkey      | f
relhasexclusion | f
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 258153674
relacl          | {=r/postgres}
reloptions      | 
(3 rows)

--
Marcos Dione
SysAdmin
Astek Sud-Est
pour FT/TGPF/OPF/PORTAIL/DOP/HEBEX @ Marco Polo
04 97 12 62 45 - mdione@orange.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex transactions without using plPgSQL Functions. It is possible?

2012-03-07 Thread Martin Gregorie
On Wed, 2012-03-07 at 14:19 +0400, Sergey Konoplev wrote:
 On Tue, Mar 6, 2012 at 11:30 PM, Andre Lopes lopes80an...@gmail.com wrote:
  This is the plPgSQL code that I need to write in Python. It is
  possible to do this without using PlPgSQL?
 
Have you looked at pyodbc?

ODBC will usually accept statements allowing you to turn autocommit off
and to use connection.commit() to group a set of statements into a
transaction.

Note that there's a documentation comment saying that autocommit
settings are not passed to the driver. However, as the documentation is
still talking about bytea fields it may be out of date, so I'd suggest
running a test: write a Python test program that turns autocommit
off and does a couple of inserts followed by a rollback, and file a bug
if the inserted data is in the table after a successful run.


Martin




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] SQL Normalization Thought

2012-03-07 Thread David Johnston
I know there is currently work ongoing regarding normalizing SQL statements for 
logging purposes, but has anyone considered giving us the ability to name our 
statements?

SELECT 
FROM ...
WHERE 
NAME AS 'Name to track by'

If a query lacks a name, the algorithm-generated normalized form would be used 
instead, but otherwise all queries with the same name would be treated as being 
the same for statistics purposes.

I'm sure there is more to it, but the idea of letting the user name their 
queries, and thus have something to actually link the logs and the source code 
directly, has merit and at the least provides a workaround to an algorithmic 
routine.

David J.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: could not find tuple for trigger 37463634

2012-03-07 Thread mdione.ext
De : pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] De la part de mdione@orange.com
   What I understand is that somehow I have ended up with references to objects
 that do not exist anymore, so I think I should run the equivalent of fsck on
 this database. My speculations aside, how should I proceed to resolve this
 situation?

  Actually I'm quite tempted to do something like this:

http://archives.postgresql.org/pgsql-admin/2006-05/msg00084.php

  Should that be enough, or, as the dangling question in that thread suggests,
is there something else to do?

--
Marcos Dione
SysAdmin
Astek Sud-Est
pour FT/TGPF/OPF/PORTAIL/DOP/HEBEX @ Marco Polo
04 97 12 62 45 - mdione@orange.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL Normalization Thought

2012-03-07 Thread dennis jenkins
On Wed, Mar 7, 2012 at 6:34 AM, David Johnston pol...@yahoo.com wrote:
 I know there is currently work ongoing regarding normalizing SQL statements 
 for logging purposes but has anyone considered giving us the ability to name 
 our statements.

 SELECT 
 FROM ...
 WHERE 
 NAME AS 'Name to track by'

 If a query lacks a name the algorithm generated normalized form would be used 
 instead but otherwise all queries with the same name would be treated as 
 being the same for statistics purposes.

 I'm sure there is more to it but the idea of letting the user name their 
 queries, and thus have something to actually link the logs and the source 
 code directly, has merit and at the least provides a workaround to an 
 algorithmic routine.


You could place a 'C style' comment at the beginning of the statement.  Ex:

/* MagicQuery-001 */ select awsome_stuff from coolness_table where user_id=?;

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Martin Gregorie
In the course of migrating from (I think) Postgres 8.4 under Fedora 12
to Postgres 9.1 under Fedora 16 I managed to lose 'template1' and
associated data. The immediate effect is that, although my schema and
data seem to be intact and are working correctly, pg_dumpall refuses to
run because it can't find 'template1'.

The sequence was something like this:
1) First step was to bring my backups fully up to date by running
pg_dumpall with output to a USB hard drive and to run the incremental
backup that's part of my application (a mail archive that backs up into
a set of mbox files).

2) Installed Fedora 16, including the Postgres 9.1 package, and fully
updated all packages.

3) Initialised the Postgres user and basic database structures, and started
the server.

4) Attempted to use pg_restore to recover my database. This threw lots of
errors and was obviously stupid, so I killed it and reloaded the backup
with psql. This ran OK except that some data was incorrectly restored to
my mail messages table because tabs in some messages confused the COPY
command.

5) Started again: I dropped my schema and recreated it before letting my
application restore the database from its mbox files.

6) The next scheduled backup using pg_dumpall failed immediately because
it couldn't find 'template1'.

Observations

a) could the abrupt termination of pg_restore have removed 'template1'
from the database structure?

b) I've seen the confusion between tabs in data and the tabs used as
delimiters by pg_dump cause this problem in the past. It was fixed then,
but seems to have crept back in. 

Question

Is it possible to reinstate 'template1' and all its works without
reinitialising the database from scratch? It was suggested to me that
running initdb might fix it while leaving my data in place though my
source was uncertain about its success. Would this work or is there a
better way to restore 'template1' in a populated database system?


Martin



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Gary Chambers

Martin,


6) The next scheduled backup using pg_dumpall failed immediately because
it couldn't find 'template1'.


The template1 database is the default database to which pg_dumpall attempts
to connect.  If you use the -l or --database option, you can change that and
pg_dumpall will resume functioning as you expect.

--
Gary Chambers

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Marti Raudsepp
On Wed, Mar 7, 2012 at 16:23, Martin Gregorie mar...@gregorie.org wrote:
 Is it possible to reinstate 'template1' and all its works without
 reinitialising the database from scratch. It was suggested to me that
 running initdb might fix it while leaving my data in place though my
 source was uncertain about its success. Would this work or is there a
 better way to restore 'template1' in a populated database system?

You can always re-create template1 from template0:

CREATE DATABASE template1 WITH TEMPLATE template0;
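If the recreated database also needs to be flagged as a template again
(CREATE DATABASE does not carry that flag over from the template), the flag
can presumably be restored with a superuser catalog update such as:

-- assumption: template1 was just recreated and should be marked as a template again
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';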

 In the course of migrating from (I think) Postgres 8.4 under Fedora 12
 to Postgres 9.1 under Fedora 16 I managed to loose 'template1' and
 associated data.

As far as I can tell, the only way to remove the template1 database is
to rename it. Maybe that's what happened. This query should tell you
the name of the database that used to be template1:

SELECT datname FROM pg_database WHERE datistemplate;

Regards,
Marti

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Gary Chambers

Marti,


As far as I can tell, the only way to remove the template1 database is to
rename it.


Templates are databases with the datistemplate column set to true.  A
superuser can:

UPDATE pg_database SET datistemplate=false where datname='blah';
DROP DATABASE blah;

As far as Pg is concerned, there is no problem with removing the template1
or postgres databases.  That is not (or may not be) the case for some
utilities and Linux distributions out there that expect them to exist because
they are used as defaults.  I haven't yet tried removing template0.

--
Gary Chambers

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] rounding a timestamp to nearest x seconds

2012-03-07 Thread Andy Colson
Took me a while to figure this out, thought I'd paste it here for others 
to use:


create or replace function round_timestamp(timestamp, integer)
returns timestamp as $$
    select date_trunc('minute', $1)
         + cast(round(date_part('seconds', $1) / $2) * $2 || ' seconds' as interval);
$$ language sql immutable;


If you pass 10 to the second argument, it'll round the timestamp to the 
nearest 10 seconds.  Pass 5 to round to nearest 5 seconds, etc..
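For example, a couple of calls and the results I'd expect (not re-verified here):

select round_timestamp(timestamp '2012-03-07 12:34:57', 10);
-- expected: 2012-03-07 12:35:00
select round_timestamp(timestamp '2012-03-07 12:34:52', 5);
-- expected: 2012-03-07 12:34:50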


-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Adrian Klaver
On Wednesday, March 07, 2012 6:23:42 am Martin Gregorie wrote:

 
 Question
 
 Is it possible to reinstate 'template1' and all its works without
 reinitialising the database from scratch. It was suggested to me that
 running initdb might fix it while leaving my data in place though my
 source was uncertain about its success. Would this work or is there a
 better way to restore 'template1' in a populated database system?

So the question is, did you have your template1 customized in the 8.4 
installation and are you looking to transfer those added items to the 9.1 
installation?  Some information on the template system can be found here:

http://www.postgresql.org/docs/9.1/interactive/manage-ag-templatedbs.html

As others have said you can recreate a bare template1 from template0.  If you 
had not customized template1 that should do.  If you had customized it:

On the chance that the 8.4 cluster is still up and running you could just do a 
pg_dump of template1 on that cluster and load it into the new cluster. If the 
8.4 cluster is not running you can go to your pg_dumpall file and get the 
template1 objects from there: search for the \connect template1 string and look 
for the SQL commands until the next \connect string.


 
 
 Martin

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: could not find tuple for trigger 37463634

2012-03-07 Thread Tom Lane
mdione@orange.com writes:
   Short and simple: I'm trying to do the following:

 postgres@master-db01:~$ psql pnssi_profiles_bench
 psql (9.0.3)
 Type help for help.

 pnssi_profiles_bench=# drop SCHEMA _pnssi_slony_bench_profiles_01_110 cascade;
 ERROR:  could not find tuple for trigger 37463634

You might try reindexing pg_trigger before doing anything more invasive,
just in case the tuple is there but it's not being found because of a
messed-up index.
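A minimal form of that, run as a superuser in the affected database, would
presumably be just:

REINDEX TABLE pg_trigger;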

   As you can see, this is a slony setup, but I'm not sure that's relevant.

It could be --- Slony plays some not-very-nice games with the system
catalogs, or at least used to.  If reindex doesn't fix things I'd
suggest asking about this on the Slony lists.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Martin Gregorie
On Wed, 2012-03-07 at 09:38 -0500, Gary Chambers wrote:
 Martin,
 
  6) The next scheduled backup using pg_dumpall failed immediately because
  it couldn't find 'template1'.
 
 The template1 database is the default database to which pg_dumpall attempts
 to connect.  If you use the -l or --database option, you can change that and
 pg_dumpall will resume functioning as you expect.
 
I've just logged in under postgres and run '\l' - and the databases
postgres, template0 and template1 are still there:

postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres+
           |          |          |             |             | =c/postgres

So, is this a privilege issue? I don't understand the content of that
somewhat cryptic 'privilege' column. Is it set how you'd expect?

Marti:
==
I got this output:

postgres=# SELECT datname FROM pg_database WHERE datistemplate;
  datname  
---
 template0
 template1
(2 rows)

so it doesn't look like it's been renamed.
 

Martin



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Adrian Klaver
On Wednesday, March 07, 2012 6:23:42 am Martin Gregorie wrote:

 Question
 
 Is it possible to reinstate 'template1' and all its works without
 reinitialising the database from scratch. It was suggested to me that
 running initdb might fix it while leaving my data in place though my
 source was uncertain about its success. Would this work or is there a
 better way to restore 'template1' in a populated database system?


Was reading through this again and got to wondering: is pg_dumpall really 
necessary? Do you have multiple databases you want to dump at once? If not, a 
plain pg_dump against a particular database would work. There is also the 
advantage that if you use pg_dump -Fc you can use pg_restore. My use case for 
pg_dumpall is pg_dumpall -g to get the global information only from the cluster. 
I then use pg_dump for individual databases. You would still want to recreate a 
template1 because quite a few tools expect it to be there.



-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Adrian Klaver
On Wednesday, March 07, 2012 7:19:51 am Martin Gregorie wrote:
 On Wed, 2012-03-07 at 09:38 -0500, Gary Chambers wrote:
  Martin,
  
   6) The next scheduled backup using pg_dumpall failed immediately
   because it couldn't find 'template1'.
  
  The template1 database is the default database to which pg_dumpall
  attempts to connect.  If you use the -l or --database option, you can
  change that and pg_dumpall will resume functioning as you expect.
 
 I've just logged in under postgres and run '\l' - and the databases
 postgres, template0 and template1 are still there:
 
 postgres=# \l
   List of databases
Name|  Owner   | Encoding |   Collate   |Ctype|   Access
 privileges
 ---+--+--+-+-+-
 -- postgres  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
  template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 =c/postgres  +
 
 postgres=CTc/postgres
  template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 postgres=CTc/postgres+
 
 =c/postgres
 
 So, is this a privilege issue? I don't understand the content of that
 somewhat cryptic 'privilege' column. Is it set how you'd expect?

The privilege codes are explained here (look towards the bottom of the page):

http://www.postgresql.org/docs/9.1/interactive/sql-grant.html

The privileges look alright, same as mine.

So what user are you trying to restore the pg_dumpall data as?
What is the exact error message you get?

 
 Marti:
 ==
 I got this output:
 
 postgres=# SELECT datname FROM pg_database WHERE datistemplate;
   datname
 ---
  template0
  template1
 (2 rows)
 
 so it doesn't look like its been renamed.
 
 
 Martin

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Steve Clark

On 03/07/2012 09:43 AM, Marti Raudsepp wrote:

On Wed, Mar 7, 2012 at 16:23, Martin Gregoriemar...@gregorie.org  wrote:

Is it possible to reinstate 'template1' and all its works without
reinitialising the database from scratch. It was suggested to me that
running initdb might fix it while leaving my data in place though my
source was uncertain about its success. Would this work or is there a
better way to restore 'template1' in a populated database system?

You can always re-create template1 from template0:

CREATE DATABASE template1 WITH TEMPLATE template0;


This is what I have done when I lost template1.


In the course of migrating from (I think) Postgres 8.4 under Fedora 12
to Postgres 9.1 under Fedora 16 I managed to loose 'template1' and
associated data.

As far as I can tell, the only way to remove the template1 database is
to rename it. Maybe that's what happened. This query should tell you
the name of the database that used to be template1:

SELECT datname FROM pg_database WHERE datistemplate;

Regards,
Marti




--
Stephen Clark
*NetWolves*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Tom Lane
Martin Gregorie mar...@gregorie.org writes:
 I've just logged in under postgres and run '\l' - and the databases
 postgres, template0 and template1 are still there:

Oh, time for a new theory then.  What's the *exact* error message you
were getting from pg_dump?  Do you get the same from a manual attempt to
connect to template1?

 So, is this a privilege issue? I don't understand the content of that
 somewhat cryptic 'privilege' column. Is it set how you'd expect?

It looks reasonable offhand.  I'm a bit surprised that the public entry
isn't first for template1, but I think that shouldn't make any
difference.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: could not find tuple for trigger 37463634

2012-03-07 Thread mdione.ext
De : Tom Lane [mailto:t...@sss.pgh.pa.us]
  pnssi_profiles_bench=# drop SCHEMA _pnssi_slony_bench_profiles_01_110
 cascade;
  ERROR:  could not find tuple for trigger 37463634
 
 You might try reindexing pg_trigger before doing anything more invasive,
 just in case the tuple is there but it's not being found because of a
 messed-up index.

  Well, at some point we tried "REINDEX DATABASE pnssi_profiles_bench;"
and "REINDEX SYSTEM pnssi_profiles_bench;", would that be equivalent?

 
As you can see, this is a slony setup, but I'm not sure that's
  relevant.
 
 It could be --- Slony plays some not-very-nice games with the system
 catalogs, or at least used to.  If reindex doesn't fix things I'd
 suggest asking about this on the Slony lists.

  I'm sorry to say that I ended up taking drastic measures (several
"delete from pg_depend where refobjid=foo;") before trying anything
else (after some pressure from the-powers-that-are :-P). After that we
managed to drop the schema. I have the details if you want.

  In any case, this is not the first time we have dropped a slony schema to
rebuild everything, but it's the first time we have had this kind of
problem.

  I'm pretty sure that after what I have done some objects might still be
dangling in the db (even more, now that I remember, at some point we did
"delete from pg_trigger where tgname like '%01_110%';"; yes, I know,
it's a blunt approach). Is there any way to find those, if they exist?

--
Marcos Dione
SysAdmin
Astek Sud-Est
pour FT/TGPF/OPF/PORTAIL/DOP/HEBEX @ Marco Polo
04 97 12 62 45 - mdione@orange.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fixing the loss of 'template1'

2012-03-07 Thread Tom Lane
Martin Gregorie mar...@gregorie.org writes:
 On Wed, 2012-03-07 at 10:50 -0500, Tom Lane wrote:
 Oh, time for a new theory then.  What's the *exact* error message you
 were getting from pg_dump?  Do you get the same from a manual attempt to
 connect to template1?

 Yes:

 # psql -d template1 -U postgres
 psql: FATAL:  Peer authentication failed for user postgres

Oh, well, that hasn't got anything to do with the database permissions,
nor template1 for that matter.  That says you're trying to log in as
postgres and the OS reports that you're not postgres.  You either need
to su to postgres or change your authentication method.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full text search ranking: ordering using index and proximiti ranking with OR queries

2012-03-07 Thread Nicolas Grilly
According to Oleg in a previous discussion, ts_rank does not use the index
because the index does not store enough information for ranking:
http://archives.postgresql.org/pgsql-general/2011-07/msg00351.php

On Sat, Feb 18, 2012 at 12:39, Andrey Chursin an...@danasoft.ws wrote:

 Hello

 I have two unrelated questions about fts function ts_rank:

 1) I've created GiST index on column with fts vector, but query
 SELECT * FROM table ORDER BY ts_rank(field, :query) LIMIT 20
 is perfomed with sequential table scan. Index was created on field
 column. Does it mean FTS indexes does not support order by ranking? Or
 I need somehow to create separated index for ranking?

 2) I have a misunderstanding with proximity ranking work. Given two
 vectors 'a:1 b:2' and 'a:1 b:1000', i am measuring ts_rank(vector, 'a'
 | 'b'). And it is equal! But when i am replacing query with the &
 operator, e.g. asking for ts_rank(vector, 'a' & 'b') i am getting
 different numbers. Why do I get proximity ranking only for AND fts
 queries? This is a problem as far as to_tsquery produces OR queries,
 so i need self-written postprocessing of query to replace OR with AND.

 --
 Regards,
 Andrey

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Nicolas Grilly
Garden / Vocation City
+33 1 45 72 48 78 - office
+33 6 03 00 25 34 - mobile
www.gardentechno.com - Développement web  reporting / *Web development 
data analytics*
www.vocationcity.com - Plateforme de recrutement sur le web / *Web
recruitment platform*


Re: [GENERAL] ERROR: could not find tuple for trigger 37463634

2012-03-07 Thread Tom Lane
mdione@orange.com writes:
 De : Tom Lane [mailto:t...@sss.pgh.pa.us]
 You might try reindexing pg_trigger before doing anything more invasive,
 just in case the tuple is there but it's not being found because of a
 messed-up index.

   Well, at some point we tried REINDEX DATABASE pnssi_profiles_bench; 
 and REINDEX SYSTEM pnssi_profiles_bench;, would that be equivalent?

Yeah, that would have covered it.

   I'm pretty sure after what I have done some objects might still be 
 dangling in the db (even more, now that I remember, at some point we did 
 delete from pg_trigger where tgname like '%01_110%';; yes, I know, 
 it's a blunt approach). Is there any way to find those, if they exist?

You could try the queries in the oidjoins regression test, which would
verify all the implied foreign key relationships in the system catalogs.
I don't think that covers pg_depend though; you'd need to gin up more
complicated queries if you wanted to look for dangling pg_depend
entries.
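For the pg_depend side, a hand-rolled sketch of such a query (untested here,
and only covering entries that claim to describe triggers) might look like:

select d.*
from pg_depend d
where d.classid = 'pg_trigger'::regclass
  and not exists (select 1 from pg_trigger t where t.oid = d.objid);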

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Mike Blackwell
Given a pair of tables:

create table a (
  id serial,
  stuff text,
  more_stuff text
);

create table a_audit (
  id serial,
  old_record a,
  new_record a
);

How can one alter the structure of table a?  Attempting

ALTER TABLE a ADD COLUMN even_more_stuff text;

results in the message:

 ERROR:  cannot alter table a because column a_audit.new_record uses
its row type

A solution that doesn't lose the existing data is preferable.

Mike


Re: [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 11:22 AM, Mike Blackwell mike.blackw...@rrd.com wrote:
 Given a pair of tables:

 create table a (
   id serial,
   stuff text,
   more_stuff text
 );

 create table a_audit (
   id serial,
   old_record a,
   new_record a
 );

 How can one alter the structure of table a?  Attempting

 ALTER TABLE a ADD COLUMN even_more_stuff text;

 results in the message:

  ERROR:  cannot alter table a because column a_audit.new_record uses its
 row type

 A solution that doesn't lose the existing data is preferable.

works for me -- what version are you on?

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Mike Blackwell


 works for me -- what version are you on?

 merlin

 --

 [wcs1459@aclnx-cisp01 ~]$ psql --version
 psql (PostgreSQL) 9.1.1
 contains support for command-line editing


 [wcs1459@aclnx-cisp01 ~]$ cat x
 create table a (
   id serial,
   stuff text,
   more_stuff text
 );

 create table a_audit (
   id serial,
   a_old a,
   a_new a
 );

 alter table a add column even_more_stuff boolean not null default false;


 [wcs1459@aclnx-cisp01 ~]$ psql -f x
 psql:x:5: NOTICE:  CREATE TABLE will create implicit sequence a_id_seq
 for serial column a.id
 CREATE TABLE
 psql:x:11: NOTICE:  CREATE TABLE will create implicit sequence
 a_audit_id_seq for serial column a_audit.id
 CREATE TABLE
 psql:x:13: ERROR:  cannot alter table a because column a_audit.a_new
 uses its row type





Re: [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 11:45 AM, Mike Blackwell mike.blackw...@rrd.com wrote:

 works for me -- what version are you on?

 merlin

 --

 [wcs1459@aclnx-cisp01 ~]$ psql --version
 psql (PostgreSQL) 9.1.1
 contains support for command-line editing


 [wcs1459@aclnx-cisp01 ~]$ cat x
 create table a (
   id serial,
   stuff text,
   more_stuff text
 );

 create table a_audit (
   id serial,
   a_old a,
   a_new a
 );

 alter table a add column even_more_stuff boolean not null default false;


 [wcs1459@aclnx-cisp01 ~]$ psql -f x
 psql:x:5: NOTICE:  CREATE TABLE will create implicit sequence a_id_seq for
 serial column a.id
 CREATE TABLE
 psql:x:11: NOTICE:  CREATE TABLE will create implicit sequence
 a_audit_id_seq for serial column a_audit.id
 CREATE TABLE
 psql:x:13: ERROR:  cannot alter table a because column a_audit.a_new
 uses its row type

Aha! That's not what you posted last time.  You appended 'not null
default false', which inexplicably breaks the ALTER.

try this:
ALTER TABLE a ADD COLUMN even_more_stuff text not null;
ALTER TABLE a ALTER even_more_stuff set default false;
ALTER TABLE a DROP COLUMN even_more_stuff;
ALTER TABLE a ADD COLUMN even_more_stuff boolean not null default false;

(this really looks like a bug in postgres, cc-ing to bugs)

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Advisory transaction lock for 128-bit space

2012-03-07 Thread Andrey Chursin
Hello.
My application needs to take an advisory lock on a UUID key, much like
the pg_advisory_xact_lock function does. The problem is the argument type
of this function: it consumes an 8-byte value, not a 16-byte one.

I cannot lock on any (high, low or middle) 8-byte part of the UUID, as
that can produce unexpected deadlock issues, because locking on some
ID in this way implies locking on a wider set of IDs than I requested.

Now I am doing a 'trick' using index insert/delete, e.g.:
INSERT INTO table_with_uuid_pk(locking_value);
DELETE FROM table_with_uuid_pk WHERE inserted_row_above;

It works, but I have not found any description of such a 'feature' of
indexes. Can you please help me solve this synchronization issue, and
comment on the way I am dealing with it now (with index locking)?

P.S. The most significant fear I now have is that the currently used
method suffers from the same problem as locking on part of the UUID:
does the insert/delete really lock only on the value I passed to it?

-- 
Regards,
Andrey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full text search ranking: ordering using index and proximiti ranking with OR queries

2012-03-07 Thread Andrey Chursin
Is there any way to sort by ranking, avoiding seq scan?
The only way i see now is to use pg_trgm instead of ts_rank, but we
did not check yet how applicable is it for our purposes.

7 марта 2012 г. 20:53 пользователь Nicolas Grilly
nico...@gardentechno.com написал:
 According to Oleg in a previous discussion, ts_rank does not use index
 because index does not store enough information for ranking:
 http://archives.postgresql.org/pgsql-general/2011-07/msg00351.php

 On Sat, Feb 18, 2012 at 12:39, Andrey Chursin an...@danasoft.ws wrote:

 Hello

 I have two unrelated questions about fts function ts_rank:

 1) I've created GiST index on column with fts vector, but query
 SELECT * FROM table ORDER BY ts_rank(field, :query) LIMIT 20
 is perfomed with sequential table scan. Index was created on field
 column. Does it mean FTS indexes does not support order by ranking? Or
 I need somehow to create separated index for ranking?

 2) I have a misunderstanding with proximity ranking work. Given two
 vectors 'a:1 b:2' and 'a:1 b:1000', i am measuring ts_rank(vector, 'a'
  | 'b'). And it is equal! But when i am replacing query with the &
  operator, e.g. asking for ts_rank(vector, 'a' & 'b') i am getting
 different numbers. Why do I get proximity ranking only for AND fts
 queries? This is a problem as far as to_tsquery produces OR queries,
 so i need self-written postprocessing of query to replace OR with AND.

 --
 Regards,
 Andrey

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




 --
 Nicolas Grilly
 Garden / Vocation City
 +33 1 45 72 48 78 - office
 +33 6 03 00 25 34 - mobile
 www.gardentechno.com - Développement web  reporting / Web development 
 data analytics
 www.vocationcity.com - Plateforme de recrutement sur le web / Web
 recruitment platform



-- 
Regards,
Andrey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] corrupted table postgresql 8.3

2012-03-07 Thread Merlin Moncure
On Tue, Mar 6, 2012 at 6:34 PM, Matteo Sgalaberni sg...@sgala.com wrote:
 - Original Message -
 On 6.3.2012 21:24, Matteo Sgalaberni wrote:
  Hi people!
 
  I have a pg 8.3. Today I issued in a database that comand:

 Which minor version? The last one in this branch is 8.3.18 and if
 you're
 running an old one, there might be an important bugfix ...

 8.3.0. I quickly read all the release notes of 8.3.x and I didn't see any fix
 that could be directly related to what happened to me... there are issues
 related to ALTER TABLE but with other options like PRIMARY KEY or other
 parameters.
 Are there fixes that could be related to what's happened to me?

hm:

* Also, a VACUUM FULL that failed partway through vacuuming a system
catalog could result in cache corruption in concurrent database
sessions.

* Fix btree index corruption from insertions concurrent with vacuuming
(Tom Lane)

* These bugs could result in index corruption after reindexing a
system catalog. They are not believed to affect user indexes.

I mostly agree, but it's possible you got hit by a fixed bug.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] corrupted table postgresql 8.3

2012-03-07 Thread Matteo Sgalaberni
- Original Message -
 I mostly agree, but it's possible you got hit by a fixed bug.
I'm lucky ;)

Does anyone know how I can correct the catalog and drop the old/renamed table?

Or probably it is simpler to:
- pg_dump database
- DROP database
- CREATE DATABASE
- psql database < dump

Or do I need to destroy and init the whole pg cluster?

I would like to be pretty sure of what can happen so I can plan the maintenance 
window...

Thanks for any hint!

Matteo

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Automated Backup Script Help (Linux)

2012-03-07 Thread Carlos Mennens
I'm trying to get the automated backup scripts to work from:

http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

Currently I'm using PostgreSQL 9.1.3 and have the following three files:

carlos@db1:~/postgresql$ ls -l
total 20
drwxr-xr-x 2 carlos users 4096 Mar  7 13:13 backup
-rw-r--r-- 1 carlos users 1434 Mar  7 13:19 pg_backup.config
-rw-r--r-- 1 carlos users 4304 Mar  7 12:54 pg_backup_rotated.sh
-rw-r--r-- 1 carlos users 3379 Mar  7 12:54 pg_backup.sh

My Linux shell user 'carlos' has rwx on the entire directory, and the
database role 'carlos' is listed as a REPLICATION role:

postgres=# \du
                                      List of roles
 Role name |                            Attributes                           | Member of
-----------+-----------------------------------------------------------------+-----------
 carlos    | Superuser, No inheritance, Create role, Create DB, Replication  | {it}

Now when I attempt to run the script for the 1st time, I get the
following error:

carlos@db1:~/postgresql$ pwd
/home/carlos/postgresql

carlos@db1:~/postgresql$ ls -l
total 20
drwxr-xr-x 2 carlos users 4096 Mar  7 13:13 backup
-rw-r--r-- 1 carlos users 1435 Mar  7 13:25 pg_backup.config
-rw-r--r-- 1 carlos users 4304 Mar  7 12:54 pg_backup_rotated.sh
-rw-r--r-- 1 carlos users 3379 Mar  7 12:54 pg_backup.sh

carlos@db1:~/postgresql$ sh pg_backup.sh
pg_backup.sh: 7: cd: can't cd to pg_backup.sh
pg_backup.sh: 8: pg_backup.sh: source: not found
pg_backup.sh: 16: [: !=: unexpected operator
Making backup directory in 2012-03-07/
pg_backup.sh: 54: pg_backup.sh: Bad substitution

So the only thing this script is doing for me is creating an empty
folder with the date as the name. Any idea what I'm doing wrong? I've
attached the configuration file since this is the only thing I was
told that needs to be modified. The contents are exactly as they are
on my server.


pg_backup.config
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advisory transaction lock for 128-bit space

2012-03-07 Thread Bill Moran
In response to Andrey Chursin an...@danasoft.ws:

 Hello.
 My application need to set advisory lock on UUID key, almost like it
 does pg_advisory_xact_lock function. The problem is argument type of
 this function - it consumes 8-byte value, not 16-byte.
 
 I can not lock on any(hi, low or middle) 8-byte part of UUID, as far
 as it can produce unexpected deadlock issues, because locking on some
 ID in this way will imply locking on more wide set of ID then I
 requested.
 
 Now I am doing the 'trick' using indexing insert/delete, e.g.:
 INSERT INTO table_with_uuid_pk(locking_value);
 DELETE FROM table_with_uuid_pk WHERE inserted_row_above;
 
 It works, but I did not found any description of such 'feature' of
 indexes. Can u, please, help to solve this synchronization issue, and
 comment the way I am dealing with it now(with index locking)
 
 P.S. The most significant fear I know have, is that currently used
 method suffers with same problem as locking for part of UUID - doest
 insert/delete really locks only on the value i passed to it?

Have you considered using row locking?  If you need the lock to
extend across multiple transactions, then row locking won't work.

The problem with what you're doing is that if a process crashes, or
is otherwise uncleanly disconnected, the lock table is polluted.
My recommendation would be to add another column to the table with
the UUID key that keeps a unique 8 byte number that you can use
the advisory lock on.  You can use a sequence to automatically
generate unique values for it, and as long as you don't exceed
2 billion rows, you'll be fine.  Of course, if you expect that
you might exceed 2 billion rows, that won't work either.
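A minimal sketch of that suggestion (the table and column names here are
hypothetical, not from the original post):

-- add a sequence-backed bigint surrogate next to the UUID key
create sequence items_lock_id_seq;
alter table items
    add column lock_id bigint not null default nextval('items_lock_id_seq');

-- take the transaction-scoped advisory lock on the surrogate instead of the UUID
-- ($1 is the UUID bound by the application)
select pg_advisory_xact_lock(lock_id) from items where id = $1;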

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Converting stored procedures from SQL Anywhere to PostGres.

2012-03-07 Thread mgould
I am trying to convert stored procedures from SQL Anywhere to Postgres,
and I am getting this error:

ERROR:  syntax error at or near return
LINE 2: return integer AS 
^


** Error **

ERROR: syntax error at or near return
SQL state: 42601
Character: 81

when I run this.  I'm not sure what the problem is here.  Any help would
be appreciated.


CREATE OR REPLACE FUNCTION iss.ConfigAddKey (in
isscontrib.citext,pkeyname)
return integer AS 
$BODY$
begin

declare l:retval integer;
declare l:id integer;

if exists(select id into l:id from iss.configkeys where keyname =
pkeyname) then
  l:retval := l:id
else
  insert into iss.configkeys(keyname) values (pKeyname);
end if;
return retval

end;
$BODY$
LANGUAGE 'plpgsql';
 
Michael Gould
Intermodal Software Solutions, LLC
904-226-0978


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ts_rank seems very slow (140 ranked documents / second on my machine)

2012-03-07 Thread Nicolas Grilly
Reopening a six months old discussion... Are there any plans to improve
ts_rank performance by making it able to use GIN/GiST indices?

From re-reading Oleg's answers, I understand FTS indices do not contain
enough information for ts_rank. Because of that, ts_rank has to read the
ts_vector, which is stored in the TOAST table, which triggers a random read
for each ranked document.

Cheers,

Nicolas Grilly

On Wed, Jul 13, 2011 at 18:55, Nicolas Grilly nico...@gardentechno.comwrote:

 The first query ran in 347 seconds; the second one in 374 seconds.
 Conclusion: There is no significant overhead in the ts_rank function
 itself. It's slow because ts_rank has to read in random order 40 000
 ts_vector stored in TOAST table. The slow execution time looks like
 a direct consequence of storing ts_vector in TOAST table...

   :( The only solution I see is to store enough information for ranking
 in index.

 Is it the expected behavior? How can I improve that?



Re: [GENERAL] Automated Backup Script Help (Linux)

2012-03-07 Thread Andrew Gould
On Wed, Mar 7, 2012 at 12:33 PM, Carlos Mennens
carlos.menn...@gmail.com wrote:
 I'm trying to get the automated backup scripts to work from:

 http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

 Currently I'm using PostgreSQL 9.1.3 and have the following three files:

 carlos@db1:~/postgresql$ ls -l
 total 20
 drwxr-xr-x 2 carlos users 4096 Mar  7 13:13 backup
 -rw-r--r-- 1 carlos users 1434 Mar  7 13:19 pg_backup.config
 -rw-r--r-- 1 carlos users 4304 Mar  7 12:54 pg_backup_rotated.sh
 -rw-r--r-- 1 carlos users 3379 Mar  7 12:54 pg_backup.sh

 My Linux shell user 'carlos' has rwx to the entire directory as well
 as the database role 'carlos' is listed as a REPLICATION role:

 postgres=# \du
                                     List of roles
  Role name |                           Attributes
     | Member of
 ---++---
  carlos    | Superuser, No inheritance, Create role, Create DB,
 Replication | {it}

 Now when I attempt to run the script for the 1st time, I get the
 following error:

 carlos@db1:~/postgresql$ pwd
 /home/carlos/postgresql

 carlos@db1:~/postgresql$ ls -l
 total 20
 drwxr-xr-x 2 carlos users 4096 Mar  7 13:13 backup
 -rw-r--r-- 1 carlos users 1435 Mar  7 13:25 pg_backup.config
 -rw-r--r-- 1 carlos users 4304 Mar  7 12:54 pg_backup_rotated.sh
 -rw-r--r-- 1 carlos users 3379 Mar  7 12:54 pg_backup.sh

 carlos@db1:~/postgresql$ sh pg_backup.sh
 pg_backup.sh: 7: cd: can't cd to pg_backup.sh

Doesn't this mean it's trying to cd to a file instead of a directory?

Andrew

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full text search ranking: ordering using index and proximiti ranking with OR queries

2012-03-07 Thread Nicolas Grilly
In a previous discussion thread, Oleg suggested that ts_rank is unable to
use GIN indices:
http://archives.postgresql.org/pgsql-general/2011-07/msg00351.php

This is the only information I have about this.

On Wed, Mar 7, 2012 at 18:59, Andrey Chursin an...@danasoft.ws wrote:

 Is there any way to sort by ranking, avoiding seq scan?
 The only way i see now is to use pg_trgm instead of ts_rank, but we
 did not check yet how applicable is it for our purposes.



Re: [GENERAL] Automated Backup Script Help (Linux)

2012-03-07 Thread John R Pierce

On 03/07/12 10:33 AM, Carlos Mennens wrote:

carlos@db1:~/postgresql$ sh pg_backup.sh
pg_backup.sh: 7: cd: can't cd to pg_backup.sh
pg_backup.sh: 8: pg_backup.sh: source: not found
pg_backup.sh: 16: [: !=: unexpected operator
Making backup directory in 2012-03-07/
pg_backup.sh: 54: pg_backup.sh: Bad substitution

So the only thing this script is doing for me is creating an empty
folder with the date as the name. Any idea what I'm doing wrong? I've
attached the configuration file since this is the only thing I was
told that needs to be modified. The contents are exactly as they are
on my server.


What does that pg_backup.sh script look like?  You're getting shell errors
on lines 7 and 8 of the script.


this has nothing to do with postgresql.

--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Automated Backup Script Help (Linux)

2012-03-07 Thread Carlos Mennens
On Wed, Mar 7, 2012 at 2:08 PM, John R Pierce pie...@hogranch.com wrote:
 whats that pg_backup.sh script look like?  you're getting shell errors on
 line 7 and 8 of the script.

I didn't attach it since it's the same as the link I referenced in my
original post. I'll attach it in here for those that don't or can't
download it:

 this has nothing to do with postgresql.

It's a script to back up and interact with the database, so I think it
does. It may not be specifically what this mailing list was intended
for, but it's the best suited list that I could find, not to mention the
script is posted on the main PostgreSQL wiki site, so it appears to be
legit and supported...

http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

Thanks for any info!
#!/bin/bash

###########################
####### LOAD CONFIG #######
###########################

SCRIPTPATH=$(cd ${0%/*} && pwd -P)
source $SCRIPTPATH/pg_backup.config


###########################
#### PRE-BACKUP CHECKS ####
###########################

# Make sure we're running as the required backup user
if [ "$BACKUP_USER" != "" -a "$(id -un)" != "$BACKUP_USER" ]; then
        echo "This script must be run as $BACKUP_USER. Exiting."
        exit 1;
fi;


###########################
### INITIALISE DEFAULTS ###
###########################

if [ ! $HOSTNAME ]; then
        HOSTNAME="localhost"
fi;

if [ ! $USERNAME ]; then
        USERNAME="postgres"
fi;


###########################
#### START THE BACKUPS ####
###########################

FINAL_BACKUP_DIR=$BACKUP_DIR"`date +\%Y-\%m-\%d`/"

echo "Making backup directory in $FINAL_BACKUP_DIR"

if ! mkdir -p $FINAL_BACKUP_DIR; then
        echo "Cannot create backup directory in $FINAL_BACKUP_DIR. Go and fix it!"
        exit 1;
fi;


###########################
### SCHEMA-ONLY BACKUPS ###
###########################

for SCHEMA_ONLY_DB in ${SCHEMA_ONLY_LIST//,/ }
do
        SCHEMA_ONLY_CLAUSE="$SCHEMA_ONLY_CLAUSE or datname ~ '$SCHEMA_ONLY_DB'"
done

SCHEMA_ONLY_QUERY="select datname from pg_database where false $SCHEMA_ONLY_CLAUSE order by datname;"

echo -e "\n\nPerforming schema-only backups"
echo -e "\n"

SCHEMA_ONLY_DB_LIST=`psql -h "$HOSTNAME" -U "$USERNAME" -At -c "$SCHEMA_ONLY_QUERY" postgres`

echo -e "The following databases were matched for schema-only backup:\n${SCHEMA_ONLY_DB_LIST}\n"

for DATABASE in $SCHEMA_ONLY_DB_LIST
do
        echo "Schema-only backup of $DATABASE"

        if ! pg_dump -Fp -s -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz.in_progress; then
                echo "[!!ERROR!!] Failed to backup database schema of $DATABASE"
        else
                mv $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz
        fi
done


###########################
###### FULL BACKUPS #######
###########################

for SCHEMA_ONLY_DB in ${SCHEMA_ONLY_LIST//,/ }
do
        EXCLUDE_SCHEMA_ONLY_CLAUSE="$EXCLUDE_SCHEMA_ONLY_CLAUSE and datname !~ '$SCHEMA_ONLY_DB'"
done

FULL_BACKUP_QUERY="select datname from pg_database where not datistemplate and datallowconn $EXCLUDE_SCHEMA_ONLY_CLAUSE order by datname;"

echo -e "\n\nPerforming full backups"
echo -e "\n"

for DATABASE in `psql -h "$HOSTNAME" -U "$USERNAME" -At -c "$FULL_BACKUP_QUERY" postgres`
do
        if [ $ENABLE_PLAIN_BACKUPS = "yes" ]
        then
                echo "Plain backup of $DATABASE"

                if ! pg_dump -Fp -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress; then
                        echo "[!!ERROR!!] Failed to produce plain backup database $DATABASE"
                else
                        mv $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE".sql.gz
                fi
        fi

        if [ $ENABLE_CUSTOM_BACKUPS = "yes" ]
        then
                echo "Custom backup of $DATABASE"

                if ! pg_dump -Fc -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" -f $FINAL_BACKUP_DIR"$DATABASE".custom.in_progress; then
                        echo "[!!ERROR!!] Failed to produce custom backup database $DATABASE"
                else
                        mv $FINAL_BACKUP_DIR"$DATABASE".custom.in_progress $FINAL_BACKUP_DIR"$DATABASE".custom
                fi
        fi

done

echo -e "\nAll database backups complete!"

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting stored procedures from SQL Anywhere to PostGres.

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 12:51 PM,  mgo...@isstrucksoftware.net wrote:
 I am trying to convert stored procedures from SQL Anywhere to Postgres.
 I am getting error

 ERROR:  syntax error at or near return
 LINE 2: return integer AS
        ^


 ** Error **

 ERROR: syntax error at or near return
 SQL state: 42601
 Character: 81

 when I run this.  I'm not sure what the problem is here.  Any help would
 be appreciated.


 CREATE OR REPLACE FUNCTION iss.ConfigAddKey (in
 isscontrib.citext,pkeyname)
 return integer AS
 $BODY$
 begin

 declare l:retval integer;
 declare l:id integer;

 if exists(select id into l:id from iss.configkeys where keyname =
 pkeyname) then
  l:retval := l:id
 else
  insert into iss.configkeys(keyname) values (pKeyname);
 end if;
 return retval

 end;
 $BODY$
 LANGUAGE 'plpgsql';

Well, the languages are obviously not compatible.  You're going to
have to convert them all, and to do that you're going to need a decent
understanding of both languages.  I doubt there are any automatic
tools to do it.

start here:
http://www.postgresql.org/docs/current/static/plpgsql.html

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Show Databases via ODBC

2012-03-07 Thread Bret Stern
Is it possible through ODBC to connect to a PostgreSQL
server and query for the available databases?

When our application upgrades, we typically create a
new database. I want to design a form which allows
the user to select the old database, which then
migrates data to the new (currently connected)
database.

Regards


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Show Databases via ODBC

2012-03-07 Thread Szymon Guz
On 7 March 2012 20:36, Bret Stern bret_st...@machinemanagement.com wrote:

 Is it possible through ODBC to connect to a PostgreSql
 server and query for the available databases?

 When our application upgrades, we typically create a
 new database. I want to design a form which allows
 the user to select the old database, which then
 migrates data to the new (currently connected)
 database.

 Regards



Hi,
I think this query would be helpful:

  select datname from pg_database;
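
If template databases and databases that don't accept connections should be
filtered out, a variant along these lines (just a sketch) may be closer to
what's needed:

  select datname
  from pg_database
  where not datistemplate and datallowconn
  order by datname;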

regards
Szymon


Re: [GENERAL] Automated Backup Script Help (Linux)

2012-03-07 Thread John R Pierce

On 03/07/12 11:20 AM, Carlos Mennens wrote:

SCRIPTPATH=$(cd ${0%/*} && pwd -P)


that's line 7 by my count.


that's some very strange stuff there.  AFAIK ${0%/*} means: take $0, 
which is the name of the invoking command ('sh' in the invocation you 
gave), and remove the shortest trailing match of the pattern /*, i.e. 
strip off the last path component


I think you're NOT supposed to invoke this script with `sh scriptname` 
but instead invoke it as 'path/to/scriptname' (therefore, it has to be 
chmod +x).  If it was invoked as ./script, then it would cd to ., and set 
SCRIPTPATH to the full path of .


again, this is a Linux shell issue and has nothing to do with Postgres, 
regardless of the rest of the script.








--
john r pierce                            N 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 1:17 PM, Mike Blackwell mike.blackw...@rrd.com wrote:
 As a followup, the workaround fails if there is data in the source table due
 to the initial null value placed in the existing data rows.

 [wcs1459@aclnx-cisp01 ~]$ psql --port=5433 -e -f x
 begin;
 BEGIN
 create table a (
   id serial,
   stuff text,
   more_stuff text
 );
 psql:x:6: NOTICE:  CREATE TABLE will create implicit sequence "a_id_seq" for
     serial column "a.id"
 CREATE TABLE
 create table a_audit (
   id serial,
   a_old a,
   a_new a
 );
 psql:x:12: NOTICE:  CREATE TABLE will create implicit sequence "a_audit_id_seq"
     for serial column "a_audit.id"
 CREATE TABLE
 insert into a (stuff, more_stuff) values ('some', 'thing');
 INSERT 0 1
 ALTER TABLE a ADD COLUMN even_more_stuff boolean not null;
 psql:x:17: ERROR:  column "even_more_stuff" contains null values
 ALTER TABLE a ALTER even_more_stuff set default false;
 psql:x:18: ERROR:  current transaction is aborted, commands ignored until
     end of transaction block
 ALTER TABLE a DROP COLUMN even_more_stuff;
 psql:x:19: ERROR:  current transaction is aborted, commands ignored until
     end of transaction block
 ALTER TABLE a ADD COLUMN even_more_stuff boolean not null default false;
 psql:x:20: ERROR:  current transaction is aborted, commands ignored until
     end of transaction block
 rollback;
 ROLLBACK

yup (please respond to the list) -- you can work around the workaround
by UPDATE-ing the table to set the field before applying the not null
bit (rough sketch below).  Note that if you did this, the foreign table
containing the type would have the new column all as null.
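
A minimal sketch against the example tables above (untested):

  ALTER TABLE a ADD COLUMN even_more_stuff boolean;
  UPDATE a SET even_more_stuff = false;
  ALTER TABLE a ALTER COLUMN even_more_stuff SET DEFAULT false;
  ALTER TABLE a ALTER COLUMN even_more_stuff SET NOT NULL;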

IMO, the server is being too strict on the dependency check.  Perhaps
there are some defenses here that are an early form of trying to get
field constraints to pass through to the foreign column, or it's just
a plain old bug.  I took a quick look at tablecmds.c to see if I could
find an easy fix, but it wasn't clear why the default was forcing a
dependency error and I punted.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [BUGS] [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 On Wed, Mar 7, 2012 at 11:45 AM, Mike Blackwell mike.blackw...@rrd.com 
 wrote:
 alter table a add column even_more_stuff boolean not null default false;

 aha! that's not what you posted last time.  you appended 'not null
 default false'; which inexplicably breaks the ALTER.

 try this:
 ALTER TABLE a ADD COLUMN even_more_stuff text not null;
 ALTER TABLE a ALTER even_more_stuff set default false;
 ALTER TABLE a DROP COLUMN even_more_stuff;
 ALTER TABLE a ADD COLUMN even_more_stuff boolean not null default false;

 (this really looks like a bug in postgres, cc-ing to bugs)

It is not a bug.  The ALTER ADD ... DEFAULT ... form implies rewriting
every existing tuple of the rowtype to insert a non-null value in the
added column, and we don't have support for doing that to rowtype
columns, only to the target table and descendants.  Without a default,
it's just a catalog adjustment and doesn't involve rewriting any data.
(This stems from the fact that columns beyond a tuple's natts value are
presumed null, so we can let ADD COLUMN without a default just change
the catalogs and a null column effectively springs into existence for
every existing tuple.  ALTER ADD ... DEFAULT is specified to have a
different result, and it's not free.)

This probably could be done for rowtype columns as well, but nobody has
collected the necessary round tuits.  I think there was some fear of
locking/deadlock issues, too.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [BUGS] [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Alvaro Herrera

Excerpts from Tom Lane's message of mié mar 07 17:31:32 -0300 2012:

 This probably could be done for rowtype columns as well, but nobody has
 collected the necessary round tuits.  I think there was some fear of
 locking/deadlock issues, too.

It's probably easy to do if you require it to be marked INVALID
initially and then validate the tables using it one by one.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [BUGS] [GENERAL] Altering a table with a rowtype column

2012-03-07 Thread Merlin Moncure
On Wed, Mar 7, 2012 at 2:31 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 On Wed, Mar 7, 2012 at 11:45 AM, Mike Blackwell mike.blackw...@rrd.com 
 wrote:
 alter table a add column even_more_stuff boolean not null default false;

 aha! that's not what you posted last time.  you appended 'not null
 default false'; which inexplicably breaks the ALTER.

 try this:
 ALTER TABLE a ADD COLUMN even_more_stuff text not null;
 ALTER TABLE a ALTER even_more_stuff set default false;
 ALTER TABLE a DROP COLUMN even_more_stuff;
 ALTER TABLE a ADD COLUMN even_more_stuff boolean not null default false;

 (this really looks like a bug in postgres, cc-ing to bugs)

 It is not a bug.  The ALTER ADD ... DEFAULT ... form implies rewriting
 every existing tuple of the rowtype to insert a non-null value in the
 added column, and we don't have support for doing that to rowtype
 columns, only to the target table and descendants.

I'm not buying that... it implies no such thing.  In particular, for
table-as-rowtype columns, there's no way that I can see to have
default values be generated.  So why does it follow that the dependent
table has to be rewritten?  Column constraints are not enforced on the
rowtype, so it follows that default shouldn't be either considering
there's no way to get the default to fire.  Composite type (or table
based composite) defaults are applied to the composite as a whole, not
to specific fields.

On a practical level, the error blocks nothing -- you can bypass it
trivially.   It's just an annoyance that prevents things that users
would like to be able to do with table row types.  So I'd argue to
remove the check, although I can kinda see the argument that it's not
a bug unless the check was recently introduced so that it broke older
code.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Show Databases via ODBC

2012-03-07 Thread Bret Stern
Works perfectly. I'll trap and show only our databases to the user.
Thanks, all.
On Wed, 2012-03-07 at 21:00 +0100, Szymon Guz wrote:
 On 7 March 2012 20:36, Bret Stern bret_st...@machinemanagement.com
 wrote:
 Is it possible through ODBC to connect to a PostgreSql
 server and query for the available databases?
 
 When our application upgrades, we typically create a
 new database. I want to design a form which allows
 the user to select the old database, which then
 migrates data to the new (currently connected)
 database.
 
 Regards
 
 
 
 
 Hi,
 I think this query would be helpful: 
 
 
   select datname from pg_database;
 
 
 regards
 Szymon



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting stored procedures from SQL Anywhere to PostGres.

2012-03-07 Thread Adrian Klaver

On 03/07/2012 10:51 AM, mgo...@isstrucksoftware.net wrote:

I am trying to convert stored procedures from SQL Anywhere to Postgres.
I am getting error

ERROR:  syntax error at or near return
LINE 2: return integer AS
 ^


** Error **

ERROR: syntax error at or near return
SQL state: 42601
Character: 81

when I run this.  I'm not sure what the problem is here.  Any help would
be appreciated.


CREATE OR REPLACE FUNCTION iss.ConfigAddKey (in
isscontrib.citext,pkeyname)
return integer AS

  ^^^ Should be returns

For a good overview see:
http://www.postgresql.org/docs/9.0/interactive/plpgsql-structure.html




$BODY$
begin

declare l:retval integer;
declare l:id integer;

if exists(select id into l:id from iss.configkeys where keyname =
pkeyname) then
   l:retval := l:id
else
   insert into iss.configkeys(keyname) values (pKeyname);
end if;
return retval

end;
$BODY$
LANGUAGE 'plpgsql';

Michael Gould
Intermodal Software Solutions, LLC
904-226-0978
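
As a rough, untested sketch (assuming the intent is to return the id of an
existing key, or insert a new row and return its generated id; note the
parameter is declared as name-then-type), the converted function might look
something like:

CREATE OR REPLACE FUNCTION iss.configaddkey(in pkeyname isscontrib.citext)
RETURNS integer AS
$BODY$
DECLARE
    l_id integer;
BEGIN
    -- look for an existing key
    SELECT id INTO l_id FROM iss.configkeys WHERE keyname = pkeyname;
    IF NOT FOUND THEN
        -- no match: insert it and grab the generated id
        INSERT INTO iss.configkeys(keyname) VALUES (pkeyname)
            RETURNING id INTO l_id;
    END IF;
    RETURN l_id;
END;
$BODY$
LANGUAGE plpgsql;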





--
Adrian Klaver
adrian.kla...@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] autovacuum and transaction id wraparound

2012-03-07 Thread pawel_kukawski
Hi,

I have a question regarding the following snippet in the official Postgres
documentation:

If for some reason autovacuum fails to clear old XIDs from a table, the
system will begin to emit warning messages like this when the database's
oldest XIDs reach ten million transactions from the wraparound point...

Do you know of any real reasons why autovacuum might fail to clear old XIDs?

Is this highly probable?
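
For reference, the age of each database's oldest XID can be watched with a
standard catalog query along these lines (shown only as an illustration):

  select datname, age(datfrozenxid) as xid_age
  from pg_database
  order by xid_age desc;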

Thanks,
Pawel

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/autovacuum-and-transaction-id-wraparound-tp5545412p5545412.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] FDWs, foreign servers and user mappings

2012-03-07 Thread Joe Abbate
Hi,

We've been discussing the subject in the pyrseas-general ML, but I think
it would be beneficial to get feedback from a broader audience.

The Pyrseas dbtoyaml utility outputs the objects in YAML, which ends up
looking like a tree (see
http://pyrseas.readthedocs.org/en/latest/dbtoyaml.html ), similar to
the tree in pgAdmin's Object Browser panel.

Any object that is owned by a schema (tables, functions, etc.) is listed
naturally under the schema.  The objects (columns, constraints, etc.)
that belong to a table are listed under the table, and so forth.  Only a
few object types fall outside the schema tree, e.g., casts, languages
(extensions, collations in 9.1).

Because FDW's, foreign servers and user mappings are not directly tied
to a schema and their identifiers must be unique within a given
database, I first added them at the top level, e.g.,

foreign data wrapper fdw1:
  ...
schema public:
  ...
server fs1:
  wrapper: fdw1
user mapping for PUBLIC server fs1:
  options:
  - xxx=yyy

A Pyrseas user suggested that servers ought to be listed under the
associated FDW, e.g.,

foreign data wrapper fdw1:
  server fs1:
  ...

The question is whether user mappings should also be listed under the
server, i.e.,

foreign data wrapper fdw1:
  server fs1:
user mapping for PUBLIC:
  options:
  - xxx=yyy

Does that make sense?  And if so, will it make sense in the future
(considering potential FDW developments)?

A related question was whether user mapping options, which may include
sensitive data such as passwords, should be output by default.  I'm not
sure if this should extend to other FDW-related options, since a server
option could presumably be a URI that includes logon information.

Thanks in advance.

Joe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] phantom long-running query (check_postgres.pl reports a long-running query, but the database logs don't show it)

2012-03-07 Thread Aleksey Tsalolikhin
On Thu, Feb 9, 2012 at 12:57 AM, Achilleas Mantzios
ach...@smadev.internal.net wrote:
 On Πεμ 09 Φεβ 2012 02:32:37 Aleksey Tsalolikhin wrote:
 I got an alert from check_postgres.pl today on a long-running query on
 our production database, but our PostgreSQL 8.4.9 server log, which is
 configured to log queries over 5 seconds long
 (log_min_duration_statement = 5000) does not show the query.
...
 Can you think of some scenario where check_postgres.pl would see a
 long-running query but it would not get logged by the database server?

 You should either look at the source of this perl script to see what it is
 doing ...

Thank you, Achilleas.  This issue came up today again - check_postgres reports
a long-running query, but no corresponding entry in the Postgres log even though
I have log_min_duration_statement turned on.

I followed your suggestion and found what query check_postgres uses (below).

Turns out the long-running query is actually a transaction that was never
completed!

Now, is there any way for me to look inside that transaction and see what
the queries are?

check_postgres query to identify long-running queries:

SELECT xact_start,
       SUBSTR(current_query,0,100) AS current_query,
       client_addr,
       client_port,
       procpid,
       COALESCE(ROUND(EXTRACT(epoch FROM now()-query_start)),0) AS qtime,
       datname,
       usename
FROM pg_stat_activity
WHERE current_query <> '<IDLE>' AND usename <> 'postgres'
ORDER BY xact_start, procpid DESC;

My long-running query that is in the middle of a transaction:

          xact_start           |     current_query     | client_addr | client_port | procpid | qtime |  datname   |  usename
-------------------------------+-----------------------+-------------+-------------+---------+-------+------------+------------
 2012-03-07 09:53:15.49363-08  | <IDLE> in transaction |             |          -1 |   31881 |  6824 | [redacted] | [redacted]
(1 row)

Thanks,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] rounding a timestamp to nearest x seconds

2012-03-07 Thread Daniele Varrazzo
On Wed, Mar 7, 2012 at 3:09 PM, Andy Colson a...@squeakycode.net wrote:
 Took me a while to figure this out, thought I'd paste it here for others to
 use:

 create or replace function round_timestamp(timestamp, integer) returns
 timestamp as $$
        select date_trunc('minute', $1) + cast(round(date_part('seconds',
 $1)/$2)*$2 || ' seconds' as interval);
 $$ language sql immutable;


 If you pass 10 to the second argument, it'll round the timestamp to the
 nearest 10 seconds.  Pass 5 to round to nearest 5 seconds, etc..

Your function can only round the seconds part: it cannot round on intervals
longer than one minute, and because it always truncates to the minute first
it creates irregular intervals, e.g.:

= select round_timestamp('2012-03-12 01:42:58', 13);
 2012-03-12 01:42:52
= select round_timestamp('2012-03-12 01:42:59', 13);
 2012-03-12 01:43:05
= select round_timestamp('2012-03-12 01:43:00', 13);
 2012-03-12 01:43:00

You don't get discontinuities if you map the timestamp onto the real
axis by extracting the epoch, do the rounding there, and then go back
into the time domain:

create or replace function round_timestamp(timestamp, integer) returns
timestamp as $$
select 'epoch'::timestamp + '1 second'::interval * ($2 *
round(date_part('epoch', $1) / $2));
$$ language sql immutable;

This version can round on any interval specified in seconds (but it
would be easy to specify the step as interval: date_part('epoch',
interval) returns the interval length in seconds).
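
For example (values worked out by hand, so worth double-checking), rounding
to the nearest 5 minutes:

  select round_timestamp('2012-03-12 01:42:58', 300);
  -- 2012-03-12 01:45:00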

-- Daniele

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Aleksey Tsalolikhin
On Tue, Mar 6, 2012 at 7:05 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
  We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x

  The origin database data/base directory is 197 GB in size.

  The slave database data/base directory is 562 GB in size and is
  over 75% filesystem utilization which has set off the disk free siren.

  My biggest table* measures 154 GB on the origin, and 533 GB on
  the slave.  (*As reported by

  SELECT relname as Table, pg_size_pretty(pg_total_relation_size(relid))
   As Size from pg_catalog.pg_statio_user_tables
   ORDER BY pg_total_relation_size(relid) DESC;
  )

I ran VACUUM FULL on this table, but it is still over 500 GB in size.
And growing...
I'm up to 77% utilization on the filesystem.

check_postgres --action=bloat now returns OK.  So it's not bloat.
What else could it be?

Best,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Joshua D. Drake


On 03/07/2012 06:27 PM, Aleksey Tsalolikhin wrote:


  SELECT relname as Table, pg_size_pretty(pg_total_relation_size(relid))
   As Size from pg_catalog.pg_statio_user_tables
   ORDER BY pg_total_relation_size(relid) DESC;
  )


I ran VACUUM FULL on this table, but it is still over 500 GB in size.
And growing...
I'm up to 77% utilization on the filesystem.

check_postgres --action=bloat now returns OK.  So it's not bloat.
What else could it be?


Try disabling replication on that table, clustering the table, and then 
re-enabling replication (rough sketch below). I would have to double-check, 
but I think check_postgres --action=bloat only checks for dead space, not 
usable space, so you could actually still have bloat, just bloat that is usable.
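
Roughly, with hypothetical table and index names (CLUSTER rewrites the table
in index order, which also reclaims the dead space):

  CLUSTER my_big_table USING my_big_table_pkey;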


Alternatively you could disable replication on that table, truncate the 
table, and then re-enable replication for that table. A concern is that 
it is a large table regardless, which means you are going to hold a 
transaction open to refill it.



JD




Best,
Aleksey




--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [Slony1-general] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Aleksey Tsalolikhin
Dear Joshua,

  You wrote:

 Try disabling replication on that table and clustering the table and then 
 re-enabling
 replication.
...
 Alternatively you could disable replication on that table, truncate the 
 table, and then
 re-enable replication for that table. A concern would be is that it is a 
 large table
 regardless, which means you are going to hold open a transaction to refill it.

I don't see any way to disable replication on a table in Slony.   I do
see I can remove
a table from the replication set, and then add it back in.  Is that
what you meant, or
am I missing something?

I ask because I know when a table is added to a replication set, it is
copied over in
full from origin to slave, and since this table is huge, I'll need to
schedule a maintenance
window to minimize impact on production.

Yours truly,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FDWs, foreign servers and user mappings

2012-03-07 Thread Shigeru Hanada
(2012/03/08 6:16), Joe Abbate wrote:
 Does that make sense?  And if so, will it make sense in the future
 (considering potential FDW developments)?

I think that makes sense, and will keep making sense, because the SQL/MED
standard mentions schemas only for foreign tables (in its 4.12 SQL-schemas
section).

FYI, pgAdmin III shows them as a tree like:

Database
  FDW
Server
  User Mapping
  Schema
Foreign Table

 A related question was whether user mapping options, which may include
 sensitive data such as passwords, should be output by default.  I'm not
 sure if this should extend to other FDW-related options, since a server
 option could presumably be a URI that includes logon information.

FDW options of user mappings are hidden from non-superusers for security
reasons.  So I think it's reasonable to show every option that is visible
to the user dbtoyaml is invoked as.

I'm not sure about other object types, but IMO sensitive information such
as a URI which includes a password should be stored in user mappings rather
than in servers.

Regards,
-- 
Shigeru Hanada

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Aleksey Tsalolikhin
On Wed, Mar 7, 2012 at 8:12 PM, Joshua D. Drake j...@commandprompt.com wrote:

 check_postgres --action=bloat returns OK [after VACUUM FULL].  So it's not 
 bloat.
 What else could it be?

 I would have to double check but I think
 check_postgres --action=bloat only checks for dead space, not usable space,
 so you could actually still have bloat, just bloat that is usable.

This is how check_postgres checks.  How would I check for usable
bloat, to confirm
that that's what I am running into?  What is usable bloat, anyway?
(Is there some write-up
on it?)


SELECT
  current_database() AS db, schemaname, tablename,
  reltuples::bigint AS tups, relpages::bigint AS pages, otta,
  ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0
    ELSE sml.relpages/otta::numeric END,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,
  CASE WHEN relpages < otta THEN '0 bytes'::text
    ELSE (bs*(relpages-otta))::bigint || ' bytes' END AS wastedsize,
  iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,
  ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0
    ELSE ipages/iotta::numeric END,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,
  CASE WHEN ipages < iotta THEN '0 bytes'
    ELSE (bs*(ipages-iotta))::bigint || ' bytes' END AS wastedisize,
  CASE WHEN relpages < otta THEN
    CASE WHEN ipages < iotta THEN 0 ELSE ipages-iotta::bigint END
    ELSE CASE WHEN ipages < iotta THEN relpages-otta::bigint
      ELSE relpages-otta::bigint + ipages-iotta::bigint END
  END AS totalwastedbytes
FROM (
  SELECT
    schemaname, tablename, cc.reltuples, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,
    COALESCE(c2.relname,'?') AS iname,
    COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0)
      AS iotta -- very rough approximation, assumes all cols
  FROM (
    SELECT
      ma,bs,schemaname,tablename,
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        schemaname, tablename, hdr, ma, bs,
        SUM((1-null_frac)*avg_width) AS datawidth,
        MAX(null_frac) AS maxfracsum,
        hdr+(
          SELECT 1+count(*)/8
          FROM pg_stats s2
          WHERE null_frac<>0 AND s2.schemaname = s.schemaname
            AND s2.tablename = s.tablename
        ) AS nullhdr
      FROM pg_stats s, (
        SELECT
          (SELECT current_setting('block_size')::numeric) AS bs,
          CASE WHEN SUBSTRING(SPLIT_PART(v, ' ', 2) FROM '#"[0-9]+.[0-9]+#"%' for '#')
            IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ 'mingw32' OR v ~ '64-bit' THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  JOIN pg_class cc ON cc.relname = rs.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND
    nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml
WHERE sml.relpages - otta > 0 OR ipages - iotta > 10
ORDER BY totalwastedbytes DESC LIMIT 10

Yours,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] replay_location meaning in pg_stat_replication

2012-03-07 Thread François Beausoleil
Hi all,  

I'm just wondering what the numbers really mean. The output is 2FA/B3AFB890, 
for example. Do these numbers represent something meaningful I can use? Bytes, 
offsets within files, what? How do you monitor the difference between 
write_location and replay_location? It seems obvious the values are hex. 
Does anybody have code they want to share?
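
For what it's worth, one simple check I've seen suggested -- run on the
standby, and purely time-based rather than location-based -- is:

  select now() - pg_last_xact_replay_timestamp() as replication_delay;

but that doesn't explain what the location numbers themselves mean.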

Section 25.2.5.2 in the 9.1 docs states:

You can retrieve a list of WAL sender processes via the pg_stat_replication 
(http://www.postgresql.org/docs/9.1/static/monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE)
 view. Large differences between pg_current_xlog_location and sent_location 
field might indicate that the master server is under heavy load, while 
differences between sent_location and pg_last_xlog_receive_location on the 
standby might indicate network delay, or that the standby is under heavy load.

What is a large difference in this context?

Thanks!
François



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Timeline switch after rsync of *offline* pgdata?

2012-03-07 Thread hubert depesz lubaczewski
Hi,
I have a script, that does, daily:
pg_ctl -D /data1 -s -m fast stop # data1 is working, rw database
pg_ctl -D /data2 -s -m fast stop # data2 is SR slave for another database
rsync -a --delete /data2/ /data1/
rm -rf /data1/pg_log/*

\print -u1 "$(date '+%Y-%m-%d %T %Z'): Pg_controldata for /data2:"
/usr/pgsql-9.0/bin/pg_controldata /data2

\print -u1 "$(date '+%Y-%m-%d %T %Z'): Pg_controldata for /data1:"
/usr/pgsql-9.0/bin/pg_controldata /data1

As you can see, it is a full rsync of the offline PGs.

But the pg_controldata outputs are puzzling:

2012-03-08 06:18:58 GMT: Pg_controldata for /data2:
pg_control version number:903
Catalog version number:   201008051
Database system identifier:   5554952960675921391
Database cluster state:   shut down in recovery
pg_control last modified: Thu Mar  8 05:35:02 2012
Latest checkpoint location:   CEA/29C835C0
Prior checkpoint location:CE9/FBE2C010
Latest checkpoint's REDO location:CEA/1F0028D0
Latest checkpoint's TimeLineID:   2
Latest checkpoint's NextXID:  0/759254939
Latest checkpoint's NextOID:  1817085
Latest checkpoint's NextMultiXactId:  10319293
Latest checkpoint's NextMultiOffset:  29852536
Latest checkpoint's oldestXID:654
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  759252601
Time of latest checkpoint:Thu Mar  8 05:31:09 2012
Minimum recovery ending location: CEA/30032028
Backup start location:0/0
Current wal_level setting:hot_standby
Current max_connections setting:  1024
Current max_prepared_xacts setting:   10
Current max_locks_per_xact setting:   64
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value

2012-03-08 06:18:58 GMT: Pg_controldata for /data1:
pg_control version number:903
Catalog version number:   201008051
Database system identifier:   5554952960675921391
Database cluster state:   shut down
pg_control last modified: Thu Mar  8 05:35:02 2012
Latest checkpoint location:   CEA/A020
Prior checkpoint location:CEA/9F20
Latest checkpoint's REDO location:CEA/A020
Latest checkpoint's TimeLineID:   3
Latest checkpoint's NextXID:  0/757048930
Latest checkpoint's NextOID:  1819890
Latest checkpoint's NextMultiXactId:  10282145
Latest checkpoint's NextMultiOffset:  29736818
Latest checkpoint's oldestXID:654
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  0
Time of latest checkpoint:Thu Mar  8 05:35:02 2012
Minimum recovery ending location: 0/0
Backup start location:0/0
Current wal_level setting:minimal
Current max_connections setting:  1024
Current max_prepared_xacts setting:   10
Current max_locks_per_xact setting:   64
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value

The important thing is that the Pg in data1 incremented its timeline.

Pg is 9.0.6, on CentOS 5.5.

The stop of the PGs happens via an init.d script, which runs the lines
shown above, and the init script ends with OK.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Stuart Bishop
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
  We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x

  The origin database data/base directory is 197 GB in size.

  The slave database data/base directory is 562 GB in size and is
  over 75% filesystem utilization which has set off the disk free siren.

  My biggest table* measures 154 GB on the origin, and 533 GB on
  the slave.  (*As reported by

  SELECT relname as Table, pg_size_pretty(pg_total_relation_size(relid))
   As Size from pg_catalog.pg_statio_user_tables
   ORDER BY pg_total_relation_size(relid) DESC;
  )

  I took a peek at this table on the slave using pgadmin3. The table
  has auto-vacuum enabled, and TOAST autovacuum enabled.

  There are 8.6 million live tuples, and 1.5 million dead tuples.

  Last autovacuum was over a month ago.

  Last autoanalyze was 3 hours ago.

  Table size is 4 Gigs, and TOAST table size is 527 Gigs.
  Indexes size is 3 Gigs.

  Autovacuum threshold is 20%, and the table is just under that threshold.

  I ran vacuum analyze verbose.  But the filesystem is still at 76%
 utilization.
  In fact, now, the data/base directory has grown to 565 GB.

  Why is my slave bigger than my master?  How can I compact it, please?

Do you have a long-running transaction on the slave? VACUUM will not
reclaim space from rows that died after the longest-running transaction
began.

You need to use the CLUSTER command to compact it, or VACUUM FULL
followed by a REINDEX if you don't have enough disk space to run
CLUSTER. And neither of these will do anything if the space is still
live because some old transaction might still need to access the old
tuples.
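
To check for that on an 8.4 slave, a query along these lines (just a sketch)
should show the oldest open transactions:

  select procpid, usename, xact_start, current_query
  from pg_stat_activity
  where xact_start is not null
  order by xact_start
  limit 5;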


-- 
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general