Re: Fwd: [GENERAL] Question on Trigram GIST indexes

2013-01-22 Thread Tom Lane
ERR ORR  writes:
>> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree
>> index as it should.
>> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the
>> GIST index but do a full table scan instead.

Are you sure it "should" use the index for that?  That query doesn't
look very selective to me --- it might well be deciding that a seqscan
is cheaper.  You could try forcing the issue with enable_seqscan = off
to see if the query is really unable to match the index, or it just
doesn't like the cost estimate.
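
For anyone wanting to reproduce the test, a minimal sketch (the table and
index names here are illustrative, adapted from the quoted query):

```sql
-- Requires the pg_trgm extension (LIKE support via trigram indexes
-- was added in PostgreSQL 9.1).
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX tst_payload_trgm_idx
    ON tst USING gist ("TST_PAYLOAD" gist_trgm_ops);

-- Disable seqscans in this session only, then inspect the plan:
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM tst WHERE "TST_PAYLOAD" LIKE '%EAT%';
RESET enable_seqscan;
-- An Index/Bitmap scan under these settings means the index *can* be
-- used, and the earlier seqscan was purely a cost-based decision.
```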

> Would it help to `ALTER DATABASE set lc_collate = 'C'`, supposing that is
> possible? (Oracle doesn't allow that iirc)

FWIW, I think you do want the index to have the database's default
collation, otherwise it could only match LIKE clauses that explicitly
specify the same non-default collation.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the impact of "varchar_pattern_ops" on performance and/or memory

2013-01-22 Thread Edson Richter

On 22/01/2013 18:04, Tom Lane wrote:

Edson Richter  writes:

Almost all indexed columns of kind varchar in my database require
"varchar_pattern_ops" op class in order to make my LIKE queries more
optimized.
Is there any way to define that this operator class is the default for
my database for varchar columns?

No.  (Well, you could go and hack the catalog entries to mark it as the
default, but things would fail rather badly whenever you dump and reload
the database, because pg_dump won't dump changes to built-in objects.)


Can I change this in template database, and then get the benefit of 
having these changes propagated to newly created databases?





What would be the impact in terms of performance and memory consumption?

It's probably cheaper, actually, than the regular strcoll-based
comparisons.

BTW, have you considered whether you could run your database in C locale
and thus dodge the whole problem?  In C locale there's no difference
between this opclass and text_ops.


Using C locale, would I face problems with Portuguese Brazilian 
characters like ã, ç, é?


Thanks, Tom!


Edson



regards, tom lane








Re: [GENERAL] What is the impact of "varchar_pattern_ops" on performance and/or memory (was: What is impact of "varchar_ops")?

2013-01-22 Thread Tom Lane
Edson Richter  writes:
> Almost all indexed columns of kind varchar in my database require 
> "varchar_pattern_ops" op class in order to make my LIKE queries more 
> optimized.

> Is there any way to define that this operator class is the default for 
> my database for varchar columns?

No.  (Well, you could go and hack the catalog entries to mark it as the
default, but things would fail rather badly whenever you dump and reload
the database, because pg_dump won't dump changes to built-in objects.)

> What would be the impact in terms of performance and memory consumption?

It's probably cheaper, actually, than the regular strcoll-based
comparisons.

BTW, have you considered whether you could run your database in C locale
and thus dodge the whole problem?  In C locale there's no difference
between this opclass and text_ops.
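
To make the distinction concrete, here is a sketch (the table and column
names are invented): the opclass is a property of each index, not of the
database.

```sql
CREATE TABLE customers (name varchar NOT NULL);  -- hypothetical table

-- Serves LIKE 'abc%' and regex prefix searches in any locale:
CREATE INDEX customers_name_pattern_idx
    ON customers (name varchar_pattern_ops);

-- Still needed for <, <=, >, >=, ORDER BY with the default collation:
CREATE INDEX customers_name_idx ON customers (name);

-- In C locale the two opclasses compare identically, so a single
-- default-opclass index covers both kinds of queries.
```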

regards, tom lane




Re: [GENERAL] Running update in chunks?

2013-01-22 Thread Steve Crawford

On 01/22/2013 10:59 AM, Alvaro Herrera wrote:

Steve Crawford wrote:

On 01/21/2013 05:02 PM, Tim Uckun wrote:

I already posted the schema earlier. It's a handful of integer fields
with one hstore field.

Oh well. I can't find it but maybe it got lost in shipping or eaten
by a spam filter.

This is what we have the archives for:

http://www.postgresql.org/message-id/flat/50fedf66.7050...@pinpointresearch.com#50fedf66.7050...@pinpointresearch.com

The original message is at the top of the page (obviously).

Didn't notice that the information was over on the github site (which, 
of course, also makes it impossible to search for in my email and 
unavailable to the mail archives for those wishing to view it in the 
future).


Cheers,
Steve





Re: [GENERAL] What is the impact of "varchar_pattern_ops" on performance and/or memory

2013-01-22 Thread Edson Richter

On 22/01/2013 15:57, Edson Richter wrote:
I'm rephrasing the question I posted yesterday, because I have used 
the wrong terminology and caused confusion (and as a result, got no 
response to my question).


Almost all indexed columns of kind varchar in my database require 
"varchar_pattern_ops" op class in order to make my LIKE queries more 
optimized.


Is there any way to define that this operator class is the default for 
my database for varchar columns?


What would be the impact in terms of performance and memory consumption?


Thanks,

Edson Richter



Found a (partial) answer!

RTFM, as the wise man said. If you haven't found it, you haven't read 
enough.


If anyone else needs to do the same, see the manual: 
http://www.postgresql.org/docs/current/static/sql-alteropclass.html


BUT, I have not found how to make an existing operator class the default 
for the data type without dropping and recreating it.


Is it there, or are the docs outdated?

Thanks,

Edson Richter






Re: [GENERAL] Restore 1 Table from pg_dumpall?

2013-01-22 Thread Steve Crawford

On 01/22/2013 09:57 AM, Rich Shepard wrote:
  I neglected to dump a single table before adding additional rows to 
it via

psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days 
ago.
The file is 386M in size and emacs is taking a very long time to move 
around

in it.

  Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.



If you dump using the custom format you can use pg_restore to output the 
schema, data or both for a specified table.


If you have the basic text dump, then sed works reasonably well:

Table definition:
sed -n '/^CREATE TABLE yourtable (/,/^);/p' yourdump > tableschema.psql

Data:
sed -n '/^COPY yourtable (/,/^\\\./p' yourdump > tabledata.psql

This is imperfect as it doesn't include the various indexes, 
constraints, sequences, etc. but it gets you most of the way there. You 
can probably extract the relevant alterations with:

sed -n '/^ALTER TABLE .*yourtable$/,/;$/p'

And you can grep for index creation.
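
For comparison, the custom-format route mentioned above might look like
this (database and table names are placeholders):

```shell
# Per-database custom-format dump (pg_dumpall cannot produce this format):
pg_dump -Fc mydb > mydb.dump

# Emit one table's schema and data as plain SQL:
pg_restore --table=yourtable mydb.dump > yourtable.sql

# Or restore the table straight into a database:
pg_restore --table=yourtable -d mydb mydb.dump
```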

Cheers,
Steve





Re: [GENERAL] PostgreS Security Concern

2013-01-22 Thread Vincent Veyron
Le mardi 22 janvier 2013 à 18:08 +0800, Roela B Montecillo a écrit :
> > Hi and good afternoon.
> >
> > I would like to inquire if you can help me search about a third party
> > software or any solution that can handle PCI-DSS requirement on PostgreS
> > database access and privileges. I am free to discuss options in case you
> > already have a known solution that we can implement.
> >


Braintree uses Postgresql :

https://www.braintreepayments.com/braintrust/scaling-postgresql-at-braintree-four-years-of-evolution


-- 
Salutations, Vincent Veyron
http://marica.fr/site/demonstration
Logiciel de gestion des contentieux juridiques et des sinistres d'assurance





Re: [GENERAL] Running update in chunks?

2013-01-22 Thread Alvaro Herrera
Steve Crawford wrote:
> On 01/21/2013 05:02 PM, Tim Uckun wrote:
> >I already posted the schema earlier. It's a handful of integer fields
> >with one hstore field.
> 
> Oh well. I can't find it but maybe it got lost in shipping or eaten
> by a spam filter.

This is what we have the archives for:

http://www.postgresql.org/message-id/flat/50fedf66.7050...@pinpointresearch.com#50fedf66.7050...@pinpointresearch.com

The original message is at the top of the page (obviously).

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Running update in chunks?

2013-01-22 Thread Steve Crawford

On 01/21/2013 06:21 PM, Kevin Grittner wrote:

Kevin Grittner wrote:
...
  shared_buffers | 32MB   | configuration file
...

I did absolutely no tuning from the default configuration.

But Tim has his shared_buffers set to 1600kB 
("shared_buffers";"1600kB";"configuration file") or roughly 1/20 of the 
typical default value, which is a very low starting value anyway, on a 
machine populated with 8GB RAM.


I'd like to see how it runs with a more reasonable shared_buffers 
setting. At a very minimum the 32MB default.
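
As a rough illustration only (the right value depends on the workload and
what else runs on the machine), the relevant postgresql.conf lines might be:

```
# postgresql.conf -- illustrative values for a machine with 8GB RAM
shared_buffers = 2GB      # a common starting point is ~25% of RAM
                          # (changing this requires a server restart)
```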


Cheers,
Steve





Re: [GENERAL] Running update in chunks?

2013-01-22 Thread Steve Crawford

On 01/21/2013 05:02 PM, Tim Uckun wrote:

I already posted the schema earlier. It's a handful of integer fields
with one hstore field.


Oh well. I can't find it but maybe it got lost in shipping or eaten by a 
spam filter.





On Tue, Jan 22, 2013 at 1:23 PM, Steve Crawford
 wrote:

select * from pg_stat_user_tables where relname='yourtable';



Messy output
Don't know if you are in psql but if so, expanded display works for 
this. I.e.:

steve@[local] => \x
Expanded display is on.
steve@[local] => select * from pg_stat_user_tables where relname='footest';
-[ RECORD 1 ]-+--
relid | 781691
schemaname| public
relname   | footest
seq_scan  | 3
seq_tup_read  | 609
idx_scan  |
idx_tup_fetch |
n_tup_ins | 609
n_tup_upd | 0
n_tup_del | 0
n_tup_hot_upd | 0
n_live_tup| 301
n_dead_tup| 0
last_vacuum   |
last_autovacuum   |
last_analyze  |
last_autoanalyze  | 2012-12-19 08:42:23.347368-08
vacuum_count  | 0
autovacuum_count  | 0
analyze_count | 0
autoanalyze_count | 2




"relid","schemaname","relname","seq_scan","seq_tup_read","idx_scan","idx_tup_fetch","n_tup_ins","n_tup_upd","n_tup_del","n_tup_hot_upd","n_live_tup","n_dead_tup","last_vacuum","last_autovacuum","last_analyze","last_autoanalyze","vacuum_count","autovacuum_count","analyze_count","autoanalyze_count"
26710,"chrysler","imports",274,25280539,4,2749,98926,5757462,92,327542,98834,0,"2013-01-22
12:28:29.01505+13","2013-01-22 12:32:29.249588+13","2013-01-22
12:28:29.173772+13","2013-01-22 12:32:44.123493+13",3,30,3,24


So at least autovacuum is running (and some manual vacuum and analyze as 
well).


Cheers,
Steve





Re: [GENERAL] Restore 1 Table from pg_dumpall?

2013-01-22 Thread Kevin Grittner
Rich Shepard wrote:

> Is there a way I can extract a single table's schema and data from the
> full backup? If so, I can then drop the fubar'd table and do it correctly
> this time.

If you have a server with enough free space, you could restore
the whole cluster and then selectively dump what you need.

-Kevin




Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Kevin Grittner
Cliff de Carteret wrote:

> I have now deleted the copy on the remote wal_archive folder and the
> archiving is now functioning and sending the logs from the local to the
> remote folder. The remote database does not startup and the following is in
> the log:
> 
> LOG: database system was shut down in recovery at 2013-01-22 10:54:48 GMT
> LOG: entering standby mode
> LOG: restored log file "000100AB0051" from archive
> LOG: invalid resource manager ID in primary checkpoint record
> PANIC: could not locate a valid checkpoint record
> LOG: startup process (PID 22350) was terminated by signal 6: Aborted
> LOG: aborting startup due to startup process failure
> 
> 000100AB0051 is in my remote database's pg_xlog folder

Any chance that there was a pg_start_backup() call on the master without
a matching pg_stop_backup() call?
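
One way to check for that condition by hand (assuming a default
data-directory layout) is to look for a leftover backup_label file on the
master:

```shell
# An exclusive pg_start_backup() writes backup_label into PGDATA and
# pg_stop_backup() removes it; a stale file suggests an unfinished backup.
ls -l "$PGDATA/backup_label"
```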

-Kevin




Re: [GENERAL] Restore 1 Table from pg_dumpall? [RESOLVED]

2013-01-22 Thread Rich Shepard

On Tue, 22 Jan 2013, Joshua D. Drake wrote:


Rich, the main problem is using pg_dumpall. Unfortunately pg_dumpall has
not kept up with all the other advances Postgres has had in the last
decade. To set up dump based backups properly I suggest reviewing:

http://www.commandprompt.com/blogs/joshua_drake/2010/07/a_better_backup_with_postgresql_using_pg_dump/


  Thanks, Josh, I will read that.

  I have only 4 databases so I can easily back up each one individually. I
make system backups daily using dirvish but understand that it's better to
backup the databases separately with the appropriate tools.

Regards,

Rich





Re: [GENERAL] Restore 1 Table from pg_dumpall? [RESOLVED]

2013-01-22 Thread Joshua D. Drake


On 01/22/2013 10:07 AM, Rich Shepard wrote:


On Tue, 22 Jan 2013, Rich Shepard wrote:


 Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.


   My solution: view the file in the pager I use (less), then copy relevant
lines to another file opened in a text editor (joe).


Rich, the main problem is using pg_dumpall. Unfortunately pg_dumpall has 
not kept up with all the other advances Postgres has had in the last 
decade. To set up dump based backups properly I suggest reviewing:


http://www.commandprompt.com/blogs/joshua_drake/2010/07/a_better_backup_with_postgresql_using_pg_dump/

Sincerely,

Joshua D. Drake




Rich






--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [GENERAL] Restore 1 Table from pg_dumpall? [RESOLVED]

2013-01-22 Thread Rich Shepard

On Tue, 22 Jan 2013, Rich Shepard wrote:


 Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.


  My solution: view the file in the pager I use (less), then copy relevant
lines to another file opened in a text editor (joe).

Rich





[GENERAL] What is the impact of "varchar_pattern_ops" on performance and/or memory (was: What is impact of "varchar_ops")?

2013-01-22 Thread Edson Richter
I'm rephrasing the question I posted yesterday, because I have used the 
wrong terminology and caused confusion (and as a result, got no 
response to my question).


Almost all indexed columns of kind varchar in my database require 
"varchar_pattern_ops" op class in order to make my LIKE queries more 
optimized.


Is there any way to define that this operator class is the default for 
my database for varchar columns?


What would be the impact in terms of performance and memory consumption?


Thanks,

Edson Richter




[GENERAL] Restore 1 Table from pg_dumpall?

2013-01-22 Thread Rich Shepard

  I neglected to dump a single table before adding additional rows to it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days ago.
The file is 386M in size and emacs is taking a very long time to move around
in it.

  Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.

Rich





Re: [GENERAL] Yet Another Timestamp Question: Time Defaults

2013-01-22 Thread Steve Crawford

On 01/22/2013 09:37 AM, Gavin Flower wrote:

On 23/01/13 06:30, Gavan Schneider wrote:

On 01/21/2013 07:40 PM, Steve Crawford wrote:

[...]
 (While I suppose some politician somewhere could decide that 
"fall-back" could cross date boundaries, I am unaware of any place 
that has ever done something so pathological as to have the same date 
occur in two non-contiguous pieces once every year.)

[...]

Don't tempt the gods!!!  :-)



Sorry. :)

Cheers,
Steve




Re: [GENERAL] Yet Another Timestamp Question: Time Defaults

2013-01-22 Thread Gavin Flower

On 23/01/13 06:30, Gavan Schneider wrote:

On 01/21/2013 07:40 PM, Gavan Schneider wrote:

[...]
 (While I suppose some politician somewhere could decide that 
"fall-back" could cross date boundaries, I am unaware of any place 
that has ever done something so pathological as to have the same date 
occur in two non-contiguous pieces once every year.)

[...]

Don't tempt the gods!!!  :-)


Cheers,
Gavin




Re: [GENERAL] Yet Another Timestamp Question: Time Defaults

2013-01-22 Thread Gavan Schneider

On 01/21/2013 07:40 PM, Gavan Schneider wrote:

...
The points raised by Adrian have prompted some more research on my 
part and I am intrigued to learn that on one day of the year in many 
countries (e.g., Brazil) where daylight conversion happens over 
midnight the local-time version of midnight as start of day does not 
exist. Basically the last day of unadjusted time ends at midnight and 
rolls directly into 01:00:00 the next day (i.e., time 00:00:00 never 
happens on this one day). So the current date-> date+time system must 
already have some added complexity/overhead to check for this rare 
special case. (If not, there's a bug needs fixing!)


Basically midnight is not safe as a target entity once timezones and 
daylight saving get involved. Midday, on the other hand, is a very 
solid proposition, no checks required, 12:00:00 will happen in all 
time zones on every day of the year! Basically nobody messes with 
their clocks in the middle of the day.


So restating:
'2013-10-20'::timestamp ==> 2013-10-20 12:00:00 can never be 
wrong; but,
'2013-10-20'::timestamp ==> 2013-10-20 00:00:00 is wrong in some 
places.


"Wrong" times occur in every time zone that changes offsets at various 
points of the year. Here in California, 02:00:00-02:59:59 March 10, 2013 
are "wrong" but PostgreSQL uses a reasonable interpretation to yield a 
point-in-time:


select '2013-03-10 0230'::timestamptz;
      timestamptz
------------------------
 2013-03-10 03:30:00-07

And it does the exact same thing in Brazil:

set timezone to 'Brazil/West';
select '1993-10-17 00:00'::timestamptz;
      timestamptz
------------------------
 1993-10-17 01:00:00-03

select '1993-10-17'::timestamptz;
      timestamptz
------------------------
 1993-10-17 01:00:00-03

Note, too, that in both zones when the input is interpreted in the local 
zone and displayed in the local zone the date-portion of the 
point-in-time is the same as the input date. (While I suppose some 
politician somewhere could decide that "fall-back" could cross date 
boundaries, I am unaware of any place that has ever done something so 
pathological as to have the same date occur in two non-contiguous pieces 
once every year.)


Cheers,
Steve





Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Cliff de Carteret
On 22 January 2013 16:43, Kevin Grittner  wrote:

> [Please keep the list copied, and put your reply in-line instead
> of at the top.]
>
> Cliff de Carteret wrote:
> > On 22 January 2013 16:07, Kevin Grittner  wrote:
> >
> >> Cliff de Carteret wrote:
> >>
> >>> The current setup has been working successfully for several years
> >>> until the recent database crash
> >>
> >> What file does the server log say it is trying to archive? What
> >> error are you getting? Does that filename already exist on the
> >> archive (or some intermediate location used by the archive command
> >> or script)?
>
> The server log is (repeated constantly):
> >
> > LOG: archive command failed with exit code 1
> > DETAIL: The failed archive command was: test ! -f
> > /opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp
> > pg_xlog/000100A80078
> > /opt/postgres/remote_pgsql/wal_archive/000100A80078
> > WARNING: transaction log file "000100A80078" could not be
> > archived: too many failures
> >
> > The file 000100A80078 exists in the remote archive's
> > wal_archive directory. I read a post saying to copy the file over to the
> > archive and then delete the .ready file to get postgres to move onto the
> > next file but this ended up logging out saying that a log file was
> missing.
> > There are more recent files in this directory but they end at the point
> > where I reverted all of the changes I made last night when time was
> running
> > out and the database had to be put back to a known state.
>
> I would have deleted (or renamed) the copy in the archive
> directory. Archiving should have then resumed and cleaned up the
> pg_xlog directory.
>
I have now deleted the copy on the remote wal_archive folder and the
archiving is now functioning and sending the logs from the local to the
remote folder. The remote database does not startup and the following is in
the log:

LOG:  database system was shut down in recovery at 2013-01-22 10:54:48 GMT
LOG:  entering standby mode
LOG:  restored log file "000100AB0051" from archive
LOG:  invalid resource manager ID in primary checkpoint record
PANIC:  could not locate a valid checkpoint record
LOG:  startup process (PID 22350) was terminated by signal 6: Aborted
LOG:  aborting startup due to startup process failure

000100AB0051 is in my remote database's pg_xlog folder

Thanks for your help already!



>
> -Kevin
>


Re: [GENERAL] Yet Another Timestamp Question: Time Defaults

2013-01-22 Thread Steve Crawford

On 01/21/2013 08:56 PM, Gavan Schneider wrote:

On Monday, January 21, 2013 at 10:53, Steve Crawford wrote:


On 01/21/2013 02:48 PM, Gavan Schneider wrote:


Taking another tangent I would much prefer the default time to be 
12:00:00 for the conversion of a date to timestamp(+/-timezone).


Propose: '2013-12-25'::timestamp ==> 2013-12-25 12:00:00

The benefit of the midday point is that the actual date will not 
change when going through the timezone conversion.


Just like it doesn't change now? (I just checked against all of the
more than 1,100 zones in PG without seeing a problem.)

I find this result strange to say the least... our conversation is 
straddling Monday(you)/Tuesday(me). We shared the time point 
2013-01-22 01:30 UTC, but called it different things, viz., 2013-01-22 
12:30 and 2013-01-21 17:30.


We can call it all sorts of things but it is, in fact, the same point in 
time. What you have done is omitted a critical piece of information 
necessary for a "fully qualified" point-in-time - the time zone.


Now if I tell my wife I'll be home by 6 she says, "OK, see you then," 
not "do you mean AM or PM? Er, Pacific time? Today?" In other words she 
makes reasonable assumptions about what point-in-time I am conveying.


(Aside...It reminds me of the joke about the father admonishing his 
daughter's date to have her back by ten-fifteen to which he responds, 
"Mid-October? Cool!")


A date alone can be interpreted as any of a number of points-in-time 
covering a roughly two-day range:


select '2013-01-22 23:59 posix/Pacific/Midway'::timestamptz -
       '2013-01-22 00:00 Pacific/Kiritimati'::timestamptz;
    ?column?
----------------
 1 day 01:00:00

So in order to calculate a single point-in-time, PostgreSQL, like my 
wife, has to make certain assumptions regarding the missing information 
(and fortunately PostgreSQL follows the SQL spec in this regard). The 
assumptions it makes are:


1) Interpret the date in local time not the date somewhere else in the 
world.


2) Interpret the missing time portion as 00:00:00.

You now have a point-in-time, not a date. You can display that 
point-in-time in whatever timezone you wish and some will have the same 
date as your local date while others will not. Assuming that the time is 
12:00:00 rather than 00:00:00 does not change that fact:


--localtime is US Pacific
select '2013-01-22 12:00'::timestamptz at time zone 'Pacific/Kiritimati';
      timezone
---------------------
 2013-01-23 10:00:00

Cheers,
Steve





Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Kevin Grittner
[Please keep the list copied, and put your reply in-line instead
of at the top.]

Cliff de Carteret wrote:
> On 22 January 2013 16:07, Kevin Grittner  wrote:
> 
>> Cliff de Carteret wrote:
>>
>>> The current setup has been working successfully for several years
>>> until the recent database crash
>>
>> What file does the server log say it is trying to archive? What
>> error are you getting? Does that filename already exist on the
>> archive (or some intermediate location used by the archive command
>> or script)?

> The server log is (repeated constantly):
> 
> LOG: archive command failed with exit code 1
> DETAIL: The failed archive command was: test ! -f
> /opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp
> pg_xlog/000100A80078
> /opt/postgres/remote_pgsql/wal_archive/000100A80078
> WARNING: transaction log file "000100A80078" could not be
> archived: too many failures
> 
> The file 000100A80078 exists in the remote archive's
> wal_archive directory. I read a post saying to copy the file over to the
> archive and then delete the .ready file to get postgres to move onto the
> next file but this ended up logging out saying that a log file was missing.
> There are more recent files in this directory but they end at the point
> where I reverted all of the changes I made last night when time was running
> out and the database had to be put back to a known state.

I would have deleted (or renamed) the copy in the archive
directory. Archiving should have then resumed and cleaned up the
pg_xlog directory.
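
To see why a given segment keeps failing, the archive command from the
log can be run by hand as the postgres user (paths copied from the report
above; the data-directory path is a placeholder):

```shell
cd /path/to/data_directory   # placeholder for the real PGDATA
test ! -f /opt/postgres/remote_pgsql/wal_archive/000100A80078 \
  && cp pg_xlog/000100A80078 \
        /opt/postgres/remote_pgsql/wal_archive/000100A80078
echo $?   # non-zero means archiving will keep failing; 'test ! -f'
          # deliberately refuses to overwrite an existing archive file
```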

-Kevin




Re: [GENERAL] How do I install pgcrypto in postgresql 9.2 on Linux

2013-01-22 Thread Adrian Klaver

On 01/22/2013 01:40 AM, Srinath Reddy wrote:

Hi Team,
Please provide me with step-by-step commands for installing pgcrypto in
PostgreSQL 9.2 on a Linux box.



It depends on how you installed Postgres.

Did you install from source?

Did you use a package?

What distribution?
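
Pending those answers, the common cases might look like this (package
names vary by distribution, so treat them as assumptions to verify):

```shell
# Debian/Ubuntu style packaging:
sudo apt-get install postgresql-contrib-9.2

# RHEL/CentOS with the PGDG repository:
sudo yum install postgresql92-contrib

# From a source build: compile and install the contrib module
#   cd contrib/pgcrypto && make && sudo make install

# Then enable it in each database that needs it (9.1+):
psql -d mydb -c 'CREATE EXTENSION pgcrypto;'
```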


Regards,
Srinath



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Cliff de Carteret
The local wal_archive directory is a directory named "wal_archive" which is
at the same file system level as the data directory. The time stamp states
that it has not been used for several years so it looks to be redundant.

To get the database running again I moved an old postgres installation to a
backups folder which gave me ~2G of space to start up again until I was
able to mount a new disk and move the pg_xlog to a separate partition using
a symbolic link.


On 22 January 2013 16:15, Adrian Klaver  wrote:

> On 01/22/2013 07:57 AM, Cliff de Carteret wrote:
>
>> There are no files in the pg_xlog directory from before the database
>> crash except one file which ends in .backup, there are 759 files. In the
>> pg_xlog/archive_status folder there are 755 files.
>> The local ../wal_archive folder has no files in it at all whereas the
>> remote wal_archive folder has 147 files present.
>>
>
> What is this local wal_archive directory?
> From a previous post:
>
>
> "LOG: archive command failed with exit code 1
> DETAIL: The failed archive command was: test ! -f
> /opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp
> pg_xlog/000100A80078
> /opt/postgres/remote_pgsql/wal_archive/000100A80078
>
> I see only /opt/postgres/remote_pgsql/wal_archive which I assumed was
> the remote. I am not sure where the local one fits in?
>
>
>
>> The remote archive folder is not full and has ~14G whereas the local
>> archive folder was previously full but now it has been moved so it has
>> 100G+
>>
>> The archive directory is a mount and as the user postgres I can copy a
>> file into it successfully.
>>
>> The current setup has been working successfully for several years until
>> the recent database crash
>>
>
> So what did you do to get the database running again?
>
>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


[GENERAL] How do I install pgcrypto in postgresql 9.2 on Linux

2013-01-22 Thread Srinath Reddy
Hi Team,

Please provide me with step-by-step commands for installing pgcrypto in
PostgreSQL 9.2 on a Linux box.

Regards,
Srinath

Re: [GENERAL] cache lookup failed

2013-01-22 Thread Pascal Tufenkji
Dear Tom,

Thank you very much for your prompt reply...

I deleted this oid from pg_depend from both objid and refobjid
The problem disappeared 
I also increased the following specs to avoid having this problem again:
- kernel.shmall
- kernel.shmmax

- shared_buffers
- max_fsm_pages 
- max_fsm_relations 
- effective_cache_size 

Thanks again



Pascal TUFENKJI
Service de Technologie de l'Information
Université Saint-Joseph - Rectorat
Tel: +961 1 421 132
Email: ptufen...@usj.edu.lb

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Tuesday, January 22, 2013 5:53 AM
To: Pascal Tufenkji
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] cache lookup failed

"Pascal Tufenkji"  writes:
> Everything was working fine until a couple of days ago, the database 
> is starting to display the following error more and more:
> cache lookup failed for relation 1852615815

Hm, what PG version is that?  Is 1852615815 anywhere near the range of
existing OIDs in pg_class?

> If I look up for this oid in pg_class I don’t find it, but I do find 
> it in pg_depend. Is that normal ?

No --- at least not when the classid says it's a pg_class OID, as it does
here.  Do you find any pg_depend rows with refobjid = 1852615815?
What about other rows mentioning OID 111893?  Is that OID present in
pg_namespace?  (The 2615 says it's a pg_namespace OID.)

We have seen some previous reports suggesting that once in awhile not all
the catalog entries associated with temp tables get cleaned out at backend
exit.  I'm not sure the cause has ever been conclusively identified, though
if memory serves we have fixed a bug or two that
*might* explain it.

If 1852615815 doesn't seem to be a legitimate OID, it's also possible that
this is just data corruption in pg_depend.

If you can't find any evidence of other issues, it might be reasonable to
conclude that the pg_depend row is leftover junk and delete it manually.
But you should make real sure nothing is linked to it first.

regards, tom lane
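
Tom's suggested checks, written out as catalog queries (the OIDs are the ones
reported in this thread):

```sql
-- Any dependency rows mentioning the suspect OID, in either column?
SELECT * FROM pg_depend
WHERE objid = 1852615815 OR refobjid = 1852615815;

-- Other rows mentioning the companion OID, and whether it is a live
-- namespace (2615 is pg_namespace's own OID in pg_class):
SELECT * FROM pg_depend WHERE objid = 111893 OR refobjid = 111893;
SELECT * FROM pg_namespace WHERE oid = 111893;
```

If nothing else references the row, deleting it manually is the cleanup Tom
describes -- after checking nothing is linked to it first.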



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreS Security Concern

2013-01-22 Thread Roela B Montecillo



Hi and good afternoon.

I would like to inquire whether you can help me find a third-party software
package, or any other solution, that can handle the PCI-DSS requirements on
PostgreSQL database access and privileges. I am happy to discuss options in
case you already have a known solution that we can implement.


Thanks and looking forward to our future discussion.






Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Adrian Klaver

On 01/22/2013 07:57 AM, Cliff de Carteret wrote:

There are no files in the pg_xlog directory from before the database
crash except one file which ends in .backup, there are 759 files. In the
pg_xlog/archive_status folder there are 755 files.
The local ../wal_archive folder has no files in it at all whereas the
remote wal_archive folder has 147 files present.


What is this local wal_archive directory?
From a previous post:

"LOG: archive command failed with exit code 1
DETAIL: The failed archive command was: test ! -f 
/opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp 
pg_xlog/000100A80078 
/opt/postgres/remote_pgsql/wal_archive/000100A80078


I see only /opt/postgres/remote_pgsql/wal_archive which I assumed was 
the remote. I am not sure where the local one fits in?




The remote archive folder is not full and has ~14G whereas the local
archive folder was previously full but now it has been moved so it has 100G+

The archive directory is a mount and as the user postgres I can copy a
file into it successfully.

The current setup has been working successfully for several years until
the recent database crash


So what did you do to get the database running again?








--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Kevin Grittner
Cliff de Carteret wrote:

> The current setup has been working successfully for several years
> until the recent database crash

What file does the server log say it is trying to archive? What
error are you getting? Does that filename already exist on the
archive (or some intermediate location used by the archive command
or script)?

-Kevin


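
Kevin's last question is the key one for the posted archive_command: because
it starts with `test ! -f`, the whole command exits 1 whenever the target file
already exists in the archive, and PostgreSQL keeps retrying. A small
demonstration under /tmp (paths are illustrative only):

```shell
# Simulate an archive that already contains the segment being archived:
mkdir -p /tmp/wal_demo/archive /tmp/wal_demo/pg_xlog
f=000100A80078
touch "/tmp/wal_demo/pg_xlog/$f"
touch "/tmp/wal_demo/archive/$f"     # leftover copy already in the archive

# Same shape as the archive_command in the thread; capture its exit code:
rc=0
test ! -f "/tmp/wal_demo/archive/$f" \
  && cp "/tmp/wal_demo/pg_xlog/$f" "/tmp/wal_demo/archive/$f" \
  || rc=$?
echo "archive_command exit code: $rc"   # prints: archive_command exit code: 1
```

If that is what is happening here, the fix is to find out why a segment with
the same name is already in the archive (e.g. leftovers from the crash
recovery), not to delete .ready files.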


Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Cliff de Carteret
There are no files in the pg_xlog directory from before the database crash
except one file which ends in .backup, there are 759 files. In the
pg_xlog/archive_status folder there are 755 files.
The local ../wal_archive folder has no files in it at all whereas the
remote wal_archive folder has 147 files present.

The remote archive folder is not full and has ~14G whereas the local
archive folder was previously full but now it has been moved so it has 100G+

The archive directory is a mount and as the user postgres I can copy a file
into it successfully.

The current setup has been working successfully for several years until the
recent database crash


On 22 January 2013 15:34, Adrian Klaver  wrote:

> On 01/22/2013 06:59 AM, Cliff de Carteret wrote:
>
>> Hello, thanks for your response.
>>
>> The xlogs that are filling up are in the original data directory and the
>> wal_keep_segments is commented out on my master and slave postgresql.conf
>>
>
> It is expected that there may be a certain increase in WAL files.
>  In particular: checkpoint_segments = 16
>
> http://www.postgresql.org/docs/9.2/static/runtime-config-wal.html#GUC-CHECKPOINT-SEGMENTS
> "
> checkpoint_segments (integer)
> Maximum number of log file segments between automatic WAL checkpoints
> (each segment is normally 16 megabytes). The default is three segments.
> Increasing this parameter can increase the amount of time needed for crash
> recovery. This parameter can only be set in the postgresql.conf file or on
> the server command line.
> "
>
> For more information that explains the above see:
> http://www.postgresql.org/docs/9.2/static/wal-configuration.html
>
> More questions:
>
> Does it look like any WAL files in the pg_xlog directory are being
> recycled?
>
> Re your archive error from your previous post, is your archive directory
> full?
>
> Assuming the archive directory is remote, is it reachable?
>
>
>> Cliff
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Adrian Klaver

On 01/22/2013 06:59 AM, Cliff de Carteret wrote:

Hello, thanks for your response.

The xlogs that are filling up are in the original data directory and the
wal_keep_segments is commented out on my master and slave postgresql.conf


It is expected that there may be a certain increase in WAL files.
 In particular: checkpoint_segments = 16

http://www.postgresql.org/docs/9.2/static/runtime-config-wal.html#GUC-CHECKPOINT-SEGMENTS
"
checkpoint_segments (integer)
Maximum number of log file segments between automatic WAL checkpoints 
(each segment is normally 16 megabytes). The default is three segments. 
Increasing this parameter can increase the amount of time needed for 
crash recovery. This parameter can only be set in the postgresql.conf 
file or on the server command line.

"

For more information that explains the above see:
http://www.postgresql.org/docs/9.2/static/wal-configuration.html

More questions:

Does it look like any WAL files in the pg_xlog directory are being recycled?

Re your archive error from your previous post, is your archive directory 
full?


Assuming the archive directory is remote, is it reachable?



Cliff





--
Adrian Klaver
adrian.kla...@gmail.com


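
The WAL-configuration page linked above gives a steady-state bound on pg_xlog:
roughly (2 + checkpoint_completion_target) * checkpoint_segments + 1 segment
files of 16 MB each. A quick sketch, plugging in the values from the conf
posted in this thread:

```shell
# Steady-state bound on pg_xlog segment files, per the 9.x WAL docs:
# (2 + checkpoint_completion_target) * checkpoint_segments + 1
segs=16; target=0.9     # values from the posted postgresql.conf
files=$(awk -v s="$segs" -v t="$target" 'BEGIN { printf "%d", int((2+t)*s)+1 }')
echo "$files files, about $((files * 16)) MB"   # prints: 47 files, about 752 MB
```

The 759 files Cliff reports are far past that bound, which is consistent with
archiving being stuck rather than with normal checkpoint recycling.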


Re: [GENERAL] Update rule on a view - what am I doing wrong

2013-01-22 Thread Leif Jensen
   Hi Marc,

Thanks a lot. That works fine, and the names 'NEW' and 'OLD' work as expected.

 Leif


- "Marc Schablewski"  wrote:

> Hi Leif,
> 
> Am 22.01.2013 14:34, schrieb Leif Jensen:
> > CREATE update_rule_func( old record, new record ) AS ...  I am told,
> > that I cannot use record for the parameter type. Could you please
>
> You should use your view instead of 'record' as parameter type, i.e.
> CREATE update_rule_func( old V_YOUR_VIEW, new V_YOUR_VIEW ) AS ... .
> Also, I'm not sure if 'new' and 'old' are reserved keywords in
> PostgreSQL, so you might want to choose different names for your
> parameters if you still have trouble with that function.
> 
> Marc




Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Cliff de Carteret
Hello, thanks for your response.

The xlogs that are filling up are in the original data directory and the
wal_keep_segments is commented out on my master and slave postgresql.conf

Cliff


On 22 January 2013 14:48, Adrian Klaver  wrote:

> On 01/22/2013 03:12 AM, Cliff de Carteret wrote:
>
>> My database crashed a couple of days ago during an upgrade several
>> seconds after committing a large transaction to the database. Eventually
>> we found out that this was due to the disk being full as the transaction
>> had created several gigs of data. A day or so later the disk is full
>> again and PostgreSQL crashes due to the pg_xlog file taking up all of
>> the disk space. I have cleaned up the drive to have some extra space which
>> allows PostgreSQL to start again but the xlogs are still increasing. I
>> have two errors in my pg_log:
>>
>> "WARNING: transaction log file "000100A80078" could not be
>> archived: too many failures" and
>>
>> "LOG: archive command failed with exit code 1
>> DETAIL: The failed archive command was: test ! -f
>> /opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp
>> pg_xlog/000100A80078
>> /opt/postgres/remote_pgsql/wal_archive/000100A80078"
>>
>
> I am not seeing it below, but just to be complete is wal_keep_segments set
> to something greater than 0?
>
>
> Just to be clear which xlogs are filling up, the ones in the original data
> directory or in the archive directory?
>
>
>
>
>> Postgres version 9.0.3 conf:
>>
>>
>>
>>
>> wal_level = hot_standby
>> archive_mode = true
>> archive_command = 'test ! -f
>> /opt/postgres/remote_pgsql/wal_archive/%f && cp %p
>> /opt/postgres/remote_pgsql/wal_archive/%f' # command to use to
>> archive a logfile segment
>> archive_timeout = 1800
>> max_wal_senders = 1
>> max_standby_archive_delay = 900s
>> max_standby_streaming_delay = 900s
>> default_statistics_target = 50 # pgtune wizard 2010-11-18
>> maintenance_work_mem = 480MB # pgtune wizard 2010-11-18
>> constraint_exclusion = on # pgtune wizard 2010-11-18
>> checkpoint_completion_target = 0.9 # pgtune wizard 2010-11-18
>> effective_cache_size = 5632MB # pgtune wizard 2010-11-18
>> work_mem = 48MB # pgtune wizard 2010-11-18
>> wal_buffers = 8MB # pgtune wizard 2010-11-18
>> checkpoint_segments = 16 # pgtune wizard 2010-11-18
>> shared_buffers = 1920MB # pgtune wizard 2010-11-18
>> max_connections = 80 # pgtune wizard 2010-11-18
>>
>>
>>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Adrian Klaver

On 01/22/2013 03:12 AM, Cliff de Carteret wrote:

My database crashed a couple of days ago during an upgrade several
seconds after committing a large transaction to the database. Eventually
we found out that this was due to the disk being full as the transaction
had created several gigs of data. A day or so later the disk is full
again and PostgreSQL crashes due to the pg_xlog file taking up all of
the disk space. I have cleaned up the drive to have some extra space which
allows PostgreSQL to start again but the xlogs are still increasing. I
have two errors in my pg_log:

"WARNING: transaction log file "000100A80078" could not be
archived: too many failures" and

"LOG: archive command failed with exit code 1
DETAIL: The failed archive command was: test ! -f
/opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp
pg_xlog/000100A80078
/opt/postgres/remote_pgsql/wal_archive/000100A80078"


I am not seeing it below, but just to be complete is wal_keep_segments 
set to something greater than 0?



Just to be clear which xlogs are filling up, the ones in the original 
data directory or in the archive directory?






Postgres version 9.0.3 conf:



wal_level = hot_standby
archive_mode = true
archive_command = 'test ! -f
/opt/postgres/remote_pgsql/wal_archive/%f && cp %p
/opt/postgres/remote_pgsql/wal_archive/%f' # command to use to
archive a logfile segment
archive_timeout = 1800
max_wal_senders = 1
max_standby_archive_delay = 900s
max_standby_streaming_delay = 900s
default_statistics_target = 50 # pgtune wizard 2010-11-18
maintenance_work_mem = 480MB # pgtune wizard 2010-11-18
constraint_exclusion = on # pgtune wizard 2010-11-18
checkpoint_completion_target = 0.9 # pgtune wizard 2010-11-18
effective_cache_size = 5632MB # pgtune wizard 2010-11-18
work_mem = 48MB # pgtune wizard 2010-11-18
wal_buffers = 8MB # pgtune wizard 2010-11-18
checkpoint_segments = 16 # pgtune wizard 2010-11-18
shared_buffers = 1920MB # pgtune wizard 2010-11-18
max_connections = 80 # pgtune wizard 2010-11-18




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] RAISE NOTICE ... and CONTEXT field of the error report.

2013-01-22 Thread Dmitriy Igrishin
Hey Marc,


2013/1/22 Marc Schablewski 

>
> Am 22.01.2013 14:59, schrieb Dmitriy Igrishin:
> > Hey all,
> >
> > Is there way to turn off printing of CONTEXT field of the error report
>
> I think, this might help:
> http://www.depesz.com/2008/07/12/suppressing-context-lines-in-psql/
>
> Marc
>
I am sorry for my thoughtlessness!
It really should be filtered by the client application.
Thanks for the pointer!


-- 
// Dmitriy.


Re: [GENERAL] Question on Trigram GIST indexes

2013-01-22 Thread Merlin Moncure
On Tue, Jan 22, 2013 at 8:07 AM, Merlin Moncure  wrote:
> On Sat, Jan 5, 2013 at 12:20 PM, ERR ORR  wrote:
>>
>> @Moderators: I am reposting this because the original from 22 December
>> apparently didn't arrive on the list.
>>
>> I was trying to make Postgresql use a trigram gist index on a varchar field,
>> but to no avail.
>>
>> Specifically, I was trying to replicate what is done in this blog post:
>> http://www.postgresonline.com/journal/archives/212-PostgreSQL-9.1-Trigrams-teaching-LIKE-and-ILIKE-new-tricks.html
>>
>>
>> I use Postgresql 9.1.7 on Linux FC17 64bit, my locale is UTF8.
>>
>> My full table definition is
>>
>> CREATE TABLE "TEST"
>> (
>>   "RECID" bigint NOT NULL DEFAULT next_id(),
>>   "TST_PAYLOAD" character varying(255),
>>   CONSTRAINT "PK_TEST" PRIMARY KEY ("RECID")
>>   USING INDEX TABLESPACE local
>> )
>> WITH (
>>   OIDS=FALSE
>> );
>>
>> CREATE INDEX "TEST_PAYLOAD_PATTERN_1_IDX"
>>   ON "TEST"
>>   USING btree
>>   ("TST_PAYLOAD" COLLATE pg_catalog."default" varchar_pattern_ops)
>> TABLESPACE local;
>>
>> CREATE INDEX "TEST_PAYLOAD_TRIGRAM_GIST_1_IDX"
>>   ON "TEST"
>>   USING gist
>>   ("TST_PAYLOAD" COLLATE pg_catalog."default" gist_trgm_ops)
>> TABLESPACE local;
>>
>> CREATE INDEX "TEST_PAYLOAD_TRIGRAM_GIN_1_IDX"
>>   ON "TEST"
>>   USING gin
>>   ("TST_PAYLOAD" COLLATE pg_catalog."default" gin_trgm_ops)
>> TABLESPACE local;
>>
>>
>> The COLLATE pg_catalog."default" clause is inserted by the DB (default is
>> "Unicode"). I also tried to define the Trigram index with COLLATE
>> pg_catalog."C"  but the behavior did not change. I did vacuum and analyze
>> after creating each index.
>>
>> The field "TST_PAYLOAD" contains 26389 names of cities, all in uppercase.
>>
>> I have pg_trgm installed - actually all extensions are present.
>>
>> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree index
>> as it should.
>> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the GIST
>> index but do a full table scan instead.
>> (I am looking for names like 'SEATTLE' in this example)
>
> Where did you determine that pg_trgm should optimize LIKE expressions?
> pg_trgm provides new operators that are used to index on string
> similarity...

oops -- heh -- I guess you *can* do that (after further documentation
review).  hm...it works for me...

merlin


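
As Tom suggested earlier in the thread, a quick way to tell whether the
planner *cannot* use the trigram index, or simply prefers a seqscan on cost
grounds, is to price the seqscan out of the running for one session:

```sql
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM "TEST" WHERE "TST_PAYLOAD" LIKE '%EAT%';
RESET enable_seqscan;
-- If an index scan appears now, the index does match the query and the
-- earlier plan was purely a cost-estimate decision on a 26k-row table.
```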


Re: [GENERAL] Question on Trigram GIST indexes

2013-01-22 Thread Merlin Moncure
On Sat, Jan 5, 2013 at 12:20 PM, ERR ORR  wrote:
>
> @Moderators: I am reposting this because the original from 22 December
> apparently didn't arrive on the list.
>
> I was trying to make Postgresql use a trigram gist index on a varchar field,
> but to no avail.
>
> Specifically, I was trying to replicate what is done in this blog post:
> http://www.postgresonline.com/journal/archives/212-PostgreSQL-9.1-Trigrams-teaching-LIKE-and-ILIKE-new-tricks.html
>
>
> I use Postgresql 9.1.7 on Linux FC17 64bit, my locale is UTF8.
>
> My full table definition is
>
> CREATE TABLE "TEST"
> (
>   "RECID" bigint NOT NULL DEFAULT next_id(),
>   "TST_PAYLOAD" character varying(255),
>   CONSTRAINT "PK_TEST" PRIMARY KEY ("RECID")
>   USING INDEX TABLESPACE local
> )
> WITH (
>   OIDS=FALSE
> );
>
> CREATE INDEX "TEST_PAYLOAD_PATTERN_1_IDX"
>   ON "TEST"
>   USING btree
>   ("TST_PAYLOAD" COLLATE pg_catalog."default" varchar_pattern_ops)
> TABLESPACE local;
>
> CREATE INDEX "TEST_PAYLOAD_TRIGRAM_GIST_1_IDX"
>   ON "TEST"
>   USING gist
>   ("TST_PAYLOAD" COLLATE pg_catalog."default" gist_trgm_ops)
> TABLESPACE local;
>
> CREATE INDEX "TEST_PAYLOAD_TRIGRAM_GIN_1_IDX"
>   ON "TEST"
>   USING gin
>   ("TST_PAYLOAD" COLLATE pg_catalog."default" gin_trgm_ops)
> TABLESPACE local;
>
>
> The COLLATE pg_catalog."default" clause is inserted by the DB (default is
> "Unicode"). I also tried to define the Trigram index with COLLATE
> pg_catalog."C"  but the behavior did not change. I did vacuum and analyze
> after creating each index.
>
> The field "TST_PAYLOAD" contains 26389 names of cities, all in uppercase.
>
> I have pg_trgm installed - actually all extensions are present.
>
> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree index
> as it should.
> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the GIST
> index but do a full table scan instead.
> (I am looking for names like 'SEATTLE' in this example)

Where did you determine that pg_trgm should optimize LIKE expressions?
pg_trgm provides new operators that are used to index on string
similarity...

merlin




Re: [GENERAL] RAISE NOTICE ... and CONTEXT field of the error report.

2013-01-22 Thread Marc Schablewski

Am 22.01.2013 14:59, schrieb Dmitriy Igrishin:
> Hey all,
>
> Is there way to turn off printing of CONTEXT field of the error report
>
I think this might help:
http://www.depesz.com/2008/07/12/suppressing-context-lines-in-psql/

Marc
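
If memory of the linked depesz post serves, the suppression it describes is a
client-side psql setting:

```sql
-- In psql, error-report verbosity is controlled on the client:
\set VERBOSITY terse
-- NOTICEs from nested calls are then printed without their CONTEXT lines.
```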



[GENERAL] RAISE NOTICE ... and CONTEXT field of the error report.

2013-01-22 Thread Dmitriy Igrishin
Hey all,

Is there a way to turn off printing of the CONTEXT field of the error report
in the following case:

create or replace function foo() returns void language plpgsql as $$
begin
  raise notice 'notice from foo()';
end;
$$;
create or replace function bar() returns void language plpgsql as $$
begin
  perform foo();
end;
$$;

wog=# select foo();
NOTICE:  notice from foo()
 foo
-

(1 row)

wog=# select bar();
NOTICE:  notice from foo()
CONTEXT:  SQL statement "SELECT foo()"
PL/pgSQL function bar() line 3 at PERFORM
 bar
-

(1 row)

If it is not possible, I would like to have this feature. From the POV
of the PL/pgSQL user, I think it should be a customizable mode of PL/pgSQL.

-- 
// Dmitriy.


Re: [GENERAL] Update rule on a view - what am I doing wrong

2013-01-22 Thread Marc Schablewski
Hi Leif,
 
Am 22.01.2013 14:34, schrieb Leif Jensen:
> CREATE update_rule_func( old record, new record ) AS ...  I am told, that I 
> cannot use record for the parameter type. Could you please 
You should use your view instead of 'record' as parameter type, i.e. CREATE 
update_rule_func( old
V_YOUR_VIEW, new V_YOUR_VIEW ) AS ... . Also, I'm not sure if 'new' and 'old' 
are reserved keywords
in PostgreSQL, so you might want to choose different names for your parameters 
if you still have
trouble with that function.

Marc
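
A minimal, self-contained sketch of the approach described in this thread
(the table and view names here are invented for illustration):

```sql
CREATE TABLE t (id int PRIMARY KEY, val text);
CREATE VIEW v_t AS SELECT id, val FROM t;

-- Parameters are typed as the view's row type, not as 'record':
CREATE FUNCTION v_t_update(o v_t, n v_t) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  UPDATE t SET val = n.val WHERE id = o.id;
END;
$$;

-- Wire the function up with a DO INSTEAD rule, as Jasen suggested:
CREATE RULE v_t_upd AS ON UPDATE TO v_t
  DO INSTEAD SELECT v_t_update(OLD, NEW);
```

Inside the function body the UPDATE/DELETE/INSERT statements against the
underlying tables can be combined freely, which is the part that is awkward
to express directly in a multi-command rule.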


Re: [GENERAL] Update rule on a view - what am I doing wrong

2013-01-22 Thread Leif Jensen
   Hi Jasen.

   Thank you for your response (also thank you to Tom).

   I have now tried your suggestion, but I'm not sure how you have implemented 
the plpgsql function. When I create the function: CREATE update_rule_func( old 
record, new record ) AS ...  I am told that I cannot use record for the 
parameter type. Could you please expand a little on your example?

 Leif


- "Jasen Betts"  wrote:

> On 2013-01-18, Leif Jensen  wrote:
> 
> >I have been fighting a problem with an update rule on a view. I
> > have a view that combines two tables where the 'sub' table (scont)
> can
> > have several rows per row in the 'top' table (icont). The view
> > combines these to show only one record per row in the top table. To
> be
> > able to update on this view I have created a rule 'on update'. The
> > rule needs to have both UPDATE, DELETE, and INSERT commands. Is
> this
> > not possible or am I doing something else wrong ?   
> 
> when I hit that issue in 8.4 i used a plpgsql function
> 
>   ... do instead select update_rule_func(old,new);
>   
> -- 
> ⚂⚃ 100% natural
> 
> 
> 




[GENERAL] Pg_xlog increase due to postgres crash (disk full)

2013-01-22 Thread Cliff de Carteret
My database crashed a couple of days ago during an upgrade several seconds
after committing a large transaction to the database. Eventually we found
out that this was due to the disk being full as the transaction had created
several gigs of data. A day or so later the disk is full again and
PostgreSQL crashes due to the pg_xlog file taking up all of the disk space.
I have cleaned up the drive to have some extra space which allows PostgreSQL
to start again but the xlogs are still increasing. I have two errors in my
pg_log:

"WARNING: transaction log file "000100A80078" could not be
archived: too many failures" and

"LOG: archive command failed with exit code 1
DETAIL: The failed archive command was: test ! -f
/opt/postgres/remote_pgsql/wal_archive/000100A80078 && cp
pg_xlog/000100A80078
/opt/postgres/remote_pgsql/wal_archive/000100A80078"

Postgres version 9.0.3 conf:

   wal_level = hot_standby
   archive_mode = true
   archive_command = 'test ! -f /opt/postgres/remote_pgsql/wal_archive/%f
   && cp %p /opt/postgres/remote_pgsql/wal_archive/%f' # command to use to
   archive a logfile segment
   archive_timeout = 1800
   max_wal_senders = 1
   max_standby_archive_delay = 900s
   max_standby_streaming_delay = 900s
   default_statistics_target = 50 # pgtune wizard 2010-11-18
   maintenance_work_mem = 480MB # pgtune wizard 2010-11-18
   constraint_exclusion = on # pgtune wizard 2010-11-18
   checkpoint_completion_target = 0.9 # pgtune wizard 2010-11-18
   effective_cache_size = 5632MB # pgtune wizard 2010-11-18
   work_mem = 48MB # pgtune wizard 2010-11-18
   wal_buffers = 8MB # pgtune wizard 2010-11-18
   checkpoint_segments = 16 # pgtune wizard 2010-11-18
   shared_buffers = 1920MB # pgtune wizard 2010-11-18
   max_connections = 80 # pgtune wizard 2010-11-18


I've tried stopping postgres and then deleting the
000100A80078.ready file and starting postgres but this appears
to be recreated instantly and the error is still in the log file.

I've read about the pg_resetxlog command, but having to pg_dump our
db (with a large amount of blobs) and restore it again is highly
problematic, as pg_restore has struggled with it before.

Will setting zero_damaged_pages (true) work in 9.0.1 and would this resolve
the issue?

Would creating an empty file and replacing the offending xlog work, would
this need to be a specific size?

Any ideas?


Re: [GENERAL] Question on Trigram GIST indexes

2013-01-22 Thread Johann Spies
On 5 January 2013 20:20, ERR ORR  wrote:

>
>
>
> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree
> index as it should.
> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the
> GIST index but do a full table scan instead.
> (I am looking for names like 'SEATTLE' in this example)
>
> I also tried dropping the btree index but that has no influence on the
> behavior.
>
>
>
I have the same problem:

Index:

CREATE INDEX akb_art_abstract_trgm
  ON akb_articles
  USING gin
  (abstract gin_trgm_ops);

and

 SELECT title, SIMILARITY(abstract, 'skeef') FROM akb_articles WHERE
SIMILARITY (abstract, 'water') > 0

 results in a full sequential scan:

"Seq Scan on public.akb_articles  (cost=0.00..45751.67 rows=107025
width=666) (actual time=0.236..63153.268 rows=169265 loops=1)"
"  Output: title, similarity(abstract, 'skeef'::text)"
"  Filter: (similarity(akb_articles.abstract, 'water'::text) > 0::double
precision)"
"  Buffers: shared hit=39000 read=46460"
"Total runtime: 63173.663 ms"

Regards
Johann

-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)
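
One possible explanation for Johann's plan, worth verifying: a bare
similarity() call in the WHERE clause is an ordinary function call and is not
indexable; the GIN trigram index is used through the % similarity operator,
whose threshold is set with set_limit() in this era of pg_trgm. A sketch of
the indexable form of his query:

```sql
SELECT set_limit(0.3);                  -- threshold used by the % operator
SELECT title, similarity(abstract, 'water') AS sim
FROM akb_articles
WHERE abstract % 'water';               -- can use the gin_trgm_ops index
```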