Re: [GENERAL] Server move using rsync

2011-10-27 Thread Scott Marlowe
On Thu, Oct 27, 2011 at 7:37 PM, Stephen Denne
 wrote:
> We're intending to move a 470GB PostgreSQL 8.3.13 database using the 
> following technique from 
> http://www.postgresql.org/docs/8.3/interactive/backup-file.html
>
> "Another option is to use rsync to perform a file system backup. This is done 
> by first running rsync while the database server is running, then shutting 
> down the database server just long enough to do a second rsync. The second 
> rsync will be much quicker than the first, because it has relatively little 
> data to transfer, and the end result will be consistent because the server 
> was down. This method allows a file system backup to be performed with 
> minimal downtime."
>
> Except that we plan on an initial rsync which we think might take a couple of 
> days, then subsequent daily rsyncs for up to a week to keep it up to date 
> till we stop the old database, rsync again, and start the new database.

Sounds reasonable.  Don't forget the --delete switch or the
destination will just keep growing and growing.

> A very rough approximation of our database would be half a dozen large tables 
> taking up 1/3 of the disk space, and lots of indexes on those tables taking 
> the other 2/3 of the space.
>
> If we assume usage characteristics of:
> Much less than 1% of indexed data changing per day, with almost all of those 
> updates being within the 1% of most recently added data.
> Much less than 1% of historical indexed data being deleted per day with most 
> of the deletions expected to affect sets of contiguous file pages.
> About 1% of new indexed data added per day
>
> I'm curious of the impact of vacuum (automatic and manual) during that 
> process on expected amount of work rsync will have to do, and time it will 
> take, and on what the update pattern is on files of Btree indexes.
>
> Is it worth making sure vacuum is not run, in order to reduce the amount of 
> files that change during that period?

Probably not.  You can test that theory by turning off vacuum for a
day to see how much of a change it makes.  My semi-educated
scientific wild-assed guess is that it won't make any difference, since the
file / block will be changed with or without the vacuum, and will still
have to be copied.

> Is there any way during that week, that we can verify whether our partially 
> completed database move process is going to result in a database that starts 
> up ok?

Try starting it up?

In general, the lower the traffic when you rsync the better the
chances, but honestly if you're not stopping the database then you
shouldn't count on luck to make it work.  Note that you CAN do the
whole rsync followed by setting up PITR to get a coherent database
backup that is guaranteed to start up, assuming you've followed all
the instructions on how to set up PITR properly.
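Sketched out, the PITR-backed variant might look like this on 8.3 (hostnames and paths are placeholders, and archive_command must already be configured on the old server):

```shell
# On the old server: mark the start of an online base backup.
psql -c "SELECT pg_start_backup('server-move');"

# Copy the live cluster; inconsistencies in the copy will be
# repaired by WAL replay during recovery.
rsync -a --delete --exclude=pg_xlog/ /var/lib/pgsql/data/ newhost:/var/lib/pgsql/data/

# Mark the end of the backup; the WAL segments needed to make the
# copy consistent are shipped via the configured archive_command.
psql -c "SELECT pg_stop_backup();"

# On newhost: create a recovery.conf whose restore_command points at
# the WAL archive, then start the postmaster to replay to consistency.
```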

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pglesslog for Postgres 9.1.1

2011-10-27 Thread Satoshi Nagayasu

Hi Louis,

2011/10/27 19:49, mailtolouis2020-postg...@yahoo.com wrote:

Hi,

I'm sorry, I'm not good at C. Can anyone help by putting together a patch or
releasing a new version for that?



Regards
Louis



*From:* Tom Lane 
*To:* "mailtolouis2020-postg...@yahoo.com" 

*Cc:* Postgres 
*Sent:* Wednesday, October 26, 2011 3:42 PM
*Subject:* Re: [GENERAL] pglesslog for Postgres 9.1.1

"mailtolouis2020-postg...@yahoo.com" writes:
 > remove.c:182: error: ‘XLOG_GIN_INSERT’ undeclared (first use in this 
function)
 > remove.c:182: error: (Each undeclared identifier is reported only once
 > remove.c:182: error: for each function it appears in.)
 > remove.c:184: error: ‘XLOG_GIN_VACUUM_PAGE’ undeclared (first use in 
this function)
 > remove.c:186: error: ‘XLOG_GIN_DELETE_PAGE’ undeclared (first use in 
this function)

That stuff got moved to gin_private.h in 9.1 ...

regards, tom lane


Let me jump in here.

Try this patch,

https://gist.github.com/1321650

and build it as follows.

$ make USE_PGXS=1 top_builddir=/path/to/postgresql-9.1.0

Regards,

--
NAGAYASU Satoshi 



Re: [GENERAL] JDBC connections very occasionally hang

2011-10-27 Thread Tom Lane
Karl Wright  writes:
> ... By the end of that time I usually see between one and three "stuck"
> threads, all waiting inside the JDBC driver for a response from the
> postgresql server.  I can provide a stack trace if requested.

How about a stack trace from the connected backend?  And what is its
state as shown by the pg_stat_activity and pg_locks views?  It's hard
to tell from what you say here whether the problem is on the server or
client side, which is surely the first thing to isolate.
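Concretely, with the stuck backend's PID in hand (12345 is a placeholder), the checks being asked for might look like:

```shell
# How the server sees the session (on 8.4, current_query reads
# '<IDLE>' when the backend is waiting for a client command):
psql -c "SELECT procpid, waiting, current_query
         FROM pg_stat_activity WHERE procpid = 12345;"

# Locks held or awaited by that backend:
psql -c "SELECT locktype, relation::regclass, mode, granted
         FROM pg_locks WHERE pid = 12345;"

# Stack trace from the connected backend (requires gdb on the
# server; run as a user allowed to attach to the process):
gdb -p 12345 -batch -ex bt
```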

regards, tom lane



[GENERAL] JDBC connections very occasionally hang

2011-10-27 Thread Karl Wright
Hi folks,

I'm seeing something that on the face of it sounds very similar to the
issue reported at
http://archives.postgresql.org/pgsql-general/2011-10/msg00570.php.  I
am using Postgresql 8.4, and the problem occurs with both the 8.4 JDBC
type-3 driver and the 9.1 JDBC type-3 driver.  The test I have that
causes the failure runs for about 3 hours and is highly multithreaded.
 By the end of that time I usually see between one and three "stuck"
threads, all waiting inside the JDBC driver for a response from the
postgresql server.  I can provide a stack trace if requested.  The
actual queries it locks up on differ from run to run; I've seen it
hang on longer-running queries such as a REINDEX, or very basic
queries such as an update, or even on a BEGIN TRANSACTION.  Locking is
not likely to be the problem since the issue occurs with only one
thread involved with fair frequency.  The database is also running on
the same machine as the test client, so that would appear to rule out
network glitches.  Upon failure, there are no errors or warnings
recorded in the postgresql logs either.

Because of the volume of queries it will be difficult to determine by
simply turning on logging whether all the queries are in fact making
it to the server or not.  Is there any other diagnostics you could
recommend?

FWIW, this behavior seems to be new to 8.4; the same software ran
flawlessly and reliably on Postgresql 8.2 and 8.3.

Karl



[GENERAL] Server move using rsync

2011-10-27 Thread Stephen Denne
We're intending to move a 470GB PostgreSQL 8.3.13 database using the following 
technique from http://www.postgresql.org/docs/8.3/interactive/backup-file.html
 
"Another option is to use rsync to perform a file system backup. This is done 
by first running rsync while the database server is running, then shutting down 
the database server just long enough to do a second rsync. The second rsync 
will be much quicker than the first, because it has relatively little data to 
transfer, and the end result will be consistent because the server was down. 
This method allows a file system backup to be performed with minimal downtime."

Except that we plan on an initial rsync which we think might take a couple of 
days, then subsequent daily rsyncs for up to a week to keep it up to date till 
we stop the old database, rsync again, and start the new database.

A very rough approximation of our database would be half a dozen large tables 
taking up 1/3 of the disk space, and lots of indexes on those tables taking the 
other 2/3 of the space.

If we assume usage characteristics of:
Much less than 1% of indexed data changing per day, with almost all of those 
updates being within the 1% of most recently added data.
Much less than 1% of historical indexed data being deleted per day with most of 
the deletions expected to affect sets of contiguous file pages.
About 1% of new indexed data added per day

I'm curious of the impact of vacuum (automatic and manual) during that process 
on expected amount of work rsync will have to do, and time it will take, and on 
what the update pattern is on files of Btree indexes.

Is it worth making sure vacuum is not run, in order to reduce the amount of 
files that change during that period?

Do a number of additions evenly spread through the domain of an indexed field's 
values result in localized changes to the indexes files, or changes throughout 
the files?

How about for additions to the end of the domain of an indexed field's values 
(e.g. adding current dates)?

Is there any way during that week, that we can verify whether our partially 
completed database move process is going to result in a database that starts up 
ok?

Regards, Stephen Denne.





Re: [GENERAL] User feedback requested on temp tables usage for Hot Standby

2011-10-27 Thread Ben Chobot
On Oct 27, 2011, at 5:13 PM, Simon Riggs wrote:

> Some people have asked for the ability to create temp tables on a Hot
> Standby server.
> 
> I've got a rough implementation plan but it would have some
> restrictions, so I would like to check my understanding of the use
> case for this feature so I don't waste time implementing something
> nobody actually finds useful.
> 
> My understanding is that the main use cases for that would be limited
> to these two options only:
> 
> 1. CREATE TEMP TABLE foo AS SELECT 
> 
> 2. CREATE TEMP TABLE foo (..);
>INSERT INTO foo ...
> 
> and sometimes a TRUNCATE foo;
> 
> In almost all cases people don't run multiple INSERTs, nor do they run
> UPDATEs or DELETEs, so the above actions would cover 99% of use cases.
> 
> Can anyone give backup to that opinion, or alternate viewpoints?

The times that we would use a temp table on a slave are times when we would 
want to materialize a large set of intermediate results while doing ad hoc 
queries. This seems to cover that... although, just to be sure, do I understand 
you correctly that UPDATEs and DELETEs would not be allowed? That would be fine, but 
having multiple INSERTs would be very handy.

Of course, even having a one-time insert temp table is better than no temp 
table at all. :)




Re: [GENERAL] PostGIS in a commercial project

2011-10-27 Thread Joshua D. Drake


On 10/27/2011 04:24 PM, Chris Travers wrote:


On Thu, Oct 27, 2011 at 9:44 AM, Martijn van Oosterhout
  wrote:


I note in the OPs case they are relying on the customer to install
PostGIS.  The GPL only applies to *redistribution* not usage.  So if
you're not supplying your customers with PostGIS then the fact that
it's GPL seems completely irrelevant.


Also as a note here, if linking implied derivation, then all software
that ran on Windows would be illegal to distribute without Microsoft's
permission.  Yet at least here in the US, jailbreaking an iPhone
is legal in the opinion of the Copyright Office because it allows fair
use of the device, namely installing apps that Apple hasn't otherwise
authorized.  So I'd generally agree with the assessment above.


Not to be a killjoy but unless any of us is an attorney, I suggest we 
defer to a person with a law degree. This seems more like a question for 
the SFLC than for the general community.


JD








--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



Re: [GENERAL] PostGIS in a commercial project

2011-10-27 Thread Chris Travers
On Thu, Oct 27, 2011 at 9:44 AM, Martijn van Oosterhout
 wrote:

> I note in the OPs case they are relying on the customer to install
> PostGIS.  The GPL only applies to *redistribution* not usage.  So if
> you're not supplying your customers with PostGIS then the fact that
> it's GPL seems completely irrelevant.
>
Also as a note here, if linking implied derivation, then all software
that ran on Windows would be illegal to distribute without Microsoft's
permission.  Yet at least here in the US, jailbreaking an iPhone
is legal in the opinion of the Copyright Office because it allows fair
use of the device, namely installing apps that Apple hasn't otherwise
authorized.  So I'd generally agree with the assessment above.

Best Wishes,
Chris Travers



[GENERAL] User feedback requested on temp tables usage for Hot Standby

2011-10-27 Thread Simon Riggs
Some people have asked for the ability to create temp tables on a Hot
Standby server.

I've got a rough implementation plan but it would have some
restrictions, so I would like to check my understanding of the use
case for this feature so I don't waste time implementing something
nobody actually finds useful.

My understanding is that the main use cases for that would be limited
to these two options only:

1. CREATE TEMP TABLE foo AS SELECT 

2. CREATE TEMP TABLE foo (..);
INSERT INTO foo ...

and sometimes a TRUNCATE foo;

In almost all cases people don't run multiple INSERTs, nor do they run
UPDATEs or DELETEs, so the above actions would cover 99% of use cases.

Can anyone give backup to that opinion, or alternate viewpoints?

Thanks,

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Alan Hodgson
On October 27, 2011 01:09:51 PM Brian Fehrle wrote:
> We've restarted the postgresql cluster, so the issue is not happening at
> this moment. but running a vmstat 10 had my 'cs' average at 3K and 'in'
> averaging around 9.5K.

Random thought, is there any chance the server is physically overheating? I've 
seen CPUs throttle really low when overheating, which can make otherwise 
normal activity seem really slow.



Re: [GENERAL] Custom data type in C with one fixed and one variable attribute

2011-10-27 Thread Tom Lane
Adrian Schreyer  writes:
> The data type I have is

> typedef struct {
> int4   length;
> uint32 foo;
> char   bar[1];
> } oefp;

Seems reasonable enough.

> mydatatype *dt = (mydatatype*) palloc(VARHDRSZ + sizeof(uint32) +
> strlen(buffer));

> SET_VARSIZE(dt, VARHDRSZ + sizeof(uint32) + strlen(buffer));
> memcpy(dt->bar, buffer, strlen(buffer));
> dt->foo = foo;

Fine, but keep in mind that what you are creating here is a
non-null-terminated string.

> The problem is however that dt->bar contains not only the input string
> but random characters or other garbage as well, so something must go
> wrong at the end of the function. Any thoughts what it could be?

It sounds to me like you are inspecting dt->bar with something that
expects to see a null-terminated string.  You could either fix your
inspection code, or expend one more byte to make the string be
null-terminated as stored.

regards, tom lane



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Brian Fehrle

On 10/27/2011 01:48 PM, Scott Marlowe wrote:

On Thu, Oct 27, 2011 at 12:39 PM, Brian Fehrle
  wrote:

Looking at top, I see no SWAP usage, very little IOWait, and there are a
large number of postmaster processes at 100% cpu usage (makes sense, at this
point there are 150 or so queries currently executing on the database).

  Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,  0.3%si,
  0.2%st
Mem:  134217728k total, 131229972k used,  2987756k free,   462444k buffers
Swap:  8388600k total,  296k used,  8388304k free, 119029580k cached

OK, a few points.  1: You've got a zombie process.  Find out what's
causing that, it could be a trigger of some type for this behaviour.
2: You're 92% sys.  That's bad.  It means the OS is chewing up 92% of
your 32 cores doing something.  What tasks are at the top of the list
in top?

Out of the top 50 processes in top, 48 of them are postmasters, one is 
syslog, and one is psql. Each of the postmasters have a high %CPU, the 
top ones being 80% and higher, the rest being anywhere between 30% - 
60%. Would the postmaster 'queries' that are running contribute to the sys 
CPU usage, or should they be under the 'us' CPU usage?




Try running vmstat 10 for a minute or so, then look at the cs and in
columns.  If cs or in is well over 100k there could be an issue with
thrashing, where your app is making some change to the db that
requires all backends to be awoken at once and the machine just falls
over under the load.


We've restarted the postgresql cluster, so the issue is not happening at 
this moment, but running vmstat 10 had my 'cs' average at 3K and 'in' 
averaging around 9.5K.


- Brian F



[GENERAL] PostgreSQL at LISA in Boston: Dec. 7-8

2011-10-27 Thread Josh Berkus
All,

We are going to be doing a booth at the Usenix LISA[1] conference in Boston
from December 7-8.  If you live in Boston and are a PostgreSQL
enthusiast, or if you plan to attend LISA, I want your help!

I need booth volunteers to help me work the booth.  All you need is some
general knowledge of how to use PostgreSQL, enthusiasm, and 3 (or more)
free hours.  We will supply the rest, including flyers, magazines, and a
t-shirt for volunteers.  If you don't already have a pass for the
conference, I will get you one for the exhibit hall.

Or you can attend the full conference if you pay a registration fee; get
$100 off with our community discount, LISA11POSTGRE.

We will also have a BOF[2] on the night of the 7th at the conference
hotel, where I will demo some 9.1 and 9.2 features and talk about the
Postgres project.

Finally, PalominoDB is organizing a Boston PUG (PostgreSQL User Group)
meeting at which I will speak. Details TBD, but expect it at MIT on one
of the nights of the 5th, 6th, or 8th.  Maybe we can start a regular
BostonPUG!

[1] Large Installation System Administration:
http://www.usenix.org/events/lisa11/exhibition.html
[2] http://www.usenix.org/events/lisa11/bofs.html#postgres

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Brian Fehrle
Also, I'm not having any issue with the database restarting itself, 
simply becoming unresponsive / slow to respond, to the point where just 
sshing to the box takes about 30 seconds if not longer. Performing a 
pg_ctl restart on the cluster resolves the issue.


I looked through the logs for any segmentation faults, none found. In 
fact the only thing in my log that seems to be 'bad' are the following.


Oct 27 08:53:18  postgres[17517]: [28932839-1] 
user=,db= ERROR:  deadlock detected
Oct 27 11:49:22  postgres[608]: [19-1] user=,db= 
ERROR:  could not serialize access due to concurrent update


I don't believe these occurred too close to the slowdown.

- Brian F

On 10/27/2011 02:09 PM, Brian Fehrle wrote:

On 10/27/2011 01:48 PM, Scott Marlowe wrote:

On Thu, Oct 27, 2011 at 12:39 PM, Brian Fehrle
  wrote:
Looking at top, I see no SWAP usage, very little IOWait, and there 
are a
large number of postmaster processes at 100% cpu usage (makes sense, 
at this

point there are 150 or so queries currently executing on the database).

  Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,  0.3%si,
  0.2%st
Mem:  134217728k total, 131229972k used,  2987756k free,   462444k 
buffers
Swap:  8388600k total,  296k used,  8388304k free, 119029580k 
cached

OK, a few points.  1: You've got a zombie process.  Find out what's
causing that, it could be a trigger of some type for this behaviour.
2: You're 92% sys.  That's bad.  It means the OS is chewing up 92% of
your 32 cores doing something.  What tasks are at the top of the list
in top?

Out of the top 50 processes in top, 48 of them are postmasters, one is 
syslog, and one is psql. Each of the postmasters have a high %CPU, the 
top ones being 80% and higher, the rest being anywhere between 30% - 
60%. Would the postmaster 'queries' that are running contribute to the sys 
CPU usage, or should they be under the 'us' CPU usage?




Try running vmstat 10 for a minute or so, then look at the cs and in
columns.  If cs or in is well over 100k there could be an issue with
thrashing, where your app is making some change to the db that
requires all backends to be awoken at once and the machine just falls
over under the load.


We've restarted the postgresql cluster, so the issue is not happening 
at this moment, but running vmstat 10 had my 'cs' average at 3K and 
'in' averaging around 9.5K.


- Brian F





Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Tom Lane
Brian Fehrle  writes:
> Hi all, need some help/clues on tracking down a performance issue.
> PostgreSQL version: 8.3.11

> I've got a system that has 32 cores and 128 gigs of ram. We have 
> connection pooling set up, with about 100 - 200 persistent connections 
> open to the database. Our applications then use these connections to 
> query the database constantly, but when a connection isn't currently 
> executing a query, it's <IDLE>. On average, at any given time, there are 
> 3 - 6 connections that are actually executing a query, while the rest 
> are <IDLE>.

> About once a day, queries that normally take just a few seconds slow way 
> down, and start to pile up, to the point where instead of just having 
> 3-6 queries running at any given time, we get 100 - 200. The whole 
> system comes to a crawl, and looking at top, the CPU usage is 99%.

This is jumping to a conclusion based on insufficient data, but what you
describe sounds a bit like the sinval queue contention problems that we
fixed in 8.4.  Some prior reports of that:
http://archives.postgresql.org/pgsql-performance/2008-01/msg1.php
http://archives.postgresql.org/pgsql-performance/2010-06/msg00452.php

If your symptoms match those, the best fix would be to update to 8.4.x
or later, but a stopgap solution would be to cut down on the number of
idle backends.

regards, tom lane



Re: [GENERAL] Getting X coordinate from a point(lseg), btw i read the man page about points.

2011-10-27 Thread Tom Lane
"Ing.Edmundo.Robles.Lopez"  writes:
> Hi in the main page about geometric operations said:
> It is possible to access the two component numbers of a  point  as though it 
> were an array with indices 0 and 1. For example, if  t.p  is a  point  column 
> then  SELECT p[0] FROM t  retrieves the X coordinate and  UPDATE t SET p[1] = 
> ...  changes the Y coordinate. In the same way, a value of type  box  or  
> lseg  can be treated as an array of two  point  values.

> [ So how to get p2.x from an lseg value? ]

>   select  info[0] from table limit 1;
>   (647753.125,2825633.75)

Right, that gets you a point.

>   I still want to get 647753.125, so I did:
> select  info[0][0] from table limit 1;

Close, but that notation only works for a 2-dimensional array, which an
lseg is not.  What you need is

regression=# select (info[0])[0] from table;
 f1 
------------
 647753.125
(1 row)

The parenthesized object is a point, and then an entirely separate
subscripting operation has to be applied to it to get its X coordinate.

> then i did:
>   select point(info[0])[0]  from table limit 1;

Well, that's unnecessary since info[0] is already a point, but the
syntactic problem is again that you have to parenthesize the thing that
the second subscript is being applied to:

select (point(info[0]))[0]  from table limit 1;

You need parentheses any time you're going to apply subscripting or
field selection to something that isn't a simple variable reference.

regards, tom lane



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Brian Fehrle

On 10/27/2011 02:27 PM, Scott Mead wrote:



On Thu, Oct 27, 2011 at 2:39 PM, Brian Fehrle 
<bri...@consistentstate.com> wrote:


Hi all, need some help/clues on tracking down a performance issue.

PostgreSQL version: 8.3.11

I've got a system that has 32 cores and 128 gigs of ram. We have
connection pooling set up, with about 100 - 200 persistent
connections open to the database. Our applications then use these
connections to query the database constantly, but when a
connection isn't currently executing a query, it's <IDLE>. On
average, at any given time, there are 3 - 6 connections that are
actually executing a query, while the rest are <IDLE>.


Remember, when you read pg_stat_activity, it is showing you query 
activity from that exact specific moment in time.  Just because it 
looks like only 3-6 connections are executing, doesn't mean that 200 
aren't actually executing < .1ms statements.  With such a beefy box, I 
would see if you can examine any stats from your connection pooler to 
find out how many connections are actually getting used.


Correct, we're getting a few hundred transactions per second, but under 
normal operation, polling pg_stat_activity will show the average of 3 - 
6 queries that were running at that moment, and those queries run for an 
average of 5 - 7 seconds. So my belief is that something happens to the 
system where either a) We get a ton more queries than normal from the 
application (currently hunting down data to support this), or b) the 
overall speed of the system slows down so that all queries increase in 
time so much that polling pg_stat_activity lets me actually see them.




About once a day, queries that normally take just a few seconds
slow way down, and start to pile up, to the point where instead of
just having 3-6 queries running at any given time, we get 100 -
200. The whole system comes to a crawl, and looking at top, the
CPU usage is 99%.

Looking at top, I see no SWAP usage, very little IOWait, and there
are a large number of postmaster processes at 100% cpu usage
(makes sense, at this point there are 150 or so queries currently
executing on the database).

 Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,
 0.3%si,  0.2%st
Mem:  134217728k total, 131229972k used,  2987756k free,   462444k
buffers
Swap:  8388600k total,  296k used,  8388304k free, 119029580k
cached


In the past, we noticed that autovacuum was hitting some large
tables at the same time this happened, so we turned autovacuum off
to see if that was the issue, and it still happened without any
vacuums running.

That was my next question :)


We also ruled out checkpoints being the cause.

How exactly did you rule this out?  Just because a checkpoint is over 
doesn't mean that it hasn't had a negative effect on the OS cache.  If 
you're stuck going to disk, that could be hurting you (that being 
said, you do point to a low I/O wait above, so you're probably correct 
in ruling this out).


Checkpoint settings were set to the default per install. 5 minute 
timeout, 0.5 completion target, and 30s warning. Looking at the logs, we 
were getting a checkpoint every 5 minutes on the dot.


I looked at the data in pg_stat_database and noticed that buffers 
written by checkpoints are near 4X that of the background writer. So I 
implemented some changes to get more to be written by the background 
writer, including increasing the checkpoint timeout to 30 minutes, and 
setting the frequency of the bgwriter wait time from 200ms to 50ms.
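In postgresql.conf terms, the changes described amount to something like this (8.3 parameter names; the last two values are the defaults mentioned above):

```
checkpoint_timeout = 30min            # was the default 5min
bgwriter_delay = 50ms                 # was the default 200ms
checkpoint_completion_target = 0.5    # default
checkpoint_warning = 30s              # default
```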


checkpoints now happen 30 mins apart on the dot, and there was not a 
checkpoint happening the last time this issue of major slowdown occured.




I'm currently digging through some statistics I've been gathering
to see if traffic increased at all, or remained the same when the
slowdown occurred. I'm also digging through the logs from the
postgresql cluster (I increased verbosity yesterday), looking for
any clues. Any suggestions or clues on where to look for this to
see what can be causing a slowdown like this would be greatly
appreciated.

Are you capturing table-level stats from pg_stat_user_[tables | 
indexes]?  Just because a server doesn't look busy doesn't mean that 
you're not doing 1000 index scans per second returning 1000 tuples 
each time.



I am not grabbing any of those at the moment, I'll look into those.

- Brian F

--Scott

Thanks,
   - Brian F







[GENERAL] Custom data type in C with one fixed and one variable attribute

2011-10-27 Thread Adrian Schreyer
Hi,

I am trying to create a custom data type in C that has a fixed size
and a variable size attribute - is that actually possible? The
documentation mentions only one or the other but a struct in the
pg_trgm extension (TRGM) seems to have that.

The data type I have is

typedef struct {
int4   length;
uint32 foo;
char   bar[1];
} oefp;

The external representation of that data type would be (1,
'hexadecimal string here'), for example.

This is my _in function to parse the external cstring.

PG_FUNCTION_INFO_V1(mydatatype_in);
Datum mydatatype_in(PG_FUNCTION_ARGS)
{
char   *rawcstring = PG_GETARG_CSTRING(0);

uint32  foo;
char   *buffer = (char *) palloc(strlen(rawcstring));

if (sscanf(rawcstring, "(%u,%[^)])", &foo, buffer) != 2)
{
ereport(ERROR,
(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
 errmsg("Invalid input syntax: \"%s\"", rawcstring)));
}

mydatatype *dt = (mydatatype*) palloc(VARHDRSZ + sizeof(uint32) +
strlen(buffer));

SET_VARSIZE(dt, VARHDRSZ + sizeof(uint32) + strlen(buffer));
memcpy(dt->bar, buffer, strlen(buffer));
dt->foo = foo;

PG_RETURN_POINTER(dt);
}

The problem is however that dt->bar contains not only the input string
but random characters or other garbage as well, so something must go
wrong at the end of the function. Any thoughts what it could be?

Cheers,

Adrian



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Scott Mead
On Thu, Oct 27, 2011 at 2:39 PM, Brian Fehrle wrote:

> Hi all, need some help/clues on tracking down a performance issue.
>
> PostgreSQL version: 8.3.11
>
> I've got a system that has 32 cores and 128 gigs of ram. We have connection
> pooling set up, with about 100 - 200 persistent connections open to the
> database. Our applications then use these connections to query the database
> constantly, but when a connection isn't currently executing a query, it's
> <IDLE>. On average, at any given time, there are 3 - 6 connections that are
> actually executing a query, while the rest are <IDLE>.
>

Remember, when you read pg_stat_activity, it is showing you query activity
from that exact specific moment in time.  Just because it looks like only
3-6 connections are executing, doesn't mean that 200 aren't actually
executing < .1ms statements.  With such a beefy box, I would see if you can
examine any stats from your connection pooler to find out how many
connections are actually getting used.



>
> About once a day, queries that normally take just a few seconds slow way
> down, and start to pile up, to the point where instead of just having 3-6
> queries running at any given time, we get 100 - 200. The whole system comes
> to a crawl, and looking at top, the CPU usage is 99%.
>
> Looking at top, I see no SWAP usage, very little IOWait, and there are a
> large number of postmaster processes at 100% cpu usage (makes sense, at this
> point there are 150 or so queries currently executing on the database).
>
>  Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
> Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,  0.3%si,
>  0.2%st
> Mem:  134217728k total, 131229972k used,  2987756k free,   462444k buffers
> Swap:  8388600k total,  296k used,  8388304k free, 119029580k cached
>
>
> In the past, we noticed that autovacuum was hitting some large tables at
> the same time this happened, so we turned autovacuum off to see if that was
> the issue, and it still happened without any vacuums running.
>
That was my next question :)

>
> We also ruled out checkpoints being the cause.
>
> How exactly did you rule this out?  Just because a checkpoint is over
doesn't mean that it hasn't had a negative effect on the OS cache.  If
you're stuck going to disk, that could be hurting you (that being said, you
do point to a low I/O wait above, so you're probably correct in ruling this
out).


>
> I'm currently digging through some statistics I've been gathering to see if
> traffic increased at all, or remained the same when the slowdown occurred.
> I'm also digging through the logs from the postgresql cluster (I increased
> verbosity yesterday), looking for any clues. Any suggestions or clues on
> where to look for this to see what can be causing a slowdown like this would
> be greatly appreciated.
>
> Are you capturing table-level stats from pg_stat_user_[tables | indexes]?
Just because a server doesn't look busy doesn't mean that you're not doing
1000 index scans per second returning 1000 tuples each time.
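Scott's suggestion above can be put into practice with something like the following (a sketch — the table list and LIMIT are arbitrary; the view and columns are present in 8.3). Snapshot the counters twice, a few seconds apart, and diff to get rates:

```sql
-- Which tables are being hammered? Counters are cumulative, so compare
-- two snapshots to see activity during the slowdown window.
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
  FROM pg_stat_user_tables
 ORDER BY idx_scan DESC
 LIMIT 10;
```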

--Scott


> Thanks,
>- Brian F
>


Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Brian Fehrle

On 10/27/2011 02:50 PM, Tom Lane wrote:

Brian Fehrle  writes:

Hi all, need some help/clues on tracking down a performance issue.
PostgreSQL version: 8.3.11
I've got a system that has 32 cores and 128 gigs of ram. We have
connection pooling set up, with about 100 - 200 persistent connections
open to the database. Our applications then use these connections to
query the database constantly, but when a connection isn't currently
executing a query, it's <IDLE>. On average, at any given time, there are
3 - 6 connections that are actually executing a query, while the rest
are <IDLE>.
About once a day, queries that normally take just a few seconds slow way
down, and start to pile up, to the point where instead of just having
3-6 queries running at any given time, we get 100 - 200. The whole
system comes to a crawl, and looking at top, the CPU usage is 99%.

This is jumping to a conclusion based on insufficient data, but what you
describe sounds a bit like the sinval queue contention problems that we
fixed in 8.4.  Some prior reports of that:
http://archives.postgresql.org/pgsql-performance/2008-01/msg1.php
http://archives.postgresql.org/pgsql-performance/2010-06/msg00452.php

If your symptoms match those, the best fix would be to update to 8.4.x
or later, but a stopgap solution would be to cut down on the number of
idle backends.

regards, tom lane
That sounds somewhat close to the same issue I am seeing. Main 
differences being that my spike lasts for much longer than a few 
minutes, and can only be resolved when the cluster is restarted. Also, 
that second link shows TOP where much of the CPU is via the 'user', 
rather than the 'sys' like mine.


Is there anything I can look at more to get more info on this 'sinval 
queue contention problem'?


Also, having my cpu usage high in 'sys' rather than 'us', could that be 
a red flag? Or is that normal?


- Brian F



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Scott Marlowe
On Thu, Oct 27, 2011 at 1:52 PM, Scott Marlowe  wrote:
> On Thu, Oct 27, 2011 at 1:48 PM, Scott Marlowe  
> wrote:
>> OK, a few points.  1: You've got a zombie process.  Find out what's
>
> To expand on the zombie thing, it's quite possible that you're
> managing to make a pg backend process crashout, which would cause the
> db to restart midday, which is bad (TM) since that dumps all of shared
> buffers and forces all clients to reconnect.  So look through the
> system logs for segmentation faults, etc.

One last thing, you should upgrade to the latest 8.3 version to see if
that helps.  There was a bug fix around 8.3.13 or so that stopped
postgresql from restarting due to a simple data corruption issue that
should have only resulted in an error message not a restart of the db.
 I know, cause I found it. :)  Thanks to the pg devs for fixing it.



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Scott Marlowe
On Thu, Oct 27, 2011 at 1:48 PM, Scott Marlowe  wrote:
> OK, a few points.  1: You've got a zombie process.  Find out what's

To expand on the zombie thing, it's quite possible that you're
managing to make a pg backend process crashout, which would cause the
db to restart midday, which is bad (TM) since that dumps all of shared
buffers and forces all clients to reconnect.  So look through the
system logs for segmentation faults, etc.



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Scott Marlowe
On Thu, Oct 27, 2011 at 12:39 PM, Brian Fehrle
 wrote:
> Looking at top, I see no SWAP usage, very little IOWait, and there are a
> large number of postmaster processes at 100% cpu usage (makes sense, at this
> point there are 150 or so queries currently executing on the database).
>
>  Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
> Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,  0.3%si,
>  0.2%st
> Mem:  134217728k total, 131229972k used,  2987756k free,   462444k buffers
> Swap:  8388600k total,      296k used,  8388304k free, 119029580k cached

OK, a few points.  1: You've got a zombie process.  Find out what's
causing that, it could be a trigger of some type for this behaviour.
2: You're 92% sys.  That's bad.  It means the OS is chewing up 92% of
your 32 cores doing something.  What tasks are at the top of the list
in top?

Try running vmstat 10 for a minute or so then look at the cs and int
columns.  If cs or int is well over 100k there could be an issue with
thrashing, where your app is making some change to the db that
requires all backends to be awoken at once and the machine just falls
over under the load.
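A sketch of the check Scott describes, for readers unfamiliar with vmstat's columns. The sample output below is invented (numbers chosen to resemble this thread); on a live box you would pipe real output, e.g. `vmstat 10 6`. Column positions are those of procps vmstat:

```shell
# Hypothetical one-shot sample of `vmstat 10` output (values invented):
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
44  1    296 2987756 462444 119029580 0    0     5    12 180345 220981  4 92  3  0  0'

# In procps vmstat, field 11 is "in" (interrupts/s) and field 12 is "cs"
# (context switches/s); the first two lines are headers, so skip them.
parsed=$(printf '%s\n' "$sample" | awk 'NR > 2 { print "in=" $11, "cs=" $12 }')
echo "$parsed"
```

With numbers like these — cs well over 100k — the thrashing scenario Scott describes would be worth investigating.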



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread John R Pierce

On 10/27/11 11:39 AM, Brian Fehrle wrote:


I've got a system that has 32 cores and 128 gigs of ram. We have 
connection pooling set up, with about 100 - 200 persistent connections 
open to the database. Our applications then use these connections to 
query the database constantly, but when a connection isn't currently 
executing a query, it's <IDLE>. On average, at any given time, there 
are 3 - 6 connections that are actually executing a query, while the 
rest are <IDLE>. 



That's not a very effective use of pooling.  In the pooling model, you'd 
have a connection pool with sufficient actual database connections to 
satisfy your concurrency requirements, and your apps would grab a 
connection from the pool, do a transaction, then release the connection 
back to the pool.


Now, I don't know that this has anything to do with your performance 
problem, I'm just pointing out this anomaly.  A pool doesn't do much 
good if the clients grab a connection and just sit on it.



--
john r pierceN 37, W 122
santa cruz ca mid-left coast




[GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Brian Fehrle

Hi all, need some help/clues on tracking down a performance issue.

PostgreSQL version: 8.3.11

I've got a system that has 32 cores and 128 gigs of ram. We have 
connection pooling set up, with about 100 - 200 persistent connections 
open to the database. Our applications then use these connections to 
query the database constantly, but when a connection isn't currently 
executing a query, it's <IDLE>. On average, at any given time, there are 
3 - 6 connections that are actually executing a query, while the rest 
are <IDLE>.


About once a day, queries that normally take just a few seconds slow way 
down, and start to pile up, to the point where instead of just having 
3-6 queries running at any given time, we get 100 - 200. The whole 
system comes to a crawl, and looking at top, the CPU usage is 99%.


Looking at top, I see no SWAP usage, very little IOWait, and there are a 
large number of postmaster processes at 100% cpu usage (makes sense, at 
this point there are 150 or so queries currently executing on the database).


 Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,  0.3%si,  
0.2%st

Mem:  134217728k total, 131229972k used,  2987756k free,   462444k buffers
Swap:  8388600k total,  296k used,  8388304k free, 119029580k cached


In the past, we noticed that autovacuum was hitting some large tables at 
the same time this happened, so we turned autovacuum off to see if that 
was the issue, and it still happened without any vacuums running.


We also ruled out checkpoints being the cause.

I'm currently digging through some statistics I've been gathering to see 
if traffic increased at all, or remained the same when the slowdown 
occurred. I'm also digging through the logs from the postgresql cluster 
(I increased verbosity yesterday), looking for any clues. Any 
suggestions or clues on where to look for this to see what can be 
causing a slowdown like this would be greatly appreciated.


Thanks,
- Brian F



Re: [GENERAL] PostGIS in a commercial project

2011-10-27 Thread Martijn van Oosterhout
On Tue, Oct 25, 2011 at 01:41:17PM +0200, Thomas Kellerer wrote:
> Thank you very much for the detailed explanation.
> 
> I always have a hard time to understand the GPL especially the
> dividing line between "using", "linking" and creating a derived work.

That's because the GPL does not get to define those terms. They are
defined by copyright law, the licence does not get to choose what is a
derived work and what isn't.  The FSF is of the opinion that anything
linked to a GPL library is a derived work, but that isn't true in all
cases (libedit vs libreadline is one of those borderline cases).

I note in the OP's case they are relying on the customer to install
PostGIS.  The GPL only applies to *redistribution*, not usage.  So if
you're not supplying your customers with PostGIS then the fact that
it's GPL seems completely irrelevant.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Are pg_xlog/* files necessary for PITR?

2011-10-27 Thread Venkat Balaji
On Thu, Oct 27, 2011 at 7:57 PM, rihad  wrote:

> Hi, I'm backing up the entire server directory from time to time. pg_xlog/
> directory containing WAL files is pretty heavy (wal_level=archive). Can I
> exclude it from the regular tar archive?
>

The best would be to perform "pg_switch_xlog()" and take a backup excluding
pg_xlog.

To recover the last moment TXNs, you might need pg_xlog (depends on when you
would be recovering). pg_switch_xlog() will reduce the dependency on pg_xlog
files to a greater extent.
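The switch Venkat describes is a one-liner (function name as of the 8.x series; it was later renamed pg_switch_wal() in PostgreSQL 10):

```sql
-- Close out the current WAL segment so the archiver can ship it,
-- then take the base backup excluding pg_xlog:
SELECT pg_switch_xlog();
```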

>
> #!/bin/sh
>
> renice 20 $$ 2>/dev/null
> pgsql -U pgsql -q -c "CHECKPOINT" postgres # speed up pg_start_backup()
>

pg_start_backup() performs a checkpoint and ensures that all the data till
that particular checkpoint and TXN id will be backed up (or marked as needed
for data consistency while restoring and recovering).


> pgsql -U pgsql -q -c "select pg_start_backup('sol')" postgres
> tar -cjf - /db 2>/dev/null | ssh -q -i ~pgsql/.ssh/id_rsa -p 2022 -c
> blowfish dbarchive@10.0.0.1 'cat > db.tbz'
> pgsql -U pgsql -q -c "select pg_stop_backup()" postgres
> sleep 60 #wait for new WAL backups to appear
> echo 'ssh -q dbarchive@10.0.0.1 ./post-backup.sh' | su -m pgsql
>
>
> I want to change tar invocation to be: tar -cjf --exclude 'db/pg_xlog/*'
> ...
>
> Will there be enough data in case of recovery? (May God forbid... )))
>

But, all the WAL Archives between backup start time and end time must be
backed up. They are needed at any cost for the database to be consistent and
the recovery to be smooth.

Recovering to any point-in-time purely depends on your backup strategy.

Thanks
VB


Re: [GENERAL] matching against a list of regexp?

2011-10-27 Thread Richard Broersma
On Thu, Oct 27, 2011 at 9:18 AM, Gauthier, Dave  wrote:
> I need to be able to select all records with a col value that matches any of
> a list of regexp.  Sort of like...
>
> select a,b,c from foo where d ~ ('^xyz','blah','shrug$');

WHERE d ~ '^xyz|blah|shrug$'
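For readers who want to keep the patterns as a separate list rather than folding them into one regexp, the same predicate can also be written with ANY over an array (a sketch using the example names from this thread):

```sql
-- Single pattern: alternation binds looser than the anchors, so ^ and $
-- still apply only to their own branches, matching the intent.
SELECT a, b, c FROM foo WHERE d ~ '^xyz|blah|shrug$';

-- Equivalent form, keeping the regexps as a list:
SELECT a, b, c FROM foo WHERE d ~ ANY (ARRAY['^xyz', 'blah', 'shrug$']);
```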


-- 
Regards,
Richard Broersma Jr.



[GENERAL] matching against a list of regexp?

2011-10-27 Thread Gauthier, Dave
Hi:

I need to be able to select all records with a col value that matches any of a 
list of regexp.  Sort of like...

select a,b,c from foo where d ~ ('^xyz','blah','shrug$');

Does anyone know the right syntax for this?

Thanks!


Re: [GENERAL] WAL file size vs. data file size

2011-10-27 Thread Ben Chobot
On Oct 27, 2011, at 8:44 AM, Tom Lane wrote:

> Ben Chobot  writes:
>> Today I tried to restore a 70GB database with the standard "pg_dump -h 
>> old_server <…> | psql -h new_server <…>" method. I had 100GB set aside for 
>> WAL files, which I figured surely would be enough, because all of the data, 
>> including indices, is only 70GB. So I was a bit surprised when the restore 
>> hung mid-way because my pg_xlogs directory ran out of space. 
> 
>> Is it expected that WAL files are less dense than data files?
> 
> Yes, that's not particularly surprising ... but how come they weren't
> getting recycled?  Perhaps you had configured WAL archiving but it was
> broken?

It's because I'm archiving wal files into Amazon's S3, which is slooow. 
PG is recycling as fast as it can, but when a few MB of COPY rows seem to 
balloon up to a few hundred MB of WAL files, it has a lot to archive before it 
can recycle. It'll be fine for steady state but it looks like it's just going 
to be a waste for this initial load.

What's the expected density ratio? I was always under the impression it would 
be about 1:1 when doing things like COPY, and have never seen anything to the 
contrary. 


Re: [GENERAL] WAL file size vs. data file size

2011-10-27 Thread Tom Lane
Ben Chobot  writes:
> Today I tried to restore a 70GB database with the standard "pg_dump -h 
> old_server <…> | psql -h new_server <…>" method. I had 100GB set aside for 
> WAL files, which I figured surely would be enough, because all of the data, 
> including indices, is only 70GB. So I was a bit surprised when the restore 
> hung mid-way because my pg_xlogs directory ran out of space. 

> Is it expected that WAL files are less dense than data files?

Yes, that's not particularly surprising ... but how come they weren't
getting recycled?  Perhaps you had configured WAL archiving but it was
broken?

regards, tom lane



[GENERAL] Are pg_xlog/* files necessary for PITR?

2011-10-27 Thread rihad
Hi, I'm backing up the entire server directory from time to time. 
pg_xlog/ directory containing WAL files is pretty heavy 
(wal_level=archive). Can I exclude it from the regular tar archive?



#!/bin/sh

renice 20 $$ 2>/dev/null
pgsql -U pgsql -q -c "CHECKPOINT" postgres # speed up pg_start_backup()
pgsql -U pgsql -q -c "select pg_start_backup('sol')" postgres
tar -cjf - /db 2>/dev/null | ssh -q -i ~pgsql/.ssh/id_rsa -p 2022 -c 
blowfish dbarchive@10.0.0.1 'cat > db.tbz'

pgsql -U pgsql -q -c "select pg_stop_backup()" postgres
sleep 60 #wait for new WAL backups to appear
echo 'ssh -q dbarchive@10.0.0.1 ./post-backup.sh' | su -m pgsql


I want to change tar invocation to be: tar -cjf --exclude 'db/pg_xlog/*' ...

Will there be enough data in case of recovery? (May God forbid... )))



[GENERAL] Getting X coordinate from a point(lseg), btw i read the man page about points.

2011-10-27 Thread Ing.Edmundo.Robles.Lopez

Hi, the manual page about geometric operations says:

It is possible to access the two component numbers of a  point  as though it 
were an array with indices 0 and 1. For example, if  t.p  is a  point  column 
then  SELECT p[0] FROM t  retrieves the X coordinate and  UPDATE t SET p[1] = 
...  changes the Y coordinate. In the same way, a value of type  box  or  lseg  
can be treated as an array of two  point  values.

1st, I have a field:

Column   | Type  |   Modifiers
---+---+---
 id  | integer   | not null
 info   | lseg  |

After reading the above, I tried:

select info from table limit 1;

                        info
----------------------------------------------------
 [(647753.125,2825633.75),(647738.8125,2825626.75)]

The value I want to get is 647753.125, so I tried:

select info[0] from table limit 1;

          info
-------------------------
 (647753.125,2825633.75)

I still want to get 647753.125, so I did:

select info[0][0] from table limit 1;

 info
------

(1 row)

But nothing appears, like a NULL.

Then I did:

select point(info[0])[0] from table limit 1;

ERROR:  syntax error at or near "["
LINE 1: select point(info[0])[0]  from table limit 1;

and finally I wrote this mail :-)
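One untested sketch worth trying (the thread shows no reply, so this is a guess, not a confirmed answer — `tbl` stands in for the real table name): the subscript on an lseg yields a point, and wrapping that in parentheses should let the parser apply a second subscript to it:

```sql
-- info[0] yields a point; parenthesizing it allows indexing the point:
SELECT (info[0])[0] FROM tbl LIMIT 1;   -- should yield the X coordinate
```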

Regards.

The content of this email and its attached files are private and confidential 
and intended exclusively for the use of the individual or entity to which they 
are addressed. The retransmission, dissemination, or any other use of this 
information other than by the intended recipient is prohibited.  If you have 
received this email in error please delete it and notify the sender.  The 
company cannot be held liable for unauthorized electronic transmissions or 
communications, nor for those emitted by non-company individuals and entities.


[GENERAL] How are PGRES_FATAL_ERROR raised?????

2011-10-27 Thread Ing.Edmundo.Robles.Lopez

Hi!

I would like to know the conditions or reasons under which Postgres sends 
a PGRES_FATAL_ERROR.



[GENERAL] pgAgent and encoding

2011-10-27 Thread pasman pasmański
Hi.

I can't set proper encoding for pgAgent.
I have two databases: postgres and www.
postgres is encoded in UTF8, www is encoded in WIN1250.

When I run pgAgent's task in the www database, it fails (message: a char
can't be converted to UTF8):


2011-10-27 14:50:29 CEST [nieznany]  1.COPY
 ERROR:  column "KodB³êdu" does not exist at character 80
2011-10-27 14:50:29 CEST [nieznany]  2.COPY
 STATEMENT:  COPY (

SELECT

to_char("DataPliku",'MM') AS "MiesiÂąc zwrotĂłw",

"KodB³êdu","NKA","NTA",

sum(case when "NRB" like '070%' then null else 1 end) as "CDR",

sum(case when "NRB" like '070%' then 1 end) as "CDR_070",

array_agg(distinct case when "NRB" like '070%' then null else 
"ID
Kobat" end) AS "RecNR",

array_agg(distinct case when "NRB" like '070%' then "ID Kobat" 
end)
AS "RecNR_070"

FROM "Bladpol2"

WHERE "KodB³êdu" = '61' AND to_char("DataPliku",'MM') like
'${MONTH_1}'

GROUP BY to_char("DataPliku",'MM'), "KodB³êdu", "NKA", "NTA"

ORDER BY "NKA", "NTA", Min("DataPliku")

)

TO 'e:\raport_61.csv' CSV HEADER DELIMITER ';'


2011-10-27 14:50:29 CEST [nieznany]  1.idle
 ERROR:  character 0x83 of encoding "WIN1250" has no equivalent in "UTF8"



How to set the correct encoding ?



pasman



Re: [GENERAL] Saving score of 3 players into a table

2011-10-27 Thread David Johnston
Fair enough.  But look in the SQL Commands section under SELECT (FROM clause) 
as well, as that gives you the syntax and meaning and not just an overview of 
the concept.

David J.


On Oct 27, 2011, at 8:27, Alexander Farber  wrote:

> The PostgreSQL docs are unfortunately scarce on JOINs
> http://www.postgresql.org/docs/8.4/static/tutorial-join.html
> 
> I've never seen a JOIN producing several rows instead
> of columns before Michael suggested it in this thread
> 



Re: [GENERAL] Saving score of 3 players into a table

2011-10-27 Thread Michael Glaesemann

On Oct 27, 2011, at 7:21, Alexander Farber wrote:

> Thank you Michael and others -
> 
> On Wed, Oct 26, 2011 at 11:11 PM, Michael Glaesemann
>  wrote:
>> Get games for a particular user:
>> 
>> SELECT g.gid, g.rounds, g.finished
>>  FROM pref_games g
>>  JOIN pref_scores u USING (gid)
>>  WHERE u.id = :id;
>> 
>> Now, add the participants for those games
>> 
>> SELECT g.gid, g.rounds, g.finished,
>>   p.id, p.money, p.quit
>>  FROM pref_games g
>>  JOIN pref_scores u USING (gid)
>>  JOIN pref_scores p USING (gid)
>>  WHERE u.id = :id;
>> 
> 
> I don't know what kind of JOIN that is (above) - but it works well:

It's just a normal join. There's nothing special about it.

> but now I'm lost even more - how to JOIN this with
> the pref_users table containing first_name, city for each player:
> 
> #  select first_name, female, avatar, city
> from pref_users where id = 'DE9411';
> first_name | female |           avatar            |   city
> ------------+--------+-----------------------------+----------
> GRAF63     | f      | picture-9411-1299771547.jpg | ALCORCON
> 
> I'm trying:
> 
> # SELECT g.gid, g.rounds, g.finished,
>  p.id, p.money, p.quit,
>  i.first_name, i.avatar
> FROM pref_games g
> JOIN pref_scores u USING (gid)
> JOIN pref_scores p USING (gid)
> JOIN pref_users i USING (id)
> WHERE u.id = 'DE9411';
> 
> ERROR:  common column name "id" appears more than once in left table

There are two id's: u.id, and p.id. You need to specify which one you're 
joining on with i:

SELECT g.gid, g.rounds, g.finished,
 p.id, p.money, p.quit,
 i.first_name, i.avatar
  FROM pref_games g
  JOIN pref_scores u USING (gid)
  JOIN pref_scores p USING (gid)
  JOIN pref_users i ON i.id = p.id
  WHERE u.id = 'DE9411';

> Another try:
> 
> # SELECT g.gid, g.rounds, g.finished,
>  p.id, p.money, p.quit,
>  i.first_name, i.avatar
> FROM pref_games g, pref_users i
> JOIN pref_scores u USING (gid)
> JOIN pref_scores p USING (gid)
> WHERE u.id = 'DE9411' and p.id=i.id;
> 
> ERROR:  column "gid" specified in USING clause does not exist in left table

This is complaining about
  pref_users i
  JOIN pref_scores u USING (gid)

i doesn't have a gid column.

Looks like you could use some work on basic SQL. I recommend picking up a basic 
SQL book.

Michael Glaesemann
grzm seespotcode net






Re: [GENERAL] Saving score of 3 players into a table

2011-10-27 Thread Alexander Farber
The PostgreSQL docs are unfortunately scarce on JOINs
http://www.postgresql.org/docs/8.4/static/tutorial-join.html

I've never seen a JOIN producing several rows instead
of columns before Michael suggested it in this thread



Re: [GENERAL] Saving score of 3 players into a table

2011-10-27 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alexander Farber
Sent: Thursday, October 27, 2011 7:21 AM
Cc: pgsql-general
Subject: Re: [GENERAL] Saving score of 3 players into a table

Thank you Michael and others -

On Wed, Oct 26, 2011 at 11:11 PM, Michael Glaesemann 
wrote:
> Get games for a particular user:
>
> SELECT g.gid, g.rounds, g.finished
>  FROM pref_games g
>  JOIN pref_scores u USING (gid)
>  WHERE u.id = :id;
>
> Now, add the participants for those games
>
> SELECT g.gid, g.rounds, g.finished,
>       p.id, p.money, p.quit
>  FROM pref_games g
>  JOIN pref_scores u USING (gid)
>  JOIN pref_scores p USING (gid)
>  WHERE u.id = :id;
>

I don't know what kind of JOIN that is (above) - but it works well:

#  SELECT g.gid, g.rounds, g.finished,
  p.id, p.money, p.quit
 FROM pref_games g
 JOIN pref_scores u USING (gid)
 JOIN pref_scores p USING (gid)
 WHERE u.id = 'DE9411';
 gid  | rounds |          finished          |       id       | money | quit
------+--------+----------------------------+----------------+-------+------
   43 |     12 | 2011-10-26 14:57:54.045975 | OK510649006288 |  -240 | f
   43 |     12 | 2011-10-26 14:57:54.045975 | DE9411         |    64 | f
   43 |     12 | 2011-10-26 14:57:54.045975 | OK355993104857 |   176 | f
  159 |     19 | 2011-10-26 15:55:54.650444 | DE9396         |    70 | f
  159 |     19 | 2011-10-26 15:55:54.650444 | DE9411         |  -110 | f
  159 |     19 | 2011-10-26 15:55:54.650444 | OK5409550866   |    42 | f
  224 |     16 | 2011-10-26 16:27:20.996753 | DE9396         |     4 | f
  224 |     16 | 2011-10-26 16:27:20.996753 | DE9411         |    66 | f
  224 |     16 | 2011-10-26 16:27:20.996753 | OK5409550866   |   -70 | f
  297 |     20 | 2011-10-26 17:05:53.514124 | OK486555355432 |  -114 | f
  297 |     20 | 2011-10-26 17:05:53.514124 | DE9411         |   -36 | f
  297 |     20 | 2011-10-26 17:05:53.514124 | OK5409550866   |   148 | f
  385 |     20 | 2011-10-26 17:43:44.473597 | OK486555355432 |   245 | f
  385 |     20 | 2011-10-26 17:43:44.473597 | DE9411         |    29 | f
  385 |     20 | 2011-10-26 17:43:44.473597 | OK5409550866   |  -275 | f
  479 |     19 | 2011-10-26 18:26:05.00712  | OK486555355432 |    30 | f
  479 |     19 | 2011-10-26 18:26:05.00712  | DE9411         |   -40 | f
  479 |     19 | 2011-10-26 18:26:05.00712  | OK5409550866   |     8 | f

but now I'm lost even more - how to JOIN this with the pref_users table
containing first_name, city for each player:

#  select first_name, female, avatar, city from pref_users where id =
'DE9411';
 first_name | female |           avatar            |   city
------------+--------+-----------------------------+----------
 GRAF63     | f      | picture-9411-1299771547.jpg | ALCORCON

I'm trying:

# SELECT g.gid, g.rounds, g.finished,
  p.id, p.money, p.quit,
  i.first_name, i.avatar
 FROM pref_games g
 JOIN pref_scores u USING (gid)
 JOIN pref_scores p USING (gid)
 JOIN pref_users i USING (id)
 WHERE u.id = 'DE9411';

ERROR:  common column name "id" appears more than once in left table

Another try:

# SELECT g.gid, g.rounds, g.finished,
  p.id, p.money, p.quit,
  i.first_name, i.avatar
 FROM pref_games g, pref_users i
 JOIN pref_scores u USING (gid)
 JOIN pref_scores p USING (gid)
 WHERE u.id = 'DE9411' and p.id=i.id;

ERROR:  column "gid" specified in USING clause does not exist in left table

Regards
Alex

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


----- Original Message -----

A) Read the documentation on JOINs until you understand what is going on in
the first query (specifically, how "ON", "NATURAL", and "USING" relate to each
other and to the JOIN itself)
B) Avoid mixing "JOIN" syntax with multiple tables in the "FROM" clause
C) If you are getting ambiguity in columns, you either need to force a JOIN
order (using parentheses) OR revert to explicit "ON ()" clauses

Note, the "column gid ..." error above results because the planner is trying
to join pref_users and pref_scores, but pref_users does not have a gid
column to join on.  It's as if you wrote:

((pref_games JOIN (pref_users JOIN pref_scores)) JOIN pref_scores)
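Following (B) and (C), one way to rewrite the failing query with explicit ON
clauses is the sketch below, using the table and column names shown in the
thread:

```sql
-- Sketch: same query, but every join condition is spelled out with ON,
-- so the duplicated id/gid columns are never ambiguous.
SELECT g.gid, g.rounds, g.finished,
       p.id, p.money, p.quit,
       i.first_name, i.avatar
  FROM pref_games g
  JOIN pref_scores u ON u.gid = g.gid     -- the player we filter on
  JOIN pref_scores p ON p.gid = g.gid     -- all participants of those games
  JOIN pref_users  i ON i.id  = p.id      -- profile data for each participant
 WHERE u.id = 'DE9411';
```

Because each ON clause names both sides, there is no need for USING and no
dependence on join order.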

David J.




Re: [GENERAL] Saving score of 3 players into a table

2011-10-27 Thread Alexander Farber
Thank you Michael and others -

On Wed, Oct 26, 2011 at 11:11 PM, Michael Glaesemann wrote:
> Get games for a particular user:
>
> SELECT g.gid, g.rounds, g.finished
>  FROM pref_games g
>  JOIN pref_scores u USING (gid)
>  WHERE u.id = :id;
>
> Now, add the participants for those games
>
> SELECT g.gid, g.rounds, g.finished,
>       p.id, p.money, p.quit
>  FROM pref_games g
>  JOIN pref_scores u USING (gid)
>  JOIN pref_scores p USING (gid)
>  WHERE u.id = :id;
>

I don't know what kind of JOIN that is (above) - but it works well:

#  SELECT g.gid, g.rounds, g.finished,
  p.id, p.money, p.quit
 FROM pref_games g
 JOIN pref_scores u USING (gid)
 JOIN pref_scores p USING (gid)
 WHERE u.id = 'DE9411';
 gid  | rounds |          finished          |       id       | money | quit
------+--------+----------------------------+----------------+-------+------
   43 |     12 | 2011-10-26 14:57:54.045975 | OK510649006288 |  -240 | f
   43 |     12 | 2011-10-26 14:57:54.045975 | DE9411         |    64 | f
   43 |     12 | 2011-10-26 14:57:54.045975 | OK355993104857 |   176 | f
  159 |     19 | 2011-10-26 15:55:54.650444 | DE9396         |    70 | f
  159 |     19 | 2011-10-26 15:55:54.650444 | DE9411         |  -110 | f
  159 |     19 | 2011-10-26 15:55:54.650444 | OK5409550866   |    42 | f
  224 |     16 | 2011-10-26 16:27:20.996753 | DE9396         |     4 | f
  224 |     16 | 2011-10-26 16:27:20.996753 | DE9411         |    66 | f
  224 |     16 | 2011-10-26 16:27:20.996753 | OK5409550866   |   -70 | f
  297 |     20 | 2011-10-26 17:05:53.514124 | OK486555355432 |  -114 | f
  297 |     20 | 2011-10-26 17:05:53.514124 | DE9411         |   -36 | f
  297 |     20 | 2011-10-26 17:05:53.514124 | OK5409550866   |   148 | f
  385 |     20 | 2011-10-26 17:43:44.473597 | OK486555355432 |   245 | f
  385 |     20 | 2011-10-26 17:43:44.473597 | DE9411         |    29 | f
  385 |     20 | 2011-10-26 17:43:44.473597 | OK5409550866   |  -275 | f
  479 |     19 | 2011-10-26 18:26:05.00712  | OK486555355432 |    30 | f
  479 |     19 | 2011-10-26 18:26:05.00712  | DE9411         |   -40 | f
  479 |     19 | 2011-10-26 18:26:05.00712  | OK5409550866   |     8 | f

but now I'm lost even more - how to JOIN this with
the pref_users table containing first_name, city for each player:

#  select first_name, female, avatar, city
from pref_users where id = 'DE9411';
 first_name | female |           avatar            |   city
------------+--------+-----------------------------+----------
 GRAF63     | f      | picture-9411-1299771547.jpg | ALCORCON

I'm trying:

# SELECT g.gid, g.rounds, g.finished,
  p.id, p.money, p.quit,
  i.first_name, i.avatar
 FROM pref_games g
 JOIN pref_scores u USING (gid)
 JOIN pref_scores p USING (gid)
 JOIN pref_users i USING (id)
 WHERE u.id = 'DE9411';

ERROR:  common column name "id" appears more than once in left table

Another try:

# SELECT g.gid, g.rounds, g.finished,
  p.id, p.money, p.quit,
  i.first_name, i.avatar
 FROM pref_games g, pref_users i
 JOIN pref_scores u USING (gid)
 JOIN pref_scores p USING (gid)
 WHERE u.id = 'DE9411' and p.id=i.id;

ERROR:  column "gid" specified in USING clause does not exist in left table

Regards
Alex



Re: [GENERAL] pglesslog for Postgres 9.1.1

2011-10-27 Thread mailtolouis2020-postg...@yahoo.com
Hi,

I'm sorry, I'm not good at C. Can anyone help put together a patch or release a
new version for this?



Regards
Louis


>
>From: Tom Lane 
>To: "mailtolouis2020-postg...@yahoo.com" 
>Cc: Postgres 
>Sent: Wednesday, October 26, 2011 3:42 PM
>Subject: Re: [GENERAL] pglesslog for Postgres 9.1.1 
>
>"mailtolouis2020-postg...@yahoo.com" writes:
>> remove.c:182: error: ‘XLOG_GIN_INSERT’ undeclared (first use in this 
>> function)
>> remove.c:182: error: (Each undeclared identifier is reported only once
>> remove.c:182: error: for each function it appears in.)
>> remove.c:184: error: ‘XLOG_GIN_VACUUM_PAGE’ undeclared (first use in this 
>> function)
>> remove.c:186: error: ‘XLOG_GIN_DELETE_PAGE’ undeclared (first use in this 
>> function)
>
>That stuff got moved to gin_private.h in 9.1 ...
>
>            regards, tom lane
>
>
>
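Based on Tom's note, the fix in pglesslog's remove.c is most likely just an
include change. A hypothetical, untested sketch (header paths as in the
PostgreSQL source tree):

```c
/* remove.c, near the top: the XLOG_GIN_* constants moved from
 * access/gin.h to access/gin_private.h in PostgreSQL 9.1,
 * so include the right header depending on the server version. */
#if PG_VERSION_NUM >= 90100
#include "access/gin_private.h"   /* XLOG_GIN_INSERT etc. on 9.1+ */
#else
#include "access/gin.h"           /* pre-9.1 location */
#endif
```

Whether pglesslog needs further changes for the 9.1 WAL format is a separate
question; this only addresses the undeclared-identifier errors quoted above.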

Re: [GENERAL] GIN : Working with term positions

2011-10-27 Thread Cédric Villemain
2011/10/26 Yoann Moreau :
> On 21/10/11 12:23, Yoann Moreau wrote:
>>
>> Hello,
>> I'm using a GIN index for a text column on a big table. I use it to rank
>> the rows, but I also need to get the term positions for each document of a
>> subset of documents. I assume these positions are stored in the index,
>> because doc says positions can be used for cover density ranking and because
>> to_tsvector function gives them :
>>
>> select * from to_tsvector('I get lexemes and I get term positions.');
>>            to_tsvector
>> 
>>  'get':2,6 'lexem':3 'posit':8 'term':7
>>
>> I can get the term positions with to_tsvector but only by parsing the
>> result string, is there any more handy way ? Something like :
>> select * from term_and_positions('I get lexemes and I get term
>> positions.');
>>  term    | positions
>> -+---
>>   'get' |     {2,6}
>>  'lexem' |       {3}
>>
>>
>> Then, from the term positions, I need to get the character offset of these
>> term positions. I assume it is NOT stored in the GIN index. By character
>> offset I mean the character count from the beginning of the string to the term. For the
>> previous example it would be : 'get' --> {2,20}.
>>
>> I thought about using ts_headline to return the whole text with term
>> tagged and then parse it to compute the character offsets from the tags. But
>> this function is very slow, seems like it does not use the GIN index at all.
>> And I suppose it can't because there is no way to know from a term position
>> where its substring is in the text.
>>
>> Now I think the only solution is to make my own C function parsing the
>> text like to_tsvector does and counting terms AND characters read from the
>> beginning of the text to match them. I took a look at the code, and it does
>> not seem easy, because character offsets and string lengths are never
>> used by the parsetext function (ts_parse.c). If you have any other
>> suggestion, I would love to hear it!
>>
>> Regards, Yoann Moreau
>
> Hello again. I'm sorry, my need is actually a bit different from what I
> asked. I need to get the term positions using the GIN index when I query my
> text column, i.e. for a given term.
>
> For example for 2 rows of a 'docs' table with a text column 'text' :
> 'I get lexemes and I get term positions.'
> 'Did you get the positions ?'
>
> I'd need a function like this :
> select term_positions(text, 'get') from docs;
>  id_doc | positions
> -+---
>       1 |     {2,6}
>       2 |       {3}
>
> I know it can't be as simple as this, because the query would first need to
> be filtered with a WHERE using a tsquery and this can't be done in the
> function called like in my example. I suppose such a feature does not exist,
> but is there any way to get the positions of the matching terms when
> querying a GIN index ?
>
> The only possible way I imagine right now is to firstly filter the rows with
> "to_tsvector(text) @@ to_tsquery('get')" and then call "to_tsvector(text)"
> for the n highest ranked rows, parsing the string returned by the function
> to find the term and its positions. But it would be far more efficient to get
> them directly when the terms are matched with the @@ operator. I
> know it would be impossible if the query contained more than 1 term, because it
> can't return 2 arrays of positions in one row (i.e. for one document), but
> for now I'm trying to do this for a single query term.
> Any help or advice would be welcome !
>
> By the way, I have done the C function computing the character offset of a
> given term position for a text column. It's not done in a good way, but it's
> more a topic for hackers postgresql list.
>

Don't forget, once you succeed, that word positions are affected by the
words removed as stop words.
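For the "more handy way" question above, one possibility (a sketch, not a
built-in API) is to split the tsvector's text representation in SQL instead of
in client code; regexp_split_to_table and split_part have been available since
8.4:

```sql
-- Sketch: pull (term, positions) pairs out of a tsvector's text form.
-- Assumes lexemes contain no embedded spaces, colons, or quotes.
SELECT btrim(split_part(item, ':', 1), '''')           AS term,
       ('{' || split_part(item, ':', 2) || '}')::int[] AS positions
  FROM regexp_split_to_table(
         to_tsvector('I get lexemes and I get term positions.')::text,
         ' ') AS item;
```

This still recomputes to_tsvector per row rather than reading positions out of
the GIN index, so it only helps with the parsing step, not the efficiency
concern.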

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Re: [GENERAL] Can someone help explain what's going on from the attached logs?

2011-10-27 Thread Simon Riggs
On Wed, Oct 26, 2011 at 6:41 PM, Chris Redekop  wrote:
>> Caveat #2 applies here
>>
>> http://developer.postgresql.org/pgdocs/postgres/hot-standby.html#HOT-STANDBY-CAVEATS
>>
>> The consistent state is delayed until your long running transactions
>> end, which is workload dependent but transient.
>
> I'm not quite sure how this correlates to what I'm seeing in 9.1.1.  When
> attempting to start a hotstandby while the primary is under load it seems to
> get stuck in that 'starting up' mode even when there are no open
> transactions.  If I get it into this state, and then remove all the load
> from the primary it still will not finish starting up.  If I select from
> pg_stat_activity on the primary it shows a couple connections, but they all
> have null 'xact_start's and  'current_query's.  Even if I then kill
> all the connections to the primary (via pg_terminate_backend) the hotstandby
> still will not finish starting up.  I would assume there can't be any
> transactions in progress if there are no connections to the primary.
>  Attempting to restart the hotstandby when in this state produces the same
> result.  If there is a transaction in progress for the duration of the
> backup (or something like that) can it cause it to get into this state?

There's nothing in the log you've shown to indicate any problems.

Yes, when that caveat applies we may wait for some time to find a good
starting point. That could be anywhere from seconds to hours,
depending upon the exact load on the master, but shouldn't be any
longer than your longest running write transaction executing at that
time.
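To see whether such a transaction is still open on the master, a query along
these lines can help (a sketch; procpid and current_query are the column names
used through 9.1):

```sql
-- Oldest open transactions on the master; these delay the standby's
-- consistent starting point.
SELECT procpid, usename, xact_start, current_query
  FROM pg_stat_activity
 WHERE xact_start IS NOT NULL
 ORDER BY xact_start
 LIMIT 5;
```

If this returns no rows while the standby still reports "starting up", the
wait is for a suitable checkpoint rather than a transaction.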

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] specifying multiple ldapserver in pg_hba.conf

2011-10-27 Thread Magnus Hagander
On Wed, Oct 26, 2011 at 23:00, Darin Perusich  wrote:
> Are you able to specify multiple ldapservers in pg_hba.conf and if so
> what is the format? I'd like to be able to build in some redundancy in case
> one of the ldap servers goes down.

This is unfortunately currently not possible. To do this, you need to
set up some IP level redundancy for your LDAP.
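In practice that means pointing the single ldapserver entry at a virtual IP or
load balancer in front of the LDAP servers. A hypothetical pg_hba.conf line
(host name, network, and DN parts are placeholders):

```
# pg_hba.conf sketch: one ldapserver entry, backed by an LDAP VIP/load balancer
host  all  all  10.0.0.0/8  ldap  ldapserver=ldap-vip.example.com ldapprefix="uid=" ldapsuffix=",ou=people,dc=example,dc=com"
```

Failover between the real LDAP servers then happens below PostgreSQL, at the
VIP/load-balancer level.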

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] PG 9.1.1 - availability of xslt_process()

2011-10-27 Thread Vibhor Kumar

On Oct 27, 2011, at 11:43 AM, Andrea Peri wrote:

> Hi,
> 
> I'm using the compiled version of PG 9.1.1 on win32.
> 
> Trying to call xslt_process(text,text) (the two-parameter version),
> 
> PG tells me it is not available.
> 
> The docs say that it is available only if PG was compiled with the libxslt
> lib.
> I see that the DLL is present in the lib folder,
> so I guess it should be available.
> 
> To verify whether it is available, I searched for that function in the list
> of functions with pgAdmin.
> But it is not there, and neither is the three-parameter version of
> xslt_process().
> 
> Perhaps I need to run some script to make it available?


The xslt_process function is part of the xml2 contrib module. Execute the
following to install xml2:

CREATE EXTENSION xml2;
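After installing the module, a quick sanity query can confirm the function is
present (it will only exist if the module was built with libxslt):

```sql
-- Should list xslt_process once the xml2 module is installed with XSLT support.
SELECT proname, pronargs
  FROM pg_proc
 WHERE proname = 'xslt_process';
```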

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Blog: http://vibhork.blogspot.com

