[GENERAL] explain analyze output with parallel workers - question about meaning of information for explain.depesz.com

2017-11-17 Thread hubert depesz lubaczewski
Hi,

Up until parallel execution, when a node in explain analyze showed
"loops=x" with x greater than 1, it meant that the "actual time" had to be
multiplied by loops to get the real time spent in the node.

For example, check step 13 in https://explain.depesz.com/s/gNBd

It shows a time of 3ms, but loops of 1873, so the actual total time is ~ 5600ms.
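
As a sanity check, the pre-parallel arithmetic is just a per-loop
multiplication (numbers taken from step 13 of the linked plan):

```sql
-- total node time = per-loop "actual time" * loops
SELECT 3 * 1873 AS total_ms;   -- 5619, i.e. ~5600 ms
```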

But with parallel execution this no longer seems to be the case.

For example:
https://explain.depesz.com/s/LTMp
or
https://explain.depesz.com/s/QHRi

It looks like the actual time is the real wall-clock time, and loops is
"number of workers + 1".

Is that really the case? For explain.depesz.com, when dealing with
Partial* and Parallel* nodes, should I use "loops=1" when calculating
exclusive/inclusive time? Always? For some other nodes too?

or am I missing something in here?

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgxn manager down

2017-09-12 Thread hubert depesz lubaczewski
On Tue, Sep 12, 2017 at 02:56:26PM +0200, Chris Travers wrote:
> Normally I would not email the general list over this but it has been over
> a day and the google group for pgxn seems low enough traffic I figure I
> would mention it here.
> 
> manager.pgxn.org is giving internal server errors (which means no new
> extensions can be released on the platform).
> 
> If folks are working on this, is there an ETA on a fix?
> 
> Is there anything I can do to help?

Hi,
looks like it's fixed.

It's hosted on my server, and being taken care of by David - you can
reach him on twitter via @pgxn.

As far as I can tell, new requests work fine.

Best regards,

depesz





Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread hubert depesz lubaczewski
On Wed, Feb 15, 2017 at 02:58:08PM +, Albe Laurenz wrote:
> Maybe the following answer can help:
> http://stackoverflow.com/a/39204667/6464308
> 
> I don't really know how stable that (undocumented) behaviour will be, though.

Yeah, I'd rather not depend on things like xids in (somewhat) production
code.

Best regards,

depesz





Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread hubert depesz lubaczewski
On Wed, Feb 15, 2017 at 06:44:09AM -0800, Adrian Klaver wrote:
> On 02/15/2017 06:27 AM, hubert depesz lubaczewski wrote:
> >On Wed, Feb 15, 2017 at 06:24:14AM -0800, Adrian Klaver wrote:
> >>On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:
> >>>Hi,
> >>>I have a function, in PostgreSQL 9.6, which does:
> >>>
> >>>INSERT INTO table () values (...)
> >>>ON CONFLICT DO UPDATE ...;
> >>>
> >>>The thing is that the function should return information whether the row
> >>>was modified, or created - and currently it seems that this is not
> >>>available. Or am I missing something?
> >>
> >>All I can think of is to use:
> >>
> >>RETURNING pk
> >>
> >>and see if that changed or not.
> >
> >Well, this wouldn't work for me as pkey will not change.
> 
> Alright you lost me. If the pkey does not change then how do you get new
> rows(INSERT)?
> 
> >
> >For my particular case, I have this table
> >create table t (
> >a_from text,
> >a_to text,
> >created timestamptz,
> >updated timestamptz,
> >primary key (a_from, a_to)
> >);

Well, if I do:

insert into t (a_from, a_to)

and use some values that do not exist in the table, then an insert
happens - but I'm not sure what you mean by "primary key change" in
this case.

On the other hand, if the from/to pair already exists in the table, then an
update happens (of the "updated" column) - and then there is definitely no
pkey change.

Best regards,

depesz





Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread hubert depesz lubaczewski
On Wed, Feb 15, 2017 at 06:24:14AM -0800, Adrian Klaver wrote:
> On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:
> >Hi,
> >I have a function, in PostgreSQL 9.6, which does:
> >
> >INSERT INTO table () values (...)
> >ON CONFLICT DO UPDATE ...;
> >
> >The thing is that the function should return information whether the row
> >was modified, or created - and currently it seems that this is not
> >available. Or am I missing something?
> 
> All I can think of is to use:
> 
> RETURNING pk
> 
> and see if that changed or not.

Well, this wouldn't work for me as pkey will not change.

For my particular case, I have this table
create table t (
a_from text,
a_to text,
created timestamptz,
updated timestamptz,
primary key (a_from, a_to)
);

where created and updated are set (and kept correct) with triggers. And
in my insert/update, if the row exists, I just set updated to now().

So, for my particular case, I can, and do, compare whether created is the
same as updated; if they differ, it was an update, otherwise an insert.
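
As a sketch of that workaround (assuming, as described above, that the
triggers set both created and updated to now() on insert - the values here
are made up):

```sql
INSERT INTO t (a_from, a_to)
VALUES ('x', 'y')
ON CONFLICT (a_from, a_to)
    DO UPDATE SET updated = now()
RETURNING (created = updated) AS was_insert;
```

On a fresh row both timestamps come from the same transaction, so they are
equal; on the conflict path only updated moves forward.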

But it would be really good to get some proper support for
differentiating the flow of such queries...

depesz





[GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread hubert depesz lubaczewski
Hi,
I have a function, in PostgreSQL 9.6, which does:

INSERT INTO table () values (...)
ON CONFLICT DO UPDATE ...;

The thing is that the function should return information whether the row
was modified, or created - and currently it seems that this is not
available. Or am I missing something?

Best regards,

depesz





Re: [GENERAL] WAL segmentes names in wrong order?

2016-11-03 Thread hubert depesz lubaczewski
On Thu, Nov 03, 2016 at 11:28:57AM +0100, Tom DalPozzo wrote:
> What am I missing?

David already explained, but you might also want to read:
https://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] libpq heartbeat

2016-10-27 Thread hubert depesz lubaczewski
On Thu, Oct 27, 2016 at 04:43:55PM +0200, Marcin Giedz wrote:
> Hello all 
> 
> I'm wondering if there is any natural implementation of heartbeats in
> libpq library? We've been facing specially in firewall env
> occasionally session drops between client and server. Extending
> session timeout directly on firewalls is not possible (company rules).
> Perhaps there is such mechanism "somewhere" to avoid reimplementation
> ? 

Check the tcp_keepalives_* settings in postgresql.conf.
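
For example (the values here are purely illustrative - tune them to the
firewall's idle timeout):

```sql
-- Make the server probe idle connections often enough that the firewall
-- keeps seeing traffic (settable per session, or cluster-wide in the config):
SET tcp_keepalives_idle = 60;      -- seconds of idleness before first probe
SET tcp_keepalives_interval = 10;  -- seconds between probes
SET tcp_keepalives_count = 5;      -- unanswered probes before dropping
```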

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Locking question

2016-10-26 Thread hubert depesz lubaczewski
On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> Hi all
> 
> I am designing an inventory application, and I want to ensure that the stock 
> level of any item cannot go negative.
> 
> Instead of maintaining a running balance per item, I store the original 
> quantities received in one table (call it ‘inv_rec’), and any amounts removed 
> in another table (call it ‘inv_alloc’).
> 
> CREATE TABLE inv_rec
> (row_id SERIAL PRIMARY KEY,
> product_id INT REFERENCES inv_products,
> qty INT);
> 
> CREATE TABLE inv_alloc
> (row_id SERIAL PRIMARY KEY,
> rec_id INT REFERENCES inv_rec,
> qty INT);
> 
> To get the balance of a particular item -
> 
> SELECT SUM(
> a.qty + COALESCE(
>   (SELECT SUM(b.qty) FROM inv_alloc b
>   WHERE b.rec_id = a.row_id), 0))
> FROM inv_rec a
> WHERE a.product_id = 99;
> 
> To remove a quantity from a particular item -
> 
> INSERT INTO inv_alloc (rec_id, qty)
>   VALUES (23, -1);
> Is this the correct approach, or am I missing something?

What I would do is add a trigger on inv_alloc, so that when you
insert/update/delete a row there, it updates the appropriate row in
inv_rec by the correct amount.

Then, I'd add a check constraint on inv_rec to make sure qty is never < 0.
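
A minimal sketch of that idea (it assumes inv_rec grows a qty_remaining
column tracking the unallocated amount; all names here are made up):

```sql
ALTER TABLE inv_rec ADD COLUMN qty_remaining INT;
ALTER TABLE inv_rec ADD CONSTRAINT qty_remaining_not_negative
    CHECK (qty_remaining >= 0);

CREATE FUNCTION inv_alloc_sync() RETURNS trigger AS $$
BEGIN
    -- alloc rows carry negative qty for removals, as in the example above
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        UPDATE inv_rec SET qty_remaining = qty_remaining - OLD.qty
         WHERE row_id = OLD.rec_id;
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        UPDATE inv_rec SET qty_remaining = qty_remaining + NEW.qty
         WHERE row_id = NEW.rec_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH ROW
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER inv_alloc_sync
    AFTER INSERT OR UPDATE OR DELETE ON inv_alloc
    FOR EACH ROW EXECUTE PROCEDURE inv_alloc_sync();
```

The check constraint then rejects any allocation that would drive the
remaining quantity negative.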

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Custom SQL function does not like IF-statement

2016-09-26 Thread hubert depesz lubaczewski
On Mon, Sep 26, 2016 at 08:22:11PM +0200, Alexander Farber wrote:
> ERROR:  syntax error at or near "IF"
> LINE 11: IF LENGTH(TRIM(in_msg)) > 0 AND
>  ^

Of course it doesn't like it, because SQL doesn't have an "IF" command.

If you want to use such syntax, you have to use the plpgsql language for
the function.
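
For instance, the same conditional compiles fine once the body is plpgsql
(function and parameter names here are made up):

```sql
CREATE FUNCTION has_content(in_msg text) RETURNS boolean AS $$
BEGIN
    IF LENGTH(TRIM(in_msg)) > 0 THEN   -- IF exists in plpgsql, not in SQL
        RETURN true;
    END IF;
    RETURN false;
END;
$$ LANGUAGE plpgsql;
```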

Best regards,

depesz





Re: [GENERAL] Monitor pg_xlog size via SQL with postgres 9.4

2016-09-22 Thread hubert depesz lubaczewski
On Thu, Sep 22, 2016 at 02:23:20PM +0200, Sylvain Marechal wrote:
> is there a way to monitor the size of the pg_xlog directory in SQL? The

Assuming you have superuser privileges, the following will most likely work:

select sum( (f).size ) from (select pg_stat_file('pg_xlog/' || pg_ls_dir) as f 
from pg_ls_dir('pg_xlog/')) x where not (f).isdir;

Best regards,

depesz





[GENERAL] Very slow queries to stats on 9.3

2016-09-09 Thread hubert depesz lubaczewski
Hi,
So, we have this situation, where there is cluster with 5 smallish
databases:
$ select oid, pg_database_size(oid) from pg_database;
  oid  | pg_database_size 
---+--
 1 |  6752440
 12035 |  6760632
 16428 |  59779475640
 16427 |294947000
 12030 |  6455812
(5 rows)

But the 16428 database has quite a lot of objects:

$ select count(*) from pg_class;
  count  
-
 1032761
(1 row)

This is reflected in stats:

# ls -l $( su -c "psql -qAtX -c 'show stats_temp_directory'" - postgres )
total 127452
-rw--- 1 postgres postgres  4230 Sep  9 12:02 db_0.stat
-rw--- 1 postgres postgres 20792 Sep  9 12:02 db_12035.stat
-rw--- 1 postgres postgres 30932 Sep  9 12:02 db_16427.stat
-rw--- 1 postgres postgres 130413431 Sep  9 12:03 db_16428.stat
-rw--- 1 postgres postgres 20792 Sep  9 12:02 db_1.stat
-rw--- 1 postgres postgres  1026 Sep  9 12:03 global.stat

This directory is on tmpfs (ramdisk).

And getting any kind of stats takes non-trivial time:

$ explain analyze SELECT sum(xact_commit+xact_rollback) as txs FROM pg_stat_database;
                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1.12..1.13 rows=1 width=4) (actual time=460.917..460.918 rows=1 loops=1)
   ->  Seq Scan on pg_database d  (cost=0.00..1.06 rows=6 width=4) (actual time=0.003..0.015 rows=5 loops=1)
 Total runtime: 460.946 ms
(3 rows)

This is repeatable, and a quick strace shows that when dealing with stats,
pg apparently has to read the stat files in whole, parse them, and return
the results.


Is there anything that could be done, aside from dropping 90% of the
objects, to make stats-related queries faster?

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] qustion about pgaudit

2016-09-09 Thread hubert depesz lubaczewski
On Thu, Sep 08, 2016 at 03:19:59PM +, PICCI Guillermo  SNOOP wrote:
> Hi,
> we are trying to install pgaudit in order to check its funcionality, and we'd 
> like to know if there is any available rpm to do this.

As far as a quick Google search shows, pgaudit is a tool from 2ndQuadrant,
available here:
https://github.com/2ndQuadrant/pgaudit
so it would probably be better to ask them, possibly using
https://github.com/2ndQuadrant/pgaudit/issues
than to mail the *PostgreSQL* mailing list.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread hubert depesz lubaczewski
On Wed, Sep 07, 2016 at 03:05:38PM +0200, Alexander Farber wrote:
> ERROR:  syntax error at or near "in_until"
> LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL in_until,
>^
> Is there please a better way here?

Why don't you simply make the in_until parameter an interval? What is the
point of accepting varchar, if you're unconditionally casting it to
interval anyway?
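
A sketch (names are made up) - take interval directly, or cast explicitly,
since "INTERVAL in_until" is not valid syntax:

```sql
CREATE FUNCTION ban_until(in_until interval) RETURNS timestamptz AS $$
    SELECT CURRENT_TIMESTAMP + in_until;   -- no cast needed at all
$$ LANGUAGE sql;

-- Callers can still pass a string literal; it is converted on input:
SELECT ban_until('2 days');

-- If the parameter must stay varchar, cast it instead:
--   banned_until = CURRENT_TIMESTAMP + in_until::interval
```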

Having said that, the error looks rather strange for this.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Any work on better parallelization of pg_dump?

2016-08-29 Thread hubert depesz lubaczewski
On Mon, Aug 29, 2016 at 01:13:17PM -0300, Alvaro Herrera wrote:
> > > This happens on Pg 9.5. Are there any plans to make getting schema
> > > faster for such cases? Either by parallelization, or at least by getting
> > > schema for all tables "at once", and having pg_dump "sort it out",
> > > instead of getting schema for each table separately?
> Depesz: I suggest you start coding ASAP.

If only I knew C :(

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




[GENERAL] Any work on better parallelization of pg_dump?

2016-08-29 Thread hubert depesz lubaczewski
Hi,
we have rather uncommon case - DB with ~ 50GB of data, but this is
spread across ~ 8 tables.

Running pg_dump -Fd -j xx dumps in parallel, but only the data; MOST of
the time is spent on queries that run sequentially and that, as far as I
can tell, get the schema of tables and sequence values.

This happens on Pg 9.5. Are there any plans to make getting schema
faster for such cases? Either by parallelization, or at least by getting
schema for all tables "at once", and having pg_dump "sort it out",
instead of getting schema for each table separately?

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] regexp_replace double quote

2016-08-15 Thread hubert depesz lubaczewski
On Mon, Aug 15, 2016 at 06:27:06PM +0500, Михаил wrote:
> I need to escape double quotes only:
> test=# select regexp_replace('"""{Performer,"Boomwacker ""a"" 
> Recording""}"""', '([^"])"{2}([^"])', '\1\"\2', 'g');
>  regexp_replace
> -
>  """{Performer,"Boomwacker \"a"" Recording\"}"""
> 
> This is unexpected result.
> 
> But when added one symbol to ""a"" the result is right:
> test=# select regexp_replace('"""{Performer,"Boomwacker ""a1"" 
> Recording""}"""', '([^"])"{2}([^"])', '\1\"\2', 'g');
>   regexp_replace
> --
>  """{Performer,"Boomwacker \"a1\" Recording\"}"""

This is because when the first "" is matched, the "a" after it gets
assigned to \2, and is thus already "consumed" - it can't be part of the
match for the second "".

What will solve the problem is to use lookahead, like:
$ select regexp_replace('"""{Performer,"Boomwacker ""a"" Recording""}"""', 
'([^"])"{2}(?=[^"])', '\1\"', 'g');
 regexp_replace  
-
 """{Performer,"Boomwacker \"a\" Recording\"}"""
(1 row)

because then the part inside (?=...) is not "consumed", and can be part of
the next match.

Not sure if I'm clear, but hopefully you'll understand what I'm trying to
explain :)

Best regards,

depesz





Re: [GENERAL] Slow SQL?

2016-07-12 Thread hubert depesz lubaczewski
On Tue, Jul 12, 2016 at 10:23:24AM +0200, Bjørn T Johansen wrote:
> I am trying to move a small system from Oracle to PostgreSQL and
> I have come upon a sql that runs really slow compared to on the Oracle
> database and I am not able to interpret why this is slow.

I loaded your explain analyze to https://explain.depesz.com/, as:
https://explain.depesz.com/s/iXK

as you can see there, the problem is that you made 280 thousand checks
for "sed_uttak y", which seems to be related to this part:


> Select a.status, a.plass, a.navn, a.avlsnr,
>date_part('day',(now() - s.dato)) dato_diff, v.tekst, 
> COALESCE(a.avlsverdi,0)
> From   sed_uttak s, sem_avlsverdi a, semin_vare v
> where a.aktiv = 1
> Ands.dato  = (Select Max(y.dato)
>   From sed_uttak y
>   Where y.avlsnr = s.avlsnr)

From what I understand, you're doing this to work on the newest record
from sed_uttak for each avlsnr.

What is the row count of the table, and how many different avlsnr values
are there?

You might want to do something like:

with s as (
select distinct on (avlsnr) *
from sed_uttak
order by avlsnr desc, dato desc
)

and then use "s" instead of sed_uttak, and get rid of the s.dato
= (select max) checks.
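
Roughly like this (a sketch only - the join conditions to the other tables
are not fully visible in the quoted query, so they are assumed here):

```sql
WITH s AS (
    SELECT DISTINCT ON (avlsnr) *
    FROM sed_uttak
    ORDER BY avlsnr DESC, dato DESC           -- keeps the newest row per avlsnr
)
SELECT a.status, a.plass, a.navn, a.avlsnr,
       date_part('day', now() - s.dato) AS dato_diff,
       COALESCE(a.avlsverdi, 0)
FROM s
JOIN sem_avlsverdi a ON a.avlsnr = s.avlsnr   -- assumed join condition
WHERE a.aktiv = 1;
```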

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] swarm of processes in BIND state?

2016-05-31 Thread hubert depesz lubaczewski
On Mon, May 30, 2016 at 11:05:17AM -0700, Jeff Janes wrote:
> So my theory is that you deleted a huge number of entries off from
> either end of the index, that transaction committed, and that commit
> became visible to all.  Planning a mergejoin needs to dig through all
> those tuples to probe the true end-point.  On master, the index
> entries quickly get marked as LP_DEAD so future probes don't have to
> do all that work, but on the replicas those index hint bits are, for
> some unknown to me reason, not getting set.  So it has to scour the
> all the heap pages which might have the smallest/largest tuple, on
> every planning cycle, and that list of pages is very large leading to
> occasional IO stalls.

This I get, but why was the same backend reading data for all 3 largest
tables, while I know for sure (well, 99.9% sure) that no query touches
all of them?

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] swarm of processes in BIND state?

2016-05-28 Thread hubert depesz lubaczewski
On Sat, May 28, 2016 at 02:15:07PM -0400, Tom Lane wrote:
> hubert depesz lubaczewski <dep...@depesz.com> writes:
> > Does that help us in any way?
> 
> Not terribly.  That confirms that the processes are contending for a
> spinlock, but we can't tell which one.  Can you collect a few stack traces
> from those processes?

Sure, tarballed them into https://depesz.com/various/all.bt.logs.tar.gz.

Best regards,

depesz





Re: [GENERAL] swarm of processes in BIND state?

2016-05-28 Thread hubert depesz lubaczewski
On Sat, May 28, 2016 at 10:32:15AM -0700, Jeff Janes wrote:
> > any clues on where to start diagnosing it?
> 
> I'd start by using strace (with -y -ttt -T) on one of the processes
> and see what it is doing.  A lot of IO, and one what file?  A lot of
> semop's?

So, I did:

sudo strace -o bad.log -y -ttt -T -p $( ps uwwf -u postgres | grep BIND | awk 
'{print $2}' | head -n1 )
and killed it after 10 seconds, more or less. Results:

$ wc -l bad.log
6075 bad.log
$ grep -c semop bad.log
6018

The rest were reads, seeks, and single open to these files:
$ grep -v semop bad.log | grep -oE '/16421/[0-9.]*' | sort | uniq -c
2 /16421/3062403236.20
2 /16421/3062403236.8
25 /16421/3222944583.49
28 /16421/3251043620.60

Which are:
$ select oid::regclass from pg_class  where relfilenode in (3062403236, 
3222944583, 3251043620);
   oid
--
 app_schema.s_table
 app_schema.v_table
 app_schema.m_table
(3 rows)

which are 3 largest tables there are. But, logs dont show any queries
that would touch all 3 of them.

> If that wasn't informative, I'd attach to one of the processes with
> the gdb debugger and get a backtrace.  (You might want to do that a
> few times, just in case the first one accidentally caught the code
> during a part of its execution which was not in the bottlenecked
> spot.)

I did:
for a in $( ps uww -U postgres | grep BIND | awk '{print $2}' ); do echo "bt" | 
gdb -p $a > $a.bt.log 2>&1; done

Since there is lots of output, I made a tarball with it, and put it on 
https://depesz.com/various/all.bt.logs.tar.gz

The file is ~ 19kB.

> > So far we've:
> > 1. ruled out IO problems (enough io both in terms of bandwidth and iops)
> 
> Are you saying that you are empirically not actually doing any IO
> waits, or just that the IO capacity is theoretically sufficient?

There are no iowaits, per what iostat returns - or there are, but very low.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] swarm of processes in BIND state?

2016-05-28 Thread hubert depesz lubaczewski
On Sat, May 28, 2016 at 08:04:43AM +0200, Pavel Stehule wrote:
> > > you should to install debug info - or compile with dubug symbols
> > Installed debug info, and the problem stopped.

OK. Got the problem back.

Ps looked like this:

USERPID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres   9762  0.0  1.3 6816588 424512 ?  S05:49   0:09 
/usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/9.3/main -c 
config_file=/etc/postgresql/9.3/main/postgresql.conf
postgres   9763  1.6 20.3 6819980 6284520 ? Ss   05:49  11:49  \_ postgres: 
startup process   recovering 00047D6100D2
postgres  10300  0.1 20.0 6827396 6190236 ? Ss   05:49   1:22  \_ postgres: 
checkpointer process
postgres  10301  0.1 19.9 6819736 6145144 ? Ss   05:49   0:55  \_ postgres: 
writer process
postgres  1  0.0  0.0 104704  2076 ?Ss   05:49   0:11  \_ postgres: 
stats collector process
postgres  11148  0.7  0.0 6832848 4840 ?Ds   05:49   5:42  \_ postgres: 
wal receiver process   streaming 7D61/D2CE4D90
postgres  43248  0.0  0.2 6823908 79072 ?   Ss   06:41   0:00  \_ postgres: 
monitor dbname [local] idle
postgres  14802  4.0 18.5 6898688 5723652 ? Ss   09:57  19:54  \_ postgres: 
app_user dbname 10.1.15.177(44508) BIND
postgres  14804  2.9 16.3 6870980 5061020 ? Ss   09:57  14:25  \_ postgres: 
app_user dbname 10.1.15.177(44510) BIND
postgres 121044  6.2 16.8 6871756 5189808 ? Ss   13:19  17:52  \_ postgres: 
app_user dbname 10.1.14.77(58756) BIND
postgres 130506  5.0 15.6 6871536 4844704 ? Ss   13:38  13:22  \_ postgres: 
app_user dbname 10.1.15.177(47734) BIND
postgres   3767  2.8 13.6 6870876 4226928 ? Ss   13:45   7:20  \_ postgres: 
app_user dbname 10.1.14.77(59142) BIND
postgres   3768  3.7 16.0 6877232 4943916 ? Ss   13:45   9:35  \_ postgres: 
app_user dbname 10.1.14.77(59143) BIND
postgres   7168  0.0  0.0 6821248 8260 ?Ss   13:52   0:00  \_ postgres: 
pgbouncer dbname 10.1.16.158(50990) idle
postgres   7170  0.0  2.0 6863740 637368 ?  Ss   13:52   0:01  \_ postgres: 
app_user dbname 10.1.16.158(50991) idle
postgres  23083  5.9 14.7 6872584 4552872 ? Ds   14:22  13:16  \_ postgres: 
app_user dbname 10.1.15.177(48363) BIND
postgres  70043  0.0  0.0 6821224 7548 ?Ss   15:52   0:00  \_ postgres: 
pgbouncer dbname 10.1.14.77(60980) idle
postgres  70719  0.0  0.0 6821248 7448 ?Ss   15:53   0:00  \_ postgres: 
pgbouncer dbname 10.1.15.177(49701) idle
postgres  71257  5.8 14.5 6870940 4479808 ? Ss   15:54   7:33  \_ postgres: 
app_user dbname 10.1.14.77(32783) BIND
postgres  77780  5.9 13.3 6871008 4119172 ? Ss   16:06   6:58  \_ postgres: 
app_user dbname 10.1.14.77(32951) BIND
postgres  84682  9.2 15.5 6877216 4806100 ? Ss   16:19   9:40  \_ postgres: 
app_user dbname 10.1.15.177(50074) BIND
postgres 100924  7.1 13.5 6870804 4176864 ? Ss   16:51   5:11  \_ postgres: 
app_user dbname 10.1.15.177(50537) BIND
postgres 118767  7.7 13.7 6870736 4251048 ? Ss   17:24   3:04  \_ postgres: 
app_user dbname 10.1.14.77(34090) BIND
postgres 118897  6.2 13.6 6870588 4219824 ? Ss   17:24   2:27  \_ postgres: 
app_user dbname 10.1.14.77(34094) BIND
postgres 118916  8.6 12.4 6869524 3852648 ? Ss   17:25   3:22  \_ postgres: 
app_user dbname 10.1.15.177(51026) BIND
postgres 118978  3.3 13.0 6868456 4033924 ? Ss   17:25   1:18  \_ postgres: 
app_user dbname 10.1.14.77(34100) BIND
postgres 119022  2.3 13.4 6829152 4148088 ? Ss   17:25   0:53  \_ postgres: 
app_user dbname 10.1.14.77(34101) BIND
postgres 119060  3.3 13.8 6869172 4265976 ? Ss   17:25   1:18  \_ postgres: 
app_user dbname 10.1.15.177(51034) BIND
postgres 119096  1.9 12.1 6828716 3754204 ? Ss   17:25   0:46  \_ postgres: 
app_user dbname 10.1.15.177(51037) BIND
postgres 119097  6.4 13.8 6870868 4263168 ? Ss   17:25   2:29  \_ postgres: 
app_user dbname 10.1.15.177(51038) BIND
postgres 119111  1.9 12.1 6826656 3763260 ? Ss   17:25   0:44  \_ postgres: 
app_user dbname 10.1.14.77(34105) BIND
postgres 119152  3.5 12.7 6869468 3921916 ? Ss   17:25   1:22  \_ postgres: 
app_user dbname 10.1.14.77(34107) BIND
postgres 119266  7.3 13.5 6868908 4193496 ? Ss   17:25   2:49  \_ postgres: 
app_user dbname 10.1.15.177(51041) BIND
postgres 119298  5.4 13.3 6870552 4107692 ? Ss   17:25   2:05  \_ postgres: 
app_user dbname 10.1.14.77(34110) BIND
postgres 119303  6.2 14.7 6870816 4553052 ? Ss   17:25   2:24  \_ postgres: 
app_user dbname 10.1.15.177(51043) BIND
postgres 119314  4.9 13.0 6869704 4024040 ? Ss   17:25   1:54  \_ postgres: 
app_user dbname 10.1.14.77(34113) BIND
postgres 119315  2.0 12.5 6869208 3881956 ? Ss   17:25   0:47  \_ postgres: 
app_user dbname 10.1.15.177(51045) BIND
postgres 119316  7.0 13.3 6870908 4136492 ? Ss   17:25   2:41  \_ postgres: 
app_user dbname 10.1.15.177(51046) BIND
postgres 119317  3.2 13.0 6870100 4018860 ? Ss   17:25   1:14  \_ postgres: 

Re: [GENERAL] swarm of processes in BIND state?

2016-05-28 Thread hubert depesz lubaczewski
On Sat, May 28, 2016 at 07:46:52AM +0200, Pavel Stehule wrote:
> you should to install debug info - or compile with dubug symbols

Installed debug info, and the problem stopped.

Don't think it's related - it could be just timing. I'll report back
if/when the problem will re-appear.

Best regards,

depesz





Re: [GENERAL] swarm of processes in BIND state?

2016-05-27 Thread hubert depesz lubaczewski
On Sat, May 28, 2016 at 07:25:18AM +0200, Pavel Stehule wrote:
> It is looking like spinlock issue.
> try to look there by "perf top"

First results look like:

Samples: 64K of event 'cpu-clock', Event count (approx.): 2394094576


 32.08%  [unknown][.] 0x7f27ed20a296
  6.31%  [kernel] [k] finish_task_switch
  5.72%  [kernel] [k] _raw_spin_unlock_irqrestore
  3.46%  postgres [.] 0x002764ec
  3.38%  libruby-2.1.so.2.1.0 [.] 0x0008a8d0
  2.76%  [kernel] [k] __do_page_fault
  2.08%  postgres [.] hash_search_with_hash_value
  1.94%  libc-2.19.so [.] 0x00160b96
  1.79%  postgres [.] LWLockAcquire
  1.50%  postgres [.] heap_page_prune_opt
  1.28%  postgres [.] LWLockRelease
  1.09%  postgres [.] heap_hot_search_buffer
  1.00%  [kernel] [k] tick_nohz_idle_enter

I guess I have to use pg with debugging flags?

Best regards,

depesz





[GENERAL] swarm of processes in BIND state?

2016-05-27 Thread hubert depesz lubaczewski
hi,
we have following situation:
pg 9.3.11 on ubuntu.
we have master and slave.
the db is large-ish, but we're removing *most* of its data from all
across the tables, and lots of tables too.

While we're doing it we sometimes get LOTS of processes - but only on the
slave, never on the master - that spend a long time in the BIND state. The
same queries run on the master without any problem, and are fast.

So far we've:
1. ruled out IO problems (enough io both in terms of bandwidth and iops)
2. ruled out memory (plenty)
3. vacuumed all tables, including system ones, with analyze
4. did pg_repack on most of the tables to remove bloat.

we are in process of doing the same operation to series of servers, so
i'm not sure if the criteria on all of them are the same, but perhaps
you can think of something we didn't?

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] psql color hostname prompt

2016-04-25 Thread hubert depesz lubaczewski
On Mon, Apr 25, 2016 at 08:55:21AM -0500, Cal Heldenbrand wrote:
> The only outside tool it requires is lsof to determine the hostname of the
> remote socket.  Otherwise it uses plain stuff like awk / sec and bash tools.

Why would you need lsof to get the hostname for a remote connection, when
you can use %M?
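
For example, in ~/.psqlrc (psql prompt substitution, not SQL proper; %M
expands to the full server host name, or [local] for a socket connection):

```sql
\set PROMPT1 '%M:%> %n@%/%R%# '
```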

Best regards,

depesz





Re: [GENERAL] Doesn't PostgreSQL clean data in data file after delete records form table?

2016-03-23 Thread hubert depesz lubaczewski
On Wed, Mar 23, 2016 at 03:27:45AM -0700, zh1029 wrote:
> Hi,
>   we are using PostgreSQL 9.3.6 version and observe data file size is not
> decreased after we deleted records from the table. It looks quite abnormal.
> Is it as PostreSQL designed? 
> 
> DBTestPostgres=# select count (*) from test_data ;
>  0
> 
> # ls -l
> total 788932
> .
> -rw--- 1 _nokfssystestpostgres root *801067008* Mar 23 17:51 32768

Yes, that's perfectly OK. It is due to MVCC mechanisms.

subsequent vacuum (at least in your case) should free the disk space,
and if it doesn't, check other ways, described here:
http://www.depesz.com/2011/07/06/bloat-happens/
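As a quick sketch (table name taken from the question; note that VACUUM FULL takes an ACCESS EXCLUSIVE lock for its duration):

```sql
SELECT pg_size_pretty(pg_relation_size('test_data'));  -- size on disk now

VACUUM test_data;       -- marks dead rows reusable, usually keeps the file size
VACUUM FULL test_data;  -- rewrites the table, returning the space to the OS
```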

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [SQL] refer function name by a variable in the function body

2016-02-16 Thread hubert depesz lubaczewski
On Tue, Feb 16, 2016 at 09:41:18AM -0600, Suresh Raja wrote:
> I use the function name in the body of pgsql code of the same function.  Is
> the function name set to any variable that i can easily reference.  Now I'm
> hard coding the function name in the code.
> In the above Raise info i use select user to get the login role, similarly
> can i use a variable for function name which is automatically set by the
> system.

There is no such variable, unfortunately.

Also, please don't cross-post to multiple lists - replied only to
-general, as the question is not about sql language.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Escaping text / hstore

2015-10-20 Thread hubert depesz lubaczewski
On Tue, Oct 20, 2015 at 07:00:04AM -0700, pinker wrote:
> Maybe somebody have an idea how to escape text string for use in hstore
> column? I have tried $$ and quote_literal in audit trigger function, but
> still db won't let me pass values with \\ or ' to the hstore...INSERT FROM
> trigger function:
> EXECUTE 'INSERT INTO history.' || TG_TABLE_SCHEMA || '_' ||
> TG_TABLE_NAME || l_table_suffix ||'(operation, event_time,
> executed_by, new_value) VALUES(''' || TG_OP || ''', ''' ||
> CURRENT_TIMESTAMP || ''', ''' || SESSION_USER || ''', $$''' ||   
> hstore(NEW) || '''$$)';
> During insert occurs error:
> INSERT INTO my_table ("some_id", "someother_id","description") VALUES (1751,
> 10907,'gdfddfg''gdfg');
> The same occurs with backslash:
> INSERT INTO my_table ("some_id", "someother_id","description") VALUES (1751,
> 10907,'gdfddfg\\gdfg');
> ERROR:  Syntax error near ''' at position 73LINE 2: ..., '2015-10-20
> 15:41:08.534645+02', 'my_user', $$'"some...   
>  
> ^QUERY:  INSERT INTO history.public_my_table_2015_10(operation, event_time,
> executed_by, new_value) VALUES('INSERT', '2015-10-20
> 15:41:08.534645+02', 'my_user', $$'"some_id"=>"1751",
> "someother_id"=>"10907", "description"=>"gdfddfg'gdfg"'$$)CONTEXT:  PL/pgSQL
> function insert() line 6 at EXECUTE statement

First of all - stop the insanity of wrapping long lines like above
- it's unreadable.

Second - learn to use "EXECUTE USING".

Third - learn to use format() when execute using can't help.

and it will stop the ''' $$''" madness.
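For illustration (not tested against the original trigger), the same dynamic INSERT with format() handling the identifier and EXECUTE ... USING passing the values as parameters, so nothing needs manual quoting:

```sql
EXECUTE format(
    'INSERT INTO history.%I (operation, event_time, executed_by, new_value)
     VALUES ($1, $2, $3, $4)',
    TG_TABLE_SCHEMA || '_' || TG_TABLE_NAME || l_table_suffix
) USING TG_OP, statement_timestamp(), SESSION_USER, hstore(NEW);
```

%I quote-safes the table name, and parameters bound with USING never pass through the SQL string at all - backslashes and single quotes included.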

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL Shipping and streaming replication

2015-09-29 Thread hubert depesz lubaczewski
On Mon, Sep 28, 2015 at 12:53:37PM -0600, Scott Marlowe wrote:
> The issue was reported as omnipitr-cleanup is SLOOOW, so we run
> purgewal by hand, because the cleanup is so slow it can't keep up. But
> running it by hand is not supported.
> 
> We fixed the problem though, we wrote out own script and are now
> moving to wal-e for all future stuff.

where or when was it reported?
In issue list I see two issues (closed of course) for cleanup, but they
don't mention slowness.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL Shipping and streaming replication

2015-09-28 Thread hubert depesz lubaczewski
On Mon, Sep 28, 2015 at 08:54:54AM -0600, Scott Marlowe wrote:
> Look up WAL-E. It's works really well. We tried using OmniPITR and
> it's buggy and doesn't seem to get fixed very quickly (if at all).

Any examples? I'm developer of OmniPITR, and as far as I know there are
(currently) no unfixed bugs, and from what I can tell we fix them pretty
fast after they get reported.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rounding Float Array

2015-09-21 Thread hubert depesz lubaczewski
On Mon, Sep 21, 2015 at 11:55:23AM +0200, Alex Magnum wrote:
> Hello,
> 
> I have a float array holding geo location information.
> 
>   geoloc
> ---
>  {5.3443133704554,100.29457569122}
>  {5.3885574294704,100.29601335526}
>  {3.1654978750403,101.60915851593}
>  {5.3766154817748,100.31472444534}
>  {3.1545014704258,101.70036971569}
> (5 rows)
> Is there an easy way to round all values to 4 decimals.

Sure:
$ select geoloc from alex;
  geoloc   
---
 {5.3443133704554,100.29457569122}
 {5.3885574294704,100.29601335526}
 {3.1654978750403,101.60915851593}
 {5.3766154817748,100.31472444534}
 {3.1545014704258,101.70036971569}
(5 rows)

(depesz@[local]:5960) 12:15:46 [depesz] 
$ select geoloc::numeric(8,4)[] from alex;
  geoloc
---
 {5.3443,100.2946}
 {5.3886,100.2960}
 {3.1655,101.6092}
 {5.3766,100.3147}
 {3.1545,101.7004}
(5 rows)

depesz


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Bugs with like_option in CREATE TABLE

2015-09-10 Thread hubert depesz lubaczewski
On Wed, Sep 09, 2015 at 07:51:48PM -0400, Melvin Davidson wrote:
> *O/S = Windows 10PostgreSQL 9.2.10, compiled by Visual C++ build 1600,
> 32-bit*
> http://www.postgresql.org/docs/9.1/interactive/sql-createtable.html
> 
> and like_option is:
> { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE |
> COMMENTS | ALL }
> 1. INCLUDING CONSTRAINTS does not bring over the Foreign Keys

Of course it doesn't. It's documented in the page you linked:

"Not-null constraints are always copied to the new table. CHECK
constraints will only be copied if INCLUDING CONSTRAINTS is specified;
other types of constraints will never be copied"

> 2. INCLUDING ALL does not work and generates an ERROR;*

Works for me. Please provide a test case that can be used to see the
error.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why does this SQL work?

2015-05-12 Thread hubert depesz lubaczewski
On Tue, May 12, 2015 at 04:07:52PM +0800, Anil Menon wrote:
 Thank you very much - looks like I will have to prefix all cols.

You should anyway.
Queries with unaliased columns make it impossible to analyze without
in-depth knowledge of the database.

Consider:

select c1, c2, c3, c4, c5
from t1 join t2 using (c6)
where c7 = 'a' and c8 > now() and c9;

which fields belong to which tables? what indexes make sense? it's
impossible to tell. if the column references were prefixed with table
name/alias - it would become possible, and easy, even, to figure out
what's going on.
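For example, the same query with aliases (which column belongs to which table is invented here, purely to show the idea):

```sql
SELECT a.c1, a.c2, a.c3, b.c4, b.c5
  FROM t1 AS a
  JOIN t2 AS b USING (c6)
 WHERE a.c7 = 'a'
   AND b.c8 > now()
   AND a.c9;
```

Now it is obvious that, for instance, an index on t2 (c6, c8) might help - which was impossible to tell before.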

depesz


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why does this SQL work?

2015-05-11 Thread hubert depesz lubaczewski
On Tue, May 12, 2015 at 12:26:15AM +0800, Anil Menon wrote:
 manualscan= select count(*) From msgtxt where msgid in (
 manualscan( select msgid From courier where org_id=3
 manualscan( )
 manualscan-  ;
  count
 ---
  10225
 (1 row)
 manualscan= select count(*) From public.msgtxt where msgid in (select
 msgid From ver736.courier where org_id=3);
  count
 ---
  10225
 (1 row)
 Please note, there is no msgid col in courier table. Which brings the
 question why does this SQL work? An select msgid From courier where
 org_id=3 by itself gives error column msgid does not exist.

This works because this is correlated subquery.

You should always use aliases to avoid such errors. Like here:
select count(*) From msgtxt as m where m.msgid in (
select c.msgid from courier c where c.org_id = 3
);

Your query is equivalent to:
select count(*) From msgtxt as m where m.msgid in (
select m.msgid from courier c where c.org_id = 3
);
which returns all rows from msgtxt if there is at least one row in
courier with org_id = 3.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How do I bump a row to the front of sort efficiently

2015-02-02 Thread hubert depesz lubaczewski
On Mon, Feb 02, 2015 at 05:16:40PM +1100, Sam Saffron wrote:
 Even this is fast, and logically equiv as id is primary key unique
 select * from topic
 where id = 1000
 union all
 select * from (
   select * from topics
  where id <> 1000
   order by bumped_at desc
   limit 30
 ) as x
 limit 30
 Is there any clean technique to bump up particular rows to the front
 of a sort if a certain condition is met without paying a huge
 performance hit?

Why don't you use the union-all approach? If it's fast, and does what
you need ?

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Custom type literal conversion

2014-09-24 Thread hubert depesz lubaczewski
On Wed, Sep 24, 2014 at 2:45 PM, Chris Bandy bandy.ch...@gmail.com wrote:

 I would like to create a new type for version strings that sorts
 numerically. The composite type below was quick to write and does not
 require superuser privileges. However, it doesn't respond to type casts the
 way I'd like.

 Is there a way to implement this type's literal conversion without
 resorting to a base type?


I think you need to define casts too, but in any way - check if semver
extension (available on pgxn) doesn't solve your problem.

depesz


Re: [GENERAL] cloning database

2014-09-19 Thread hubert depesz lubaczewski
On Fri, Sep 19, 2014 at 8:35 AM, Philipp Kraus philipp.kr...@flashpixx.de
wrote:

 Is there a buildin way to clone the database_source with all structure
 and data into a new database database1..150 ?


assuming you're using bash shell, this should work:

for i in {1..150}; do createdb -T database_source database$i; done

it's not a built-in, but very close.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 8:33 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 Not sure, just the combination of parallel operations and remote
 connections seemed to be an avenue to explore. Given that everything is
 local, turns out it was dead end.
 Looking at the pastebin log again, am I reading it right that the first
 process actually COMMITs properly?
 Also is there a trigger in the mix that might be fouling things up?


Please note that the pastebin log is split by backend pid, and only in
backend-pid groups sorted by timestamp.

66014 started transaction later, and committed, while 66017, which started
transaction earlier, and actually obtained lock earlier - got killed by
deadlock resolution.

There are no triggers aside from some (~10) fkeys.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 9:21 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:

 FWIW this problem was reported also by Andrew Sackville-West at

 http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230
 I strongly suspect now that the problem is related to the locking of
 updated versions as heap_lock_tuple_updated, and perhaps the internal
 locking done by EvalPlanQual.  Haven't traced through it.


Is there anything I could tell the developer to do (he's on Mac) so he
could provide more information?

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-25 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 6:55 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 What transaction isolation level is being used?


Sorry for late reply - the user was away for parts of friday, I was away on
weekend, and just now got answer - it's read committed.

depesz


[GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
I have developer with pg 9.3.5, which is reporing something really strange.

He runs importer, which does, in single transaction:

begin;
select * from table where pkey = value limit 1 for update;
update table set ... where pkey = the same_value;
commit;

and two backends running the same transaction deadlock.

I checked for duplicated rows with the same pkey value - none are there.
And frankly - I'm out of ideas.

What could be wrong in such case?

Detailed logs, with just some obfuscation:
https://depesz.privatepaste.com/0594a93459

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 6:45 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 So process 66017 and 66014 are blocking each because they are running the
 exact same queries. The interesting part is the process with the lower pid
 is starting later than the one with the higher pid.


Locking is obvious. But why deadlock? There is just single row, and it
shouldn't be able to deadlock on it?!


 So what exactly is 'importer' and what does it do?


Some software written by some guy. Runs lots of queries, but the only
problem we have is with these transactions.


 Also what is this (59303)?


log_line_prefix is  '%m %r %p %u %d ' so it's port number.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 7:29 PM, John R Pierce pie...@hogranch.com wrote:

 On 8/22/2014 9:29 AM, hubert depesz lubaczewski wrote:

 select * from table where pkey = value limit 1 for update;

  why is there a limit 1 in there?pkey=somevalue should only return a
 single row.   if it DID return multiple rows, you don't have an ORDER BY,
 so the limit 1 would be indeterminate.


leftover from some other thing.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 7:20 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 So why are different processes running the exact same queries coming in on
 different ports?


the importer is parallelized, and sometimes two processes handle batches of
data that happen to update the same top level row.

but the deadlocking problem is happening only on one machine, though very
repeatably.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 7:43 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 Which begs the question, what is different about that machine?


No idea. I can pass all the question you might have, but I'm ~ 6000 miles
away from any machine running this code.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 7:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 You have not shown us the full sequence of events leading up to the
 deadlock failure, but I hypothesize that there were yet other transactions
 that updated that same row in the very recent past.  That might allow
 there to be more than one tuple lock involved (ie, locks on different
 versions of the row), which would create some scope for a deadlock
 failure.


Well, showing all events is difficult due to parallelization of importer,
but shouldn't select for update solve the problem of other locks?

The transactions are exactly as shown - select for update and then update.

depesz


Re: [GENERAL] deadlock in single-row select-for-update + update scenario? How could it happen?

2014-08-22 Thread hubert depesz lubaczewski
On Fri, Aug 22, 2014 at 7:54 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 Which in itself might be a clue.

 Is all the code/data running on/coming from that machine or is some coming
 in remotely?

 Where network latency might be an issue?


All locally, but hey - how could network latency be a problem? Transaction
gets the lock on row, and then it updates. the same row. in the same
transaction. with nothing else in the transaction. where is here place for
deadlock for another, identical transaction?

depesz


[GENERAL] Archiving skipped an xlog?

2014-08-15 Thread hubert depesz lubaczewski
Hi,
we have weird situation today.

Pg 9.1.13, on ubuntu, on AWS.

At ~ 7am one of db servers stopped responding, and got restarted using AWS
console.

After reboot everything seemed to work fine.

But later on I found that one of the segments was not archived.

Segment ...7 was written (mtime of file in pg_xlog) at 6:51 (more or less)
Segment ...8 was written at 6:57
Segment ...9 was written at 7:13

then there were more.

What's interesting - in archive_status, i see .done files for all xlogs,
except for ...8. In logs the *8 xlog is not mentioned (I was looking for
error message or something).

The xlog didn't make it to destination or archiving, but instead of being
retried after reboot - pg moved to *9, and then continued.

I manually archived *8, and everythinh was fine afterwards, but I'm puzzled
on how could it happen that an xlog gets skipped? Even in case of
force-reboot.

Regards,

depesz


Re: [GENERAL] event triggers in 9.3.4

2014-07-25 Thread hubert depesz lubaczewski
On Thu, Jul 24, 2014 at 7:13 PM, Vasudevan, Ramya 
ramya.vasude...@classmates.com wrote:

   You could compare list of tables before (_start) and after (_end) the
 ddl. Doing it in plpgsql will be tricky, but if you'd use some other
 language - like plperl - it's relatively simple:
 http://www.depesz.com/2013/12/18/waiting-for-9-4-plperl-add-event-trigger-support/

 Thank You Depesz.  This will work for ‘CREATE’ and ‘DROP’ DDLs.



 But, what about the scenario where I want to just have event triggers for
 operations like these? - 'ALTER TABLE','ALTER TRIGGER', 'ALTER FUNCTION'


Sure - just check differences in appropriate catalogs. pg_attribute,
pg_trigger, pg_proc.
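A rough sketch of the snapshot-and-diff idea for functions (names are invented; being plain SQL, it only sees what the catalogs expose):

```sql
-- in the ddl_command_start trigger:
CREATE TEMP TABLE proc_snap AS SELECT oid, proname, prosrc FROM pg_proc;

-- in the ddl_command_end trigger: functions created or changed by the DDL
SELECT p.oid, p.proname
  FROM pg_proc p
  LEFT JOIN proc_snap s ON s.oid = p.oid AND s.prosrc = p.prosrc
 WHERE s.oid IS NULL;

DROP TABLE proc_snap;
```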

In any way - if you want to really use it - you'll have to write in C.

depesz


Re: [GENERAL] event triggers in 9.3.4

2014-07-24 Thread hubert depesz lubaczewski
On Thu, Jul 24, 2014 at 2:22 AM, Vasudevan, Ramya 
ramya.vasude...@classmates.com wrote:

  CREATE TABLE log_ddl_info(ddl_tag text, ddl_event text, ddl_time
 timestamp);

 CREATE OR REPLACE FUNCTION log_ddl_execution()

 RETURNS event_trigger AS $$

 DECLARE

 insertquery TEXT;

 BEGIN

 insertquery := 'INSERT INTO log_ddl_info VALUES (''' || tg_tag ||''', '''
 || tg_event || ''', statement_timestamp())';

 EXECUTE insertquery;



Why are you using dynamic query and not just run insert normally? And even
for dynamic query it shows basically a worst possible way to do it, that is
prone to sql injection. Of course the problem is unlikely now given that
the values come from pg itself, and have pretty well defined values, but
why do it unsafely even in such simple case?!


RAISE NOTICE 'Recorded execution of command % with event %', tg_tag,
 tg_event;

 END;

 $$ LANGUAGE plpgsql;



 CREATE EVENT TRIGGER log_ddl_info_start  ON ddl_command_start EXECUTE
 PROCEDURE log_ddl_execution();

 CREATE EVENT TRIGGER log_ddl_info_end   ON ddl_command_end  EXECUTE
 PROCEDURE log_ddl_execution();

 Is there a way to log the object name (or the oid) in the function?



You could compare list of tables before (_start) and after (_end) the ddl.
Doing it in plpgsql will be tricky, but if you'd use some other language -
like plperl - it's relatively simple:
http://www.depesz.com/2013/12/18/waiting-for-9-4-plperl-add-event-trigger-support/

depesz


Re: [GENERAL] Table checksum proposal

2014-07-24 Thread hubert depesz lubaczewski
On Thu, Jul 24, 2014 at 3:35 AM, m...@byrney.com wrote:

 I have a suggestion for a table checksumming facility within PostgreSQL.
 The applications are reasonably obvious - detecting changes to tables,
 validating data migrations, unit testing etc.  A possible algorithm is as
 follows:

 1. For each row of the table, take the binary representations of the
 values and serialise them to CSV.
 2. Calculate the MD5 sum of each CSV-serialised row.
 3. XOR the row MD5 sums together.
 4. CSV-serialise and MD5 a list of representations (of some sort) of the
 types of the table's columns and XOR it with the rest.
 5. Output the result as the table's checksum.

 Advantages of this approach:

 1. Easily implemented using SPI.
 2. Since XOR is commutative and associative, order of ingestion of rows
 doesn't matter; therefore, unlike some other table checksumming methods,
 this doesn't need an expensive ORDER BY *.  So, this should be pretty much
 as fast as a SELECT * FROM, which is probably as fast as a table checksum
 can be.
 3. Using a cursor in SPI, rows can be ingested a few at a time.  So memory
 footprint is low even for large tables.
 4. Output has a convenient fixed size of 128 bits.

 Questions:

 1. Should this be a contrib module which provides a function, or should it
 be a built-in piece of functionality?
 2. Is MD5 too heavyweight for this?  Would using a non-cryptographic
 checksum be worth the speed boost?
 3. Is there a risk of different architectures/versions returning different
 checksums for tables which could be considered identical?  If so, is this
 worth worrying about?


Hmm - Do you really think we need an extension for something that can be
done using query as simple as:

select md5(string_agg(md5(c::text), '' order by md5(c::text))) from
pg_class c;

(of course you can do it on any table, not only pg_class).

If you want to use the xor idea (which make sense), all you need is to
write xor aggregate.
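A sketch of such an aggregate (untested here; the 'x' || md5(...) cast is the usual hex-text-to-bit trick):

```sql
CREATE AGGREGATE bit_xor_agg(bit) (
    SFUNC = bitxor,  -- built-in xor on bit strings
    STYPE = bit
);

-- order-independent 128-bit table checksum, no ORDER BY needed:
SELECT bit_xor_agg(('x' || md5(c::text))::bit(128)) FROM pg_class c;
```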

depesz


Re: [GENERAL] tab_to_sting

2014-07-23 Thread hubert depesz lubaczewski
I don't know what collect actually does, but just guessing, I would say
that you're looking for string_agg()

depesz


On Wed, Jul 23, 2014 at 6:12 PM, Ramesh T rameshparnandit...@gmail.com
wrote:

 Hi,

 SELECT hr.tab_to_largestring(CAST(COLLECT(cust_name) AS
 t_varchar2_tab)) AS employees
 FROM   hr.customer

 when i run function for  table column values  to single row function name
 is hr.tab_to_largestring

 this code from oracle


 it return like  function collect(character varying) does not exist

 please let me know in postgres collect () key is thier..?


 and how to run this function..?



Re: [GENERAL] BAKUP ISSUE

2014-07-09 Thread hubert depesz lubaczewski
On Wed, Jul 9, 2014 at 12:45 PM, Ramesh T rameshparnandit...@gmail.com
wrote:

 HI,
   I have a problem when i take backup using pg_basebackup and
 pg_start_backup('base');
 following ,


What is the problem? You didn't write it. Just some outputs and config, but
you never mentioned what is the problem itself.


 archive_mode = on
 archive_command = 'copy %p C:\\Program
 Files\\PostgreSQL\\9.3\\data\\pg_xlog\\%f'


This archive_command looks bad.

Is c:\program files\postgresql\9.3\data\pg_xlog directory for xlog for THIS
postgresql?
If yes - why would you want to copy xlog back to its original location?
If no - while it might work, I don't think it's safe to copy xlogs directly
to pg_xlog directory of slave (assuming it's slave).

depesz


Re: [GENERAL] BAKUP ISSUE

2014-07-09 Thread hubert depesz lubaczewski
Hi,

First - do not take the messages off list. Someone else might be able to
help you if you'll mail to the list, I'm not always there, and others might
have better ideas.

On Wed, Jul 9, 2014 at 2:43 PM, Ramesh T rameshparnandit...@gmail.com
wrote:

  i rune problem is when i run  the pg_stop_backup() its return error  in
 this way

 postgres=# select pg_stop_backup();


 *NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments
 to be ar*
 *chived*
 *WARNING:  pg_stop_backup still waiting for all required WAL segments to
 be archi*
 *ved (60 seconds elapsed)*
 *HINT:  Check that your archive_command is executing properly.
  pg_stop_backup ca*
 *n be canceled safely, but the database backup will not be usable without
 all the*
 * WAL segments.*


But there is no ERROR?! What error are you talking about?

This is just notice and a warning.


 i changed in postgresql.conf to archive_command = 'copy %p C:\Program
 Files\PostgreSQL\ramesh %f' still working same error return..


What error? So far you didn't show any.

Is the archiving working? Are there files in c:\program
files\postgresql\ramesh ?
If not, what does pg logs say about archiving? Any errors there?

depesz


Re: [GENERAL] BAKUP ISSUE

2014-07-09 Thread hubert depesz lubaczewski
On Wed, Jul 9, 2014 at 3:28 PM, Ramesh T rameshparnandit...@gmail.com
wrote:

   Yes,not an error it is a warning  and archive is working.


How can you say that archive is working when in logs you have?


 2014-07-09 18:53:33 IST LOG:  archive command failed with exit code 1
 2014-07-09 18:53:33 IST DETAIL:  The failed archive command was: copy
  pg_xlog\00010001 C:\Program Files\PostgreSQL\ramesh
  00010001
 2014-07-09 18:53:34 IST LOG:  archive command failed with exit code 1
 2014-07-09 18:53:34 IST DETAIL:  The failed archive command was: copy
  pg_xlog\00010001 C:\Program Files\PostgreSQL\ramesh
  00010001
 2014-07-09 18:53:34 IST WARNING:  archiving transaction log file
 00010001 failed too many times, will try again later


Fix the archive command so that it will actually work (files should appear
in destination directory).

I have 0 knowledge about windows, but I would guess you need to use \\ and
not \ in the path. Also - does the destination path exist?
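If the single backslashes are indeed the problem, the corrected line would look something like this (the path is taken from the thread but otherwise unverified, and the quoting guards the space in "Program Files"):

```ini
archive_command = 'copy "%p" "C:\\Program Files\\PostgreSQL\\ramesh\\%f"'
```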

depesz


Re: [ADMIN] [GENERAL] WARNING: database must be vacuumed within 8439472 transactions

2014-07-08 Thread hubert depesz lubaczewski
First question - are you  sure you ran vacuum in the correct database? I.e.
in caesius?

Second - is there any long running transaction? select min(xact_start) from
pg_stat_activity where xact_start is not null; should tell you.

depesz


On Tue, Jul 8, 2014 at 12:44 PM, Prabhjot Sheena 
prabhjot.she...@rivalwatch.com wrote:

 So this is what i did but my problem is still not going away.

 i shutdown the database and started it in single user mode and issued
 command vacuum full

 The command completed but the issue still exists

 The thing i noticed is that whenever i start the database autovaccum
 automatically starts on one table all the time like this
 autovacuum: VACUUM public.hotel_site_market (to prevent wraparound)

 This same auto vacuum is running since the problem started. i tried to
 cancel it using pg_cancel_backend but it starts again. i did a vacuum full
 public.hotel_site_market and the statement completes but again it starts
 running.

 i checked the stats using this

 caesius=# select relname, age(relfrozenxid) from pg_class where relkind =
 'r' order by 2 desc;
 WARNING:  database caesius must be vacuumed within 1648680 transactions
 HINT:  To avoid a database shutdown, execute a full-database VACUUM in
 caesius.
   relname   |age
  --------------------+------------
  hotel_site_market  | 2145834967
  cc_table_data  |198017413

 Even after running the full vacuum the stats are not changing and this
 autovacuum: VACUUM public.hotel_site_market (to prevent wraparound) keeps
 coming back i m getting this message as well

 WARNING:  database prod01 must be vacuumed within 1648687 transactions

 Pls let me know what i should do on this

 Thanks
 avi



 On Mon, Jul 7, 2014 at 4:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 John R Pierce pie...@hogranch.com writes:
  On 7/7/2014 2:14 PM, Prabhjot Sheena wrote:
  i will run full vacuum than and see how it goes.

  do make sure there aren't any OLD pending transactions hanging around.

 Not only regular transactions, but prepared transactions:

 select * from pg_prepared_xacts;

 8.3 was the last release in which max_prepared_transactions was nonzero
 by default, thereby allowing people to shoot themselves in the foot
 this way without having taken off the safety first :-(

 regards, tom lane


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





Re: [ADMIN] [GENERAL] WARNING: database must be vacuumed within 8439472 transactions

2014-07-08 Thread hubert depesz lubaczewski
OK. Please run what Tom suggested ( select * from pg_prepared_xacts; ), and
show us output.

Also, please run:

vacuum verbose analyze hotel_site_market;

and also show us output.

depesz



On Tue, Jul 8, 2014 at 2:39 PM, Prabhjot Sheena 
prabhjot.she...@rivalwatch.com wrote:

 Yes i did ran it in caesius database and  not prod01 db that was a typo

 there is no long running transactions. i just ran this command select
 min(xact_start) from pg_stat_activity where xact_start is not null; to make
 sure

 Thanks


 On Tue, Jul 8, 2014 at 4:43 AM, hubert depesz lubaczewski 
 dep...@gmail.com wrote:

 First question - are you  sure you ran vacuum in the correct database?
 I.e. in caesius?

 Second - is there any long running transaction? select min(xact_start)
 from pg_stat_activity where xact_start is not null; should tell you.

 depesz


 On Tue, Jul 8, 2014 at 12:44 PM, Prabhjot Sheena 
 prabhjot.she...@rivalwatch.com wrote:

 So this is what i did but my problem is still not going away.

 i shutdown the database and started it in single user mode and issued
 command vacuum full

 The command completed but the issue still exists

 The thing i noticed is that whenever i start the database autovaccum
 automatically starts on one table all the time like this
 autovacuum: VACUUM public.hotel_site_market (to prevent wraparound)

 This same auto vacuum is running since the problem started. i tried to
 cancel it using pg_cancel_backend but it starts again. i did a vacuum full
 public.hotel_site_market and the statement completes but again it starts
 running.

 i checked the stats using this

 caesius=# select relname, age(relfrozenxid) from pg_class where relkind
 = 'r' order by 2 desc;
 WARNING:  database caesius must be vacuumed within 1648680 transactions
 HINT:  To avoid a database shutdown, execute a full-database VACUUM in
 caesius.
   relname   |age
  --------------------+------------
  hotel_site_market  | 2145834967
  cc_table_data  |198017413

 Even after running the full vacuum the stats are not changing and this
 autovacuum: VACUUM public.hotel_site_market (to prevent wraparound) keeps
 coming back i m getting this message as well

 WARNING:  database prod01 must be vacuumed within 1648687 transactions

 Please let me know what I should do about this.

 Thanks
 avi



 On Mon, Jul 7, 2014 at 4:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 John R Pierce pie...@hogranch.com writes:
  On 7/7/2014 2:14 PM, Prabhjot Sheena wrote:
  i will run full vacuum than and see how it goes.

  do make sure there aren't any OLD pending transactions hanging around.

 Not only regular transactions, but prepared transactions:

 select * from pg_prepared_xacts;

 8.3 was the last release in which max_prepared_transactions was nonzero
 by default, thereby allowing people to shoot themselves in the foot
 this way without having taken off the safety first :-(

 regards, tom lane


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general







Re: [GENERAL] Random-looking primary keys in the range 100000..999999

2014-07-04 Thread hubert depesz lubaczewski
How many rows do you plan on having in this table? Why this particular key
range?

depesz


On Fri, Jul 4, 2014 at 3:24 PM, Kynn Jones kyn...@gmail.com wrote:

 I'm looking for a way to implement pseudorandom primary keys in the range
  100000..999999.

 The randomization scheme does not need to be cryptographically strong.  As
 long as it is not easy to figure out in a few minutes it's good enough.

 My starting point for this is the following earlier message to this list:

 http://www.postgresql.org/message-id/49f96730.4000...@postnewspapers.com.au

 The answer given to it here


 http://www.postgresql.org/message-id/448163db-cac5-4e99-8c4c-57cbc6f6af78@mm

 ...is really cool, but I don't see how to modify it for the case where the
 domain of the permutation has a cardinality that is not a power of 2, as it
 is in my case (cardinality = 900000).

 ---

 (In the crypto world there are format preserving encryption techniques
 that probably could do what I want to do, but their focus on cryptographic
 strength makes learning and implementing them tough going, plus, the
  performance will probably be poor, since a high computational cost is an
  asset for such crypto applications.  Since cryptographic strength is not something I
 need, I'm trying to find non-crypt-grade alternatives.)

 Thanks in advance!

 kynn
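For what it's worth, the linked power-of-2 construction can be adapted to a non-power-of-2 domain with a standard trick called cycle-walking: build a keyed permutation of the next power of two (2^20 = 1048576 >= 900000) and, whenever the output lands outside 0..899999, feed it back through the permutation until it does land inside. Because the underlying Feistel network is a bijection of [0, 2^20), the walked version is a bijection of [0, 900000). A minimal Python sketch of the idea (the round keys and round function are arbitrary illustrations, not taken from the linked thread):

```python
DOMAIN = 900_000               # 100000..999999 contains exactly 900000 values
HALF_BITS = 10                 # Feistel over 20-bit values, split into 10+10
MASK = (1 << HALF_BITS) - 1

def feistel20(x):
    """A keyed bijection of [0, 2**20); keys and mixing here are arbitrary."""
    left, right = x >> HALF_BITS, x & MASK
    for key in (0x123, 0x2AB, 0x3CD):
        left, right = right, left ^ ((right * 0x149 + key) & MASK)
    return (left << HALF_BITS) | right

def pseudorandom_key(n):
    """Map n in [0, 900000) to a unique, random-looking key in [100000, 999999]."""
    x = feistel20(n)
    while x >= DOMAIN:         # cycle-walk: keep iterating the permutation
        x = feistel20(x)       # until the value falls back into the domain
    return x + 100_000
```

Feeding n = nextval('some_seq') - 1 into this (or a PL/pgSQL port of it) yields distinct, non-sequential-looking keys; as requested, it is obscure rather than cryptographically strong.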




Re: [GENERAL] Validating User Login Within Postgres

2014-07-01 Thread hubert depesz lubaczewski
On Tue, Jul 1, 2014 at 3:58 PM, Rich Shepard rshep...@appl-ecosys.com
wrote:

   I'm developing a new application and want to take advantage of postgres
 features such as triggers and stored procedures and put as much of the
 middleware 'business logic' into the database engine as is practical.
   Is it possible, or practical, to validate a potential user login within
 the database? What I've done in the past is search the User table for that
 name and password, return it to the application, and have a Python script
 see if the entered username and password match that stored in the table.


That depends. For example - for a system that will have 5 users, and requires
strict security policies - it would make sense. On the other hand, for
a website with thousands of users, putting them all in as actual roles in Pg
doesn't make much sense, and could potentially cause problems.

When I write apps I tend to create database users per *type* of application
that will use it, and then handle application users using a table in my
database.

So, for example, I might have app_dba account (the one that owns all
objects, and is used to create tables/views/functions/...), app_website
account (the one used by application to handle web requests), app_cronjob
(for general cronjobs, or sometimes specialized app_cron_whatever for every
cronjob).

Also, if you're thinking about security - please consider reading
http://www.depesz.com/2007/08/18/securing-your-postgresql-database/ .

Best regards,

depesz
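One addition to the table-of-users approach: store only a salted, slow hash of the password, never the password itself, and compare in constant time. A hedged sketch using Python's standard library (the function names and iteration count are illustrative, not from any particular framework):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative; tune to your hardware

def hash_password(password, salt=None):
    """Return (salt, digest) suitable for storing in a users table."""
    if salt is None:
        salt = os.urandom(16)                 # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```

The application then connects as its single app_website role and checks credentials against the users table itself, as described above.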


Re: [GENERAL] Repeated semop calls

2014-06-27 Thread hubert depesz lubaczewski
On Thu, Jun 26, 2014 at 10:03 PM, Anand Kumar, Karthik 
karthik.anandku...@classmates.com wrote:

 We run postgres 9.3.3 on Centos 6.3, kernel 2.6.32-431.3.1. Every once in
 a while, we see postgres processes spinning on semop:


it might be a long shot, but when we had problems with lots of backends
sitting in semop, it was solved by:

1. disabling zone_reclaim (echo 0 > /proc/sys/vm/zone_reclaim_mode)
2. disabling transparent hugepage support - this has various names on
different kernels/distributions, but find /sys | grep -i
'transparent.*hugepage.*enable' will find it; then just echo never into that file.

depesz


Re: [GENERAL] Alternative to psql -c ?

2014-06-25 Thread hubert depesz lubaczewski
Perhaps you can explain what functionality you want to achieve, as I, for
one, don't understand. Do you want transactions? Or not?

Also - I have no idea what peer authentication has to do with the Pg gem -
care to elaborate? The gem is for the client, and authentication happens in
the server, so ... ?

depesz


On Wed, Jun 25, 2014 at 4:37 PM, James Le Cuirot ch...@aura-online.co.uk
wrote:

 On Wed, 25 Jun 2014 10:24:53 -0400
 Andrew Sullivan a...@crankycanuck.ca wrote:

  On Wed, Jun 25, 2014 at 03:16:19PM +0100, James Le Cuirot wrote:
   Same problem as stdin, the transactional behaviour is different.
   There is the --single-transaction option but as the man page says...
  
   If the script itself uses BEGIN, COMMIT, or ROLLBACK, this option
   will not have the desired effects.
 
  Hmm.  I've _used_ transactions in such files, I'm pretty sure.  You
  don't need the --single-transaction setting for this, just do the
  BEGIN; and COMMIT; yourself.
 
  A

 Sorry, you're missing the point. I'm trying not to alter the existing
 behaviour of the Chef database cookbook which is used by countless
 people to execute scripts big and small, with and without transactions.
 If I just naively wrapped them all in BEGIN/COMMIT then it would
 override any additional transactions within the scripts.

 James


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Alternative to psql -c ?

2014-06-25 Thread hubert depesz lubaczewski
On Wed, Jun 25, 2014 at 5:18 PM, James Le Cuirot ch...@aura-online.co.uk
wrote:

  Also - I have no idea what peer authentication has to do with Pg
  gem - care to elaborate? The gem is for client, and authentication
  happens in server, so ... ?
 Right but peer authentication is all to do with the operating system
 user that the client is connecting from. In the case of chef-client,


Any reason why you can't reconfigure Pg to allow root connections to
the postgres account?

depesz


Re: [GENERAL] Warm standby (log shipping) from PG 8.3 to 9.3

2014-06-10 Thread hubert depesz lubaczewski
On Tue, Jun 10, 2014 at 8:13 PM, David Wall d.w...@computer.org wrote:

 Is it safe to assume that my working PG 8.3 archive command on the master
 and recovery.conf (using contrib's pg_standby) on the standby will work the
 same under 9.3?


Yes, it will work just fine. Of course you can't load 9.3 xlogs into 8.3,
or 8.3 xlogs into 9.3, but the commands are the same.

Regards,

depesz


Re: [GENERAL] Problem with locales on Linux with 9.3.4

2014-06-07 Thread hubert depesz lubaczewski
localedef --no-archive requires an additional argument, and then it waits for
something. I'm definitely not a locale expert, so I have no idea what it
does. There is a locale-gen option --no-archive, too, but when I run
locale-gen --no-archive, I just get:

# locale-gen --no-archive
Generating locales...
  cs_CZ.UTF-8... up-to-date
  de_DE.UTF-8... up-to-date
  en_GB.ISO-8859-1... up-to-date
  en_GB.ISO-8859-15... up-to-date
  en_GB.UTF-8... up-to-date
  en_US.UTF-8... up-to-date
  pl_PL.UTF-8... up-to-date
  sk_SK.UTF-8... up-to-date
Generation complete.

And nothing changes.

depesz


On Sat, Jun 7, 2014 at 2:23 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 06/06/2014 04:36 PM, hubert depesz lubaczewski wrote:

 locale-gen just regenerates the locale - which I have. It's just that
 PostgreSQL doesn't see it. When I run locale-gen, it just shows that all
 locales (listed by locale -a) are up to date.


 On further reading another way would be to use the localedef command with
 the --no-archive option:

 http://manpages.ubuntu.com/manpages/precise/man1/localedef.1.html


  depesz




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Problem with locales on Linux with 9.3.4

2014-06-07 Thread hubert depesz lubaczewski
Seems I now have the locale. Too bad I had to delete the bad databases
earlier.

Thanks Adrian,

depesz


On Sat, Jun 7, 2014 at 7:16 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 06/07/2014 08:17 AM, hubert depesz lubaczewski wrote:

 localedef --no-archive, requires additional argument, and then it waits
 on something. I'm definitely not an locale expert, so I have no idea
 what it does. There is locale-gen option --no-archive, too, but when
 I run locale-gen --no-archive, I just get:

 # locale-gen --no-archive
 Generating locales...
cs_CZ.UTF-8... up-to-date
de_DE.UTF-8... up-to-date
en_GB.ISO-8859-1... up-to-date
en_GB.ISO-8859-15... up-to-date
en_GB.UTF-8... up-to-date
en_US.UTF-8... up-to-date
pl_PL.UTF-8... up-to-date
sk_SK.UTF-8... up-to-date
 Generation complete.

 And nothing changes.


 Should have been clearer on my previous post, the dpkg command is for use
 after locale-gen.

 Missed the part where you ran localedef until I reread the post. localedef
 is looking for the following, from example in man page:

 EXAMPLES

Compile the locale files for Finnish in the UTF-8 character set and
 add
it to the default locale archive with the name fi_FI.UTF-8:

   localedef -f UTF-8 -i fi_FI fi_FI.UTF-8

 Where:
  localedef [options] outputpath

 and outpath with --no-archive is by default  /usr/lib/locale
 otherwise outpath is /usr/lib/locale/locale-archive



 depesz




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Problem with locales on Linux with 9.3.4

2014-06-07 Thread hubert depesz lubaczewski
localedef.

depesz


On Sat, Jun 7, 2014 at 9:57 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 06/07/2014 12:43 PM, hubert depesz lubaczewski wrote:

 Seems I now have the locale. Too bad I had to delete the bad databases
 earlier.


 So, just for completeness what worked:

 locale-gen
 sudo dpkg-reconfigure locales

 or

 localedef ?


 Thanks Adrian,

 depesz






 --
 Adrian Klaver
 adrian.kla...@aklaver.com



[GENERAL] Problem with locales on Linux with 9.3.4

2014-06-06 Thread hubert depesz lubaczewski
Hi,
I'm running Pg 9.3.4 on Ubuntu Linux 12.04/Precise. Pg is installed from
PGDG repo (http://apt.postgresql.org/pub/repos/apt/).

It somehow got a database created in a locale that it can't now open:

$ psql
psql: FATAL:  database locale is incompatible with operating system
DETAIL:  The database was initialized with LC_COLLATE en_GB.UTF-8,  which
is not recognized by setlocale().
HINT:  Recreate the database with another locale or install the missing
locale.

When I connect to another database, I can see that:


                                  List of databases
    Name     |   Owner    | Encoding |   Collate   |    Ctype    |   Access privileges
-------------+------------+----------+-------------+-------------+-----------------------
 xxx         | xx         | UTF8     | pl_PL.UTF-8 | pl_PL.UTF-8 |
 postgres    | postgres   | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 |
 template0   | postgres   | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +
             |            |          |             |             | postgres=CTc/postgres
 template1   | postgres   | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +
             |            |          |             |             | postgres=CTc/postgres
(4 rows)


Locale settings in Pg:

# select name, setting from pg_settings where name ~ '^lc_';
    name     |   setting
-------------+-------------
 lc_collate  | pl_PL.UTF-8
 lc_ctype    | pl_PL.UTF-8
 lc_messages | en_US.UTF-8
 lc_monetary | en_US.UTF-8
 lc_numeric  | en_US.UTF-8
 lc_time     | en_US.UTF-8
(6 rows)

The thing is that system knows about en_GB:

$ locale -a
C
cs_CZ.utf8
C.UTF-8
de_DE.utf8
en_GB
en_GB.iso88591
en_GB.iso885915
en_GB.utf8
en_US.utf8
pl_PL.utf8
POSIX
sk_SK.utf8

$ LC_ALL=en_GB.UTF-8 locale
LANG=en_GB.UTF-8
LANGUAGE=
LC_CTYPE=en_GB.UTF-8
LC_NUMERIC=en_GB.UTF-8
LC_TIME=en_GB.UTF-8
LC_COLLATE=en_GB.UTF-8
LC_MONETARY=en_GB.UTF-8
LC_MESSAGES=en_GB.UTF-8
LC_PAPER=en_GB.UTF-8
LC_NAME=en_GB.UTF-8
LC_ADDRESS=en_GB.UTF-8
LC_TELEPHONE=en_GB.UTF-8
LC_MEASUREMENT=en_GB.UTF-8
LC_IDENTIFICATION=en_GB.UTF-8
LC_ALL=en_GB.UTF-8

I straced Pg, and it showed:

17:18:42.386260 open(/usr/lib/locale/en_GB.UTF-8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.14
17:18:42.386299 open(/usr/lib/locale/en_GB.utf8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.14
17:18:42.386337 open(/usr/lib/locale/en_GB/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.14
17:18:42.386374 open(/usr/lib/locale/en.UTF-8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.16
17:18:42.386418 open(/usr/lib/locale/en.utf8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.14
17:18:42.386455 open(/usr/lib/locale/en/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.15
17:18:42.386492
open(/usr/share/locale-langpack/en_GB.UTF-8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.15
17:18:42.386530
open(/usr/share/locale-langpack/en_GB.utf8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.13
17:18:42.386562 open(/usr/share/locale-langpack/en_GB/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.11
17:18:42.386591 open(/usr/share/locale-langpack/en.UTF-8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.08
17:18:42.386618 open(/usr/share/locale-langpack/en.utf8/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.09
17:18:42.386645 open(/usr/share/locale-langpack/en/LC_COLLATE,
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) 0.08
17:18:42.386723 write(2, 2014-06-06 15:18:42 GMT FATAL:  database
locale is incompatible with operating system\n2014-06-06 15:18:42 GMT
DETAIL:  The database was initialized with LC_COLLATE \en_GB.UTF-8\,
 which is not recognized by setlocale().\n2014-06-06 15:18:42 GMT
HINT:  Recre..., 324) = 324 0.23

In /usr/lib/locale, I have only:
1. Directory C.UTF-8
2. File: locale-archive

It looks as if the system locale was packed into this archive, but Pg for
some reason cannot open it, and requires the locale files to be in separate
subdirectories.

Is it possible? Has anyone encountered something like this before? If so - what
can be done? I haven't found any way to unpack the locale archive.

depesz


Re: [GENERAL] Problem with locales on Linux with 9.3.4

2014-06-06 Thread hubert depesz lubaczewski
locale-gen just regenerates the locale - which I have. It's just that
PostgreSQL doesn't see it. When I run locale-gen, it just shows that all
locales (listed by locale -a) are up to date.

depesz


On Fri, Jun 6, 2014 at 10:04 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 06/06/2014 09:05 AM, hubert depesz lubaczewski wrote:

 Hi,
 I'm running Pg 9.3.4 on Ubuntu Linux 12.04/Precise. Pg is installed from
 PGDG repo (http://apt.postgresql.org/pub/repos/apt/).

 It somehow got database created in locale that it can't now open:

 $ psql
 psql: FATAL:  database locale is incompatible with operating system
 DETAIL:  The database was initialized with LC_COLLATE en_GB.UTF-8,
 which is not recognized by setlocale().
 HINT:  Recreate the database with another locale or install the missing
 locale.



 The thing is that system knows about en_GB:



 Is it possible? Anyone encountered something like this before? If so -
 what can be done? I didn't found any way to unpack the archive with
 locales.


 Seems this is a new Ubuntu thing:

 http://manpages.ubuntu.com/manpages/precise/man8/locale-gen.8.html

 and an example:

 http://backdrift.org/fixing-no-such-file-or-directory-locale-errors


 depesz




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Backups over slave instead master?

2014-05-29 Thread hubert depesz lubaczewski
OmniPITR (https://github.com/omniti-labs/omnipitr) has working backups off
the slave. Working as in - we have been using it in production since 8.3 at least.

depesz


On Thu, May 29, 2014 at 8:02 PM, Andres Freund and...@2ndquadrant.com
wrote:

 Hi,

 On 2014-05-16 12:49:25 +0530, chiru r wrote:
  Yes, It is possible to execute backups on the slave server instead of
  master.
 
  Below are the steps we run for one of our past customer every day to
  refresh his Dev/test environments using slave backups.
 
  *On Slave:*
 
  1. Pause the replication
 
  postgres=# select pg_xlog_replay_pause();
 
   pg_xlog_replay_pause
 
  --
 
  (1 row)
 
  2. Make sure whether replication is paused or not.
 
  postgres =# select pg_is_xlog_replay_paused();
 
   pg_is_xlog_replay_paused
 
  --
 
  * t*
 
  (1 row)
 
  3. Copy the data directory using any one rsync,tar,scp or cp..etc
 
  4. Resume the replication to continue the replication process.
 
  postgres=# select pg_xlog_replay_resume();
 
   pg_xlog_replay_resume
 
  ---
 
  (1 row)
 
  5. Verify whether replication is resumed or not.
 
  postgres=# select pg_is_xlog_replay_paused();
 
   pg_is_xlog_replay_paused
 
  --
 
   *f*

 This procedure is absolutely broken:
 a) There'll be further writes even if you stop replay. Both the
background writer and the checkpointer are active. The latter will
only create restartpoints, but that's still problematic.
 b) Because of the nonexistence of a backup label a backup that's been
created won't necessarily start up from the right point.

 From 9.2. you can simply use pg_basebackup from standby servers
 though. That does all the necessary things internally.

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] 9.1.11 - many backends in semtimedop syscall

2014-03-10 Thread hubert depesz lubaczewski
On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
 On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
  hubert depesz lubaczewski dep...@depesz.com writes:
   I didn't have a chance to do it. Can try if there is a way to get trace
   *without* making core (sorry, my c/gdb knowledge is very, very limited).
  
  Sure, you just attach to the process:
  
  $ gdb /path/to/postgres PID-of-process
  gdb bt
  gdb quit
  
  This is usually preferable to forcing a core dump.
 
 Thank you. If the problem will strike again, I will do it on all (or
 most, depending how fast I can make it) backends.

The problem did happen again, and we were able to find a fix (I think).
For some reason we had a table with over 50000 (yes, 50 thousand)
indexes on it. This table was a bucardo internals table, so maybe it was
something in bucardo (we are using it to migrate hundreds of tables to
another machine, so maybe it has something to do with it).

Anyway - after removing the obsolete indexes there - the problem is gone.

Best regards,

depesz



signature.asc
Description: Digital signature


[GENERAL] 9.1.11 - many backends in semtimedop syscall

2014-03-06 Thread hubert depesz lubaczewski
Hi,
I have the following situation:
Amazon, but not RDS - just plain EC2, with an Ubuntu system.
Zone reclaiming is disabled (at least in my system, no idea about the host),
and there is no transparent huge page support.

Today we had at least 3 cases where a bunch of backends (~40) got
interlocked.
Some of them are shown as waiting in pg_stat_activity, but not all.
A simple insert into table () - without any triggers - gets killed after
10 minutes with no visible progress.

From what I saw - most of the backends are locked (though it's not
visible in pg_stat_activity) on the same table.

I did take a core dump from one of the backends, but there are no symbols, so
the backtrace is just:

root@xx:/mnt# gdb --batch --quiet -ex "thread apply all bt full" -ex "quit" /usr/lib/postgresql/9.1/bin/postgres pg.core.21422
[New LWP 21422]
[Thread debugging using libthread_db enabled]
Using host libthread_db library
/lib/x86_64-linux-gnu/libthread_db.so.1.
Core was generated by `postgres'.
#0  0x7ffa60da2dc7 in semop () from /lib/x86_64-linux-gnu/libc.so.6

Thread 1 (LWP 21422):
#0  0x7ffa60da2dc7 in semop () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#1  0x005f65e8 in PGSemaphoreLock ()
No symbol table info available.
#2  0x00636125 in LWLockAcquire ()
No symbol table info available.
#3  0x00630f91 in LockAcquireExtended ()
No symbol table info available.
#4  0x0062f88c in LockRelationOid ()
No symbol table info available.
#5  0x00470f6d in relation_open ()
No symbol table info available.
#6  0x0047b013 in index_open ()
No symbol table info available.
#7  0x0057bb4c in ExecOpenIndices ()
No symbol table info available.
#8  0x005894c8 in ExecInitModifyTable ()
No symbol table info available.
#9  0x0057266a in ExecInitNode ()
No symbol table info available.
#10 0x00570e4a in standard_ExecutorStart ()
No symbol table info available.
#11 0x00593406 in ?? ()
No symbol table info available.
#12 0x00593947 in SPI_execute_plan_with_paramlist ()
No symbol table info available.
#13 0x7ff8c34c8aed in ?? () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#14 0x7ff8c34c9716 in ?? () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#15 0x7ff8c34ca252 in ?? () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#16 0x7ff8c34ca252 in ?? () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#17 0x7ff8c34cc9dc in ?? () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#18 0x7ff8c34ccf01 in plpgsql_exec_trigger () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#19 0x7ff8c34c214a in plpgsql_call_handler () from
/usr/lib/postgresql/9.1/lib/plpgsql.so
No symbol table info available.
#20 0x00709d67 in ?? ()
No symbol table info available.
#21 0x00555fda in ?? ()
No symbol table info available.
#22 0x0055668f in ?? ()
No symbol table info available.
#23 0x0055ba6a in AfterTriggerEndQuery ()
No symbol table info available.
#24 0x0056febf in standard_ExecutorFinish ()
No symbol table info available.
#25 0x00645a2a in ?? ()
No symbol table info available.
#26 0x00645c13 in ?? ()
No symbol table info available.
#27 0x00646962 in PortalRun ()
No symbol table info available.
#28 0x0064274a in PostgresMain ()
No symbol table info available.
#29 0x00604443 in ?? ()
No symbol table info available.
#30 0x00604eb1 in PostmasterMain ()
No symbol table info available.
#31 0x0045a720 in main ()
No symbol table info available.


What could that be, and how to get rid of the problem?

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


signature.asc
Description: Digital signature


Re: [GENERAL] 9.1.11 - many backends in semtimedop syscall

2014-03-06 Thread hubert depesz lubaczewski
On Thu, Mar 06, 2014 at 11:56:06AM -0500, Tom Lane wrote:
 hubert depesz lubaczewski dep...@depesz.com writes:
  Thread 1 (LWP 21422):
  #0  0x7ffa60da2dc7 in semop () from /lib/x86_64-linux-gnu/libc.so.6
  No symbol table info available.
  #1  0x005f65e8 in PGSemaphoreLock ()
  No symbol table info available.
  #2  0x00636125 in LWLockAcquire ()
  No symbol table info available.
  #3  0x00630f91 in LockAcquireExtended ()
  No symbol table info available.
  #4  0x0062f88c in LockRelationOid ()
  No symbol table info available.
  #5  0x00470f6d in relation_open ()
  No symbol table info available.
 
 Huh.  Looks like it's blocked trying to acquire one of the lock-partition
 LWLocks, which is odd because those ought never be held very long.
 Somebody else has failed to release that LWLock, looks like.
 
 Did you by any chance capture stack traces from all of the backends?
 The interesting one would be the one that *doesn't* look like this.
 Or possibly there's more than one such.

I didn't have a chance to do it. Can try if there is a way to get trace
*without* making core (sorry, my c/gdb knowledge is very, very limited).

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


signature.asc
Description: Digital signature


Re: [GENERAL] 9.1.11 - many backends in semtimedop syscall

2014-03-06 Thread hubert depesz lubaczewski
On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
 hubert depesz lubaczewski dep...@depesz.com writes:
  I didn't have a chance to do it. Can try if there is a way to get trace
  *without* making core (sorry, my c/gdb knowledge is very, very limited).
 
 Sure, you just attach to the process:
 
   $ gdb /path/to/postgres PID-of-process
   gdb bt
   gdb quit
 
 This is usually preferable to forcing a core dump.

Thank you. If the problem will strike again, I will do it on all (or
most, depending how fast I can make it) backends.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


signature.asc
Description: Digital signature


Re: [GENERAL] Grep'ing for a string in all functions in a schema?

2014-01-31 Thread hubert depesz lubaczewski
On Thu, Jan 30, 2014 at 12:52:35PM -0800, bricklen wrote:
  Since Postgres does not consider a table as a dependency of a function if
  that table is referenced in the function (probably a good reason), I often
  find myself in a position of asking "is this table/sequence/index
  referenced in any of these N number of functions?"
 
  Is there an easy way of essentially grep'ing all of the functions in a
  given schema for a string?
 A method I've used in the past is to create a view of function source which
 can then be searched.

Why not simply:

select p.oid::regproc from pg_proc p join pg_namespace n on p.pronamespace = 
n.oid where n.nspname = 'your-schema' and p.prosrc ~ 'searched-string';

depesz


signature.asc
Description: Digital signature


Re: [GENERAL] help interpreting explain analyze output

2013-11-27 Thread hubert depesz lubaczewski
On Tue, Nov 26, 2013 at 12:24:08PM -0500, David Rysdam wrote:
 I'm not really looking for information on how to speed this query
 up. I'm just trying to interpret the output enough to tell me which step
 is slow:

You might want to read this:
http://www.depesz.com/tag/unexplainable/

Best regards,

depesz



signature.asc
Description: Digital signature


Re: [GENERAL] log_line_prefix

2013-11-16 Thread hubert depesz lubaczewski
On Sat, Nov 16, 2013 at 08:44:52 +0530, Jayadevan M wrote:
 I was trying different options of log_line_prefix. I am making changes,
 doing a pg_ctl reload and checking the output in the log files. For some
 reason, the changes seem to have no impact. What am I doing wrong? Here is
 the output from the log files where you can see the change being accepted
 (reload) and then the output of a select now().
 2013-11-16 08:24:50.657 IST,,,3186,,5286c0d6.c72,5,,2013-11-16 06:18:22
 IST,,0,LOG,0,received SIGHUP, reloading configuration files,
 2013-11-16 08:24:50.657 IST,,,3186,,5286c0d6.c72,6,,2013-11-16 06:18:22
 IST,,0,LOG,0,parameter log_line_prefix changed to %t
 ,

It looks like you're using csvlog. log_line_prefix is used only for
stderr/syslog logging.

Best regards,

depesz



signature.asc
Description: Digital signature


Re: [GENERAL] Explanantion on pgbouncer please

2013-10-31 Thread hubert depesz lubaczewski
On Thu, Oct 31, 2013 at 07:25:21 -0700, si24 wrote:
 Can someone please give me a bit better explanation of how exactly
 pgbouncer works, as I am now lost.

 I'm not sure if it is pooling the connections, because surely if it's not
 being used the connections should go down, not up. I.e. I run the webpage
 which has my map running, which is an OpenLayers map reading GeoServer; all
 my data on GeoServer is from a database in Postgres. When I start the web
 page it goes to 46 connections, and after moving around for a while and
 selecting the different overlays that I have on the map it goes up to 75
 connections. After not touching it for a while nothing happens - the
 connections don't go up or down - but when I move the map around and zoom,
 the connections increase again to 84 connections.

You are probably using session pooling mode in pgbouncer, and
persistent connections in the app.

This is generally not such a good idea.
Reasons, explanation, and suggestions:

http://www.depesz.com/2012/12/02/what-is-the-point-of-bouncing/

depesz


signature.asc
Description: Digital signature


Re: [GENERAL] Count of records in a row

2013-10-22 Thread hubert depesz lubaczewski
On Mon, Oct 21, 2013 at 08:38:52 -0400, Robert James wrote:
 I have a table of event_id, event_time.  Many times, several events
 happen in a row.  I'd like a query which replaces all of those events
 with a single record, showing the count.
 
 Eg: Take A,A,A,B,C,A,D,A,A,D,D,B,C,C and return: A,3; B,1; C,1; A,1;
 D,1; A,2; D,2; B,1; C,2
 How can I do that?

"A" or the other letters don't really match your schema description.
Please provide sample schema (as in: create table ...), sample data, and
expected output.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
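For the archives: the transformation asked for above is run-length encoding of consecutive duplicates, which in SQL is usually handled with the window-function "gaps and islands" pattern (tag each run using row_number() differences, then GROUP BY). The logic itself, sketched in Python on the letters from the question:

```python
from itertools import groupby

# events ordered by event_time, as in the question
events = list("AAABCADAADDBCC")

# groupby() batches consecutive equal items, which is exactly a "run"
runs = [(event, sum(1 for _ in grp)) for event, grp in groupby(events)]
print(runs)
# [('A', 3), ('B', 1), ('C', 1), ('A', 1), ('D', 1),
#  ('A', 2), ('D', 2), ('B', 1), ('C', 2)]
```

This matches the expected output in the question: A,3; B,1; C,1; A,1; D,1; A,2; D,2; B,1; C,2.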


Re: [GENERAL] PSQL: argument exceeds the maximum length of 1024 bytes

2013-10-12 Thread hubert depesz lubaczewski
On Sat, Oct 12, 2013 at 06:30:51 +0200, jane...@web.de wrote:
What does it mean if the error 'argument exceeds the maximum length of
 1024 bytes' is raised when using psql?
How can I increase this limit?

How did you achieve it? Can you show a screenshot of what you did, and how
the error was presented?

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Quotes, double quotes...

2013-09-30 Thread hubert depesz lubaczewski
On Sun, Sep 29, 2013 at 02:09:54 +0100, António M. Rodrigues wrote:
 The code is the following:
 
 -
 DO $$
 DECLARE
 i integer;
 BEGIN
 FOR i IN (select nn from numeros)
 LOOP
 EXECUTE
 'create table contagio' || i || ' as
 SELECT *
  FROM pgr_drivingdistance(


The " character is not for strings - it's for identifiers.

if you want to have a string within a string, you have the following options:
a. double the ' - i.e. use '' (not ")
b. use $ (dollar) quoting

so you can:
execute 'whatever ''sub string'' anything ';
or
execute 'whatever $sub$sub string$sub$';

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
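One caveat on option (b): the chosen tag must not itself appear inside the string (a payload containing the literal text $sub$ would break the example above). Server-side, quote_literal() or format() with %L handles quoting for you; if a client has to build dollar-quoted SQL by hand, it can pick a safe tag first. A small illustrative helper (the naming is made up):

```python
def dollar_quote(payload, base="q"):
    """Wrap payload in PostgreSQL dollar quoting, picking a tag absent from it."""
    tag, n = f"${base}$", 0
    while tag in payload:          # try $q$, $q1$, $q2$, ... until one is safe
        n += 1
        tag = f"${base}{n}$"
    return f"{tag}{payload}{tag}"

print(dollar_quote("it's a 'quoted' string"))
# $q$it's a 'quoted' string$q$
```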


Re: [GENERAL] Performance of ORDER BY RANDOM to select random rows?

2013-08-08 Thread hubert depesz lubaczewski
On Thu, Aug 08, 2013 at 12:01:17PM +1000, Victor Hooi wrote:
 I'm just wondering if this is still the case?

Yes. Order by random() is and, most likely, will be slow. Not sure if
there is any engine that could make it fast.

 I just ran those benchmarks on my system (Postgres 9.2.4), and using ORDER
 BY RANDOM did not seem substantially to generating random integers in
 Python and picking those out (and handling non-existent rows).

I think you accidentally a word.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How can you get WAL segment has already been removed when doing synchronous replication ?!

2013-07-12 Thread hubert depesz lubaczewski
On Fri, Jul 12, 2013 at 12:30:22PM +0530, Amit Langote wrote:
  Increasing the wal_keep_segments ?
  I know that I can increase wal_keep_segments to solve it, but
  shouldn't it be *impossible* to happen with synchronous replication?
  After all - all commits should wait for slave to be 100% up to date!
 Is it possible that xlog recycling might have caused this wherein the
 xlog segment which is yet to be archived/shipped is recycled? I

As far as I know, pg will not recycle log before it's archived.
Otherwise we wouldn't be able to have archives.

 remember something of that sort. Check this discussion:
 http://www.postgresql.org/message-id/51779b3b.1020...@lab.ntt.co.jp
 Is this logged on the master or a standby?

master.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How can you get WAL segment has already been removed when doing synchronous replication ?!

2013-07-11 Thread hubert depesz lubaczewski
We are seeing situation like this:
1. 9.2.4 database
2. Master settings:
   name|setting
---+---
 fsync | on
 synchronize_seqscans  | on
 synchronous_commit| remote_write
 synchronous_standby_names | *
 wal_sync_method   | open_datasync
(5 rows)

Yet, every now and then we're getting:
FATAL:  requested WAL segment * has already been removed

Assuming no part of the system is issuing set synchronous_commit
= off, how can we get in such situation?

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How can you get WAL segment has already been removed when doing synchronous replication ?!

2013-07-11 Thread hubert depesz lubaczewski
On Thu, Jul 11, 2013 at 11:29:24PM +0530, Raghavendra wrote:
 On Thu, Jul 11, 2013 at 11:18 PM, hubert depesz lubaczewski 
 dep...@depesz.com wrote:
 
  We are seeing situation like this:
  1. 9.2.4 database
  2. Master settings:
 name|setting
  ---+---
   fsync | on
   synchronize_seqscans  | on
   synchronous_commit| remote_write
   synchronous_standby_names | *
   wal_sync_method   | open_datasync
  (5 rows)
 
  Yet, every now and then we're getting:
  FATAL:  requested WAL segment * has already been removed
 
  Assuming no part of the system is issuing set synchronous_commit
  = off, how can we get in such situation?
 
  Best regards,
 
  depesz
 
 
 Increasing the wal_keep_segments ?

I know that I can increase wal_keep_segments to solve it, but
shouldn't it be *impossible* to happen with synchronous replication?
After all - all commits should wait for slave to be 100% up to date!

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Longest Common Subsequence in Postgres - Algorithm Challenge

2013-07-08 Thread hubert depesz lubaczewski
On Mon, Jul 08, 2013 at 09:09:26AM -0400, Robert James wrote:
 I have two relations, where each relation has two fields, one
 indicating a name and one indicating a position.  That is, each
 relation defines a sequence.
 
 I need to determine their longest common subsequence.  Yes, I can do
 this by fetching all the data into Java (or any other language) and
 computing it using the standard LCS dynamic programming language.  But
 I'd like to stay within Postgres.  Is there any way to do this?

I'm not entirely sure I understand. Can you show us some sample data and
expected output?

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query on query

2013-07-05 Thread hubert depesz lubaczewski
On Fri, Jul 05, 2013 at 08:35:22AM +0200, Luca Ferrari wrote:
 On Fri, Jul 5, 2013 at 5:09 AM, Jayadevan M
 jayadevan.maym...@ibsplc.com wrote:
 
 
  So each student may get counted many times, someone with 99 will be counted
  10 times. Possible to do this with a fat query? The table will have many
  thousands of records.
 
 
 
 Not sure I got the point, but I guess this is a good candidate for a CTE:
 
 WITH RECURSIVE t(n) AS (
 VALUES (10)
   UNION ALL
 SELECT n+10 FROM t WHERE n < 50
 )
 select count(*), t.n from m, t where mark > t.n group by t.n;

This might get expensive with many rows.

On the other hand, you can do it like this:

create table grades (username text, grade int4);
insert into grades select 'x', (random() * 50)::int4 from generate_series(1,100);

with a as
  (select (grade/10)*10 as mark,
 count(*)
   from grades
   group by mark)
select mark,
   sum(count) over (
order by mark)
from a
order by mark;

This should be faster.

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Why are there no inequality scans for ctid?

2013-06-28 Thread hubert depesz lubaczewski
Hi,
while working on removing bloat from some table, I had to use lots of
logic simply because there are no (indexable) inequality scans for
ctids.

Is it because just noone thought about adding them, or are there some
more fundamental issues?

I could imagine that things like:

select * from table where ctid @ '123' could return all rows from 123rd
page, or I could:
select * from table where ctid >= '(123,0)' and ctid < '(124,0)';

Having such operators work would greatly improve bloat reducing
options.
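
In the meantime the usual workaround is a TID scan over an explicitly
built list, e.g. to pull every row stored on page 123 (a sketch;
"my_table" is a placeholder, and 290 just has to exceed the maximum
number of tuples on an 8kB page):

```sql
SELECT t.*
FROM my_table t
WHERE t.ctid = ANY (
    ARRAY(SELECT ('(123,' || s || ')')::tid
          FROM generate_series(1, 290) s)
);
```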

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why are there no inequality scans for ctid?

2013-06-28 Thread hubert depesz lubaczewski
On Fri, Jun 28, 2013 at 02:21:10PM +0530, Atri Sharma wrote:
 How would this be helpful for general use cases? Querying on tids on a
 specific page doesn't seem too useful for any other case than the one
 you mentioned above, and IMHO it seems to be the job of vacuum.
 I may be missing something here though.

Vacuum doesn't move rows around (as far as I can tell by running vacuum
~ 100 times on bloated table).

And as for general case - sure. It's not really useful aside from bloat
removal, but I think that bloat removal is important enough to warrant
some help from Pg.

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] SR Slave leaves off every 32nd wal segment ?!

2013-05-21 Thread hubert depesz lubaczewski
Hi,
We have following situation:
Pg master on US east coast.
Pg slave on US west coast.

Replication set using omnipitr + streaming replication.

Setup on slave:
postgres=# select name, setting from pg_settings where name ~ '(checkpo|wal)';  
 name |   setting   
--+-
 checkpoint_completion_target | 0.9
 checkpoint_segments  | 30
 checkpoint_timeout   | 300
 checkpoint_warning   | 30
 log_checkpoints  | on
 max_wal_senders  | 3
 wal_block_size   | 8192
 wal_buffers  | 2048
 wal_keep_segments| 0
 wal_level| hot_standby
 wal_receiver_status_interval | 10
 wal_segment_size | 2048
 wal_sync_method  | fdatasync
 wal_writer_delay | 200
(14 rows)

Replication is working usually well:
$ grep -c 'FATAL:..could not receive data from WAL stream:' postgresql-2013-05-*
postgresql-2013-05-14.log:7
postgresql-2013-05-15.log:7
postgresql-2013-05-16.log:9
postgresql-2013-05-17.log:9
postgresql-2013-05-18.log:8
postgresql-2013-05-19.log:8
postgresql-2013-05-20.log:7
postgresql-2013-05-21.log:7

After each such break, Pg switches to omnipitr, which does recover some wal 
files, and then it switches back to SR. All looks fine.

But. In pg_xlog on slave we have lots of files - almost 500 now. I.e. WAL 
segments.

What's weird is their numbering. List of 500 files is too big for now, so just 
last 50:

pg_xlog$ ls -l | tail -n 50
-rw--- 1 postgres postgres 16777216 May 16 22:41 0001008B0009
-rw--- 1 postgres postgres 16777216 May 17 01:16 0001008B0029
-rw--- 1 postgres postgres 16777216 May 17 03:56 0001008B0049
-rw--- 1 postgres postgres 16777216 May 17 06:36 0001008B0069
-rw--- 1 postgres postgres 16777216 May 17 09:16 0001008B0089
-rw--- 1 postgres postgres 16777216 May 17 11:56 0001008B00A9
-rw--- 1 postgres postgres 16777216 May 17 14:36 0001008B00C9
-rw--- 1 postgres postgres 16777216 May 17 17:16 0001008B00E9
-rw--- 1 postgres postgres 16777216 May 17 19:56 0001008C000A
-rw--- 1 postgres postgres 16777216 May 17 22:36 0001008C002A
-rw--- 1 postgres postgres 16777216 May 18 01:12 0001008C004A
-rw--- 1 postgres postgres 16777216 May 18 03:52 0001008C006A
-rw--- 1 postgres postgres 16777216 May 18 06:32 0001008C008A
-rw--- 1 postgres postgres 16777216 May 18 09:13 0001008C00AA
-rw--- 1 postgres postgres 16777216 May 18 14:33 0001008C00EA
-rw--- 1 postgres postgres 16777216 May 18 17:13 0001008D000B
-rw--- 1 postgres postgres 16777216 May 18 19:53 0001008D002B
-rw--- 1 postgres postgres 16777216 May 18 22:33 0001008D004B
-rw--- 1 postgres postgres 16777216 May 19 01:05 0001008D006B
-rw--- 1 postgres postgres 16777216 May 19 03:45 0001008D008B
-rw--- 1 postgres postgres 16777216 May 19 06:25 0001008D00AB
-rw--- 1 postgres postgres 16777216 May 19 09:05 0001008D00CB
-rw--- 1 postgres postgres 16777216 May 19 11:45 0001008D00EB
-rw--- 1 postgres postgres 16777216 May 19 14:25 0001008E000C
-rw--- 1 postgres postgres 16777216 May 19 17:05 0001008E002C
-rw--- 1 postgres postgres 16777216 May 19 19:45 0001008E004C
-rw--- 1 postgres postgres 16777216 May 19 22:25 0001008E006C
-rw--- 1 postgres postgres 16777216 May 20 01:01 0001008E008C
-rw--- 1 postgres postgres 16777216 May 20 03:41 0001008E00AC
-rw--- 1 postgres postgres 16777216 May 20 06:21 0001008E00CC
-rw--- 1 postgres postgres 16777216 May 20 09:01 0001008E00EC
-rw--- 1 postgres postgres 16777216 May 20 11:41 0001008F000D
-rw--- 1 postgres postgres 16777216 May 20 14:21 0001008F002D
-rw--- 1 postgres postgres 16777216 May 20 17:01 0001008F004D
-rw--- 1 postgres postgres 16777216 May 20 19:41 0001008F006D
-rw--- 1 postgres postgres 16777216 May 20 22:21 0001008F008D
-rw--- 1 postgres postgres 16777216 May 21 01:00 0001008F00AD
-rw--- 1 postgres postgres 16777216 May 21 03:35 0001008F00CD
-rw--- 1 postgres postgres 16777216 May 21 06:15 0001008F00ED
-rw--- 1 postgres postgres 16777216 May 21 08:55 0001009E
-rw--- 1 postgres postgres 16777216 May 21 11:35 00010090002E
-rw--- 1 postgres postgres 16777216 May 21 14:15 00010090004E
-rw--- 1 postgres postgres 16777216 May 21 16:55 00010090006E
-rw--- 1 postgres postgres 16777216 May 21 19:35 00010090008E
-rw--- 1 postgres 

Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-18 Thread hubert depesz lubaczewski
Not sure if it helps, but it's apparently not a very rare thing.
Quick analysis on data from explain.depesz.com showed that ~12% of
plans with a nested loop have such an estimate.

Couple of examples:

http://explain.depesz.com/s/Qm4
http://explain.depesz.com/s/qmW
http://explain.depesz.com/s/qnG
http://explain.depesz.com/s/QO
http://explain.depesz.com/s/qov
http://explain.depesz.com/s/qqb
http://explain.depesz.com/s/QqF
http://explain.depesz.com/s/qQO
http://explain.depesz.com/s/qrI
http://explain.depesz.com/s/QRK
http://explain.depesz.com/s/QUX9
http://explain.depesz.com/s/QvN
http://explain.depesz.com/s/QWL
http://explain.depesz.com/s/r4F
http://explain.depesz.com/s/R7q
http://explain.depesz.com/s/r8
http://explain.depesz.com/s/R8
http://explain.depesz.com/s/RaB
http://explain.depesz.com/s/RbV
http://explain.depesz.com/s/Rc7

all these plans are public and not anonymized.

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_stop_backup running for 2h10m?

2013-04-24 Thread hubert depesz lubaczewski
On Tue, Apr 23, 2013 at 03:08:52PM -0400, François Beausoleil wrote:
 I used omnipitr to launch a base backup, but I fumbled a couple of
 things, so I Ctrl+C'd *once* the console where I had
 omnipitr-backup-master running. omnipitr-backup-master correctly
 launched pg_stop_backup, but pg_stop_backup has been active for 2h10
 minutes, as reported by pg_stat_activity.

Most likely your archive_command is not doing its job. Why - hard to
tell without knowing more about the setup, but that's the direction you
should be looking in.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Set Returning Functions and array_agg()

2013-04-24 Thread hubert depesz lubaczewski
On Wed, Apr 24, 2013 at 12:48:44PM -0700, Stephen Scheck wrote:
 I have a UDF (written in C) that returns SETOF RECORD of an anonymous
 record type
 (defined via OUT parameters). I'm trying to use array_agg() to transform
 its output to
 an array:
 pg_dev=# SELECT array_agg((my_setof_record_returning_func()).col1);
 ERROR:  set-valued function called in context that cannot accept a set

Is there any reason why you're not using normal syntax:
select array_agg(col1) from my_setof_record_returning_func();
?

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is pg backend using memory for?

2013-04-10 Thread hubert depesz lubaczewski
On Wed, Apr 10, 2013 at 07:36:59AM +, Albe Laurenz wrote:
 What libraries are loaded in this backend (lsof)?
 Maybe it's something non-PostgreSQL that's hogging the memory.

I don't have this particular backend anymore, and I don't have lsof. But
in smaps there are libraries listed, so:

Still there is 51MB non-shared block:
2ba63c797000-2ba63fa68000 rw-p 2ba63c797000 00:00 0
Size: 52036 kB
Rss:  51340 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean:0 kB
Private_Dirty:51340 kB
Swap: 0 kB
Pss:  51340 kB

As for libraries:
=$ grep / smaps | awk '{print $NF}' | sort | uniq
(deleted)
/lib64/ld-2.5.so
/lib64/libc-2.5.so
/lib64/libcom_err.so.2.1
/lib64/libcrypt-2.5.so
/lib64/libcrypto.so.0.9.8e
/lib64/libdl-2.5.so
/lib64/libkeyutils-1.2.so
/lib64/libm-2.5.so
/lib64/libnss_files-2.5.so
/lib64/libresolv-2.5.so
/lib64/libselinux.so.1
/lib64/libsepol.so.1
/lib64/libssl.so.0.9.8e
/opt/pgbrew/9.1.6/bin/postgres
/opt/pgbrew/9.1.6/lib/postgresql/auto_explain.so
/opt/pgbrew/9.1.6/lib/postgresql/plpgsql.so
/usr/lib64/gconv/gconv-modules.cache
/usr/lib64/libgssapi_krb5.so.2.2
/usr/lib64/libk5crypto.so.3.1
/usr/lib64/libkrb5.so.3.3
/usr/lib64/libkrb5support.so.0.1
/usr/lib64/libxml2.so.2.6.26
/usr/lib64/libz.so.1.2.3
/usr/lib/locale/locale-archive

the (deleted) is shared memory file.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What is pg backend using memory for?

2013-04-09 Thread hubert depesz lubaczewski
Hi,
So, I checked a backend on Linux, and found such thing:
2ba63c797000-2ba63fa68000 rw-p 2ba63c797000 00:00 0
Size: 52036 kB
Rss:  51336 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean:0 kB
Private_Dirty:51336 kB
Swap: 0 kB
Pss:  51336 kB

(this is part of /proc/pid/smaps).

This is not shared memory, so it's local. It's not related to any file (in 
that case the first line would contain a path to the file).

What's more - this backend was idle while the smaps copy was taken, and it's 
not the stats collector, or anything like that.

How can this be diagnosed, to find out why there is so much private
memory?

In case it matters: it's pg 9.1.6 on linux 2.6.18-164.2.1.el5

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ts_tovector() to_query()

2013-03-29 Thread hubert depesz lubaczewski
On Thu, Mar 28, 2013 at 08:50:50PM +, Severn, Chris wrote:
 What I want to do is return items that have 'Robocop' or 'Robocop and
 DVD' or 'Robocop and Collection' or 'Robocop and DVD and collection'

Based on the criteria above, I would say that:
SELECT m.* FROM movies m WHERE to_tsvector(m.item_title) @@ 
to_tsquery('Robocop')

will do what you need, since dvd and collection are irrelevant for
the results.
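
Should the extra terms ever matter, to_tsquery() accepts boolean
operators directly; a sketch reusing the movies/item_title names from the
question:

```sql
-- 'Robocop' AND ('DVD' OR 'Collection'):
SELECT m.*
FROM movies m
WHERE to_tsvector(m.item_title) @@ to_tsquery('Robocop & (DVD | Collection)');
```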

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql 9.2 build error

2013-01-13 Thread hubert depesz lubaczewski
On Sun, Jan 13, 2013 at 10:18:50AM -0500, AI Rumman wrote:
   I am trying to build Postgresql 9.2
 
   ./configure --prefix=/usr/pgsql-9.2  --with-ossp-uuid --with-libxml
 
   Got the error at config.log:
 
   configure:9747: result: no
 configure:9752: checking for uuid_export in -luuid
 configure:9787: gcc -o conftest -O2 -Wall -Wmissing-prototypes
 -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
 -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
 -fwrapv -I/usr/local/include -D_GNU_SOURCE
 -I/usr/include/libxml2  -L/usr/lib64  conftest.c -luuid  -lxml2 -lz
 -lreadline -ltermcap -lcrypt -ldl -lm  >&5
 /usr/bin/ld: cannot find -luuid
 collect2: ld returned 1 exit status
 configure:9794: $? = 1
 
  What should I do?

Install uuid library.
Depending on your system/distribution, the method might be different,
but generally you do it using yum/pacman/apt-get - something like this.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql 9.2 build error

2013-01-13 Thread hubert depesz lubaczewski
On Sun, Jan 13, 2013 at 11:36:11AM -0500, AI Rumman wrote:
 It's already installed.
 I am running Postgresql 9.0 with uuid successfully on this server.

Most likely you installed just part of the library. Not sure what
OS/distribution you're using, but on debian, for example - there is
distinction between library package needed for running programs, and
headers for library needed for program compilation.
headers are usually in package named package-dev or package-devel.

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query by partial timestamp

2013-01-09 Thread hubert depesz lubaczewski
On Tue, Jan 08, 2013 at 04:19:59PM -0600, Kirk Wythers wrote:
 I have a column of type TIMESTAMP, I'd like to query all records from 2011. 
 If it were text I could use a partial such as:
 
 WHERE
   text ~ '2011'
 
 There must be a simple way to pull the year part out of a timestamp format. 
 Thanks in advance.

using partial checks (like extract, date_part, or even casting the field to
date) will have problems with index usage.
The best way to handle it is to write the parameters using date
arithmetic, like:
where column >= '2011-01-01' and column < '2012-01-01'

do not be tempted to do:
where column >= '2011-01-01' and column <= '2011-12-31'
which is a very bad idea - the upper bound casts to '2011-12-31 00:00:00',
so it will silently miss almost everything from the last day.

More on index usage:
http://www.depesz.com/2010/09/09/why-is-my-index-not-being-used/
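
A minimal illustration of the difference (a sketch, assuming a table
"events" with a timestamp column "ts" - hypothetical names):

```sql
CREATE INDEX events_ts_idx ON events (ts);

-- Indexable: plain range predicate on the bare column.
SELECT * FROM events WHERE ts >= '2011-01-01' AND ts < '2012-01-01';

-- Not indexable with the index above: the column is wrapped in a function
-- (it would need a separate expression index on extract(year from ts)).
SELECT * FROM events WHERE extract(year FROM ts) = 2011;
```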

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [COMMITTERS] pgsql: Allow a streaming replication standby to follow a timeline switc

2012-12-22 Thread hubert depesz lubaczewski
On Thu, Dec 20, 2012 at 02:50:12PM +0200, Heikki Linnakangas wrote:
 Hmm, that's a different error than you got before. Thom also
 reported a requested WAL segment ... has already been removed
 error, but in his test case, and as far as I could reproduce it, the
 error doesn't reoccur when the standby reconnects. In other words,
 it eventually worked despite that error. In any case, I just
 committed a fix for the scenario that Thom reported. Can you try
 again with a fresh checkout?

Tested today with checkout of 1ff92eea140ccf0503b7399549031976e5c6642e

All worked fine. Thanks a lot.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

