Re: Observations in Parallel Append

2017-12-23 Thread Robert Haas
On Fri, Dec 22, 2017 at 6:18 AM, Amit Kapila  wrote:
> There doesn't seem to be any need for including spin.h.  I think some
> prior version of the patch might have needed it.  Patch attached to
> remove it.

OK, good catch.

> The code and comment don't seem to match.  The comments indicate that
> after reaching the end of the list, we loop back to first nonpartial
> plan whereas code indicates that we loop back to first partial plan.
> I think one of those needs to be changed unless I am missing something
> obvious.

Yeah, the header comment should say partial, not nonpartial.

> 3.
> +cost_append(AppendPath *apath)
> {
> ..
> +   /*
> +* Apply parallel divisor to non-partial subpaths.  Also add the
> +* cost of partial paths to the total cost, but ignore non-partial
> +* paths for now.
> +*/
> +   if (i < apath->first_partial_path)
> +   apath->path.rows += subpath->rows / parallel_divisor;
> +   else
> +   {
> +   apath->path.rows += subpath->rows;
> +   apath->path.total_cost += subpath->total_cost;
> +   }
> ..
> }
>
> I think it is better to use clamp_row_est for rows for the case where
> we use parallel_divisor so that the value of rows is always sane.

Good point.

> Also, don't we need to use parallel_divisor for partial paths instead
> of non-partial paths as those will be actually distributed among
> workers?

Uh, that seems backwards to me.  We're trying to estimate the average
number of rows per worker.
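
To make the arithmetic concrete, here is a minimal, self-contained sketch
of the row estimation under discussion (names invented; clamp_rows merely
stands in for clamp_row_est, and none of this is PostgreSQL source).  A
partial subpath's row count is already a per-worker estimate, while a
non-partial subpath runs entirely inside one worker, so only the latter is
divided by the parallel divisor to get an average per worker:

#include <math.h>
#include <stdio.h>

/* Stand-in for clamp_row_est(): keep row estimates sane (at least 1, rounded). */
static double
clamp_rows(double nrows)
{
    if (nrows <= 1.0)
        return 1.0;
    return rint(nrows);
}

/*
 * Average rows per worker contributed by one Parallel Append child.
 * A non-partial subpath runs wholly in one worker, so its total row count
 * is divided by the parallel divisor; a partial subpath's estimate is
 * already per worker.
 */
static double
append_child_rows(double subpath_rows, int is_partial, double parallel_divisor)
{
    if (is_partial)
        return subpath_rows;
    return clamp_rows(subpath_rows / parallel_divisor);
}

int
main(void)
{
    printf("non-partial child, 1000 rows, divisor 2.4: %.0f rows/worker\n",
           append_child_rows(1000.0, 0, 2.4));
    printf("partial child, 1000 rows (already per worker): %.0f rows/worker\n",
           append_child_rows(1000.0, 1, 2.4));
    return 0;
}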

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2017-12-23 Thread Craig Ringer
On 23 December 2017 at 12:57, Tomas Vondra wrote:

> Hi all,
>
> Attached is a patch series that implements two features to the logical
> replication - ability to define a memory limit for the reorderbuffer
> (responsible for building the decoded transactions), and ability to
> stream large in-progress transactions (exceeding the memory limit).
>
> I'm submitting those two changes together, because one builds on the
> other, and it's beneficial to discuss them together.
>
>
> PART 1: adding logical_work_mem memory limit (0001)
> ---
>
> Currently, limiting the amount of memory consumed by logical decoding is
> tricky (or you might say impossible) for several reasons:
>
> * The value is hard-coded, so it's not quite possible to customize it.
>
> * The amount of decoded changes to keep in memory is restricted by the
> number of changes. It's not very clear how this relates to memory
> consumption, as the change size depends on table structure, etc.
>
> * The number is "per (sub)transaction", so a transaction with many
> subtransactions may easily consume a significant amount of memory
> without actually hitting the limit.
>

Also, even without subtransactions, we assemble a ReorderBufferTXN per
transaction. Since transactions usually occur concurrently, systems with
many concurrent txns can face lots of memory use.

We can't exclude tables that won't actually be replicated at the reorder
buffering phase either, so txns use memory whether or not they do anything
interesting as far as a given logical decoding session is concerned. Even
if we will throw all the data away, we must buffer and assemble it first so
we can make that decision.

Because logical decoding considers snapshots and cid increments even from
other DBs (at least when the txn makes catalog changes), the memory use can
get BIG too. I was recently working with a system that had accumulated 2GB
of snapshots ... on each slot, with 7 slots, one for each DB.

So there's lots of room for difficulty with unpredictable memory use.

> So the patch does two things. Firstly, it introduces logical_work_mem, a
> GUC restricting memory consumed by all transactions currently kept in
> the reorder buffer
>

Does this consider the (currently high, IIRC) overhead of tracking
serialized changes?
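
As a rough illustration of what such a limit implies for the accounting (a
toy sketch with invented names and a deliberately naive eviction policy;
the real patch works on ReorderBufferTXN and chooses between serializing to
disk and streaming), the idea is: track the total size of decoded changes,
and once it exceeds the limit, evict the largest transaction until we are
back under it:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified accounting -- not the patch's actual data structures. */
typedef struct Txn
{
    unsigned int xid;
    size_t       size;          /* bytes of decoded changes kept in memory */
} Txn;

/* The transaction using the most memory is the cheapest one to evict. */
static Txn *
largest_txn(Txn *txns, int ntxns)
{
    Txn *largest = NULL;

    for (int i = 0; i < ntxns; i++)
        if (largest == NULL || txns[i].size > largest->size)
            largest = &txns[i];
    return largest;
}

/*
 * Check the shared limit and evict (spill to disk, or stream downstream)
 * until the total memory used by decoded changes is under it again.
 */
static void
check_memory_limit(Txn *txns, int ntxns, size_t *total, size_t limit_bytes)
{
    while (*total > limit_bytes)
    {
        Txn *victim = largest_txn(txns, ntxns);

        if (victim == NULL || victim->size == 0)
            break;
        printf("evicting xid %u (%zu bytes)\n", victim->xid, victim->size);
        *total -= victim->size;     /* its changes leave the in-memory buffer */
        victim->size = 0;
    }
}

int
main(void)
{
    Txn    txns[] = {{100, 4u << 20}, {101, 24u << 20}, {102, 8u << 20}};
    size_t total = (4u + 24u + 8u) << 20;

    check_memory_limit(txns, 3, &total, (size_t) 16 << 20);    /* 16MB "logical_work_mem" */
    printf("total now %zu bytes\n", total);
    return 0;
}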

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2017-12-23 Thread Tomas Vondra


On 12/23/2017 11:23 PM, Erik Rijkers wrote:
> On 2017-12-23 21:06, Tomas Vondra wrote:
>> On 12/23/2017 03:03 PM, Erikjan Rijkers wrote:
>>> On 2017-12-23 05:57, Tomas Vondra wrote:
 Hi all,

 Attached is a patch series that implements two features to the logical
 replication - ability to define a memory limit for the reorderbuffer
 (responsible for building the decoded transactions), and ability to
 stream large in-progress transactions (exceeding the memory limit).

>>>
>>> logical replication of 2 instances is OK but 3 and up fail with:
>>>
>>> TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
>>> "reorderbuffer.c", Line: 1773)
>>>
>>> I can cobble up a script but I hope you have enough from the assertion
>>> to see what's going wrong...
>>
>> The assertion says that the iterator produces changes in an order that
>> does not correlate with LSN. But I have a hard time understanding how
>> that could happen, particularly because according to the line number this
>> happens in ReorderBufferCommit(), i.e. the current (non-streaming) case.
>>
>> So instructions to reproduce the issue would be very helpful.
> 
> Using:
> 
> 0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch
> 0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch
> 0003-Issue-individual-invalidations-with-wal_level-log-v2.patch
> 0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch
> 0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch
> 0006-Add-support-for-streaming-to-built-in-replication-v2.patch
> 
> As you expected, the problem is the same with these new patches.
> 
> I have now tested more and seen that it does not always fail.  I guess
> it fails here about 3 times out of 4.  But the laptop I'm using at the
> moment is old and slow -- that may well be a factor, as we've seen
> before [1].
> 
> Attached is the bash script that I put together.  I tested with
> NUM_INSTANCES=2, which yields success, and NUM_INSTANCES=3, which fails
> often.  This same program run with HEAD never seems to fail (I tried a
> few dozen times).
> 

Thanks. Unfortunately I still can't reproduce the issue. I even tried
running it in valgrind, to see if there are some memory access issues
(which should also slow it down significantly).

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] [COMMITTERS] pgsql: Fix freezing of a dead HOT-updated tuple

2017-12-23 Thread Michael Paquier
On Sat, Dec 23, 2017 at 05:26:22PM -0300, Alvaro Herrera wrote:
> I noticed that I'm committer for this patch in the commitfest, though I
> don't remember setting that.  Are you expecting me to commit it?  I
> thought you'd do it, but if you want me to assume the responsibility I
> can do that.

I tend to create CF entries for all patches that we need to track as bug
fixes; nothing gets lost this way. Alvaro, you have been marked as committer
of this patch because you were the one to work on and push the first
version, which ended up being reverted.
--
Michael


signature.asc
Description: PGP signature


Re: [HACKERS] [COMMITTERS] pgsql: Fix freezing of a dead HOT-updated tuple

2017-12-23 Thread Alvaro Herrera
Andres Freund wrote:

> On December 23, 2017 9:26:22 PM GMT+01:00, Alvaro Herrera wrote:
> >I noticed that I'm committer for this patch in the commitfest, though I
> >don't remember setting that.  Are you expecting me to commit it?  I
> >thought you'd do it, but if you want me to assume the responsibility I
> >can do that.
> 
> I thought I pushed it to all branches? Do you see anything missing?
> Didn't know there's a CF entry...

Ah, no, looks correct to me in all branches.  Updated the CF now too.

Thanks!

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2017-12-23 Thread Erik Rijkers

On 2017-12-23 21:06, Tomas Vondra wrote:
> On 12/23/2017 03:03 PM, Erikjan Rijkers wrote:
>> On 2017-12-23 05:57, Tomas Vondra wrote:
>>> Hi all,
>>>
>>> Attached is a patch series that implements two features to the logical
>>> replication - ability to define a memory limit for the reorderbuffer
>>> (responsible for building the decoded transactions), and ability to
>>> stream large in-progress transactions (exceeding the memory limit).
>>>
>>
>> logical replication of 2 instances is OK but 3 and up fail with:
>>
>> TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
>> "reorderbuffer.c", Line: 1773)
>>
>> I can cobble up a script but I hope you have enough from the assertion
>> to see what's going wrong...
>
> The assertion says that the iterator produces changes in an order that
> does not correlate with LSN. But I have a hard time understanding how
> that could happen, particularly because according to the line number
> this happens in ReorderBufferCommit(), i.e. the current (non-streaming)
> case.
>
> So instructions to reproduce the issue would be very helpful.


Using:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch
0003-Issue-individual-invalidations-with-wal_level-log-v2.patch
0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch
0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch
0006-Add-support-for-streaming-to-built-in-replication-v2.patch

As you expected, the problem is the same with these new patches.

I have now tested more and seen that it does not always fail.  I guess
it fails here about 3 times out of 4.  But the laptop I'm using at the
moment is old and slow -- that may well be a factor, as we've seen
before [1].

Attached is the bash script that I put together.  I tested with
NUM_INSTANCES=2, which yields success, and NUM_INSTANCES=3, which fails
often.  This same program run with HEAD never seems to fail (I tried a
few dozen times).


thanks,

Erik Rijkers


[1] 
https://www.postgresql.org/message-id/3897361c7010c4ac03f358173adbcd60%40xs4all.nl


#!/bin/bash
unset PGSERVICE PGSERVICEFILE PGDATA PGPORT PGDATABASE
# PGPASSFILE must be set and have the appropriate entries

env | grep ^PG

  PROJECT=large_logical
# PROJECT=HEAD

BIN_DIR=$HOME/pg_stuff/pg_installations/pgsql.$PROJECT/bin
POSTGRES=$BIN_DIR/postgres
INITDB=$BIN_DIR/initdb
TMP_DIR=$HOME'/tmp/'$PROJECT
devel_file=${TMP_DIR}'/.devel'
NUM_INSTANCES=3
BASE_PORT=6015  #   ports 6015, 6016, 6017 
port1=$(( $BASE_PORT + 0 ))
port2=$(( $port1 + 1 ))
port3=$(( $port1 + 2 ))
scale=1  dbname=postgres  pubname=pub1  subname=sub1
if [[ ! -d $TMP_DIR ]]; then mkdir $TMP_DIR; fi
echo 's3kr1t' > $devel_file
  max_wal_senders=10  # publication side
max_replication_slots=10  # publication side and subscription side
 max_worker_processes=12  # subscription side
  max_logical_replication_workers=10  # subscription side
max_sync_workers_per_subscription=4   # subscription side
for n in `seq 1 $NUM_INSTANCES`; do
  port=$(( $BASE_PORT + $n -1 ))
data_dir=$TMP_DIR/pgsql.instance${n}/data
  server_dir=$TMP_DIR/pgsql.instance${n}
  $INITDB --pgdata=$data_dir --encoding=UTF8 --auth=scram-sha-256 --pwfile=$devel_file  # --waldir=$xlog_dir
 ( $POSTGRES -D $data_dir -p $port \
--wal_level=logical \
--max_replication_slots=$max_replication_slots \
--max_worker_processes=$max_worker_processes \
--max_logical_replication_workers=$max_logical_replication_workers \
--max_wal_senders=$max_wal_senders \
--max_sync_workers_per_subscription=$max_sync_workers_per_subscription \
--logging_collector=on \
--log_directory=${server_dir} \
--log_filename=logfile.${port} \
--log_replication_commands=on \
--autovacuum=off & )
#   --logical_work_mem=128MB & )
#   pg_isready -d $dbname --timeout=60 -p $port
done
#sleep $NUM_INSTANCES
#pg_isready -d $dbname -qp 6015 --timeout=60
#pg_isready -d $dbname -qp 6016 --timeout=60
num_loop=$(( $NUM_INSTANCES - 1 ))
$BIN_DIR/pgbench --port=$BASE_PORT --quiet --initialize --scale=$scale $dbname
echo "alter table pgbench_history add column hid serial primary key" | $BIN_DIR/psql -d $dbname -p $BASE_PORT -X
#pg_isready -d $dbname -qp 6015 --timeout=60
#pg_isready -d $dbname -qp 6016 --timeout=60
for n in `seq 1 $num_loop`; do
  target_port=$(( $BASE_PORT + $n ))
  pg_dump -Fc -p $BASE_PORT \
--exclude-table-data=pgbench_history  --exclude-table-data=pgbench_accounts \
--exclude-table-data=pgbench_branches --exclude-table-data=pgbench_tellers \
-tpgbench_history -tpgbench_accounts \
-tpgbench_branches -tpgbench_tellers \
$dbname | pg_restore -1 -p $target_port -d $dbname
done

#echo "sleep 2 (after dump/restore)"; sleep 2

for n in `seq 1 $num_loop`; do
  pubport=$(( $BASE_PORT + $n - 1 ))
  subport=$(( $BASE_PORT + $n ))
  appname='casc:'${subport}'<'${pubport}
  echo "create publication  $pubname for all tables" | psql -d $dbname -p $pubport -X
  echo "create subscription $subname
  

parallel append vs. simple UNION ALL

2017-12-23 Thread Robert Haas
As I mentioned in the commit message for the Parallel Append commit
(ab72716778128fb63d54ac256adf7fe6820a1185), it's kind of sad that this
doesn't work with UNION ALL queries, which are an obvious candidate
for such parallelization.  It turns out that it actually does work to
a limited degree: assuming that the UNION ALL query can be converted
to a simple appendrel, it can consider a parallel append of
non-partial paths only.  The attached patch lets it consider a
parallel append of partial paths by doing the following things:

1. Teaching set_subquery_pathlist to create *partial* SubqueryScan
paths as well as non-partial ones.
2. Teaching grouping_planner to create partial paths for the final rel
if not at the outermost query level.
3. Modifying finalize_plan to allow the gather_param to be passed
across subquery boundaries.

#3 is the only part I'm really unsure about; the other stuff looks
pretty cut and dried.

I have a draft patch that handles the case where the union can't be
converted to a simple appendrel, too, but that's not quite baked
enough to post yet.

For those for whom the above may be too technical to follow, here's an example:

pgbench -i 40
explain (costs off) select a.bid from pgbench_accounts a,
pgbench_branches b where a.bid = b.bid and aid % 1000 = 0 union all
select a.bid from pgbench_accounts a where aid % 1000 = 0;

Unpatched:

 Append
   ->  Gather
 Workers Planned: 2
 ->  Hash Join
   Hash Cond: (a.bid = b.bid)
   ->  Parallel Seq Scan on pgbench_accounts a
 Filter: ((aid % 1000) = 0)
   ->  Hash
 ->  Seq Scan on pgbench_branches b
   ->  Gather
 Workers Planned: 2
 ->  Parallel Seq Scan on pgbench_accounts a_1
   Filter: ((aid % 1000) = 0)

Patched:

 Gather
   Workers Planned: 2
   ->  Parallel Append
 ->  Hash Join
   Hash Cond: (a.bid = b.bid)
   ->  Parallel Seq Scan on pgbench_accounts a
 Filter: ((aid % 1000) = 0)
   ->  Hash
 ->  Seq Scan on pgbench_branches b
 ->  Parallel Seq Scan on pgbench_accounts a_1
   Filter: ((aid % 1000) = 0)

In this particular case the change doesn't buy very much, but the
second plan is better because it avoids shutting down one set of workers
and starting a new set.  That's more efficient, plus it allows the two
branches to be worked on in parallel rather than serially.  On a small
enough scale factor, even without the patch, you get this...

 Gather
   Workers Planned: 2
   ->  Parallel Append
 ->  Nested Loop
   Join Filter: (a.bid = b.bid)
   ->  Seq Scan on pgbench_branches b
   ->  Seq Scan on pgbench_accounts a
 Filter: ((aid % 1000) = 0)
 ->  Seq Scan on pgbench_accounts a_1
   Filter: ((aid % 1000) = 0)

...but that's not good because now we have regular sequential scans
instead of partial sequential scans.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


subquery-smarts.patch
Description: Binary data


Re: AS OF queries

2017-12-23 Thread konstantin knizhnik

On Dec 23, 2017, at 2:08 AM, Greg Stark wrote:

> On 20 December 2017 at 12:45, Konstantin Knizhnik wrote:
> 
>> It seems to me that it will be not so difficult to implement them in
>> Postgres - we already have versions of tuples.
>> Looks like we only need to do three things:
>> 1. Disable autovacuum (autovacuum = off)
> 
> "The Wheel of Time turns, and Ages come and pass, leaving memories
> that become legend. Legend fades to myth, and even myth is long
> forgotten when the Age that gave it birth comes again"
> 
> I think you'll find it a lot harder to get this to work than just
> disabling autovacuum. Notably HOT updates can get cleaned up (and even
> non-HOT updates can now leave tombstone dead line pointers iirc) even
> if vacuum hasn't run.
> 

Yeah, I suspected that just disabling autovacuum was not enough.
I have heard about (but do not know much about) microvacuum and HOT updates.
This is why I was a little bit surprised when my test didn't show any loss
of updated versions.
Maybe it is because of vacuum_defer_cleanup_age.

> We do have the infrastructure to deal with that. c.f.
> vacuum_defer_cleanup_age. So in _theory_ you could create a snapshot
> with xmin older than recent_global_xmin as long as it's not more than
> vacuum_defer_cleanup_age older. But the devil will be in the details.
> It does mean that you'll be making recent_global_xmin move backwards
> which it has always been promised to *not* do

But what if I just forbid changing recent_global_xmin?
If it is stalled at FirstNormalTransactionId and never advanced, will that
protect all versions from being deleted?
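
(For readers following along, here is a toy sketch of the horizon
arithmetic Greg refers to above -- invented names, ignoring xid wraparound,
and not the server's actual GetOldestXmin code.  The floor case is roughly
what stalling the horizon at FirstNormalTransactionId would amount to.)

#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

#define FirstNormalTransactionId ((TransactionId) 3)

/*
 * Toy model: the xmin horizon used to decide which dead tuple versions may
 * be removed is pushed back by defer_cleanup_age transactions, so snapshots
 * up to that old can still see those versions.
 */
static TransactionId
clamp_cleanup_horizon(TransactionId oldest_xmin, uint32_t defer_cleanup_age)
{
    if (oldest_xmin > defer_cleanup_age + FirstNormalTransactionId)
        oldest_xmin -= defer_cleanup_age;
    else
        oldest_xmin = FirstNormalTransactionId;     /* never below the first normal xid */
    return oldest_xmin;
}

int
main(void)
{
    printf("horizon with defer age 1000: %u\n",
           (unsigned) clamp_cleanup_horizon(501234, 1000));
    printf("horizon clamped to the floor: %u\n",
           (unsigned) clamp_cleanup_horizon(500, 1000));
    return 0;
}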

> 
> Then there's another issue that logical replication has had to deal
> with -- catalog changes. You can't start looking at tuples that have a
> different structure than the current catalog unless you can figure out
> how to use the logical replication infrastructure to use the old
> catalogs. That's a huge problem to bite off and probably can just be
> left for another day if you can find a way to reliably detect the
> problem and raise an error if the schema is inconsistent.


Yes, catalog changes are another problem for time travel.
I do not know of any suitable way to handle several different catalog
snapshots in one query.
But I think there are a lot of cases where time travel without the
possibility of schema changes would still be useful.
The question is how we should handle such catalog changes if they do happen.
Ideally we should not allow moving back beyond that point.
Unfortunately that is not so easy to implement.


> 
> Postgres used to have time travel. I think it's come up more than once
> in the past as something that can probably never come back due to
> other decisions made. If more decisions have made it possible again
> that will be fascinating.
> 
> -- 
> greg




Re: [HACKERS] [COMMITTERS] pgsql: Fix freezing of a dead HOT-updated tuple

2017-12-23 Thread Andres Freund


On December 23, 2017 9:26:22 PM GMT+01:00, Alvaro Herrera wrote:
>I noticed that I'm committer for this patch in the commitfest, though I
>don't remember setting that.  Are you expecting me to commit it?  I
>thought you'd do it, but if you want me to assume the responsibility I
>can do that.

I thought I pushed it to all branches? Do you see anything missing? Didn't know 
there's a CF entry...

Andres

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [HACKERS] [COMMITTERS] pgsql: Fix freezing of a dead HOT-updated tuple

2017-12-23 Thread Alvaro Herrera
I noticed that I'm committer for this patch in the commitfest, though I
don't remember setting that.  Are you expecting me to commit it?  I
thought you'd do it, but if you want me to assume the responsibility I
can do that.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: PoC: custom signal handler for extensions

2017-12-23 Thread Maksim Milyutin

23.12.17 12:58, legrand legrand wrote:
> +1
> if this permits to use extension pg_query_state, that would be great!



Yes, the attached patch is the single critical point. It frees
pg_query_state from the need to patch the postgres core.
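
To illustrate the mechanism for readers who have not opened the patch, here
is a toy model of multiplexed signal dispatch with extension-reserved slots
(all names invented; the PoC's actual API should be read from the attached
patch itself): an extension claims a free custom slot for its handler, and
the backend's multiplexed signal handler routes a flagged reason to it.

#include <stdio.h>

/* Toy model of multiplexed signal "reasons"; slots past the built-in ones are free for extensions. */
#define NUM_BUILTIN_REASONS 8
#define NUM_CUSTOM_REASONS  4
#define TOTAL_REASONS       (NUM_BUILTIN_REASONS + NUM_CUSTOM_REASONS)

typedef void (*signal_handler) (void);

static signal_handler custom_handlers[TOTAL_REASONS];

/* Hypothetical registration call: hand out the next free custom slot, or -1 if none left. */
static int
register_custom_signal_handler(signal_handler handler)
{
    for (int reason = NUM_BUILTIN_REASONS; reason < TOTAL_REASONS; reason++)
    {
        if (custom_handlers[reason] == NULL)
        {
            custom_handlers[reason] = handler;
            return reason;
        }
    }
    return -1;
}

/* Called when the multiplexed signal arrives with 'reason' flagged for this backend. */
static void
dispatch_custom_signal(int reason)
{
    if (reason >= NUM_BUILTIN_REASONS && reason < TOTAL_REASONS &&
        custom_handlers[reason] != NULL)
        custom_handlers[reason] ();
}

static void
report_query_state(void)
{
    printf("an extension like pg_query_state would publish this backend's state here\n");
}

int
main(void)
{
    int reason = register_custom_signal_handler(report_query_state);

    printf("registered handler on custom reason %d\n", reason);
    dispatch_custom_signal(reason);
    return 0;
}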


--
Regards,
Maksim Milyutin




Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2017-12-23 Thread Tomas Vondra


On 12/23/2017 03:03 PM, Erikjan Rijkers wrote:
> On 2017-12-23 05:57, Tomas Vondra wrote:
>> Hi all,
>>
>> Attached is a patch series that implements two features to the logical
>> replication - ability to define a memory limit for the reorderbuffer
>> (responsible for building the decoded transactions), and ability to
>> stream large in-progress transactions (exceeding the memory limit).
>>
> 
> logical replication of 2 instances is OK but 3 and up fail with:
> 
> TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
> "reorderbuffer.c", Line: 1773)
> 
> I can cobble up a script but I hope you have enough from the assertion
> to see what's going wrong...

The assertion says that the iterator produces changes in an order that does
not correlate with LSN. But I have a hard time understanding how that
could happen, particularly because according to the line number this
happens in ReorderBufferCommit(), i.e. the current (non-streaming) case.

So instructions to reproduce the issue would be very helpful.
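
For anyone trying to parse the assertion itself, here is a toy illustration
of the invariant (invented types; this is not reorderbuffer.c): the merge
over the per-transaction change streams must hand back changes in strictly
increasing LSN order, and the reported TRAP is the condition
last_lsn < change->lsn failing.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;            /* an LSN */

typedef struct Change
{
    XLogRecPtr lsn;
} Change;

/*
 * Toy consumer of an ordered change stream: each change must have a larger
 * LSN than the previous one, mirroring the failed invariant in the report.
 */
static void
apply_changes(const Change *changes, int nchanges)
{
    XLogRecPtr last_lsn = 0;            /* "invalid", i.e. nothing seen yet */

    for (int i = 0; i < nchanges; i++)
    {
        const Change *change = &changes[i];

        assert(last_lsn < change->lsn); /* the invariant behind the TRAP */
        last_lsn = change->lsn;

        printf("applying change at lsn %llu\n", (unsigned long long) change->lsn);
    }
}

int
main(void)
{
    Change ok[] = {{10}, {20}, {30}};
    Change bad[] = {{10}, {30}, {20}};

    apply_changes(ok, 3);               /* fine: strictly increasing LSNs */
    apply_changes(bad, 3);              /* aborts: out-of-order LSN */
    return 0;
}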

Attached is v2 of the patch series, fixing two bugs I discovered today.
I don't think any of these is related to your issue, though.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch.gz
Description: application/gzip


0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch.gz
Description: application/gzip


0003-Issue-individual-invalidations-with-wal_level-log-v2.patch.gz
Description: application/gzip


0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch.gz
Description: application/gzip


0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch.gz
Description: application/gzip


0006-Add-support-for-streaming-to-built-in-replication-v2.patch.gz
Description: application/gzip


Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2017-12-23 Thread Erikjan Rijkers

On 2017-12-23 05:57, Tomas Vondra wrote:

Hi all,

Attached is a patch series that implements two features to the logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).



logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File: 
"reorderbuffer.c", Line: 1773)


I can cobble up a script but I hope you have enough from the assertion 
to see what's going wrong...




Re: Add hint about replication slots when nearing wraparound

2017-12-23 Thread Michael Paquier
On Fri, Dec 22, 2017 at 07:55:19AM +0100, Feike Steenbergen wrote:
> On 21 December 2017 at 05:32, Michael Paquier wrote:
> 
> > Don't you want to put that in its own  block? That's rather
> > important not to miss for administrators.
> 
> I didn't want to add yet another block on that documentation page,
> as it already has 2; however, it may be good to upgrade the note
> to a caution, similar to the prepared transaction caution.

Yes, I agree with this position.
--
Michael


signature.asc
Description: PGP signature


Re: Fix permissions check on pg_stat_get_wal_senders

2017-12-23 Thread Michael Paquier
On Fri, Dec 22, 2017 at 07:49:34AM +0100, Feike Steenbergen wrote:
> On 21 December 2017 at 14:11, Michael Paquier wrote:
> > You mean a WAL receiver here, not a WAL sender.
> 
> Fixed, thanks

[nit]
 /*
-   * Only superusers can see details. Other users only get the pid value
+* Only superusers and members of pg_read_all_stats can see details.
+* Other users only get the pid value
 * to know whether it is a WAL receiver, but no details.
 */

Incorrect comment format.
[/nit]

Committers run pgindent on each patch before committing anyway, and what
you are proposing here looks good to me, so I am marking that as ready for
committer. Simon, as the original committer of 25fff407, could you look
at what is proposed here?
--
Michael


signature.asc
Description: PGP signature


Re: PoC: custom signal handler for extensions

2017-12-23 Thread legrand legrand
+1
if this permits to use extension pg_query_state, that would be great!





--
Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html