Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Pavel Stehule
2010/4/12 Robert Haas robertmh...@gmail.com:
 On Sun, Apr 11, 2010 at 5:24 AM, Greg Smith g...@2ndquadrant.com wrote:
 From the rest of your comments, I'm comfortable that you're in sync with
 the not-necessarily-obvious risky spots I wanted to raise awareness of.
  It's unreasonable to expect we'll have exactly the same priorities here,
 and I doubt it's useful to debate how I perceive the merit of various
 development subsets compared to yours.  I don't think it's really
 important whether anyone agrees with me or not about exactly the value of a
 full table lock implementation.  The main thing I'm concerned about is just
 that it's noted as a known risky part, one that could end up blocking the
 project's ability to commit even a subset of the proposed patch here.

 I think that one of the things that we need to get our hands around is
 how we're going to distinguish the snapshot flavor of materialized
 view from the continuous update flavor.  By definition, the latter
 will only ever be supportable for a fairly restricted subset of all
 possible queries, and I am assuming that we will not want to decide
 what the behavior is going to be based on the query but rather based
 on what the user specifies.  Anything else seems like it would have
 the potential for severe POLA violations.  So we need to think now
 about how we'll distinguish between the two flavors.  I imagine some
 sort of syntactic marker would be appropriate; not sure what.

 Reading this thread, I'm starting to grow concerned that some people
 may feel that manually refreshed materialized views are not even worth
 bothering with, because (the argument goes) you could just use some
 table and write a function that updates it.  There's probably some
 truth to that, but I guess my thought is that it would have some value
 as a convenience feature; and eventually we might optimize it to the
 point where it would make more sense to use the built-in feature
 rather than rolling your own.  However, if we're going to have
 complaints that manually refreshed materialized views suck and we
 should only ever support materialized views to the extent that we can
 make them automatically update on-the-fly, then let's have those
 complaints now before someone spends several months of their life on
 the project only to be told that we don't want it.  Let's be clear: I
 think it's useful, but, if other people disagree, we need to iron that
 out now.

 ...Robert

I think manually refreshed materialized views make sense. It is
similar to replication - there were trigger-based solutions like
Slony, but for some people the integrated replication in 9.0 is more
important. Moreover, manually (periodically) refreshed views can share
a lot of infrastructure with dynamically updated views. I am sure that
dynamically updated materialized views are a bad task for GSoC - too
large, too complex. Manually refreshed views are an adequate two
months of work, and they make sense.

Regards
Pavel Stehule


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




Re: [HACKERS] testing hot standby

2010-04-12 Thread Fujii Masao
On Sat, Apr 10, 2010 at 5:39 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
 I'm starting to try Hot Standby & Streaming Replication, so I started
 a replication:

Great!

 but my main concern is why it was asking for
 00010006? Is this normal?

The standby server tries to replay all of the available WAL files in the
archive and pg_xlog directory at first. Then, when an invalid record is
found or no more WAL files are available, it starts streaming replication
and tries to read the missing WAL files from the primary.

In your testing, an invalid record was found in 00010006,
then the standby was waiting for it to be shipped from the primary.
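As background, a WAL segment file name encodes three 8-digit hexadecimal fields: timeline ID, log ID, and segment number. A small sketch of the naming scheme (the helper function is mine for illustration, not a PostgreSQL API):

```python
def wal_file_name(timeline, log, seg):
    # A 9.0 WAL segment file name is three 8-digit uppercase hex
    # fields: timeline ID, log ID, and segment number (16MB segments).
    return "%08X%08X%08X" % (timeline, log, seg)

# Timeline 1, log 0, segment 6 -- the segment discussed above.
print(wal_file_name(1, 0, 6))  # 000000010000000000000006
```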

 Is this the standby's way of
 saying I'm working but I have nothing to do?

Since SR uses record-based log shipping, the standby can receive the WAL
records from the primary before the WAL file has been filled up. So, in
your case, I guess the standby was receiving the WAL records which belong
to 00010006 from the primary.

 When that happens after a standby restart, is it normal that I have to
 wait until the file is created before it can accept connections?

No. All the WAL records needed for the standby to accept connections
should have been shipped before any additional WAL records are written.
Didn't the standby accept connections before executing pgbench?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] testing hot standby

2010-04-12 Thread Jaime Casanova
On Fri, Apr 9, 2010 at 3:39 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:

 but my main concern is why it was asking for
 00010006? Is this normal? Is this the standby's way of
 saying I'm working but I have nothing to do?
 When that happens after a standby restart, is it normal that I have to
 wait until the file is created before it can accept connections?


OK, I see this again in a new environment. It seems to happen when I
shut down the standby and primary (in that order) after performing some
WAL-logged action on the primary, and then start the primary and
standby again (in that order)... It doesn't always occur, but it occurs
too often; I'm still not sure what the key factor is that triggers
it.

The standby waits for a file that doesn't exist before reaching a
consistent state (last time I waited for an hour after forcing a
WAL-logged action). Here is an extract from the standby's log:

postg...@casanova1:/usr/local/pgsql/9.0slave$ cat
data/pg_log/postgresql-2010-04-12_000947.log
LOG:  database system was interrupted while in recovery at log time
2010-04-11 20:44:09 GMT
HINT:  If this has occurred more than once some data might be
corrupted and you might need to choose an earlier recovery target.
LOG:  entering standby mode
LOG:  restored log file 0001000E0014 from archive
LOG:  redo starts at E/1488
LOG:  consistent recovery state reached at E/1500
cp: no se puede efectuar `stat' sobre
«/usr/local/pgsql/wal_archive/0001000E0015»: No existe el
fichero o el directorio
LOG:  unexpected pageaddr D/EE00 in log file 14, segment 21, offset 0
cp: no se puede efectuar `stat' sobre
«/usr/local/pgsql/wal_archive/0001000E0015»: No existe el
fichero o el directorio
LOG:  streaming replication successfully connected to primary



Another point: what happened with this?
http://archives.postgresql.org/message-id/1229549172.4793.105.ca...@ebony.2ndquadrant
Obviously we still have the problem with hash indexes, and in that
thread Tom's advice was just to document the issue. While that could
be fine, at least we should be emitting better messages. Consider this
one that I got on the standby server (where 4658650 is the OID of a
hash index):

mic=# explain analyze select * from tt1 where col1 = 5000;
ERROR:  could not read block 0 in file base/21958/4658650: read only
0 of 8192 bytes


-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] testing hot standby

2010-04-12 Thread Jaime Casanova
On Mon, Apr 12, 2010 at 1:21 AM, Fujii Masao masao.fu...@gmail.com wrote:
 Didn't the standby
 accept connections before executing pgbench?


Nope, and the last time I tried, it stayed in that state for an hour
(without accepting connections)... After that I executed on the
primary: CREATE TABLE tt2 AS SELECT generate_series(1, 100) AS i;
after that, the standby started accepting connections.

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] testing hot standby

2010-04-12 Thread Fujii Masao
On Mon, Apr 12, 2010 at 3:29 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
 On Mon, Apr 12, 2010 at 1:21 AM, Fujii Masao masao.fu...@gmail.com wrote:
 Didn't the standby
 accept connections before executing pgbench?


 Nope, and the last time I tried, it stayed in that state for an hour
 (without accepting connections)... After that I executed on the
 primary: CREATE TABLE tt2 AS SELECT generate_series(1, 100) AS i;
 after that, the standby started accepting connections.

OK. Is your reproduction scenario the following?
If not, could you show me the complete scenario?

1. start the primary
2. pg_start_backup()
3. copy $PGDATA from the primary to the standby
4. pg_stop_backup();
5. create the recovery.conf and start the standby
6. shutdown (smart mode) the standby
7. start the standby again
   -- cannot accept connections until new WAL has been created
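For step 5, a minimal recovery.conf for a 9.0 streaming standby might look like the fragment below; the host name and archive path are placeholders for illustration, not values from this thread.

```
standby_mode = 'on'
primary_conninfo = 'host=primary port=5432'
restore_command = 'cp /path/to/wal_archive/%f %p'
```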

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] pg_ctl stop -m immediate on the primary server inflates sequences

2010-04-12 Thread Boszormenyi Zoltan
Martijn van Oosterhout írta:
 On Sat, Apr 10, 2010 at 02:36:41PM +0200, Boszormenyi Zoltan wrote:
   
  The above is quite reproducible; pg_ctl stop -m immediate
  usually inflated my serial sequence, but I had two occasions
  when it did not. The 69 - 70 case was one. The inflated increase is always 33:
 
 AFAICS sequences are pre-logged with 32 values to WAL to avoid
 overhead. I suspect this is why you are seeing those gaps.
   
 Then it should happen all the time, even with -m fast or -m smart, no?
 

 Nope, because on a normal shutdown it writes out the actual value. When
 you say immediate, you mean right now, don't bother with anything not
 important, like for example gaps in sequences. You're essentially
 crashing the DB.

 Have a nice day,
   

OK, thanks for the info.
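Martijn's explanation can be put in a toy model. The constant 32 matches SEQ_LOG_VALS in the backend's sequence code; the class itself is purely illustrative, not PostgreSQL's implementation:

```python
SEQ_LOG_VALS = 32  # extra values pre-logged per sequence WAL record

class ToySequence:
    def __init__(self):
        self.value = 0    # last value handed out
        self.logged = 0   # highest value covered by WAL

    def nextval(self):
        self.value += 1
        if self.value > self.logged:
            # Write one WAL record covering this value plus 32 more,
            # so the next 32 nextval() calls emit no WAL at all.
            self.logged = self.value + SEQ_LOG_VALS
        return self.value

    def crash_recover(self):
        # An immediate shutdown is effectively a crash: recovery
        # restores the sequence to the last WAL-logged value.
        self.value = self.logged

seq = ToySequence()
first = seq.nextval()         # returns 1; WAL now covers up to 33
seq.crash_recover()           # pg_ctl stop -m immediate
print(seq.nextval() - first)  # -> 33, the inflated increase observed
```

A clean shutdown writes out the actual current value instead, which is why -m fast and -m smart show no gap.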

-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
http://www.postgresql.at/




Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Fujii Masao
On Sat, Apr 10, 2010 at 8:23 AM, Erik Rijkers e...@xs4all.nl wrote:
 I understand that in the scale=1000 case, there is a huge
 cache effect, but why doesn't that apply to the pgbench runs
 against the standby?  (and for the scale=10_000 case the
 differences are still rather large)

I guess that this performance degradation happened because a large number
of buffer replacements caused UpdateMinRecoveryPoint() to be called often.
So I think increasing shared_buffers would improve the performance
significantly.
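The hypothesis can be illustrated with a toy LRU simulation (entirely illustrative; real buffer replacement and UpdateMinRecoveryPoint() are more subtle): if every eviction of a page dirtied by replay stands in for a forced minimum-recovery-point update, a larger pool means far fewer of them.

```python
import random
from collections import OrderedDict

def forced_updates(n_buffers, n_accesses=20000, n_pages=2000, seed=42):
    # Toy LRU buffer pool where every accessed page is dirty;
    # evicting one stands in for an UpdateMinRecoveryPoint() call.
    rng = random.Random(seed)
    pool = OrderedDict()
    updates = 0
    for _ in range(n_accesses):
        page = rng.randrange(n_pages)
        if page in pool:
            pool.move_to_end(page)        # buffer hit
        else:
            if len(pool) >= n_buffers:    # pool full: evict LRU page
                pool.popitem(last=False)
                updates += 1
            pool[page] = True
    return updates

small, large = forced_updates(200), forced_updates(1600)
print(small > large)  # the larger pool forces far fewer updates
```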

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Streaming replication and a disk full in primary

2010-04-12 Thread Heikki Linnakangas
Fujii Masao wrote:
 doc/src/sgml/config.sgml
 -archival or to recover from a checkpoint. If standby_keep_segments
 +archival or to recover from a checkpoint. If
 varnamestandby_keep_segments/
 
 The word standby_keep_segments always needs the varname tag, I think.

Thanks, fixed.

 Should we remove the documentation section 25.2.5.2. Monitoring?

I updated it to no longer claim that the primary can run out of disk
space because of a hung WAL sender. The information about calculating
the lag between primary and standby still seems valuable, so I didn't
remove the whole section.

 Why is standby_keep_segments used even if max_wal_senders is zero?
 In that case, ISTM we don't need to keep any WAL files in pg_xlog
 for the standby.

True. I don't think we should second-guess the admin on that, though.
Perhaps he only set max_wal_senders=0 temporarily, and will be
disappointed if the logs are no longer there when he sets it back to
non-zero and restarts the server.

 When XLogRead() reads two WAL files and only the older of them is recycled
 while being read, it might fail in checking whether the read data is valid.
 This is because the variable recptr can advance to the newer WAL file
 before the check.

Thanks, fixed.

 When walreceiver has gotten stuck for some reason, walsender would be
 unable to pass through the send() system call, and also get stuck.
 In the patch, such a walsender can never exit because it cannot
 call XLogRead(). So I think that the bgwriter needs to send an
 exit signal to such a too-lagged walsender. Thoughts?

Any backend can get stuck like that.

 The shmem value for the latest recycled WAL file is updated before checking
 whether it's already been archived. If archiving is not working for some
 reason, the WAL file which that value indicates might not actually have
 been recycled yet. In this case, the standby cannot obtain the WAL file
 from the primary, because it's been marked as latest recycled, and cannot
 obtain it from the archive, because it hasn't been archived yet. This
 seems to be a big problem.
 How about moving the update of the shmem to after calling 
 XLogArchiveCheckDone()
 in RemoveOldXlogFiles()?

Good point. It's particularly important considering that if a segment
hasn't been archived yet, it's not available to the standby from the
archive either. I changed that.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] testing hot standby

2010-04-12 Thread Heikki Linnakangas
Jaime Casanova wrote:
 On Fri, Apr 9, 2010 at 3:39 PM, Jaime Casanova
 jcasa...@systemguards.com.ec wrote:
 but my main concern is why it was asking for
 00010006? Is this normal? Is this the standby's way of
 saying I'm working but I have nothing to do?

Yes.

 When that happens after a standby restart, is it normal that I have to
 wait until the file is created before it can accept connections?
 
 OK, I see this again in a new environment. It seems to happen when I
 shut down the standby and primary (in that order) after performing some
 WAL-logged action on the primary, and then start the primary and
 standby again (in that order)... It doesn't always occur, but it occurs
 too often; I'm still not sure what the key factor is that triggers
 it.

Perhaps you're being bitten by the can't start hot standby from a
shutdown checkpoint issue I've complained about for a long time. There's
a pending patch for that; see
http://archives.postgresql.org/pgsql-hackers/2010-04/msg00222.php.

If that's what's hitting you, the way to reproduce is:

1. shut down primary
2. shut down standby
3. start standby
4. start primary.

The standby will hang until it receives an online checkpoint record
from the primary. Connecting to the primary and issuing a manual
CHECKPOINT helps.

That's not the order of shutdowns you described, though...

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 2:16 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 I am sure that
 dynamically updated materialized views are a bad task for GSoC - too
 large, too complex. Manually refreshed views are an adequate two
 months of work, and they make sense.

That is my feeling also - though I fear that even the simplest
possible implementation of this feature may be a stretch.  Anyway we
agree: keep it simple.

...Robert



Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 5:06 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Sat, Apr 10, 2010 at 8:23 AM, Erik Rijkers e...@xs4all.nl wrote:
 I understand that in the scale=1000 case, there is a huge
 cache effect, but why doesn't that apply to the pgbench runs
 against the standby?  (and for the scale=10_000 case the
 differences are still rather large)

 I guess that this performance degradation happened because a number of
 buffer replacements caused UpdateMinRecoveryPoint() often. So I think
 increasing shared_buffers would improve the performance significantly.

I think we need to investigate this more.  It's not going to look good
for the project if people find that a hot standby server runs two
orders of magnitude slower than the primary.

...Robert



Re: [HACKERS] walreceiver is uninterruptible on win32

2010-04-12 Thread Fujii Masao
On Thu, Apr 8, 2010 at 5:01 PM, Fujii Masao masao.fu...@gmail.com wrote:
 If it does, there should be
 some way to get PGXS to execute that rule as well, I'm sure.

 If we can copy/link the source file defining the new PQexec when
 we compile dblink, a DLL doesn't seem to be required. So I'll
 stop creating a new DLL for PGXS.

On second thought, ISTM that when we compile dblink under USE_PGXS=1, we
cannot use any source files that live outside contrib/dblink and the
installation directory. But we cannot put the file implementing the new
PQexec in either of those places. So I'm thinking again that it should be
provided as a shared library and linked from both walreceiver and dblink.
Is this right?

If adding a new shared library is too big a change at this point, I think
we should postpone the fix for dblink to 9.1 or later. Since no one has
complained about this long-standing problem with dblink, I'm not sure it
really needs to be fixed right now. Thoughts?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Erik Rijkers
On Sat, April 10, 2010 01:23, Erik Rijkers wrote:
 Using 9.0devel cvs HEAD, 2010.04.08.

 I am trying to understand the performance difference
 between primary and standby under a standard pgbench
 read-only test.

 server has 32 GB, 2 quadcores.

 primary:
   tps = 34606.747930 (including connections establishing)
   tps = 34527.078068 (including connections establishing)
   tps = 34654.297319 (including connections establishing)

 standby:
   tps = 700.346283 (including connections establishing)
   tps = 717.576886 (including connections establishing)
   tps = 740.522472 (including connections establishing)

 transaction type: SELECT only
 scaling factor: 1000
 query mode: simple
 number of clients: 20
 number of threads: 1
 duration: 900 s

 both instances have
   max_connections = 100
   shared_buffers = 256MB
   checkpoint_segments = 50
   effective_cache_size = 16GB

 See also:

 http://archives.postgresql.org/pgsql-testers/2010-04/msg5.php
  (differences with scale 10_000)


To my surprise, I later saw the opposite behaviour, with the standby
giving fast runs and the primary slow.

FWIW, I've overnight run a larger set of tests (against the same 9.0devel
instances as the ones from the earlier email).

These results are generally more balanced.

for scale in
for clients in 1 5 10 20
for port in 6565 6566 -- primaryport standbyport
for run in `seq 1 3`
pgbench ...
sleep ((scale / 10) * 60)
done
done
done
done

(so below, alternating 3 primary, followed by 3 standby runs)

scale: 10  clients:  1  tps = 15219.019272  pgbench -h /tmp -p 6565 -n -S -c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 15301.847615  pgbench -h /tmp -p 6565 -n -S -c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 15238.907436  pgbench -h /tmp -p 6565 -n -S -c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 12129.928289  pgbench -h /tmp -p 6566 -n -S -c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 12151.711589  pgbench -h /tmp -p 6566 -n -S -c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 12203.494512  pgbench -h /tmp -p 6566 -n -S -c 1 -T 900 -j 1
scale: 10  clients:  5  tps = 60248.120599  pgbench -h /tmp -p 6565 -n -S -c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 60827.949875  pgbench -h /tmp -p 6565 -n -S -c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 61167.447476  pgbench -h /tmp -p 6565 -n -S -c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 50750.385403  pgbench -h /tmp -p 6566 -n -S -c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 50600.891436  pgbench -h /tmp -p 6566 -n -S -c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 50486.857610  pgbench -h /tmp -p 6566 -n -S -c 5 -T 900 -j 1
scale: 10  clients: 10  tps = 60307.739327  pgbench -h /tmp -p 6565 -n -S -c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 60264.230349  pgbench -h /tmp -p 6565 -n -S -c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 60146.370598  pgbench -h /tmp -p 6565 -n -S -c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 50455.537671  pgbench -h /tmp -p 6566 -n -S -c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 49877.000813  pgbench -h /tmp -p 6566 -n -S -c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 50097.949766  pgbench -h /tmp -p 6566 -n -S -c 10 -T 900 -j 1
scale: 10  clients: 20  tps = 43355.220657  pgbench -h /tmp -p 6565 -n -S -c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 43352.725422  pgbench -h /tmp -p 6565 -n -S -c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 43496.085623  pgbench -h /tmp -p 6565 -n -S -c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 37169.126299  pgbench -h /tmp -p 6566 -n -S -c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 37100.260450  pgbench -h /tmp -p 6566 -n -S -c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 37342.758507  pgbench -h /tmp -p 6566 -n -S -c 20 -T 900 -j 1
scale: 100 clients:  1  tps = 12514.185089  pgbench -h /tmp -p 6565 -n -S -c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 12542.842198  pgbench -h /tmp -p 6565 -n -S -c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 12595.688640  pgbench -h /tmp -p 6565 -n -S -c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 10435.681851  pgbench -h /tmp -p 6566 -n -S -c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 10456.983353  pgbench -h /tmp -p 6566 -n -S -c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 10434.213044  pgbench -h /tmp -p 6566 -n -S -c 1 -T 900 -j 1
scale: 100 clients:  5  tps = 48682.166988  pgbench -h /tmp -p 6565 -n -S -c 5 -T 900 -j 1
scale: 100 clients:  5  tps = 48656.883485  pgbench -h /tmp -p 6565 -n -S -c 5 -T 900 -j 1
scale: 100 clients:  5  tps = 48687.894655  pgbench -h /tmp -p 6565 -n -S -c 5 -T 900 -j 1
scale: 100 clients:  5  tps = 41901.629933  pgbench -h /tmp -p 6566 -n -S -c 5 -T 900 -j 1
scale: 100 clients:  5  tps = 41953.386791  pgbench -h /tmp -p 6566 -n -S -c 5 -T 900 -j 1
scale: 100 clients:  5  tps = 

Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Jim Mlodgenski
On Mon, Apr 12, 2010 at 7:07 AM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Apr 12, 2010 at 5:06 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Sat, Apr 10, 2010 at 8:23 AM, Erik Rijkers e...@xs4all.nl wrote:
 I understand that in the scale=1000 case, there is a huge
 cache effect, but why doesn't that apply to the pgbench runs
 against the standby?  (and for the scale=10_000 case the
 differences are still rather large)

 I guess that this performance degradation happened because a large number
 of buffer replacements caused UpdateMinRecoveryPoint() to be called often.
 So I think increasing shared_buffers would improve the performance
 significantly.

 I think we need to investigate this more.  It's not going to look good
 for the project if people find that a hot standby server runs two
 orders of magnitude slower than the primary.
As a data point, I did a read-only pgbench test and found that the
standby runs about 15% slower than the primary with identical hardware
and configs.

 ...Robert





-- 
--
Jim Mlodgenski
EnterpriseDB (http://www.enterprisedb.com)



Re: [HACKERS] Streaming replication and a disk full in primary

2010-04-12 Thread Fujii Masao
On Mon, Apr 12, 2010 at 7:41 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Should we remove the documentation section 25.2.5.2. Monitoring?

 I updated it to no longer claim that the primary can run out of disk
 space because of a hung WAL sender. The information about calculating
 the lag between primary and standby still seems valuable, so I didn't
 remove the whole section.

Yes.

 !  An important health indicator of streaming replication is the amount
 !  of WAL records generated in the primary, but not yet applied in the
 !  standby.

Since pg_last_xlog_receive_location doesn't let us know the WAL location
not yet applied, we should use pg_last_xlog_replay_location instead.
How about this?:


  An important health indicator of streaming replication is the amount
  of WAL records generated in the primary, but not yet applied in the
  standby. You can calculate this lag by comparing the current WAL write
- location on the primary with the last WAL location received by the
+ location on the primary with the last WAL location replayed by the
  standby. They can be retrieved using
  functionpg_current_xlog_location/ on the primary and the
- functionpg_last_xlog_receive_location/ on the standby,
+ functionpg_last_xlog_replay_location/ on the standby,
  respectively (see xref linkend=functions-admin-backup-table and
  xref linkend=functions-recovery-info-table for details).
- The last WAL receive location in the standby is also displayed in the
- process status of the WAL receiver process, displayed using the
- commandps/ command (see xref linkend=monitoring-ps for details).
 /para
/sect3
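The lag computation the patched paragraph describes can be sketched as follows. Parsing the X/Y location format is standard; the 0xFF000000 bytes-per-log-file constant assumes the 9.0-era layout of 255 16MB segments per logical xlog file (the last segment number is skipped):

```python
def xlog_location_to_bytes(loc):
    # Locations are printed as "xlogid/xrecoff" in hexadecimal.
    xlogid, xrecoff = (int(part, 16) for part in loc.split("/"))
    # Each logical xlog file holds 255 segments of 16MB,
    # i.e. 0xFF000000 bytes (pre-9.3 convention).
    return xlogid * 0xFF000000 + xrecoff

def replication_lag(primary_loc, standby_loc):
    # pg_current_xlog_location() output minus
    # pg_last_xlog_replay_location() output, in bytes.
    return xlog_location_to_bytes(primary_loc) - xlog_location_to_bytes(standby_loc)

# Hypothetical values: the standby is one 16MB segment behind.
print(replication_lag("E/15000000", "E/14000000"))  # -> 16777216
```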


 Why is standby_keep_segments used even if max_wal_senders is zero?
 In that case, ISTM we don't need to keep any WAL files in pg_xlog
 for the standby.

 True. I don't think we should second-guess the admin on that, though.
 Perhaps he only set max_wal_senders=0 temporarily, and will be
 disappointed if the logs are no longer there when he sets it back to
 non-zero and restarts the server.

OK. Since the behavior is not intuitive to me, I'd like to add a note
at the end of the description of standby_keep_segments. How about this?:


This setting has effect even if max_wal_senders is zero.


 When walreceiver has gotten stuck for some reason, walsender would be
 unable to pass through the send() system call, and also get stuck.
 In the patch, such a walsender can never exit because it cannot
 call XLogRead(). So I think that the bgwriter needs to send an
 exit signal to such a too-lagged walsender. Thoughts?

 Any backend can get stuck like that.

OK.

 + },
 +
 + {
 + {standby_keep_segments, PGC_SIGHUP, WAL_CHECKPOINTS,
 + gettext_noop(Sets the number of WAL files held for 
 standby servers),
 + NULL
 + },
 + StandbySegments,
 + 0, 0, INT_MAX, NULL, NULL

Should this be s/WAL_CHECKPOINTS/WAL_REPLICATION/?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Erik Rijkers
On Mon, April 12, 2010 14:22, Erik Rijkers wrote:
 On Sat, April 10, 2010 01:23, Erik Rijkers wrote:

Oops, typos in that pseudo loop:
of course there was a pgbench init step after that first line.

 for scale in 10 100 500 1000
  pgbench ...   # initialise
  sleep ((scale / 10) * 60)
 for clients in 1 5 10 20
 for port in 6565 6566 -- primaryport standbyport
 for run in `seq 1 3`
 pgbench ...
  sleep 120
 done
 done
 done
 done







Re: [HACKERS] Streaming replication and a disk full in primary

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 6:41 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Why is standby_keep_segments used even if max_wal_senders is zero?
 In that case, ISTM we don't need to keep any WAL files in pg_xlog
 for the standby.

 True. I don't think we should second-guess the admin on that, though.
 Perhaps he only set max_wal_senders=0 temporarily, and will be
 disappointed if the logs are no longer there when he sets it back to
 non-zero and restarts the server.

If archive_mode is off and max_wal_senders = 0, then the WAL that's
being generated won't be usable for streaming anyway, right?

I think this is another manifestation of the problem I was complaining
about over the weekend: there's no longer a single GUC that controls
what type of information we emit as WAL.  In previous releases,
archive_mode served that function, but now it's much more complicated
and, IMHO, not very comprehensible.

http://archives.postgresql.org/pgsql-hackers/2010-04/msg00509.php

...Robert



Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 8:32 AM, Jim Mlodgenski jimm...@gmail.com wrote:
 I think we need to investigate this more.  It's not going to look good
 for the project if people find that a hot standby server runs two
 orders of magnitude slower than the primary.
 As a data point, I did a read-only pgbench test and found that the
 standby runs about 15% slower than the primary with identical hardware
 and configs.

Hmm.  That's not great, but it's a lot better than 50x.  I wonder what
was different in Erik's environment.  Does running in standby mode use
more memory, such that it might have pushed the machine over the line
into swap?

Or if it's CPU load, maybe Erik could gprof it?

...Robert



Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Aidan Van Dyk
* Robert Haas robertmh...@gmail.com [100412 07:10]:
 
 I think we need to investigate this more.  It's not going to look good
 for the project if people find that a hot standby server runs two
 orders of magnitude slower than the primary.

Yes, it's not good, but it's a known problem.  We've had people
complaining that wal-replay can't keep up with a wal stream from a heavy
server.

The master producing the WAL stream has $XXX separate read/modify
processes working over the data dir, and is bottle-necked by the
serialized WAL stream.  All the seek+read delays are parallelized and
overlapping.

But the slave (traditionally a PITR slave, now also HS/SR) has all
that read-modify-write happening in single-threaded fashion, meaning
that WAL record $X+1 waits until the buffer that record $X needs to modify
is read in.  All the seek+read delays are serialized.

You can optimize that by keeping more of the buffers cached (shared, or OS
cache), but the WAL producer, by its very nature being a
multi-task-I/O load producing random reads/writes, is always going to go
quicker than a single-stream random-I/O WAL consumer...
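The gap between the overlapping producer and the serialized consumer can be illustrated with a toy simulation (plain Python standing in for disk latency; nothing below is PostgreSQL code, and the 50 ms seek cost is invented):

```python
import threading
import time

SEEK = 0.05  # pretend each random buffer read costs 50 ms of seek latency
N = 10       # ten WAL records, each needing one buffer read

def seek_and_read():
    time.sleep(SEEK)  # stand-in for one seek+read

# Slave-style replay: a single process, so each read waits for the last.
t0 = time.monotonic()
for _ in range(N):
    seek_and_read()
serial = time.monotonic() - t0

# Master-style load: many backends issue their reads concurrently.
t0 = time.monotonic()
threads = [threading.Thread(target=seek_and_read) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
overlapped = time.monotonic() - t0

print(f"serialized: {serial:.2f}s  overlapped: {overlapped:.2f}s")
```

The serialized total grows linearly with the number of records, while the overlapped total stays near a single seek time, which is the producer/consumer imbalance described above.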

a.

-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.




Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Aidan Van Dyk
And I see now that he's doing a stream of read-only queries on a slave,
presumably with no WAL even being replayed...

Sorry for the noise

a.




-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.




Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Kevin Grittner
Aidan Van Dyk ai...@highrise.ca wrote:
 
 We've had people complaining that wal-replay can't keep up with a
 wal stream from a heavy server.
 
I thought this thread was about the slow performance running a mix
of read-only queries on the slave versus the master, which doesn't
seem to have anything to do with the old issue you're describing.
 
-Kevin



Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Erik Rijkers
resending this message, as it seems to have bounced.

(below, I did fix the typo in the pseudocode loop)

 Original Message 

Subject: Re: [HACKERS] testing HS/SR - 1 vs 2 performance
From:Erik Rijkers e...@xs4all.nl
Date:Mon, April 12, 2010 14:22
To:  pgsql-hackers@postgresql.org
--

On Sat, April 10, 2010 01:23, Erik Rijkers wrote:
 Using 9.0devel cvs HEAD, 2010.04.08.

 I am trying to understand the performance difference
 between primary and standby under a standard pgbench
 read-only test.

 server has 32 GB, 2 quadcores.

 primary:
   tps = 34606.747930 (including connections establishing)
   tps = 34527.078068 (including connections establishing)
   tps = 34654.297319 (including connections establishing)

 standby:
   tps = 700.346283 (including connections establishing)
   tps = 717.576886 (including connections establishing)
   tps = 740.522472 (including connections establishing)

 transaction type: SELECT only
 scaling factor: 1000
 query mode: simple
 number of clients: 20
 number of threads: 1
 duration: 900 s

 both instances have
   max_connections = 100
   shared_buffers = 256MB
   checkpoint_segments = 50
   effective_cache_size= 16GB

 See also:

 http://archives.postgresql.org/pgsql-testers/2010-04/msg5.php
  (differences with scale 10_000)


To my surprise, I have since seen the opposite behaviour, with the standby
giving fast runs and the primary slow.

FWIW, overnight I ran a larger set of tests (against the same 9.0devel
instances as in the earlier email).

These results are generally more balanced.

for scale in 10 100 500 1000; do
    pgbench ...   # initialise
    sleep $(( (scale / 10) * 60 ))
    for clients in 1 5 10 20; do
        for port in 6565 6566; do   # primary port, standby port
            for run in `seq 1 3`; do
                pgbench ...
                sleep $(( (scale / 10) * 60 ))
            done
        done
    done
done

(so below, alternating 3 primary, followed by 3 standby runs)

scale: 10  clients:  1  tps = 15219.019272  pgbench -h /tmp -p 6565 -n -S 
-c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 15301.847615  pgbench -h /tmp -p 6565 -n -S 
-c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 15238.907436  pgbench -h /tmp -p 6565 -n -S 
-c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 12129.928289  pgbench -h /tmp -p 6566 -n -S 
-c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 12151.711589  pgbench -h /tmp -p 6566 -n -S 
-c 1 -T 900 -j 1
scale: 10  clients:  1  tps = 12203.494512  pgbench -h /tmp -p 6566 -n -S 
-c 1 -T 900 -j 1
scale: 10  clients:  5  tps = 60248.120599  pgbench -h /tmp -p 6565 -n -S 
-c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 60827.949875  pgbench -h /tmp -p 6565 -n -S 
-c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 61167.447476  pgbench -h /tmp -p 6565 -n -S 
-c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 50750.385403  pgbench -h /tmp -p 6566 -n -S 
-c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 50600.891436  pgbench -h /tmp -p 6566 -n -S 
-c 5 -T 900 -j 1
scale: 10  clients:  5  tps = 50486.857610  pgbench -h /tmp -p 6566 -n -S 
-c 5 -T 900 -j 1
scale: 10  clients: 10  tps = 60307.739327  pgbench -h /tmp -p 6565 -n -S 
-c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 60264.230349  pgbench -h /tmp -p 6565 -n -S 
-c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 60146.370598  pgbench -h /tmp -p 6565 -n -S 
-c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 50455.537671  pgbench -h /tmp -p 6566 -n -S 
-c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 49877.000813  pgbench -h /tmp -p 6566 -n -S 
-c 10 -T 900 -j 1
scale: 10  clients: 10  tps = 50097.949766  pgbench -h /tmp -p 6566 -n -S 
-c 10 -T 900 -j 1
scale: 10  clients: 20  tps = 43355.220657  pgbench -h /tmp -p 6565 -n -S 
-c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 43352.725422  pgbench -h /tmp -p 6565 -n -S 
-c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 43496.085623  pgbench -h /tmp -p 6565 -n -S 
-c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 37169.126299  pgbench -h /tmp -p 6566 -n -S 
-c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 37100.260450  pgbench -h /tmp -p 6566 -n -S 
-c 20 -T 900 -j 1
scale: 10  clients: 20  tps = 37342.758507  pgbench -h /tmp -p 6566 -n -S 
-c 20 -T 900 -j 1
scale: 100 clients:  1  tps = 12514.185089  pgbench -h /tmp -p 6565 -n -S 
-c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 12542.842198  pgbench -h /tmp -p 6565 -n -S 
-c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 12595.688640  pgbench -h /tmp -p 6565 -n -S 
-c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 10435.681851  pgbench -h /tmp -p 6566 -n -S 
-c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 10456.983353  pgbench -h /tmp -p 6566 -n -S 
-c 1 -T 900 -j 1
scale: 100 clients:  1  tps = 10434.213044  pgbench -h 

Re: [HACKERS] testing hot standby

2010-04-12 Thread Jaime Casanova
On Mon, Apr 12, 2010 at 1:48 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Mon, Apr 12, 2010 at 3:29 PM, Jaime Casanova
 jcasa...@systemguards.com.ec wrote:
 On Mon, Apr 12, 2010 at 1:21 AM, Fujii Masao masao.fu...@gmail.com wrote:
 Didn't the standby
 accept connections before executing pgbench?


 nope, and the last time I tried, it stayed in that state for an hour
 (without accepting connections)... after that I executed on the primary:
 CREATE TABLE tt2 AS SELECT generate_series(1, 100) AS i.
 After that, the standby started accepting connections

 OK. Your reproduction scenario is the following?
 If not, could you show me the complete scenario?

 1. start the primary
 2. pg_start_backup()
 3. copy $PGDATA from the primary to the standby
 4. pg_stop_backup();
 5. create the recovery.conf and start the standby

execute some WAL-logged action (I've seen this happen even with no
WAL-logged action if I wait for a while before shutting down the servers)

 6. shutdown (smart mode) the standby

shutdown (smart) the primary
start the primary again

 7. start the standby again
   -- cannot accept connections until new WAL has been created


a manual CHECKPOINT on the primary does help

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] explain and PARAM_EXEC

2010-04-12 Thread Yeb Havinga
The patch I sent earlier is flawed with respect to subplan parameter 
numbering: I counted from zero where the parParam list had to be used.


Yeb Havinga wrote: 

See patch below against HEAD.

Example of query against catalog:

postgres=# explain verbose select oid::int + 1,(select oid from 
pg_class a where a.oid = b.relfilenode and a.relnamespace = 
b.relnamespace) from pg_class b;
  QUERY 
PLAN  
 


Seq Scan on pg_catalog.pg_class b  (cost=0.00..2459.64 rows=296 width=12)
  Output: ((b.oid)::integer + 1), SubPlan 1 ($0 := b.relfilenode, $1 
:= b.relnamespace)

  SubPlan 1
-  Index Scan using pg_class_oid_index on pg_catalog.pg_class a  
(cost=0.00..8.27 rows=1 width=4)

  Output: a.oid
  Index Cond: (a.oid = $0)
  Filter: (a.relnamespace = $1)
(7 rows)

Explain of alternative subplan (query comes from aggregates.sql in 
regression).


regression=# explain verbose select ten, sum(distinct four) from onek a
group by ten
having exists (select 1 from onek b where sum(distinct a.four) = b.four);
 QUERY 
PLAN 
-- 


GroupAggregate  (cost=197.83..216.52 rows=10 width=8)
  Output: a.ten, sum(DISTINCT a.four)
  Filter: (alternatives: SubPlan 1 ($0 := sum(DISTINCT a.four)) or 
hashed SubPlan 2)

  -  Sort  (cost=197.83..200.33 rows=1000 width=8)
Output: a.ten, a.four
Sort Key: a.ten
-  Seq Scan on public.onek a  (cost=0.00..148.00 rows=1000 
width=8)

  Output: a.ten, a.four
  SubPlan 1
-  Seq Scan on public.onek b  (cost=0.00..150.50 rows=250 width=0)
  Filter: ($0 = b.four)
  SubPlan 2
-  Seq Scan on public.onek b  (cost=0.00..148.00 rows=1000 width=4)
  Output: b.four
(14 rows)

Would the explain above be better if the filter with subplan 
arguments showed the EXISTS keyword? Much code from get_sublink_expr 
could be reused to show additional info depending on the sublink type 
and testexpr.


Other info: there are now only appends to context->buf.  The 
problems with the OUTER var I mentioned earlier are gone now that arguments 
are deparsed using get_rule_expr instead of deparse_expression.


regards,
Yeb Havinga






Re: [HACKERS] walreceiver is uninterruptible on win32

2010-04-12 Thread Joseph Conway
Fujii Masao wrote:
 If adding new shared library is too big change at this point, I think
 that we should postpone the fix only for dblink to 9.1 or later. Since
 no one has complained about this long-term problem of dblink, I'm not
 sure it really should be fixed right now. Thought?

I would agree with this. No one has ever complained that I am aware of.

Joe



[HACKERS] non-reproducible failure of random test on HEAD

2010-04-12 Thread Kevin Grittner
I just did a checkout from HEAD (a few minutes ago) and ran this:
 
make distclean ; ./configure --prefix=/usr/local/pgsql-serializable
--enable-integer-datetimes --enable-debug --enable-cassert
--enable-depend --with-libxml && make check
 
I got a failure on the random test.  Unfortunately I didn't check
the log before typing:
 
make check
 
Nothing compiled or linked, but all tests passed.  I tried a couple
more times with everything passing.
 
This is 32-bit kubuntu 9.10 on a lenovo 8808-8NU, which has two
Intel(R) Pentium(R) D CPU 3.40GHz and 3GB RAM.

I have never seen this before. Is there something I should do to
follow up on it (other than not blowing away the evidence if I see
it again)?
 
-Kevin



Re: [HACKERS] testing hot standby

2010-04-12 Thread Jaime Casanova
On Mon, Apr 12, 2010 at 9:27 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
 On Mon, Apr 12, 2010 at 1:48 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Mon, Apr 12, 2010 at 3:29 PM, Jaime Casanova
 jcasa...@systemguards.com.ec wrote:
 On Mon, Apr 12, 2010 at 1:21 AM, Fujii Masao masao.fu...@gmail.com wrote:
 Didn't the standby
 accept connections before executing pgbench?


 nope, and the last time I tried, it stayed in that state for an hour
 (without accepting connections)... after that I executed on the primary:
 CREATE TABLE tt2 AS SELECT generate_series(1, 100) AS i.
 After that, the standby started accepting connections

 OK. Your reproduction scenario is the following?
 If not, could you show me the complete scenario?

 1. start the primary
 2. pg_start_backup()
 3. copy $PGDATA from the primary to the standby
 4. pg_stop_backup();
 5. create the recovery.conf and start the standby

 execute some WAL-logged action (I've seen this happen even with no
 WAL-logged action if I wait for a while before shutting down the servers)

 6. shutdown (smart mode) the standby

 shutdown (smart) the primary
 start the primary again

 7. start the standby again

I guess this is because the primary is in recovery when the standby
tries to connect to it; the standby should wait until the primary is ready,
but it seems like the primary is failing to advertise itself and the
standby doesn't recheck the condition... could that be?
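If the problem really is a missing recheck, the behaviour Jaime is asking for is just a retry loop on the standby side. A minimal sketch of the idea (plain Python with a made-up try_connect callable; this is not the actual walreceiver code):

```python
import time

def connect_with_recheck(try_connect, max_wait=60.0, interval=5.0):
    """Keep retrying until the primary accepts connections.

    try_connect is a hypothetical callable standing in for the standby's
    attempt to open a replication connection; it raises ConnectionError
    while the primary is still starting up / in recovery.
    """
    deadline = time.monotonic() + max_wait
    while True:
        try:
            return try_connect()
        except ConnectionError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

# Simulated primary that is "in recovery" for its first two attempts.
attempts = {"n": 0}
def fake_primary():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("primary still in recovery")
    return "connected"

print(connect_with_recheck(fake_primary, max_wait=5.0, interval=0.01))
# → connected
```

The point is only that the condition gets rechecked on an interval instead of being tested once at startup.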

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] walreceiver is uninterruptible on win32

2010-04-12 Thread Magnus Hagander
On Mon, Apr 12, 2010 at 13:54, Fujii Masao masao.fu...@gmail.com wrote:
 On Thu, Apr 8, 2010 at 5:01 PM, Fujii Masao masao.fu...@gmail.com wrote:
 If it does, there should be
 some way to get PGXS to execute that rule as well, I'm sure.

 If we can copy/link the source file defining new PQexec when
 we compile the dblink, DLL doesn't seem to be required. So I
 stop creating new DLL for PGXS.

 On second thought, ISTM that we cannot use any source files which exist
 in places other than contrib/dblink and the installation directory when we
 compile dblink under USE_PGXS=1. But we cannot put the file implementing the
 new PQexec in those places either. So I'm thinking again that it should be
 provided as a shared library and be linked from walreceiver and dblink.
 Is this right?

 If adding new shared library is too big change at this point, I think
 that we should postpone the fix only for dblink to 9.1 or later. Since
 no one has complained about this long-term problem of dblink, I'm not
 sure it really should be fixed right now. Thought?

+1. Let's fix walreceiver for now, and we can revisit dblink later.
Since we haven't had any complaints so far...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Josh Berkus
On 4/9/10 1:36 PM, pavelbaros wrote:
 2) change rewriter
 - usually, a view is a relation with a defined rule; when rewriting, the
 rule is fired and the relation (view) is replaced by the view's definition.
 If the relation does not have a rule, the planner and executor treat it as
 a physical table (relation). In the case of a materialized view we want to
 rewrite the select statement only when we are refreshing the MV. In other
 cases the rewriter should skip rewriting and pick up the physical relation,
 excluding situations where other rewrite rules not related to the MV
 definition are specified.

This was done (although not completed) against PostgreSQL 7.1 by
students in Georgia, USA, I believe.  It might be worthwhile looking at
their work if I can find it (if nowhere else, it should be in the ACM).

There are basically 2 major parts for materialized views:

A) Planner: Getting the query planner to swap in the MatView for part of
a query automatically for query plan portions which the MatView supports;

B) Maintenance: maintaining the MatView data according to the programmed
scheme (synch, asynch, periodic).

I do not believe it is possible to do both of the above in one summer.
Of the two, (A) would be more useful since it is possible to manually
implement (B) using triggers, queues and cron jobs today.
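To make the "manual (B)" option concrete, here is a sketch of the snapshot refresh a cron job would run: rebuild the summary table inside one transaction. It uses Python's sqlite3 purely as a stand-in database, and all table and column names are invented:

```python
import sqlite3

# sqlite3 stands in for PostgreSQL here; the schema is invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('north', 10), ('north', 5), ('south', 7);
    CREATE TABLE sales_summary (region TEXT, total INTEGER);
""")

def refresh_summary(conn):
    """What the cron job would run: rebuild the snapshot in one
    transaction, so readers see either the old contents or the new,
    never an empty table."""
    with conn:  # commits on success, rolls back on error
        conn.execute("DELETE FROM sales_summary")
        conn.execute("""INSERT INTO sales_summary
                        SELECT region, SUM(amount) FROM sales
                        GROUP BY region""")

refresh_summary(db)
print(sorted(db.execute("SELECT * FROM sales_summary")))
# → [('north', 15), ('south', 7)]
```

This is exactly the sort of convenience work a built-in REFRESH would take over, along with the locking and metadata questions discussed elsewhere in the thread.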


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 1:50 PM, Josh Berkus j...@agliodbs.com wrote:
 On 4/9/10 1:36 PM, pavelbaros wrote:
  2) change rewriter
  - usually, a view is a relation with a defined rule; when rewriting, the
  rule is fired and the relation (view) is replaced by the view's definition.
  If the relation does not have a rule, the planner and executor treat it as
  a physical table (relation). In the case of a materialized view we want to
  rewrite the select statement only when we are refreshing the MV. In other
  cases the rewriter should skip rewriting and pick up the physical relation,
  excluding situations where other rewrite rules not related to the MV
  definition are specified.

 This was done (although not completed) against PostgreSQL 7.1 by
 students in Georgia, USA, I believe.  It might be worthwhile looking at
 their work if I can find it (if nowhere else, it should be in the ACM).

 There are basically 2 major parts for materialized views:

 A) Planner: Getting the query planner to swap in the MatView for part of
 a query automatically for query plan portions which the MatView supports;

 B) Maintenance: maintaining the MatView data according to the programmed
 scheme (synch, asynch, periodic).

 I do not believe it is possible to do both of the above in one summer.
 Of the two, (A) would be more useful since it is possible to manually
 implement (B) using triggers, queues and cron jobs today.

I don't believe that it's possible to do EITHER of those things in one
summer.  I believe that a basic implementation that has NO bells and
whistles at all, as originally proposed, is going to be a Very Hard
Project.

...Robert



Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Greg Smith

Josh Berkus wrote:

There are basically 2 major parts for materialized views:
A) Planner: Getting the query planner to swap in the MatView for part of
a query automatically for query plan portions which the MatView supports;
B) Maintenance: maintaining the MatView data according to the programmed
scheme (synch, asynch, periodic).
  


I've run more into problems where it's perfectly fine to specify 
the materialized view directly in the query, but keeping that view up to 
date usefully was the real problem.  The whole idea of getting an MV used 
automatically is valuable, but far down the roadmap as I see it.


Not everyone would agree, of course, and your description does suggest a 
better way to organize a high-level summary; here's a first cut:


1) Creation of materialized view
Current state:  using CREATE TABLE AS or similar mechanism, maintain 
manually
Optimal:  CREATE MATERIALIZED VIEW grammar, metadata to store MV data, 
dump/reload support


2) Updating materialized views
Current state:  periodically create new snapshots, or maintain using 
triggers
Optimal:  Built-in refresh via multiple strategies, with minimal locking 
so as to improve concurrent access


3) Using materialized views in the planner
Current state:  specify the manually created MV in queries that can use it
Optimal:  Automatically accelerate queries that could be satisfied by 
substituting available MVs


With (1) being what I think is the only GSoC sized subset here.

I'm not saying someone can't jump right into (3), using the current 
implementations for (1) and (2) that are floating around out there.  I 
just think it would end up wasting a fair amount of work on prototypes 
that don't work quite the same way as the eventual fully integrated 
version.  You certainly can start working on (3) without a fully fleshed 
out implementation of (2), I don't know that it makes sense to work on 
before (1) though.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 3:43 PM, Greg Smith g...@2ndquadrant.com wrote:
 Josh Berkus wrote:

 There are basically 2 major parts for materialized views:
 A) Planner: Getting the query planner to swap in the MatView for part of
 a query automatically for query plan portions which the MatView supports;
 B) Maintenance: maintaining the MatView data according to the programmed
 scheme (synch, asynch, periodic).


 I've run more into problems where it's perfectly fine to specify the
 materialized view directly in the query, but keeping that view up to date
 usefully was the real problem.  The whole idea of getting an MV used
 automatically is valuable, but far down the roadmap as I see it.

 Not everyone would agree of course, and your description does suggest a
 better way to organize a high-level summary though; here's a first cut:

 1) Creation of materialized view
 Current state:  using CREATE TABLE AS or similar mechanism, maintain
 manually
 Optimal:  CREATE MATERIALIZED VIEW grammar, metadata to store MV data,
 dump/reload support

 2) Updating materialized views
 Current state:  periodically create new snapshots, or maintain using
 triggers
 Optimal:  Built-in refresh via multiple strategies, with minimal locking
 so as to improve concurrent access

 3) Using materialized views in the planner
 Current state:  specify the manually created MV in queries that can use it
 Optimal:  Automatically accelerate queries that could be satisfied by
 substituting available MVs

 With (1) being what I think is the only GSoC sized subset here.

 I'm not saying someone can't jump right into (3), using the current
 implementations for (1) and (2) that are floating around out there.  I just
 think it would end up wasting a fair amount of work on prototypes that don't
 work quite the same way as the eventual fully integrated version.  You
 certainly can start working on (3) without a fully fleshed out
 implementation of (2), I don't know that it makes sense to work on before
 (1) though.

Good summary.

...Robert



Re: [HACKERS] Virtual Private Database

2010-04-12 Thread Josh Berkus
On 4/10/10 7:00 AM, Jean-Gérard Pailloncy wrote:
 Hello,
 
 1) VPD: Virtual Private Database
 I would appreciate to have a new feature in PostgreSQL.
 This is an oracle-like feature that implement Row Level Security.
 This feature may be emulated by using VIEW/RULE but this is very time
 consuming and error prone.
 
 I would appreciated to have an estimated of the faisability and the cost
 to implement it.

See the Veil project, and the SEPostgres project:
http://veil.projects.postgresql.org/
http://code.google.com/p/sepgsql/


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com




Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Josh Berkus
Greg,

 I'm not saying someone can't jump right into (3), using the current
 implementations for (1) and (2) that are floating around out there.  I
 just think it would end up wasting a fair amount of work on prototypes
 that don't work quite the same way as the eventual fully integrated
 version.  You certainly can start working on (3) without a fully fleshed
 out implementation of (2), I don't know that it makes sense to work on
 before (1) though.

What would be the use case for (1) by itself?


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Greg Smith

Josh Berkus wrote:

What would be the use case for (1) by itself?
  


There isn't any use case for just working on the infrastructure, just 
like there's no use case for Syntax for partitioning on its own.  That's 
why people rarely work on that part of these problems--it's boring and 
produces no feature of value on its own.  I believe that in both cases, 
attempts to build the more complicated parts without first addressing 
some of the core infrastructure will continue to produce only prototypes.


I don't want to see Materialized Views wander down the same path as 
partitioning, where lots of people produce patches for the fun parts, while 
ignoring the grunt work of things like production quality catalog 
support for the feature.  I think Pavel's proposal got that part right 
by starting with the grammar and executor setup trivia.  And Robert's 
comments about the details in that area that are easy to forget hit 
the mark too.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [HACKERS] ECPG check variables hidden by locals v2

2010-04-12 Thread Bruce Momjian

FYI, I think Michael Meskes applied this patch, though I didn't see you
email that it was applied.

---

Boszormenyi Zoltan wrote:
 Hi,
 
 here's a little beautified patch:
 - more logical parameter order in ECPGdump_a_type()
 - use mm_strdup() instead of strdup() (I notoriously forget this)
 - actually bail out with ET_FATAL if the local variable is
   of a different type than the global variable that was used in
   the DECLARE in the global scope
 
 Although with this patch we can only detect variables under
 DECLARE SECTIONs, so we can't detect the scenario
 in the attached test case. Should we? This test code would be
 a good candidate for the Underhanded C Contest. :-)
 
 Best regards,
 Zoltán Böszörményi
 
 -- 
 Bible has answers for everything. Proof:
 But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
 than these cometh of evil. (Matthew 5:37) - basics of digital technology.
 May your kingdom come - superficial description of plate tectonics
 
 --
 Zoltán Böszörményi
 Cybertec Schönig & Schönig GmbH
 http://www.postgresql.at/
 

[ Attachment, skipping... ]

[ Attachment, skipping... ]

 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com



Re: [HACKERS] walreceiver is uninterruptible on win32

2010-04-12 Thread Fujii Masao
On Tue, Apr 13, 2010 at 1:56 AM, Magnus Hagander mag...@hagander.net wrote:
 If adding new shared library is too big change at this point, I think
 that we should postpone the fix only for dblink to 9.1 or later. Since
 no one has complained about this long-term problem of dblink, I'm not
 sure it really should be fixed right now. Thought?

 +1. Let's fix walreceiver for now, and we can revisit dblink later.
 Since we haven't had any complaints so far...

OK. I'll focus on walreceiver now.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] Naming of new EXCLUDE constraints

2010-04-12 Thread Bruce Momjian
Simon Riggs wrote:
 On Sun, 2010-04-04 at 22:12 -0400, Bruce Momjian wrote:
  Simon Riggs wrote:
   
   How about we call it exclusivity constraints.
   
   Not much of a change, but helps to differentiate.
  
  Well, the keyword is EXCLUDE so we could call it EXCLUDE contraints.
 
 If that is the keyword then that is what people will use, agreed.
 
 That is poor English, but I think we can reword the sentences to allow
 that phrase to make sense.
 
 e.g. Added capability for EXCLUDE constraints.

I have modified the documentation with the attached patch to call this
new feature exclude constraints.  Is this what everyone wants?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
Index: doc/src/sgml/catalogs.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/catalogs.sgml,v
retrieving revision 2.225
diff -c -c -r2.225 catalogs.sgml
*** doc/src/sgml/catalogs.sgml	3 Apr 2010 07:22:52 -	2.225
--- doc/src/sgml/catalogs.sgml	13 Apr 2010 01:26:31 -
***
*** 1701,1707 
  
para
 The catalog structnamepg_constraint/structname stores check, primary
!key, unique, foreign key, and exclusion constraints on tables.
 (Column constraints are not treated specially.  Every column constraint is
 equivalent to some table constraint.)
 Not-null constraints are represented in the structnamepg_attribute/
--- 1701,1707 
  
para
 The catalog structnamepg_constraint/structname stores check, primary
!key, unique, foreign key, and exclude constraints on tables.
 (Column constraints are not treated specially.  Every column constraint is
 equivalent to some table constraint.)
 Not-null constraints are represented in the structnamepg_attribute/
***
*** 1757,1763 
  literalp/ = primary key constraint,
  literalu/ = unique constraint,
  literalt/ = constraint trigger,
! literalx/ = exclusion constraint
/entry
   /row
  
--- 1757,1763 
  literalp/ = primary key constraint,
  literalu/ = unique constraint,
  literalt/ = constraint trigger,
! literalx/ = exclude constraint
/entry
   /row
  
***
*** 1794,1800 
entrytypeoid/type/entry
entryliterallink linkend=catalog-pg-classstructnamepg_class/structname/link.oid/literal/entry
entryThe index supporting this constraint, if it's a unique, primary
!key, foreign key, or exclusion constraint; else 0/entry
   /row
  
   row
--- 1794,1800 
entrytypeoid/type/entry
entryliterallink linkend=catalog-pg-classstructnamepg_class/structname/link.oid/literal/entry
entryThe index supporting this constraint, if it's a unique, primary
!key, foreign key, or exclude constraint; else 0/entry
   /row
  
   row
***
*** 1902,1908 
entrystructfieldconexclop/structfield/entry
entrytypeoid[]/type/entry
entryliterallink linkend=catalog-pg-operatorstructnamepg_operator/structname/link.oid//entry
!   entryIf an exclusion constraint, list of the per-column exclusion operators/entry
   /row
  
   row
--- 1902,1908 
entrystructfieldconexclop/structfield/entry
entrytypeoid[]/type/entry
entryliterallink linkend=catalog-pg-operatorstructnamepg_operator/structname/link.oid//entry
!   entryIf an exclude constraint, list of the per-column exclusion operators/entry
   /row
  
   row
***
*** 1923,1929 
/table
  
para
!In the case of an exclusion constraint, structfieldconkey/structfield
 is only useful for constraint elements that are simple column references.
 For other cases, a zero appears in structfieldconkey/structfield
 and the associated index must be consulted to discover the expression
--- 1923,1929 
/table
  
para
!In the case of an exclude constraint, structfieldconkey/structfield
 is only useful for constraint elements that are simple column references.
 For other cases, a zero appears in structfieldconkey/structfield
 and the associated index must be consulted to discover the expression
***
*** 1946,1952 
  literalpg_class.relchecks/literal needs to agree with the
  number of check-constraint entries found in this table for each
  relation.  Also, literalpg_class.relhasexclusion/literal must
! be true if there are any exclusion-constraint entries for the relation.
 /para
/note
  
--- 1946,1952 
  literalpg_class.relchecks/literal needs to agree with the
  number of check-constraint entries found in this table for each
  relation.  Also, literalpg_class.relhasexclusion/literal must
! be true if there are any exclude-constraint entries for the relation.
 /para
/note
  
Index: 
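
For readers of the archive, here is a minimal sketch of the feature whose
name is being debated (the classic circle-overlap example; the database
name "test" and the table are illustrative assumptions, not from the patch):

```shell
# Hedged sketch: create an exclusion constraint via psql.
# Assumes a local 9.0 server and an existing database named "test".
psql test <<'SQL'
-- No two rows may contain overlapping circles; the index that
-- enforces this is built automatically, like a unique index.
CREATE TABLE circles (
    c circle,
    EXCLUDE USING gist (c WITH &&)
);
SQL
```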

Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Josh Berkus

 I don't want to see Materialized Views wander down the same path as
 partitioning, where lots of people produce fun parts patches, while
 ignoring the grunt work of things like production quality catalog
 support for the feature.  I think Pavel's proposal got that part right
 by starting with the grammar and executor setup trivia.  And Robert's
 comments about the details in that area it's easy to forget about hit
 the mark too.

Good point.  And GSoC may be one of the few times we can get people to
do that kind of work.  Other than Simon, of course.  ;-)

I just worry about any feature which doesn't get as far as a
user-visible implementation.  If someone doesn't do the rest of the
parts soon, such features tend to atrophy because nobody is using them.


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] testing hot standby

2010-04-12 Thread Fujii Masao
On Mon, Apr 12, 2010 at 11:27 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
 1. start the primary
 2. pg_start_backup()
 3. copy $PGDATA from the primary to the standby
 4. pg_stop_backup();
 5. create the recovery.conf and start the standby

 execute some WAL-logged action (I've seen this happen even with no
 WAL-logged action if I wait for a while before shutting down the servers)

 6. shutdown (smart mode) the standby

 shutdown (smart) the primary
 start the primary again

 7. start the standby again
   -- cannot accept connections until new WAL has been created

I was not able to reproduce the problem using the above scenario.
But when I did one more restart of the primary and standby, I was
able to observe the problem. If this is the same one you encountered,
it would be the "can't start hot standby from a shutdown checkpoint"
issue that Heikki pointed out. So it would be very helpful if you could
check whether the posted patch fixes your problem:
http://archives.postgresql.org/pgsql-hackers/2010-04/msg00407.php

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Naming of new EXCLUDE constraints

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 9:28 PM, Bruce Momjian br...@momjian.us wrote:
 Simon Riggs wrote:
 On Sun, 2010-04-04 at 22:12 -0400, Bruce Momjian wrote:
  Simon Riggs wrote:
  
   How about we call it exclusivity constraints.
  
   Not much of a change, but helps to differentiate.
 
  Well, the keyword is EXCLUDE so we could call it EXCLUDE constraints.

 If that is the keyword then that is what people will use, agreed.

 That is poor English, but I think we can reword the sentences to allow
 that phrase to make sense.

 e.g. Added capability for EXCLUDE constraints.

 I have modified the documentation with the attached patch to call this
 new feature "exclude constraints".  Is this what everyone wants?

I don't think we should be changing this without input from a lot more
people.  We had a very, very long discussion of this when this was
initially under development.  Changing it now seems like a good way to
reopen a can of worms.

...Robert



Re: [HACKERS] Naming of new EXCLUDE constraints

2010-04-12 Thread Bruce Momjian
Robert Haas wrote:
 On Mon, Apr 12, 2010 at 9:28 PM, Bruce Momjian br...@momjian.us wrote:
  Simon Riggs wrote:
  On Sun, 2010-04-04 at 22:12 -0400, Bruce Momjian wrote:
   Simon Riggs wrote:
   
How about we call it exclusivity constraints.
   
Not much of a change, but helps to differentiate.
  
   Well, the keyword is EXCLUDE so we could call it EXCLUDE constraints.
 
  If that is the keyword then that is what people will use, agreed.
 
  That is poor English, but I think we can reword the sentences to allow
  that phrase to make sense.
 
  e.g. Added capability for EXCLUDE constraints.
 
  I have modified the documentation with the attached patch to call this
  new feature "exclude constraints".  Is this what everyone wants?
 
 I don't think we should be changing this without input from a lot more
 people.  We had a very, very long discussion of this when this was
 initially under development.  Changing it now seems like a good way to
 reopen a can of worms.

Fine, then we will just have to live with "exclusion constraints" and
"constraint exclusion".

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com



Re: [HACKERS] Naming of new EXCLUDE constraints

2010-04-12 Thread Robert Haas
On Mon, Apr 12, 2010 at 11:03 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Mon, Apr 12, 2010 at 9:28 PM, Bruce Momjian br...@momjian.us wrote:
  Simon Riggs wrote:
  On Sun, 2010-04-04 at 22:12 -0400, Bruce Momjian wrote:
   Simon Riggs wrote:
   
How about we call it exclusivity constraints.
   
Not much of a change, but helps to differentiate.
  
   Well, the keyword is EXCLUDE so we could call it EXCLUDE constraints.
 
  If that is the keyword then that is what people will use, agreed.
 
  That is poor English, but I think we can reword the sentences to allow
  that phrase to make sense.
 
  e.g. Added capability for EXCLUDE constraints.
 
  I have modified the documentation with the attached patch to call this
  new feature "exclude constraints".  Is this what everyone wants?

 I don't think we should be changing this without input from a lot more
 people.  We had a very, very long discussion of this when this was
 initially under development.  Changing it now seems like a good way to
 reopen a can of worms.

 Fine, then we will just have to live with "exclusion constraints" and
 "constraint exclusion".

I am not necessarily 100% averse to changing it... just saying that it
shouldn't be done unless we have a clear consensus to overrule the
previous consensus.

...Robert



[HACKERS] debugger question

2010-04-12 Thread Murali M. Krishna
Hello:

I am brand new to Postgresql.

I ran the following commands.
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data > logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test

I would like to start using gdb.

What is the simplest way of doing this? I read the instructions
on this page

http://sites.google.com/a/cs.berkeley.edu/cs186-fa09/home/assignments/assignment2/pggdb

, but this is what I get.


(gdb) attach 1731
Attaching to program: /usr/local/pgsql/bin/postgres, process 1731
ptrace: Operation not permitted.
(gdb) break cost_seqscan
Breakpoint 1 at 0x81cdf97: file costsize.c, line 163.
(gdb) c
The program is not being run.

-

Please help.

Thanks.

MMK.

-
Please visit NumberFest.com for educational number puzzles & mind exercises for 
all ages! And please tell your friends about it. Thank You!



Re: [HACKERS] debugger question

2010-04-12 Thread tomas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, Apr 12, 2010 at 08:31:38PM -0700, Murali M. Krishna wrote:
 Hello:
 
 I am brand new to Postgresql.
 
 I ran the following commands.
 ./configure
 gmake
 su
 gmake install
 adduser postgres
 mkdir /usr/local/pgsql/data
 chown postgres /usr/local/pgsql/data
 su - postgres
 /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
 /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data > logfile 2>&1 &
 /usr/local/pgsql/bin/createdb test
 /usr/local/pgsql/bin/psql test
 
 I would like to start using gdb.
 
 What is the simplest way of doing this? I read the instructions
 on this page
 
 http://sites.google.com/a/cs.berkeley.edu/cs186-fa09/home/assignments/assignment2/pggdb
 
 , but this is what I get.
 
 
 (gdb) attach 1731
 Attaching to program: /usr/local/pgsql/bin/postgres, process 1731
 ptrace: Operation not permitted.
 (gdb) break cost_seqscan
 Breakpoint 1 at 0x81cdf97: file costsize.c, line 163.
 (gdb) c
 The program is not being run.

Hm. Seems you got the right PID (gdb finds the executable after all).
Are you perhaps running under SELinux? (I just boldly assumed some
GNU/Linux). Which distribution, which kernel version (there seems to be
a bug in 2.4-ish Linux kernels which manifests itself like that, but
that's quite a while ago).

Next time, please tell us what OS is under you (although it might be fun
to watch people make wild guesses :)

Regards
- -- tomás
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFLw/LTBcgs9XrR2kYRArYMAJ9JHu/Sl5JWSAv77om9HXHIzZtrDACZAWWu
fpk1yLbio8KOcWjTEWCXrK4=
=z0qo
-END PGP SIGNATURE-
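
For the archive, a sketch of what usually clears that ptrace EPERM -- an
assumption based on the symptoms above, not something confirmed in the
thread: gdb was likely run as a different user than the one owning the
backend process. Attaching with matching (or root) privileges works; the
placeholder PID below must be filled in by hand:

```shell
# Sketch, assuming the EPERM comes from attaching across users
# (server runs as "postgres", gdb runs as an ordinary user).
# First, from a psql session, find the backend serving it:
psql test -At -c 'SELECT pg_backend_pid();'    # note the PID it prints
# Then attach as root (or as the postgres user) so ptrace is permitted:
sudo gdb /usr/local/pgsql/bin/postgres -p <backend-pid-from-above>
```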



Re: [HACKERS] ECPG check variables hidden by locals v2

2010-04-12 Thread Boszormenyi Zoltan
Yes, he applied the first version without seeing this one,
then he asked for a re-diff privately.

Bruce Momjian wrote:
 FYI, I think Michael Meskes applied this patch, though I didn't see an
 email saying that it was applied.

 ---

 Boszormenyi Zoltan wrote:
   
 Hi,

 here's a little beautified patch:
 - more logical parameter order in ECPGdump_a_type()
 - use mm_strdup() instead of strdup() (I notoriously forget this)
 - actually bail out with ET_FATAL if the local variable is
   of a different type than the global variable that was used in
   the DECLARE in the global scope

 With this patch, though, we can only detect variables declared in
 DECLARE SECTIONs, so we can't detect the scenario
 in the attached test case. Should we? This test code would be
 a good candidate for the Underhanded C Contest. :-)

 Best regards,
 Zoltán Böszörményi

 -- 
 Bible has answers for everything. Proof:
 But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
 than these cometh of evil. (Matthew 5:37) - basics of digital technology.
 May your kingdom come - superficial description of plate tectonics

 --
 Zoltán Böszörményi
 Cybertec Schönig & Schönig GmbH
 http://www.postgresql.at/

 

 [ Attachment, skipping... ]

 [ Attachment, skipping... ]

   
 

   


-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [HACKERS] GSoC - proposal - Materialized Views in PostgreSQL

2010-04-12 Thread Greg Smith

Josh Berkus wrote:

I just worry about any feature which doesn't get as far as a
user-visible implementation.  If someone doesn't do the rest of the
parts soon, such features tend to atrophy because nobody is using them.

While they're limited, there are completely viable prototype-quality 
implementations possible here without a large amount of work to get them 
started.  I'm not worried too much about this feature being unused.  As 
I was just reminded when assembling a page on the wiki about it:  
http://wiki.postgresql.org/wiki/Materalized_Views it's currently ranked 
#1--by a large margin--on the UserVoice feature request survey that 
Peter kicked off.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [HACKERS] debugger question

2010-04-12 Thread Murali M. Krishna
The OS is

Fedora 12.



--- On Mon, 4/12/10, to...@tuxteam.de to...@tuxteam.de wrote:

From: to...@tuxteam.de to...@tuxteam.de
Subject: Re: [HACKERS] debugger question
To: Murali M. Krishna murali1...@yahoo.com
Cc: pgsql-hackers@postgresql.org
Date: Monday, April 12, 2010, 9:28 PM


Re: [HACKERS] testing HS/SR - 1 vs 2 performance

2010-04-12 Thread Heikki Linnakangas
I could reproduce this on my laptop; the standby is about 20% slower. I ran
oprofile, and what stands out as the difference between the master and
standby is that on the standby about 20% of the CPU time is spent in
hash_seq_search(). The call path is GetSnapshotData() ->
KnownAssignedXidsGetAndSetXmin() -> hash_seq_search(). That explains the
difference in performance.
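
The kind of profiling run described above can be sketched as follows.
This is an illustration only, not Heikki's actual procedure: it assumes
oprofile's opcontrol interface, a hot standby already accepting
connections, and a pgbench-initialized database named "test"; the
binary path is the one used elsewhere in this thread.

```shell
# Illustrative oprofile session against a running standby.
opcontrol --init                       # load the oprofile kernel module
opcontrol --start                      # begin sampling system-wide
pgbench -S -T 60 test                  # SELECT-only load on the standby
opcontrol --dump                       # flush collected samples
opreport -l /usr/local/pgsql/bin/postgres | head -20   # top symbols
opcontrol --shutdown                   # stop the profiler
```

On the standby, a report like this is where hash_seq_search() would show
up near the top, pointing at the snapshot-computation path.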

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
