Hi
I would like to develop a feature for 'Online base backup from the
hot-standby' in PostgreSQL 9.2.
Todo : Allow hot file system backups on standby servers
(http://wiki.postgresql.org/wiki/Todo)
[GOAL]
* Make pg_basebackup able to run against a hot-standby server
and acquire
On Fri, May 27, 2011 at 2:31 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Peter Eisentraut pete...@gmx.net writes:
On tor, 2011-05-26 at 12:14 -0400, Tom Lane wrote:
I tried this on my HP-UX 10.20 box, and it didn't work very nicely:
configure decided that the compiler accepted +Olibmerrno, so I
Hello,
I am working on testing Orafce for PostgreSQL 9.1. I found an issue.
I cannot directly call the function lower().
Is it correct?
select dbms_assert.enquote_name('''AAA');
! enquote_name
! --------------
! 'aaa
! (1 row)
select dbms_assert.enquote_name('''AAA', false);
On May7, 2011, at 03:50 , Robert Haas wrote:
On Sat, Jan 15, 2011 at 10:44 AM, Florian Pflug f...@phlo.org wrote:
I've realized that POSIX actually *does* provide a way to receive a signal -
the SIGIO machinery. I've modified my test case to do that. To simplify
things,
I've removed support
On Fri, May 20, 2011 at 09:37:20AM +0100, Leonardo Francalanci wrote:
I'll try to sum up what I understood:
1) the standby keeps the lock, so no problem with
stray files coming from the unlogged-to-logged log
replay, as the table can't be read during the operation
2) calling
From: Noah Misch n...@leadboat.com
- the patch is missing the 'send all table pages to the
standby' part; is there some code I can use as a base?
Nothing comes to mind as especially similar.
I guess I have to generate some special log type that
is only played by standby servers.
On 27.05.2011 12:06, Pavel Stehule wrote:
Hello,
I am working on testing Orafce for PostgreSQL 9.1. I found an issue.
I cannot directly call the function lower().
See DirectFunctionCall1Coll()
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list
2011/5/27 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
On 27.05.2011 12:06, Pavel Stehule wrote:
Hello,
I am working on testing Orafce for PostgreSQL 9.1. I found an issue.
I cannot directly call the function lower().
See DirectFunctionCall1Coll()
ok
Thank you
Pavel
--
On Fri, May 27, 2011 at 10:49:13AM +0100, Leonardo Francalanci wrote:
From: Noah Misch n...@leadboat.com
- the patch is missing the 'send all table pages to the
standby' part; is there some code I can use as a base?
Nothing comes to mind as especially similar.
I guess I have to
On 27.05.2011 09:09, Jun Ishiduka wrote:
STEP1: Make startup process to acquire backup-end-position from
not only backup-end record but also backup-history-file .
* startup process allows to acquire backup-end-position
from backup-history-file .
*
On 27 May 2011 10:01, Florian Pflug f...@phlo.org wrote:
Anyway, I'm glad to see that Peter Geoghegan has picked this up
and turned this into an actual patch.
Extremely cool!
Thanks Florian.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training
Hello,
I've encountered a problem of PostgreSQL startup, and I can think of a
simple solution for it. However, since I don't yet have much knowledge of
the PostgreSQL implementation, I'd like to ask you what the best and
easiest solution is. If it is easy for me to work on during my spare
Hello,
I posted a patch for bug #6011 to pgsql-hackers several days ago. How can I
check the status of bug fixes? I'm worried that the patch might be
forgotten, because bug #5842 was missed for two months until Bruce noticed
it.
Regards
MauMau
On 05/27/2011 08:36 AM, MauMau wrote:
Hello,
I posted a patch for bug #6011 to pgsql-hackers several days ago. How
can I check the status of bug fixes? I'm worried that the patch might
be forgotten, because bug #5842 was missed for two months until Bruce
noticed it.
In the immortal
Something odd is happening on buildfarm member wombat, a PPC970MP box
running Gentoo. We're getting dblink test failures. On the one I looked
at more closely I saw this:
[4ddf2c59.7aec:153] LOG: disconnection: session time: 0:00:00.444 user=markwkm
database=contrib_regression host=[local]
I wonder if we have tested the reasoning behind having
SKIP_PAGES_THRESHOLD and the magic number of 32 assigned to it
currently. While looking at the code after a long time and doing some
tests, I realized that a manual VACUUM would always scan the first 31
pages of a relation which has not received
Pavan Deolasee pavan.deola...@gmail.com writes:
My statistical skills are limited, but wouldn't that mean that for a
fairly well distributed write activity across a large table, if there
are even 3-4% update/deletes, we would most likely hit a
not-all-visible page for every 32 pages scanned?
2011/5/27 Pavan Deolasee pavan.deola...@gmail.com:
I wonder if we have tested the reasoning behind having
SKIP_PAGES_THRESHOLD and the magic number of 32 assigned to it
currently. While looking at the code after a long time and doing some
tests, I realized that a manual VACUUM would always
On 27.05.2011 16:52, Pavan Deolasee wrote:
On closer inspection, I realized that we have
deliberately put in this hook to ensure that we use visibility maps
only when we see at least SKIP_PAGES_THRESHOLD worth of all-visible
sequential pages to take advantage of possible OS seq scan
MauMau maumau...@gmail.com writes:
The bad thing is that pg_ctl continues to wait until the specified duration
passes, even if postgres fails to start. For example, it is naturally
desirable for pg_ctl to terminate when postgresql.conf contains a syntax
error.
Hmm, I thought we'd fixed
2011/5/27 Cédric Villemain cedric.villemain.deb...@gmail.com:
2011/5/27 Pavan Deolasee pavan.deola...@gmail.com:
I wonder if we have tested the reasoning behind having
SKIP_PAGES_THRESHOLD and the magic number of 32 assigned to it
currently. While looking at the code after a long time and
On Thu, May 26, 2011 at 5:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
When we prune or vacuum a page, I don't suppose we have enough
information about that page's previous state to calculate a tuple
count delta, do we? That would allow a far
On Fri, May 27, 2011 at 7:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Pavan Deolasee pavan.deola...@gmail.com writes:
My statistical skills are limited, but wouldn't that mean that for a
fairly well distributed write activity across a large table, if there
are even 3-4% update/deletes, we would
On Fri, May 27, 2011 at 8:44 AM, Andrew Dunstan and...@dunslane.net wrote:
Something odd is happening on buildfarm member wombat, a PPC970MP box
running Gentoo. We're getting dblink test failures. On the one I looked at
more closely I saw this:
[4ddf2c59.7aec:153] LOG: disconnection:
On Fri, May 27, 2011 at 7:11 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Well, as with normal queries, it's usually faster to just seqscan the whole
table if you need to access more than a few percent of the pages, because
sequential I/O is so much faster than random I/O.
On Fri, May 27, 2011 at 7:41 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 27.05.2011 16:52, Pavan Deolasee wrote:
On closer inspection, I realized that we have
deliberately put in this hook to ensure that we use visibility maps
only when we see at least
Robert Haas robertmh...@gmail.com wrote:
Andrew Dunstan and...@dunslane.net wrote:
Something odd is happening on buildfarm member wombat, a PPC970MP
box running Gentoo. We're getting dblink test failures. On the
one I looked at more closely I saw this:
[4ddf2c59.7aec:153] LOG:
Excerpts from Andrew Dunstan's message of Fri May 27 08:53:50 -0400 2011:
In the immortal words of Robert Haas: "Hey, look! An elephant!"
This is Robert's $1000 t-shirt, I think.
--
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication,
On 05/27/2011 05:36 AM, MauMau wrote:
Hello,
I posted a patch for bug #6011 to pgsql-hackers several days ago. How
can I check the status of bug fixes? I'm worried that the patch might be
forgotten, because bug #5842 was missed for two months until Bruce
noticed it.
The joke that my
Joshua D. Drake j...@commandprompt.com writes:
You have done what you need to do to check the status. Someone who knows
something about the bug should speak up at some point.
That patch is waiting for a committer who knows something about Windows
to pick it up.
regards,
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Robert Haas robertmh...@gmail.com wrote:
Around when did it start failing?
According to the buildfarm logs the first failure was roughly 1 day
10 hours 40 minutes before this post.
See
I wrote:
grebe, which is also a PPC64 machine, isn't showing the bug. And I just
failed to reproduce the problem on a RHEL6 PPC64 box. About to go try
it on RHEL5, which has a gcc version much closer to what wombat says
it's using, but I'm not very hopeful about that.
Nope, no luck there
On 11-05-27 12:35 PM, Tom Lane wrote:
grebe, which is also a PPC64 machine, isn't showing the bug. And I just
failed to reproduce the problem on a RHEL6 PPC64 box. About to go try
it on RHEL5, which has a gcc version much closer to what wombat says
it's using, but I'm not very hopeful about
On Fri, May 27, 2011 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Joshua D. Drake j...@commandprompt.com writes:
You have done what you need to do to check the status. Someone who knows
something about the bug should speak up at some point.
That patch is waiting for a committer who knows
On Fri, May 27, 2011 at 11:10 AM, Greg Stark gsst...@mit.edu wrote:
It would be nice if the VM had a bit for all-frozen but that
wouldn't help much except in the case of truly cold data. We could
perhaps keep the frozen data per segment or per VM page (which covers
a large section of the
Robert Haas robertmh...@gmail.com writes:
On Fri, May 27, 2011 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
That patch is waiting for a committer who knows something about Windows
to pick it up.
It might be useful, in this situation, for the OP to add this patch to
the CommitFest
Robert Haas robertmh...@gmail.com writes:
Still, maybe we don't have a better option. If it were me, I'd add an
additional safety valve: use your formula if the percentage of the
relation scanned is above some threshold where there's unlikely to be
too much skew. But if the percentage
On Fri, May 27, 2011 at 2:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Fri, May 27, 2011 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
That patch is waiting for a committer who knows something about Windows
to pick it up.
It might be useful, in
On Thu, May 26, 2011 at 12:38 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, May 26, 2011 at 1:18 PM, Robert Haas robertmh...@gmail.com wrote:
The replay of the WAL record for A doesn't rely on the content of chunk 1
which B modified. So I don't think that partial page writes has such
a
On Wed, May 25, 2011 at 01:29:05PM -0400, Robert Haas wrote:
On Wed, May 25, 2011 at 1:06 PM, Greg Smith g...@2ndquadrant.com wrote:
On 05/24/2011 04:34 PM, Robert Haas wrote:
I've been looking into a similar refactoring of the names here, where we
bundle all of these 'speed over safety'
Hi,
One of our customers is interested in being able to store original
timezone along with a certain timestamp.
It is currently possible to store a TZ in a separate column, but this is
a bit wasteful and not very convenient anyway.
There are all sorts of UI issues that need to be resolved in
On Tue, May 24, 2011 at 10:03 AM, Noah Misch n...@leadboat.com wrote:
On Tue, May 24, 2011 at 08:53:11AM -0400, Robert Haas wrote:
On Tue, May 24, 2011 at 5:07 AM, Noah Misch n...@leadboat.com wrote:
This drops the part about only transferring fast-path entries once when a
strong_lock_counts
Robert Haas robertmh...@gmail.com writes:
When a strong lock is taken or released, we have to increment or
decrement strong_lock_counts[fasthashpartition]. Here's the question:
is that atomic? In other words, suppose that strong_lock_counts[42]
starts out at 0, and two backends both do
I intend to have a look at this patch and hopefully fix the outstanding
issues.
--
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Alvaro Herrera alvhe...@commandprompt.com writes:
Right now we rely on the tzdata files on disk for things like
pg_timezone_names and other accesses of TZ data; so the files are the
authoritative source of TZ info. So we need to ensure that whenever the
files are updated, the catalogs are
On May 27, 2011, at 1:43 PM, Alvaro Herrera wrote:
Right now we rely on the tzdata files on disk for things like
pg_timezone_names and other accesses of TZ data; so the files are the
authoritative source of TZ info. So we need to ensure that whenever the
files are updated, the catalogs are
On fre, 2011-05-27 at 13:55 -0400, Robert Haas wrote:
Also, I think it's about time we got ourselves some kind of bug
tracker. I have no idea how to make that work without breaking
workflow that works now, but a quick survey of my pgsql-bugs email
suggests that this is far from the only thing
David E. Wheeler da...@kineticode.com wrote:
I like it, but what do you do when a TZ has been renamed or has
ceased to exist. Or, worse, existed last week, so last week's
dates might still use it, but next week's must not?
I think the key thing is that the timestamp portion of it would be
David E. Wheeler da...@kineticode.com writes:
I like it, but what do you do when a TZ has been renamed or has ceased
to exist.
As far as that goes, I think nothing is a sufficient answer. There's
no requirement that an OID in the mapping table correspond to a live TZ.
It's just a more compact
On May 27, 2011, at 2:35 PM, Tom Lane wrote:
David E. Wheeler da...@kineticode.com writes:
I like it, but what do you do when a TZ has been renamed or has ceased
to exist.
As far as that goes, I think nothing is a sufficient answer. There's
no requirement that an OID in the mapping table
On Fri, May 27, 2011 at 04:55:07PM -0400, Robert Haas wrote:
When a strong lock is taken or released, we have to increment or
decrement strong_lock_counts[fasthashpartition]. Here's the question:
is that atomic? In other words, suppose that strong_lock_counts[42]
starts out at 0, and two
From a language-definition perspective, it is helpful to have a name for
every case, rather than an implicit fallback without any word to describe
it. See for instance CREATE USER CREATEDB/NOCREATEDB or CREATE RULE ...
DO ALSO/INSTEAD for similar occurrences of naming default cases.
Oddly
On Friday, May 27, 2011 20:39:26 Robert Haas wrote:
On Fri, May 27, 2011 at 2:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Fri, May 27, 2011 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
That patch is waiting for a committer who knows something
On tor, 2011-05-26 at 17:31 -0400, Tom Lane wrote:
We could also do that globally, but that would probably be something
for
the next release.
Hmm. I'm a bit scared of how much might break. I don't think the
autoconf tests are generally designed to guarantee no warnings.
Yeah, I think
On Fri, May 27, 2011 at 9:24 PM, Peter Eisentraut pete...@gmx.net wrote:
On fre, 2011-05-27 at 13:55 -0400, Robert Haas wrote:
Also, I think it's about time we got ourselves some kind of bug
tracker. I have no idea how to make that work without breaking
workflow that works now, but a quick
On Fri, May 27, 2011 at 10:06 AM, Steve Singer ssin...@ca.afilias.info wrote:
As another data point, the dblink regression tests work fine for me on a
PPC32 Debian (squeeze, gcc 4.4.5) based system.
Given that it's dblink my guess is that it's picking up the wrong
version of libpq somehow.
--
On Fri, May 27, 2011 at 2:32 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I think the key thing is that the timestamp portion of it would be
identical to our current TIMESTAMP WITH TIME ZONE -- always store the
value in UTC.
FWIW, our timestamp with time zone stores seconds
On 05/27/2011 01:43 PM, Alvaro Herrera wrote:
Hi,
One of our customers is interested in being able to store original
timezone along with a certain timestamp.
I am very interested in the use-case for this (in part as I'm working on
a PG-related time talk). My experience thus far is that people
On Fri, May 27, 2011 at 4:13 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
I am very interested in the use-case for this (in part as I'm working on a
PG-related time talk). My experience thus far is that people who want this
do not fully understand the nature of date-time
On 05/27/2011 04:29 PM, Greg Stark wrote:
On Fri, May 27, 2011 at 4:13 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
I am very interested in the use-case for this (in part as I'm working on a
PG-related time talk). My experience thus far is that people who want this
do not fully
On May 27, 2011, at 6:29 PM, Greg Stark wrote:
Both of these two cases can be handled differently. The former by
storing the raw text inputs and then storing the interpreted value as
a derived column separately, and the latter by storing the local time
zone to use for display as an additional
Greg Stark gsst...@mit.edu writes:
On Fri, May 27, 2011 at 10:06 AM, Steve Singer ssin...@ca.afilias.info
wrote:
As another data point, the dblink regression tests work fine for me on a
PPC32 Debian (squeeze, gcc 4.4.5) based system.
Given that it's dblink my guess is that it's picking up
Excerpts from Cédric Villemain's message of Fri May 27 18:37:05 -0400 2011:
2011/4/21 Robert Haas rh...@postgresql.org:
Modified Files
--------------
doc/src/sgml/ref/alter_table.sgml |  26 +++
src/backend/commands/tablecmds.c  | 277 +++--
From: Tom Lane t...@sss.pgh.pa.us
MauMau maumau...@gmail.com writes:
The bad thing is that pg_ctl continues to wait until the specified
duration
passes, even if postgres fails to start. For example, it is naturally
desirable for pg_ctl to terminate when postgresql.conf contains a syntax
error.
From: Peter Eisentraut pete...@gmx.net
On fre, 2011-05-27 at 13:55 -0400, Robert Haas wrote:
Also, I think it's about time we got ourselves some kind of bug
tracker. I have no idea how to make that work without breaking
workflow that works now, but a quick survey of my pgsql-bugs email