On Thu, May 23, 2013 at 10:10 AM, Amit Langote amitlangot...@gmail.com wrote:
A PostgreSQL user recently reported on pgsql-admin about an issue:
when he changed wal_level from 'minimal' to 'hot_standby', the WAL
segment sequence rewound, that is, it started using old names. A
snippet of his ls
I think these are the WAL files that were preallocated by WAL
recycling but have not
been used yet.
# WAL after wal_level changed from 'minimal' to 'hot_standby'
-rw--- 1 postgres postgres 16777216 May 21 12:27 0001000E007B
-rw--- 1 postgres postgres 16777216 May 21
On Thu, May 23, 2013 at 5:01 PM, Amit Langote amitlangot...@gmail.com wrote:
I think these are the WAL files that were preallocated by WAL
recycling but have not
been used yet.
# WAL after wal_level changed from 'minimal' to 'hot_standby'
-rw--- 1 postgres postgres 16777216 May 21 12:27
Can pre-allocation go that far? For example, assuming
0001000E0080 is currently being used, then is it possible
that a segment named/numbered 00010010007E (which does
exist in his pg_xlog as he reported in pgsql-admin thread) is
pre-allocated already?
Yes, if
On Thu, May 23, 2013 at 1:25 AM, Amit Langote amitlangot...@gmail.com wrote:
Okay, now I understand. Also, looking at his ls -l pg_xlog, I could
find that the modified timestamps of all those pre-allocated segments are
roughly the same (around 12:10), whereas the latest modified time (15:37)
is of
Hello,
While reading code, I noticed a small typo in syncrep.h. It
says SyncRepUpdateSyncStandbysDefined() is called by the wal writer, whereas it's
called by the checkpointer process, at least in HEAD. Previously, it
was being called by the bgwriter process.
Attached patches fix this in the
Hi all,
I was searching for an existing mechanism which ensures that a crashed
process releases a content lock held on a page in buffer cache. Also,
a similar mechanism for refcount of the page.
I seem to have missed it. Do we have a mechanism, and if yes, where can I find it?
Regards,
Atri
--
Hello,
I am facing a problem with pgstat, as my subject says. I know some topics
are already open about this, but I would like to go deeper.
Someone said that the best way to stop getting this message is
to configure pgstat.stat to be loaded into RAM with a tmpfs mount point.
What are
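The tmpfs arrangement being referred to usually looks something like this (a sketch only: the mount point and size are illustrative, and stats_temp_directory is the GUC that relocates the live stats file in 8.4 and later):

```
# /etc/fstab -- RAM-backed mount for the statistics file (illustrative path/size)
tmpfs  /var/lib/pgsql/pg_stat_tmp  tmpfs  size=64M,uid=postgres,gid=postgres  0 0

# postgresql.conf
stats_temp_directory = '/var/lib/pgsql/pg_stat_tmp'
```

With that in place the live stats file never touches disk during normal operation; a permanent copy is still written under the data directory at shutdown.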
On Thursday, May 23, 2013 3:35 PM Atri Sharma wrote:
Hi all,
I was searching for an existing mechanism which ensures that a crashed
process releases a content lock held on a page in buffer cache. Also,
a similar mechanism for refcount of the page.
I seem to have missed it. Do we have a
Hi,
I've been hacking on a tool to allow resynchronizing an old master
server after failover. The need to do a full backup/restore has been a
common complaint ever since we've had streaming replication. I saw on
the wiki that this was discussed in the dev meeting; too bad I couldn't
make it.
Sent from my iPad
On 23-May-2013, at 16:27, Amit Kapila amit.kap...@huawei.com wrote:
On Thursday, May 23, 2013 3:35 PM Atri Sharma wrote:
Hi all,
I was searching for an existing mechanism which ensures that a crashed
process releases a content lock held on a page in buffer cache. Also,
On Thu, May 23, 2013 at 7:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
1. Scan the WAL log of the old cluster, starting from the point where
the new cluster's timeline history forked off from the old cluster. For each
WAL record, make a note of the data blocks that are touched. This
On 23 May 2013 12:10, Heikki Linnakangas hlinnakan...@vmware.com wrote:
Please take a look: https://github.com/vmware/pg_rewind
The COPYRIGHT file shows that VMware is claiming copyright on unstated
parts of the code for this. As such, it's not a normal submission to
the PostgreSQL project,
On 15 May 2013 16:35, Heikki Linnakangas hlinnakan...@vmware.com wrote:
Shared memory space is limited, but we only need the watermarks for any
in-progress truncations. Let's keep them in shared memory, in a small
fixed-size array. That limits the number of concurrent truncations that can
be
On Wed, May 22, 2013 at 11:11 PM, Andres Freund and...@2ndquadrant.com wrote:
Make that actually having acquired an xid. We skip a large part of the
work if a transaction doesn't yet have one, afair. I don't think the mere
presence of 600 idle connections without an xid in contrast to just
On Thu, May 23, 2013 at 5:29 AM, Sergey Konoplev gray...@gmail.com wrote:
On Thu, May 23, 2013 at 1:25 AM, Amit Langote amitlangot...@gmail.com
wrote:
Okay, now I understand. Also, looking at his ls -l pg_xlog, I could
find that the modified timestamps of all those pre-allocated segments are
On 23 May 2013 07:10, Heikki Linnakangas hlinnakan...@vmware.com wrote:
Hi,
I've been hacking on a tool to allow resynchronizing an old master server
after failover. The need to do a full backup/restore has been a common
complaint ever since we've had streaming replication. I saw on the wiki
Hey all,
I find the current LISTEN / NOTIFY rather limited in the context of
databases with multiple roles. As it stands, it is not possible to restrict
the use of LISTEN or NOTIFY to specific roles, and therefore notifications
(and their payloads) cannot really be trusted as coming from any
Atri Sharma atri.j...@gmail.com writes:
On a different note, shouldn't we have a time out for a content lock
in buffer cache?
No; the overhead of setting up and canceling such a timeout would
greatly outweigh any possible benefit.
Generally speaking, LWLocks are not meant to be used in
On Thu, May 23, 2013 at 8:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Atri Sharma atri.j...@gmail.com writes:
On a different note, shouldn't we have a time out for a content lock
in buffer cache?
No; the overhead of setting up and canceling such a timeout would
greatly outweigh any possible
Atri Sharma atri.j...@gmail.com writes:
I was musing over a possible condition where a rogue client gets the
backend to process queries which take a *lot* of time (note, this is
only in my head atm. I may be completely wrong here).
Wouldn't something on the lines of a timeout help here?
You can
On Thu, May 23, 2013 at 8:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Atri Sharma atri.j...@gmail.com writes:
I was musing over a possible condition where a rogue client gets the
backend to process queries which take a *lot* of time (note, this is
only in my head atm. I may be completely wrong
Atri Sharma atri.j...@gmail.com writes:
BTW, what is your opinion on a rogue client's damaging capabilities?
Theoretically, what can a client which tries to stall the backend
target?
If you let an uncooperative user issue arbitrary SQL queries, he can
do any number of things to put server
For instance, take out exclusive locks on all your tables and just
go to sleep (although I think this is limited by table permissions in
recent PG versions).
This is what I have been worried about. The locks (exclusive locks or
content locks in buffers) can be held and stalled. Can't we do
On Tue, May 21, 2013 at 3:06 AM, Amit Kapila amit.kap...@huawei.com wrote:
Here are the results. The first field in each line is the number of
clients. The second number is the scale factor. The numbers after
master and patched are the median of three runs.
01 100 master 1433.297699 patched
Atri Sharma atri.j...@gmail.com writes:
For instance, take out exclusive locks on all your tables and just
go to sleep (although I think this is limited by table permissions in
recent PG versions).
This is what I have been worried about. The locks (exclusive locks or
content locks in buffers)
If you let an uncooperative user issue arbitrary SQL queries, he can
do any number of things to put server performance into the tank.
For instance, take out exclusive locks on all your tables and just
go to sleep (although I think this is limited by table permissions in
recent PG versions).
No, you're not following. These would be table-level heavyweight locks,
and there already are timeout mechanisms that work at that level.
Oh, right. Sorry, I missed that. I will go and research heavyweight locks now.
Thanks a ton!
Regards,
Atri
--
Regards,
Atri
l'apprenant
--
Sent via
On Tue, May 21, 2013 at 11:59 AM, Benedikt Grundmann
bgrundm...@janestreet.com wrote:
We are seeing these errors on a regular basis on the testing box now. We
have even changed the backup script to
shut down the hot standby, take an lvm snapshot, restart the hot standby, rsync
the lvm snapshot.
On Thu, May 23, 2013 at 8:52 PM, Amit Langote amitlangot...@gmail.com wrote:
If you let an uncooperative user issue arbitrary SQL queries, he can
do any number of things to put server performance into the tank.
For instance, take out exclusive locks on all your tables and just
go to sleep
On Thu, May 23, 2013 at 8:55 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, May 23, 2013 at 7:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
1. Scan the WAL log of the old cluster, starting from the point where
the new cluster's timeline history forked off from the old cluster.
On Thu, May 23, 2013 at 4:44 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
While reading code, I noticed a small typo in syncrep.h. It says
SyncRepUpdateSyncStandbysDefined() is called by wal writer whereas its
called by the checkpointer process, at least in the HEAD. Previously, it was
Atri Sharma atri.j...@gmail.com writes:
I am not sure, but does statement_timeout depend on *what* the query
is doing internally (i.e., if it is holding lots of locks, pins, etc.)?
A little bit --- the timeout won't actually kill the query until the
next time control reaches a CHECK_FOR_INTERRUPTS
A little bit --- the timeout won't actually kill the query until the
next time control reaches a CHECK_FOR_INTERRUPTS macro that's not inside
a critical section. We've had issues in the past with particular code
paths that failed to include such a check in a long-running loop, and
there
On Thu, May 23, 2013 at 10:43 AM, Atri Sharma atri.j...@gmail.com wrote:
A little bit --- the timeout won't actually kill the query until the
next time control reaches a CHECK_FOR_INTERRUPTS macro that's not inside
a critical section. We've had issues in the past with particular code
paths
Atri Sharma atri.j...@gmail.com writes:
Right. I believe this is part of the standard way in which we handle
interrupts, right? Making sure that we cancel a query when the backend
is in a state to do so, not when the interrupt actually comes in?
Right, the actual signal handler will only
Right, the actual signal handler will only kill the query immediately
if the backend is in a safe state (eg, while it's waiting for a
heavyweight lock). Otherwise it just sets a flag that's checked by
CHECK_FOR_INTERRUPTS. See StatementCancelHandler in postgres.c.
Roger that, I will
On Thu, May 23, 2013 at 11:34 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, May 23, 2013 at 8:55 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, May 23, 2013 at 7:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
1. Scan the WAL log of the old cluster, starting from the
Hi All!
We will be upgrading gemulon.postgresql.org during the next few hours
to the current release of debian (wheezy/7.0) as discussed with
various people.
To prevent any kind of issues we will be locking out committers for a
brief amount of time
On 23.05.2013 07:55, Robert Haas wrote:
On Thu, May 23, 2013 at 7:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
1. Scan the WAL log of the old cluster, starting from the point where
the new cluster's timeline history forked off from the old cluster. For each
WAL record, make a note
On 23.05.2013 08:03, Simon Riggs wrote:
On 23 May 2013 12:10, Heikki Linnakangashlinnakan...@vmware.com wrote:
Please take a look: https://github.com/vmware/pg_rewind
The COPYRIGHT file shows that VMware is claiming copyright on unstated
parts of the code for this. As such, it's not a normal
Hi,
after having discussed $subject shortly over dinner yesterday, while I
should have been preparing the slides for my talk I noticed that there
might be a rather easy way to get rid of freezing.
I think that the existence of hint bits and the crash safe visibility
maps should provide
On Thu, May 23, 2013 at 11:10 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 23.05.2013 07:55, Robert Haas wrote:
On Thu, May 23, 2013 at 7:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
1. Scan the WAL log of the old cluster, starting from the point where
the new
On 2013-05-23 19:51:48 +0200, Andres Freund wrote:
I think that the existence of hint bits and the crash safe visibility
maps should provide sufficient tooling to make freezing unnecessary
without losing much information for debugging if we modify the way
vacuum works a bit.
That way we
On 05/23/2013 12:20 PM, Stefan Kaltenbrunner wrote:
Hi All!
We will be upgrading gemulon.postgresql.org during the next few
hours to the current release of debian (wheezy/7.0) as discussed
with various people. To prevent any kind of issues we
Mathieu Guerin wrote:
Hello,
I am facing a problem with pgstat, as my subject says. I know some topics
are already open about this, but I would like to go deeper.
Someone said that the best way to stop getting this message is
to configure pgstat.stat to be loaded into RAM with a
On 2013-05-23 19:51:48 +0200, Andres Freund wrote:
We currently need to make sure we scanned the whole relation and have
frozen everything to have a sensible relfrozenxid for a relation.
So, what I propose instead is basically:
1) only vacuum non-all-visible pages, even when doing it for
Thanks for the response.
I have some evidence against an issue in the backup procedure (though I'm
not ruling it out). We moved back to taking the backup off of the primary
and all errors for all three clusters went away. All of the hardware is
the same, OS and postgres versions are largely the
On 5/14/13 2:13 PM, Greg Smith wrote:
It is possible that we are told to put something in the freelist that
is already in it; don't screw up the list if so.
I don't see where the code does anything to handle that though. What was your
intention here?
IIRC, the code that pulls from the
On Thu, May 23, 2013 at 6:18 AM, German Becker german.bec...@gmail.com wrote:
Let me describe the process I follow to get to this. What I am doing is
testing a migration from 8.3 to 9.1. The way I plan to do it is the
following.
1) Create the schema
2) import the biggest tables, which are
Hi,
Please find attached a patch to take code-coverage of ALTER OPERATOR
FAMILY.. ADD / DROP (src/backend/commands/opclasscmds.c) from 50% to 87%.
Any and all feedback is welcome.
--
Robins Tharakan
regress_opclass.patch
Description: Binary data
On Thu, May 23, 2013 at 9:31 PM, Mathieu Guerin
mathieu.gueri...@gmail.comwrote:
What are the consequences? Because this file will be removed if the server
reboots.
Those temporary statistics are stored in the global directory when the server
shuts down, so the risk here would be to lose a portion of
On 5/20/13 7:51 AM, Heikki Linnakangas wrote:
The way that MarkDirty requires this specific underlying storage manager
behavior to work properly strikes me as a bit of a layering violation
too. I'd like the read and write paths to have a similar API, but here
they don't even operate on the
Hi,
Please find attached a patch to take code-coverage of LOCK TABLE (
src/backend/commands/lockcmds.c) from 57% to 84%.
Any and all feedback is welcome.
--
Robins Tharakan
regress_lock.patch
Description: Binary data
I'm working on a new project here that I wanted to announce, just to
keep from duplicating effort in this area. I've started to add a cost
limit delay for regular statements. The idea is that you set a new
statement_cost_delay setting before running something, and it will
restrict total
On Thu, May 23, 2013 at 8:27 PM, Greg Smith g...@2ndquadrant.com wrote:
The main unintended consequences issue I've found so far is when a cost
delayed statement holds a heavy lock. Autovacuum has some protection
against letting processes with an exclusive lock on a table go to sleep. It
On 5/23/13 7:34 PM, Claudio Freire wrote:
Why not make the delay conditional on the amount of concurrency, kinda
like the commit_delay? Although in this case, it should only count
unwaiting connections.
The test run by commit_delay is way too heavy to run after every block
is processed. That
On Thu, May 23, 2013 at 8:46 PM, Greg Smith g...@2ndquadrant.com wrote:
On 5/23/13 7:34 PM, Claudio Freire wrote:
Why not make the delay conditional on the amount of concurrency, kinda
like the commit_delay? Although in this case, it should only count
unwaiting connections.
The test run by
On 5/23/13 7:56 PM, Claudio Freire wrote:
Besides the obvious option of making a lighter check (doesn't have
to be 100% precise), wouldn't this check be done when it would
otherwise sleep? Is it so heavy still in that context?
A commit to typical 7200RPM disk is about 10ms, while
On 23.05.2013 19:10, Greg Smith wrote:
On 5/20/13 7:51 AM, Heikki Linnakangas wrote:
The way that MarkDirty requires this specific underlying storage
manager behavior to work properly strikes me as a bit of a
layering violation too. I'd like the read and write paths to have
a similar API,
On Thu, May 23, 2013 at 1:51 PM, Andres Freund and...@2ndquadrant.com wrote:
So, what I propose instead is basically:
1) only vacuum non-all-visible pages, even when doing it for
anti-wraparound
Check. We might want an option to force a scan of the whole relation.
2) When we can set
Hi all,
The proposed patch adds some files to ignore in .gitignore:
- tags (produced by src/tools/make_ctags)
- TAGS (produced by src/tools/make_etags)
- .*.swp (may appear in the source tree if vi/vim was killed for some reason)
Regards,
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
There hasn't been general agreement on the merits of particular .gitignore
rules of this sort.
You could hide your own favorite patterns by putting them into a
~/.gitignore that isn't part of the repo, configuring it globally, thus:
git config --global core.excludesfile '~/.gitignore'
That
On Fri, May 24, 2013 at 12:04 AM, Christopher Browne cbbro...@gmail.comwrote:
There hasn't been general agreement on the merits of particular .gitignore
rules of this sort.
You could hide your own favorite patterns by putting this into your
~/.gitignore that isn't part of the repo,
On 05/23/2013 10:03 PM, Andres Freund wrote:
On 2013-05-23 19:51:48 +0200, Andres Freund wrote:
We currently need to make sure we scanned the whole relation and have
frozen everything to have a sensible relfrozenxid for a relation.
So, what I propose instead is basically:
1) only vacuum
Re: Cédric Villemain 2013-05-17 201305171642.59241.ced...@2ndquadrant.com
If it seems to be on the right way, I'll keep fixing EXTENSION building with
VPATH.
I haven't tried the patch, but let me just say that Debian (and
apt.postgresql.org) would very much like the VPATH situation getting