On Mon, May 21, 2012 at 10:44 AM, Josh Berkus j...@agliodbs.com wrote:
Right. So what I'm trying to figure out is why counting an index which
fits in RAM (and I've confirmed via EXPLAIN (buffers on) that it is not
being heap-fetched or read from disk) would take 25% as long as counting
a table
On Mon, May 21, 2012 at 1:42 PM, Josh Berkus j...@agliodbs.com wrote:
Earlier you said that this should be an ideal setup for IOS. But it
isn't really--the ideal setup is one in which the alternative to an
IOS is a regular index scan which makes many uncached scattered reads
into the heap.
On Mon, May 21, 2012 at 2:29 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Mon, May 21, 2012 at 4:17 PM, Jeff Janes jeff.ja...@gmail.com wrote:
For vaguely real life, take your example of pgbench -i -s200 -F 50,
and I have 2Gig RAM, which seems to be the same as you have.
With select only
Now that there are index only scans, there is a use case for having a
composite index which has the primary key or a unique key as the
prefix column(s) but with extra columns after that. Currently you
would also need another index with exactly the primary/unique key,
which seems like a waste of
On Tue, May 22, 2012 at 10:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
Now that there are index only scans, there is a use case for having a
composite index which has the primary key or a unique key as the
prefix column(s) but with extra columns after
On Tue, May 22, 2012 at 11:01 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, May 22, 2012 at 1:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
Now that there are index only scans, there is a use case for having a
composite index which has the primary key
On Wed, May 23, 2012 at 10:33 AM, Amit Kapila amit.kap...@huawei.com wrote:
I don't think there is a clear picture yet of what benchmark to use for
testing changes here.
I will first try to generate such a scenario (benchmark). I have not yet
thought it through completely.
However, the idea in my mind is
On Wed, May 23, 2012 at 8:36 AM, Amit Kapila amit.kap...@huawei.com wrote:
And besides
if the decrements are decoupled from the allocation requests it's no
longer obvious that the algorithm is even an approximation of LRU.
I was trying to highlight that we can do the clocksweep in bgwriter and
On Wed, May 23, 2012 at 11:40 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
One thing I wanted to play with is having newly read buffers get a
usage count of 0 rather than 1. The problem is that there is no way
to test it in enough different situations
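The tradeoff in that last snippet can be sketched with a toy clocksweep (the class, its method names, and the usage-count cap of 5 are illustrative assumptions, not PostgreSQL's actual freelist code): with an initial usage count of 1, every eviction must first sweep counts back down to zero, while an initial count of 0 lets never-referenced newcomers be replaced with no extra sweeping.

```python
class ClockSweep:
    """Toy clock-sweep buffer pool; a simplified sketch, not freelist.c."""

    def __init__(self, nbuffers, initial_count):
        self.pages = [None] * nbuffers
        self.usage = [0] * nbuffers
        self.hand = 0
        self.initial_count = initial_count  # usage count given on first read
        self.steps = 0                      # decrements done while sweeping

    def access(self, page):
        if page in self.pages:              # hit: bump the usage count
            i = self.pages.index(page)
            self.usage[i] = min(self.usage[i] + 1, 5)
            return
        while self.usage[self.hand] > 0:    # sweep for a zero-count victim
            self.usage[self.hand] -= 1
            self.steps += 1
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page        # evict and install the new page
        self.usage[self.hand] = self.initial_count
        self.hand = (self.hand + 1) % len(self.pages)
```

Running a stream of never-repeated pages through a small pool shows the difference: with `initial_count=1` the sweep must do decrement work on every miss, with `initial_count=0` it does none.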
On Mon, May 21, 2012 at 9:22 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Sat, May 19, 2012 at 1:23 AM, Jeff Janes jeff.ja...@gmail.com wrote:
I've been testing the crash recovery of REL9_2_BETA1, using the same
method I posted in the Scaling XLog insertion thread. I have the
checkpointer
On Wed, May 23, 2012 at 1:10 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
It looks to me like the SIGQUIT from the postmaster is simply getting
lost. And from what little I understand of signal handling, this is a
known race with system(3
On Thu, May 24, 2012 at 6:24 AM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
Hi,
I've been running some tests on pg 9.2beta1 and in particular a set
of queries like
...
And I noticed that when I run the query like the one shown above in parallel
(in multiple connections for ZZZ=0...8) the
On Wed, May 23, 2012 at 2:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I wrote:
Jeff Janes jeff.ja...@gmail.com writes:
But what happens if the SIGQUIT is blocked before the system(3) is
invoked? Does the ignore take precedence over the block, or does the
block take precedence over the ignore
On Thu, May 24, 2012 at 12:46 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Thu, May 24, 2012 at 2:24 PM, Merlin Moncure mmonc...@gmail.com wrote:
As you can see, raw performance isn't much worse with the larger data
sets, but scalability at high connection counts is severely degraded
once
On Thu, May 24, 2012 at 3:36 PM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
Hi,
On Thu, 24 May 2012, Robert Haas wrote:
Not sure. It might be some other LWLock, but it's hard to tell which
one from the information provided.
If you could tell me what's the best way to find out the info
On Thu, May 24, 2012 at 4:26 PM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
On Thu, 24 May 2012, Jeff Janes wrote:
Add
#define LWLOCK_STATS
near the top of:
src/backend/storage/lmgr/lwlock.c
and recompile and run a reduced-size workload. When the processes
exit, they will dump a lot
If I invoke vacuum manually and do so with VacuumCostDelay == 0, I
have basically declared my intentions to get this pain over with as
fast as possible even if it might interfere with other processes.
Under that condition, shouldn't it use BAS_BULKWRITE rather than
BAS_VACUUM? The smaller ring
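The ring-size tradeoff raised here can be sketched with a toy model (the function and its flush rule are simplifying assumptions, not the actual buffer-manager code): reusing a dirty buffer requires WAL to be flushed up through that buffer's record first, so a smaller ring forces those flushes more often over the same scan.

```python
def wal_flushes(npages, ring_size):
    """Count WAL flushes while a scan dirties npages through a ring."""
    ring = [None] * ring_size   # each slot holds the LSN of its dirty page
    flushed_lsn = 0             # how far WAL has been flushed so far
    flushes = 0
    for lsn in range(1, npages + 1):
        slot = lsn % ring_size
        old = ring[slot]
        # Reusing a dirty buffer whose WAL record isn't flushed yet
        # forces a flush up to the current insert point.
        if old is not None and old > flushed_lsn:
            flushed_lsn = lsn
            flushes += 1
        ring[slot] = lsn
    return flushes
```

In this toy model, a scan of the same length triggers far fewer WAL flushes through a 64-buffer ring than through a 4-buffer one.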
On Tue, May 29, 2012 at 5:19 PM, Robert Haas robertmh...@gmail.com wrote:
I ran a SELECT-only pgbench test today on the IBM POWER7 box with 64
concurrent clients and got roughly 305,000 tps. Then, I created a
hash index on pgbench_accounts (aid), dropped the primary key, and
reran the test.
On Wed, May 30, 2012 at 4:10 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 30.05.2012 03:40, Sergey Koposov wrote:
I was running some tests on PG9.2beta where I'm creating and dropping
a large number of tables (~ 2).
And I noticed that table dropping was extremely
Currently the resource owner does not remember what locks it holds.
When a resource owner wants to release its locks or reassign them to
its parent, it just digs through the backends entire
LockMethodLocalHash table. When that table is very large, but the
current owner owns only a small fraction
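A minimal sketch of the cost difference described above (simplified data structures, not the actual resowner.c/lock.c code): releasing one owner's locks by digging through the whole local lock table costs O(total locks held by the backend) even when that owner holds only a few, while a per-owner list costs only O(locks held by that owner).

```python
class ResourceOwner:
    def __init__(self):
        self.held = []                  # lock tags this owner remembered

def release_by_full_scan(local_lock_table, owner):
    """Drop this owner's locks by scanning every entry; return entries scanned."""
    scanned = len(local_lock_table)     # touches every entry in the table
    for tag in [t for t, o in local_lock_table.items() if o is owner]:
        del local_lock_table[tag]
    return scanned

def release_by_owner_list(local_lock_table, owner):
    """Drop this owner's locks from its own remembered list; return entries scanned."""
    scanned = len(owner.held)           # touches only this owner's locks
    for tag in owner.held:
        del local_lock_table[tag]
    owner.held.clear()
    return scanned
```

With 1000 locks held by one owner and 2 by another, releasing the small owner scans 1002 entries the first way and 2 the second; repeated once per owner, the first way goes quadratic.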
On Wed, May 30, 2012 at 9:56 AM, Bruce Momjian br...@momjian.us wrote:
As part of a blog, I started looking at how a user could measure the
pressure on shared buffers, e.g. how much are they being used, recycled,
etc.
The way you normally do it on older operating systems is to see how
many
On Wed, May 30, 2012 at 10:57 AM, Bruce Momjian br...@momjian.us wrote:
On Wed, May 30, 2012 at 10:38:10AM -0700, Jeff Janes wrote:
On Wed, May 30, 2012 at 9:56 AM, Bruce Momjian br...@momjian.us wrote:
As part of a blog, I started looking at how a user could measure the
pressure on shared
On Wed, May 30, 2012 at 11:23 AM, Bruce Momjian br...@momjian.us wrote:
On Wed, May 30, 2012 at 11:06:45AM -0700, Jeff Janes wrote:
On Wed, May 30, 2012 at 10:57 AM, Bruce Momjian br...@momjian.us wrote:
On Wed, May 30, 2012 at 10:38:10AM -0700, Jeff Janes wrote:
Isn't that what
On Wed, May 30, 2012 at 11:45 AM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
On Wed, 30 May 2012, Merlin Moncure wrote:
Hm, why aren't we getting an IOS? Just for kicks (assuming this is
test data), can we drop the index on just transitid, leaving the index
on transitid, healpixid? Is
On Wed, May 30, 2012 at 2:55 PM, Bruce Momjian br...@momjian.us wrote:
On Wed, May 30, 2012 at 11:51:23AM -0700, Jeff Janes wrote:
On Wed, May 30, 2012 at 11:23 AM, Bruce Momjian br...@momjian.us wrote:
On Wed, May 30, 2012 at 11:06:45AM -0700, Jeff Janes wrote:
On Wed, May 30, 2012 at 10:57
On Wed, May 30, 2012 at 4:16 PM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
But the question now is whether there is a *PG* problem here or not, or is
it Intel's or Linux's problem? Because the slowdown was still caused by
locking. If there wouldn't be locking there wouldn't be any problems
On Wed, May 30, 2012 at 7:00 PM, Stephen Frost sfr...@snowman.net wrote:
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
On Wed, May 30, 2012 at 9:10 PM, Sergey Koposov kopo...@ast.cam.ac.uk
wrote:
I understand the need for significant locking when there are concurrent writes,
but not
On Sun, Jun 19, 2011 at 3:30 PM, Greg Smith g...@2ndquadrant.com wrote:
I applied Jeff's patch but changed this to address concerns about the
program getting stuck running for too long in the function:
#define plpgsql_loops 512
This would be better named as plpgsql_batch_size or something
On Sun, May 27, 2012 at 11:45 AM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
Hi,
I did another test using the same data and the same code, which I've
provided before and the performance of the single thread seems to be
degrading quadratically with the number of threads.
Here are the
On Thu, May 31, 2012 at 9:17 AM, Robert Haas robertmh...@gmail.com wrote:
Oh, ho. So from this we can see that the problem is that we're
getting huge amounts of spinlock contention when pinning and unpinning
index pages.
It would be nice to have a self-contained reproducible test case for
On Thu, May 31, 2012 at 11:50 AM, Robert Haas robertmh...@gmail.com wrote:
This test case is unusual because it hits a whole series of buffers
very hard. However, there are other cases where this happens on a
single buffer that is just very, very hot, like the root block of a
btree index,
On Thu, May 31, 2012 at 11:09 AM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
On Thu, 31 May 2012, Simon Riggs wrote:
That struck me as a safe and easy optimisation. This was a problem I'd
been trying to optimise for 9.2, so I've written a patch that appears
simple and clean enough to be
On Wed, May 30, 2012 at 6:10 PM, Sergey Koposov kopo...@ast.cam.ac.uk wrote:
On Wed, 30 May 2012, Jeff Janes wrote:
But anyway, is idt_match a fairly static table? If so, I'd partition
that into 16 tables, and then have each one of your tasks join against
a different one of those tables
On Fri, Jun 1, 2012 at 10:51 AM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jun 1, 2012 at 8:47 AM, Florian Pflug f...@phlo.org wrote:
We'd drain the unpin queue whenever we don't expect a PinBuffer() request
to happen for a while. Returning to the main loop is an obvious such place,
On Thu, May 31, 2012 at 5:04 AM, Simon Riggs si...@2ndquadrant.com wrote:
On 30 May 2012 12:10, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Hmm, we do this in smgrDoPendingDeletes:
for (i = 0; i <= MAX_FORKNUM; i++)
{
smgrdounlink(srel, i, false);
}
So we drop
On Wed, May 23, 2012 at 11:05 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, May 23, 2012 at 2:00 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm running some tests where I mix the work load of pgbench by doing
TPC-B (sort of) transaction mixed in with a variable number of
SELECT-only
As discussed in several different email threads here and on
performance, when using pg_dump on a large number of objects, the
server has a quadratic behavior in LockReassignCurrentOwner where it
has to dig through the entire local lock table to push one or two
locks up from the portal being
On Sun, Jun 10, 2012 at 11:28 PM, Amit Kapila amit.kap...@huawei.com wrote:
I have a few doubts regarding the logic of ResourceOwnerRememberLock() and
ResourceOwnerForgetLock():
1. In function ResourceOwnerRememberLock(), when lock count is
MAX_RESOWNER_LOCKS, it will not add the lock to lock array
On Wed, May 30, 2012 at 3:14 PM, Robert Haas robertmh...@gmail.com wrote:
I developed the attached patch to avoid taking a heavyweight lock on
the metapage of a hash index. Instead, an exclusive buffer content
lock is viewed as sufficient permission to modify the metapage, and a
shared buffer
On Mon, Jun 11, 2012 at 9:30 PM, Amit Kapila amit.kap...@huawei.com wrote:
Yes, that means the list has over-flowed. Once it is over-flowed, it
is now invalid for the remainder of the life of the resource owner.
Don't we need any logic to clear the reference of the locallock in the owner->locks
array?
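The overflow behavior under discussion can be sketched as follows (the names and the cap value are assumptions drawn from the thread, not the committed code): once the per-owner array overflows, it is cleared and treated as invalid for the rest of the owner's life, so there is nothing left in it to forget.

```python
MAX_RESOWNER_LOCKS = 10   # assumed cap, per the thread

class ResourceOwner:
    def __init__(self):
        self.locks = []           # small cache of locks this owner holds
        self.overflowed = False   # once set, the cache stays invalid

    def remember_lock(self, lock):
        if self.overflowed:
            return                # cache already invalid; track nothing
        if len(self.locks) >= MAX_RESOWNER_LOCKS:
            self.overflowed = True    # too many: give up on the cache
            self.locks.clear()        # its contents can no longer be trusted
            return
        self.locks.append(lock)

    def forget_lock(self, lock):
        if self.overflowed:
            return                # can't consult the cache anymore
        self.locks.remove(lock)
```

Under this scheme no per-entry clearing is needed after overflow: release must fall back to scanning the full local lock table anyway.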
On Fri, Jun 15, 2012 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
On Mon, Jun 11, 2012 at 9:30 PM, Amit Kapila amit.kap...@huawei.com wrote:
MAX_RESOWNER_LOCKS - How did you arrive at the number 10 for it? Is there any
specific reason for 10?
I
On Thu, Jun 14, 2012 at 2:39 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jan 11, 2012 at 8:48 PM, Robert Haas robertmh...@gmail.com wrote:
I've had cause, a few times this development cycle, to want to measure
the amount of spinning on each lwlock in the system. To that end,
I've
On Thu, Jun 14, 2012 at 3:42 PM, Peter Eisentraut pete...@gmx.net wrote:
Here is my first patch for the transforms feature. This is a mechanism
to adapt data types to procedural languages. The previous proposal was
here: http://archives.postgresql.org/pgsql-hackers/2012-05/msg00728.php
When
On Sat, Jun 16, 2012 at 7:15 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Thu, Jun 14, 2012 at 3:42 PM, Peter Eisentraut pete...@gmx.net wrote:
Here is my first patch for the transforms feature. This is a mechanism
to adapt data types to procedural languages. The previous proposal was
here
On Sat, Jun 16, 2012 at 9:00 PM, nik9...@gmail.com wrote:
I've always used -1-f - file.sql. It is confusing that -1 doesn't warn you
when it won't work, though.
Yeah, I just got bitten by that one. Definitely violates the POLA.
Cheers,
Jeff
--
Sent via pgsql-hackers mailing list
There was a regression introduced in 9.2 that affects the creation and
loading of lots of small tables in a single transaction.
It affects the loading of a pg_dump file which has a large number of
small tables (10,000 schemas, one table per schema, 10 rows per
table). I did not test other schema
On Tue, Jun 19, 2012 at 2:38 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jun 19, 2012 at 4:33 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jun 18, 2012 at 8:42 PM, Jeff Janes jeff.ja...@gmail.com wrote:
There was a regression introduced in 9.2 that affects the creation
On Tue, Jun 19, 2012 at 8:06 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jun 19, 2012 at 10:56 PM, Jeff Janes jeff.ja...@gmail.com wrote:
But in the 9.2 branch, the slow phenotype was re-introduced in
1575fbcb795fc331f4, although perhaps the details of who is locking
what differs. I
On Thu, Jun 21, 2012 at 5:32 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 18.06.2012 13:59, Heikki Linnakangas wrote:
On 10.06.2012 23:39, Jeff Janes wrote:
I found the interface between resowner.c and lock.c a bit confusing.
resowner.c would sometimes call
On Mon, Jun 18, 2012 at 5:42 PM, Robert Haas robertmh...@gmail.com wrote:
Hmm. That was actually a gloss I added on existing code to try to
convince myself that it was safe; I don't think that the changes I
made make that any more or less safe than it was before.
Right, sorry. I thought
On Tue, Jun 26, 2012 at 3:58 PM, Nils Goroll sl...@schokola.de wrote:
It's
still unproven whether it'd be an improvement, but you could expect to
prove it one way or the other with a well-defined amount of testing.
I've hacked the code to use adaptive pthread mutexes instead of spinlocks. see
On Thu, Jun 28, 2012 at 5:16 AM, David E. Wheeler da...@justatheory.com wrote:
Hackers,
Very interesting design document for SQLite 4:
http://www.sqlite.org/src4/doc/trunk/www/design.wiki
I'm particularly intrigued by covering indexes. For example:
CREATE INDEX cover1 ON table1(a,b)
On Thu, Jun 28, 2012 at 8:26 AM, Robert Haas robertmh...@gmail.com wrote:
3. Consider adjusting the logic inside initdb. If this works
everywhere, the code for determining how to set shared_buffers should
become pretty much irrelevant. Even if it only works some places, we
could add 64MB or
On Thu, Jun 28, 2012 at 9:12 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Tom Lane's message of Thu Jun 28 12:07:58 -0400 2012:
When this came up a couple weeks ago, the argument that was made for it
was that you could attach non-significant columns to an index that *is*
On Fri, Jun 29, 2012 at 10:07 AM, Nils Goroll sl...@schokola.de wrote:
On 06/28/12 05:21 PM, Jeff Janes wrote:
It looks like the hacked code is slower than the original. That
doesn't seem so good to me. Am I misreading this?
No, you are right - in a way. This is not about maximizing tps
On Wed, Jun 20, 2012 at 12:32 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 01.06.2012 03:02, Jeff Janes wrote:
I've attached a new patch which addresses several of your concerns,
and adds the documentation. The description is much longer than the
descriptions of other
On Thu, Jun 28, 2012 at 6:57 PM, Josh Berkus j...@agliodbs.com wrote:
A second obstacle to opportunistic wraparound vacuum is that
wraparound vacuum is not interruptible. If you have to kill it off and
do something else for a couple hours, it can't pick up where it left
off; it needs to scan
On Sun, Jul 1, 2012 at 2:28 PM, Nils Goroll sl...@schokola.de wrote:
Hi Jeff,
It looks like the hacked code is slower than the original. That
doesn't seem so good to me. Am I misreading this?
No, you are right - in a way. This is not about maximizing tps, this is
about
maximizing
On Sun, May 22, 2011 at 3:10 PM, Robert Haas robertmh...@gmail.com wrote:
...
However, in this case, there was only one client, so that's not the
problem. I don't really see how to get a big win here. If we want to
be 4x faster, we'd need to cut time per query by 75%. That might
require 75
If you use pgbench -S -M prepared at a scale where all data fits in
memory, most of what you are benchmarking is network/IPC chatter, and
table locking. Which is fine if that is what you want to do. This
patch adds a new transaction type of -P, which does the same thing as
-S but it moves the
On Sun, May 15, 2011 at 11:19 AM, Robert Haas robertmh...@gmail.com wrote:
I don't think there's any need for this to get data into
shared_buffers at all. Getting it into the OS cache oughta be plenty
sufficient, no?
ISTM that a very simple approach here would be to save the contents of
On Mon, Jun 6, 2011 at 11:27 AM, Simon Riggs si...@2ndquadrant.com wrote:
But that even assumes we write the unzeroed data at the end of the
buffer. We don't. We only write data up to the end of the WAL record
on the current page, unless we do a continuation record,
I see no codepath in
On Sun, May 29, 2011 at 7:04 PM, Greg Smith g...@2ndquadrant.com wrote:
On 05/29/2011 03:11 PM, Jeff Janes wrote:
If you use pgbench -S -M prepared at a scale where all data fits in
memory, most of what you are benchmarking is network/IPC chatter, and
table locking.
If you profile
On Sun, Jun 12, 2011 at 2:39 PM, Robert Haas robertmh...@gmail.com wrote:
...
Profiling reveals that the system spends enormous amounts of CPU time
in s_lock. LWLOCK_STATS reveals that the only lwlock with significant
amounts of blocking is the BufFreelistLock;
This is curious. Clearly the
On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
...
so it seems that sysbench actually has significantly less overhead than
pgbench, and the lower throughput at the higher concurrency seems to be
caused by sysbench being able to stress the backend even more
On Mon, Jun 13, 2011 at 9:09 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
I noticed that pgbench's doCustom (the function highest in the profile
posted) returns doing nothing if the connection is supposed to be
sleeping; seems an open door for busy waiting. I didn't check the
rest of
On Sun, Jun 19, 2011 at 3:30 PM, Greg Smith g...@2ndquadrant.com wrote:
...
Things to fix in the patch before it would be a commit candidate:
-Adjust the loop size/name, per above
-Reformat some of the longer lines to try and respect the implied right
margin in the code formatting
-Don't
On Tue, Jun 28, 2011 at 10:21 AM, Alexander Korotkov
aekorot...@gmail.com wrote:
Actually, there is no more direct need of this patch because I've rewritten
the insert function for fast build. But there are still two points for having
these changes:
1) As it was noted before, it simplifies code a
On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
On 06/13/2011 01:55 PM, Stefan Kaltenbrunner wrote:
[...]
all those tests are done with pgbench running on the same box - which
has a noticeable impact on the results because pgbench is using ~1 core
per 8
On Sun, Jun 19, 2011 at 9:34 PM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
On Mon, Jun 20, 2011 at 07:30, Greg Smith g...@2ndquadrant.com wrote:
I applied Jeff's patch but changed this to address concerns about the
program getting stuck running for too long in the function:
#define
On Wed, Aug 3, 2011 at 11:21 AM, Robert Haas robertmh...@gmail.com wrote:
About nine months ago, we had a discussion of some benchmarking that
was done by the mosbench folks at MIT:
http://archives.postgresql.org/pgsql-hackers/2010-10/msg00160.php
Although the authors used PostgreSQL as a
On Wed, Aug 3, 2011 at 3:21 PM, Jim Nasby j...@nasby.net wrote:
On Aug 3, 2011, at 1:21 PM, Robert Haas wrote:
1. We configure PostgreSQL to use a 2 Gbyte application-level cache
because PostgreSQL protects its free-list with a single lock and thus
scales poorly with smaller caches. This is a
On Tue, Jan 29, 2013 at 3:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
David Rowley dgrowle...@gmail.com writes:
If pg_dump was to still follow the dependencies of objects, would there be
any reason why it shouldn't back up larger tables first?
Pretty much every single discussion/complaint about
On Wed, Jan 30, 2013 at 6:55 AM, Andres Freund and...@2ndquadrant.com wrote:
c.f.
vacuum_set_xid_limits:
/*
 * Determine the table freeze age to use: as specified by the caller,
 * or vacuum_freeze_table_age, but in any case not more than
On Fri, Feb 1, 2013 at 2:34 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-02-01 14:05:46 -0800, Jeff Janes wrote:
As far as I can tell this bug kicks in when your cluster gets to be
older than freeze_min_age, and then lasts forever after. After that
point pretty much every auto
On Monday, January 28, 2013, Kevin Grittner wrote:
IMO, anything which changes an anti-wraparound vacuum of a
bulk-loaded table from "read the entire table and rewrite nearly
the complete table with WAL-logging" to "rewriting a smaller portion
of the table with WAL-logging" is an improvement.
On Sat, Jan 26, 2013 at 11:25 PM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
On Thu, Jan 24, 2013 at 9:31 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Thu, Jan 24, 2013 at 1:28 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
Good idea. Even though the cost of pinning/unpinning may
On Sat, Jan 5, 2013 at 8:03 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 3.1.2013 20:33, Magnus Hagander wrote:
Yeah, +1 for a separate directory not in global.
OK, I moved the files from global/stat to stat.
This has a warning:
pgstat.c:5132: warning: 'pgstat_write_statsfile_needed' was used
On Sat, Jan 5, 2013 at 8:03 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 3.1.2013 20:33, Magnus Hagander wrote:
Yeah, +1 for a separate directory not in global.
OK, I moved the files from global/stat to stat.
Why stat rather than pg_stat?
The existence of global and base as exceptions already
On Sat, Feb 2, 2013 at 2:33 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Jan 5, 2013 at 8:03 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 3.1.2013 20:33, Magnus Hagander wrote:
Yeah, +1 for a separate directory not in global.
OK, I moved the files from global/stat to stat.
This has
On Sun, Feb 3, 2013 at 9:25 AM, Kevin Grittner kgri...@ymail.com wrote:
I was able to confirm two cases where this was a consequence of the
lazy truncate logic which Jan recently fixed, but there are clearly
other problems which I didn't have much of a grasp on prior to this
thread. The only
On Sat, Feb 2, 2013 at 5:25 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-02-01 15:09:34 -0800, Jeff Janes wrote:
On Fri, Feb 1, 2013 at 2:34 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-02-01 14:05:46 -0800, Jeff Janes wrote:
As far as I can tell this bug kicks in when
On Sun, Feb 3, 2013 at 4:51 PM, Tomas Vondra t...@fuzzy.cz wrote:
LOG: last_statwrite 11133-08-28 19:22:31.711744+02 is later than
collector's time 2013-02-04 00:54:21.113439+01 for db 19093
WARNING: pgstat wait timeout
LOG: last_statwrite 39681-12-23 18:48:48.9093+01 is later than
On Tue, Feb 5, 2013 at 2:31 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 5.2.2013 19:23, Jeff Janes wrote:
If I shut down the server and blow away the stats with rm
data/pg_stat/*, it recovers gracefully when I start it back up. If I
do rm -r data/pg_stat then it has problems the next time I shut
While stress testing Pavan's 2nd pass vacuum visibility patch, I realized
that vacuum/visibility was busted. But it wasn't his patch that busted it.
As far as I can tell, the bad commit was in the
range 692079e5dcb331..168d3157032879
Since a run takes 12 to 24 hours, it will take a while to
On Thu, Feb 7, 2013 at 1:44 AM, Pavan Deolasee pavan.deola...@gmail.com wrote:
On Thu, Feb 7, 2013 at 2:25 PM, Pavan Deolasee pavan.deola...@gmail.com
wrote:
Will look more into it, but thought this might be useful for others to
spot the problem.
And here is some more forensic info about
On Thu, Feb 7, 2013 at 12:55 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
On Thu, Feb 7, 2013 at 11:09 AM, Jeff Janes jeff.ja...@gmail.com wrote:
While stress testing Pavan's 2nd pass vacuum visibility patch, I realized
that vacuum/visibility was busted. But it wasn't his patch
On Thu, Feb 7, 2013 at 9:32 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Thu, Feb 7, 2013 at 12:55 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
Index scans do not return any duplicates and you need to force a seq
scan to see them. Assuming that the page level VM bit might be
corrupted
On Thu, Feb 7, 2013 at 10:09 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
Right. I don't have the database handy at this moment, but earlier in
the day I ran some queries against it and found that most of the
duplicates which are not accessible via indexes have xmin very close
to
On Fri, Feb 8, 2013 at 2:37 AM, Pavan Deolasee pavan.deola...@gmail.com wrote:
I was looking at the vacuum/visibility bug that Jeff Janes reported
and brought up the server with the data directory he has shared. With
his configuration,
3092 2013-02-08 02:30:31.327 PST:LOG: checkpoints
On Thu, Feb 7, 2013 at 8:38 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Alvaro Herrera wrote:
Alvaro Herrera wrote:
Hm, if the foreign key patch is to blame, this sounds like these tuples
had a different set of XMAX hint bits and a different Xmax, and they
were clobbered by
On Fri, Feb 8, 2013 at 7:20 AM, Peter Eisentraut pete...@gmx.net wrote:
On 2/8/13 5:23 AM, Magnus Hagander wrote:
But do you have any actual proof that the problem is that we
lose reviewers because we're relying on email?
Here is one: Me.
Just yesterday I downloaded a piece of software that
On Fri, Feb 8, 2013 at 2:23 AM, Magnus Hagander mag...@hagander.net wrote:
On Fri, Feb 8, 2013 at 1:32 AM, Josh Berkus j...@agliodbs.com wrote:
8. Send it to pgsql-hackers
8.a. this requires you to be subscribed to pgsql-hackers.
No, it does not. It will get caught in the moderation queue
commit 381d4b70a9854a7b5b9f12d828a0824f8564f1e7 introduced some
compiler warnings:
assert.c:26: warning: no previous prototype for 'ExceptionalCondition'
elog.c: In function 'pg_re_throw':
elog.c:1628: warning: implicit declaration of function 'ExceptionalCondition'
elog.c:1630: warning:
On Tue, Feb 5, 2013 at 2:31 PM, Tomas Vondra t...@fuzzy.cz wrote:
We do not already have this. There is no relevant spec. I can't see
how this could need pg_dump support (but what about pg_upgrade?)
pg_dump - no
pg_upgrade - IMHO it should create the pg_stat directory. I don't think
it
While looking at some proposed patches and pondering some questions on
performance, I realized I desperately needed ways to run benchmarks with
different settings without needing to edit postgresql.conf and
restart/reload the server each time.
Most commonly, I want to run with synchronous_commit
On Mon, Feb 18, 2013 at 7:50 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
So here's v11. I intend to commit this shortly. (I wanted to get it
out before lunch, but I introduced a silly bug that took me a bit to
fix.)
On Windows with Mingw I get this:
pgstat.c:4389:8: warning:
On Wed, Feb 20, 2013 at 7:54 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Feb 19, 2013 at 5:48 PM, Simon Riggs si...@2ndquadrant.com wrote:
I agree with Merlin and Joachim - if we have the call in one place, we
should have it in both.
We might want to assess whether we even want to
On Thu, Feb 21, 2013 at 12:39 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Are you talking about the patch to avoid restored WAL segments from being
re-archived (commit 6f4b8a4f4f7a2d683ff79ab59d3693714b965e3d), or the bug
that unarchived WALs were deleted after crash (commit
On Sat, Feb 23, 2013 at 9:00 AM, Jov am...@amutu.com wrote:
when building the head contrib, I get the following error:
make[1]: Entering directory `/data/myenv/postgresql/contrib/pgstattuple'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory
301 - 400 of 1437 matches