populated, but not
completely empty, index pages, which your insertions will then never
reuse because they never again insert values in that key range?
Cheers,
Jeff
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Regards,
Jeff Davis
establishing)
or 33 us/call.
So unless your server is a lot slower than mine, I think your client
may be the bottleneck. What is your client program? what does top
show as the relative CPU usage of your client program vs the postgres
... [local] program to which it is connected?
Cheers,
Jeff
On Thu, Aug 9, 2012 at 4:00 AM, Stefan Keller sfkel...@gmail.com wrote:
Hi
2012/8/8 Jeff Janes jeff.ja...@gmail.com:
On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller sfkel...@gmail.com wrote:
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having
have?
What happens if you set enable_seqscan=off and run the query with
the very large list? (This is an experiment, not a recommendation for
production use)
Cheers,
Jeff
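The experiment Jeff suggests can be sketched like this (the table and IN-list are hypothetical stand-ins, not from the original thread):

```sql
-- Experiment only, not a production setting: forbid sequential scans for
-- this session and re-run the query to see what plan the planner falls
-- back to, and what it estimates that plan would cost.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM some_table WHERE id IN (1, 2, 3 /* ... very large list ... */);
RESET enable_seqscan;
```

Comparing the two plans' estimated and actual times shows whether the planner's preference for the seqscan was justified.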
On Tue, Aug 7, 2012 at 9:06 AM, Ioannis Anagnostopoulos
ioan...@anatec.com wrote:
On 07/08/2012 17:00, Jeff Janes wrote:
What happens if you set enable_seqscan=off and run the query with
the very large list? (This is an experiment, not a recommendation for
production use)
Cheers,
Jeff
http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php
From: Jeff Davis
It got me curious enough that I tested it, and apparently dropping a
trigger locks the table. Any actions on that table must wait until the
transaction that drops the trigger finishes.
So, technically my system works
On Tue, Aug 7, 2012 at 2:39 PM, Craig James cja...@emolecules.com wrote:
On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure mmonc...@gmail.com wrote:
IF current_user = 'bulk_writer' THEN
return new;
END IF;
expensive stuff
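The quoted trigger logic, expanded into a complete function (function, table, and role names are illustrative, not from the original post):

```sql
-- Hypothetical sketch: skip the expensive trigger work when the bulk
-- loader role is the one inserting.
CREATE OR REPLACE FUNCTION audit_row() RETURNS trigger AS $$
BEGIN
    IF current_user = 'bulk_writer' THEN
        RETURN new;              -- bulk loads bypass the expensive path
    END IF;
    -- expensive stuff happens here for ordinary sessions
    RETURN new;
END;
$$ LANGUAGE plpgsql;
```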
school and 1 pharmacy within distance. There can't be >1
unless there is >=1, but the join possibilities for >=1 (i.e. where
exists rather than where (select count(*)...) >= 1) are much more
attractive than the ones for >1.
Cheers,
Jeff
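The rewrite Jeff alludes to might look like this (the schema is a hypothetical sketch):

```sql
-- Counting all matches just to test for "at least one" forces the
-- subquery to scan every matching row:
SELECT p.id FROM places p
WHERE (SELECT count(*) FROM schools s WHERE s.near = p.id) >= 1;

-- An EXISTS is semantically the same test, but lets the planner stop at
-- the first match and opens up semi-join plans:
SELECT p.id FROM places p
WHERE EXISTS (SELECT 1 FROM schools s WHERE s.near = p.id);
```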
caching than the other.
Did you try running the queries in alternating order, to average out
caching effects?
Could you run the explain (analyze, buffers) on those to get a
better picture of the buffer effects?
Cheers,
Jeff
! Many thanks. But how comes, the queries are also
fast
after a restore without the cluster?
Probably fewer buffers needed to be touched.
Running explain (analyze, buffers) would give information on how
many buffers were touched.
Cheers,
Jeff
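For reference, the form of the command being suggested (the query itself is illustrative):

```sql
-- (ANALYZE, BUFFERS) reports shared buffers hit vs. read for each plan
-- node, showing how much of the work was served from cache.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table WHERE some_col = 42;
```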
for every 4 or 5 leaf pages,
you still get substantial improvement by doing so in sorted order.
Cheers,
Jeff
increase the insert performance by imposing a small cost on each
query. The total cost is at least as great, but you have re-arranged
how the cost is amortized into a more acceptable shape.
Cheers,
Jeff
maintenance on an index) it
often doesn't even attempt to cost that at all as there is no choice.
So it is not just a matter of units.
Cheers,
Jeff
many rounds of truncation does one rake do? I.e. how many
truncations are occurring over the course of that 1 minute or 15
minutes?
Cheers,
Jeff
is PGC_SIGHUP it is not clear how you could
safely turn it back on.
Cheers,
Jeff
are.
Cheers,
Jeff
On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET scail...@alaloop.com wrote:
Hi,
Thank you all for your help.
@Jeff : my daemon creates these tables at start time so it doesn't do
anything else at the same time. The CPU is loaded between 20% and 25%.
How does it decide which tables
, too.
Sounds like this hack may become unnecessary in 9.2 though.
Index only scans in 9.2 are nice, but they don't fundamentally change
this type of thing.
Cheers,
Jeff
RAM back when you were running PG 8.3?
Any suggestions. I'm willing and able to profile, or whatever.
How much RAM do you have? What are your settings for /proc/sys/vm/dirty_* ?
Cheers,
Jeff
that you can TRUNCATE many tables in one run, right?
TRUNCATE TABLE a, b, c, d, e, f, g;
This still calls DropRelFileNodeAllBuffers once for each table (and
each index), even if the table is empty.
With large shared_buffers, this can be relatively slow.
Cheers,
Jeff
in a single folder is
too many for the OS?
I doubt that that is a problem on any reasonably modern Linux.
Cheers,
Jeff
you if it is coming from
the shared_buffers cache.
It is harder to see if it is coming from the file system cache. If
the server is mostly idle other than your stuff, you can run vmstat
and see how much physical IO is caused by your activity.
Cheers,
Jeff
On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii is...@postgresql.org wrote:
On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii is...@postgresql.org wrote:
Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
management in the server. What I fixed so far on the pg_dump side
should
where the index doesn't fit in memory. Mark, can you please try your
experiments on the 9.2beta and tell us whether that helps you?
Regards,
Jeff Davis
took 13 minutes in total, now
it only takes 3 seconds. Comments?
Could you rebase this? I tried doing it myself, but must have messed
it up because it got slower rather than faster.
Thanks,
Jeff
backup
first!):
http://www.postgresql.org/docs/9.1/static/app-pgresetxlog.html
And try setting the current transaction ID to just before the delete
ran. Then you may be able to use pg_dump or otherwise export the deleted
rows.
Regards,
Jeff Davis
pg_dump with lots of objects. Using a
preliminary version for this, in conjunction with -Fc, reduced the
dump time to 3.5 minutes.
Cheers,
Jeff
On Thu, May 24, 2012 at 8:54 PM, Bruce Momjian br...@momjian.us wrote:
On Thu, May 24, 2012 at 08:20:34PM -0700, Jeff Janes wrote:
pg_dump itself seems to have a lot of quadratic portions (plus another
one on the server which it hits pretty heavily), and it is hard to know
where to start
| psql -d foo > /dev/null ;
time pg_dump foo -Fc -n foo1 | wc -c;
done dump_one_schema_timing
To show the overall dump speed problem, drop the -n foo1, and change
the step size from 1/ down to 1000/999
Cheers,
Jeff
On Fri, May 25, 2012 at 9:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
For dumping entire databases, it looks like the biggest problem is
going to be LockReassignCurrentOwner in the server. And that doesn't
seem to be easy to fix, as any change
On Fri, May 25, 2012 at 1:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
For dumping entire databases, it looks like the biggest problem is
going to be LockReassignCurrentOwner in the server. And that doesn't
seem to be easy to fix, as any change
. If they try to do conflicting things, they don't
continue operating concurrently.
Cheers,
Jeff
to know--whether the total number of
relations changed, or just their distribution amongst the schemata.
Cheers,
Jeff
unmanageable.
Cheers,
Jeff
try -j 16 or -j 32?
Also, -t 2000 is mighty low.
Cheers,
Jeff
On Fri, Apr 6, 2012 at 3:09 PM, Kim Hansen k...@rthansen.dk wrote:
Hi all
On Fri, Apr 6, 2012 at 19:11, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Apr 4, 2012 at 6:47 AM, Kim Hansen k...@rthansen.dk wrote:
Hi All
I have a query where the planner makes a wrong cost estimate, it looks
all the needed buffers into cache,
and the index scan then got to enjoy that cache.
Cheers,
Jeff
that makes it much faster, TCP must not have been your
bottleneck.
Cheers,
Jeff
=off to simulate a BBU I
have no trouble hitting 11k tps on a single SATA disk.
fsync=off might be a better way to simulate a BBU.
Cheers,
Jeff
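For experiments, the session-level knob can be set in plain SQL; fsync itself is server-wide (hypothetical sketch, never for data you care about):

```sql
-- synchronous_commit can be toggled per session, which makes it a safer
-- and easier experiment than fsync = off in postgresql.conf:
SET synchronous_commit = off;
-- With fsync = off a crash can corrupt the whole cluster; with
-- synchronous_commit = off a crash can only lose the last few commits.
```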
is not
aware of that. So the whole thing is rather sub-optimal.
Cheers,
Jeff
On Wed, Feb 29, 2012 at 7:28 AM, Stefan Keller sfkel...@gmail.com wrote:
2012/2/29 Stefan Keller sfkel...@gmail.com:
2012/2/29 Jeff Janes jeff.ja...@gmail.com:
It's quite possible the vacuum full is thrashing your disk cache due
to maintenance_work_mem. You can overcome this issue
in the OS cache and into shared_buffers.
Doesn't it use a ring buffer strategy, so it would load to OS, but
probably not to shared_buffers?
Cheers,
Jeff
remains in cache (although I've seen nfs
implementations where that was not the case).
It is hard to figure out what problem you are facing. Is your data
not getting loaded into cache, or is it not staying there?
Cheers,
Jeff
On Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller sfkel...@gmail.com wrote:
Hi Jeff and Wales,
2012/2/26 Jeff Janes jeff.ja...@gmail.com wrote:
The problem is that the initial queries are too slow - and there is no
second chance. I do have to trash the buffer every night. There is
enough main
. But it wouldn't help you get the
indexes into cache.
Cheers,
Jeff
.
But how long should each setting be tested for? If a different
setting causes certain index to start being used, then performance
would go down until those indexes get cached and then increase from
there. But how long is long enough to allow this to happen?
Thanks,
Jeff
need.
Maybe with the timing = off feature, it might make sense to just
preemptively analyze everything.
Cheers,
Jeff
production situation.
Cheers,
Jeff
of records: approx 227 million, comprising 16GB of storage.
Why the huge discrepancy?
Maintaining indices when rows are inserted in a random order generates
a huge amount of scattered I/O.
Cheers,
Jeff
greatly on your IO subsystem.
Cheers,
Jeff
you provide.
Cheers,
Jeff
On Mon, Jan 30, 2012 at 9:46 AM, Saurabh saurabh@gmail.com wrote:
Thank you for the information.
Schema of table is:
ID bigint
company_name text
data_set text
time timestamp
Date date
Length of
not have control over that.
Hi Andy: As of now, there are no triggers in the table.
What about indexes?
Cheers,
Jeff
as
possible during brief bursts, you are probably just setting yourself
up for this type of issue. (The brief bursts can be useful if they
make better use of cache, but then you have to accept that other
things will be disrupted during those bursts.)
Cheers,
Jeff
have to compete for buffer space with other things going on, then
using the index scan could be catastrophic.
Cheers,
Jeff
On Wed, Dec 21, 2011 at 6:30 PM, Benjamin Johnson
benjamin.john...@getcarbonblack.com wrote:
Jeff,
Sorry for the delayed response. Please see (some) answers inline.
On 12/1/2011 9:06 AM, Jeff Janes wrote:
On Wed, Nov 30, 2011 at 7:27 AM, Benjamin Johnson
Why shove it in as fast as you can
, and deal with maintenance of the older
partitions, then you will need a lot of spare capacity, relative to
just inserting, to do all of those things. Do you have windows where
there is less insert activity in which other things can get done?
Cheers,
Jeff
away, and they might end up getting lost.
Regards,
Jeff Davis
a few more retroactively may not be a big deal.
Cheers,
Jeff
)? Fundamentally,
these records aren't changing, you are just trying to interpret them in
the context of the current day. That should be done using a SELECT, not
an UPDATE.
Regards,
Jeff Davis
versa depending on which it thought would be faster.
Although those two queries are semantically the same (I think), a lot of
very similar pairs of queries are not equivalent. For instance, if it
was a NOT IN you couldn't change that to a NOT EXISTS.
Regards,
Jeff Davis
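A hypothetical illustration of why the NOT IN / NOT EXISTS pair is not interchangeable when NULLs are involved (tables `a` and `b` are made up):

```sql
-- If any b.x is NULL, "a.x NOT IN (...)" evaluates to NULL (unknown) for
-- every row of a, so this returns no rows at all:
SELECT * FROM a WHERE a.x NOT IN (SELECT b.x FROM b);

-- NOT EXISTS ignores the NULLs and still returns the true non-matches:
SELECT * FROM a WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.x = a.x);
```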
splits during a scan of the same bucket. This
would probably be easy if there were no overflow pages. But the
overflow pages get shuffled in with each other and with the main
bucket page during a split. It would take quite some gymnastics to
get around that.
Cheers,
Jeff
On Sun, Sep 18, 2011 at 7:59 AM, Stefan Keller sfkel...@gmail.com wrote:
Merlin and Jeff,
General remark again: It's hard for me to imagine that btree is
superior for all the issues mentioned before. I still believe in hash
index for primary keys and certain unique constraints where you need
not just distribute those 32/(5 to 10) bits to the ordinary
hash space, increasing them from 32 to 35 bits, rather than creating a
separate hash space? Doesn't that get you the same resolving power?
Cheers,
Jeff
(or faster) than our PG 8.4 instance.
Can you include buffers in your explain analyze? Also, what is the
plan for the new server when hashjoin is disabled?
What if you lower random_page_cost to 1 (or to whatever value seq_page_cost is)?
Cheers,
Jeff
at all, only of inserts.
Cheers,
Jeff
if the number of buckets could be enlarged
by concurrently making a new, larger, index and then dropping the old one.
I've only thought about proposing it, because currently I don't have time
to do anything on it if the proposal was well received.
Cheers,
Jeff
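The proposal amounts to something like this (index and table names are made up; a freshly built hash index sizes its buckets from the current row count):

```sql
-- Build a replacement hash index without blocking writers, then drop the
-- undersized original once the new one is valid.
CREATE INDEX CONCURRENTLY foo_id_hash_new ON foo USING hash (id);
DROP INDEX foo_id_hash;  -- briefly takes an exclusive lock on foo
```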
the btree index accesses are going to
visit the same block over and over again before going to the next block.
With hash indexes, it will jump all over the place.
Cheers,
Jeff
did not improve performance, it is
not directly relevant for you.
2. There should be an option to not perform these compactions if the page is
only touched by reads.
If the page is only touched by reads, there would be nothing to compact.
Cheers,
Jeff
On Fri, Jul 15, 2011 at 5:21 PM, lars lhofha...@yahoo.com wrote:
On 07/14/2011 04:03 PM, Jeff Janes wrote:
On Wed, Jul 13, 2011 at 3:41 PM, lars lhofha...@yahoo.com wrote:
On 07/13/2011 11:42 AM, Kevin Grittner wrote:
So transactions without an XID *are* sensitive to
synchronous_commit
the update has been run once, further executions are degenerate
(they replace the updated indexed column with the same value it
already holds). The HOT code detects this and uses a HOT update in
this case despite the apparent update of an indexed column.
Cheers,
Jeff
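A sketch of the degenerate case (hypothetical table and values):

```sql
-- First execution really changes the indexed column:
UPDATE accounts SET indexed_col = 'new_value' WHERE id = 1;

-- Re-running it replaces the indexed value with itself; the HOT code
-- detects that the value is unchanged and still performs a HOT update,
-- avoiding a new index entry despite the apparent indexed-column update.
UPDATE accounts SET indexed_col = 'new_value' WHERE id = 1;
```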
?
It has surprisingly little impact on the SELECT side:
If your fsync is truly fsyncing, it seems like it should have
considerable effect.
Could you strace with both -ttt and -T, with and without synchronous commit?
Cheers,
Jeff
around a couple ms at minimum, it's probably some kind of
IO latency. You could see that by wrapping the statements in a big
transaction (BEGIN/END block) -- I bet the inserts go very quickly and
the final commit takes longer.
Regards,
Jeff Davis
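Jeff Davis's experiment, sketched against a hypothetical table:

```sql
-- One commit (one wait for durable storage) for the whole batch, instead
-- of one per row:
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... many more inserts ...
COMMIT;  -- the IO-latency wait happens here, once
```

If the inserts fly and the COMMIT is what stalls, that points at commit-time IO latency rather than per-statement overhead.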
for the tsvector columns?
lawdb=# explain analyze select * from docmeta1 where docvector @@
plainto_tsquery('english', 'free');
It would be nice to see the results of explain (analyze, buffers).
Cheers,
Jeff
?
Cheers,
Jeff
the records are lying in child tables.
Run EXPLAIN ANALYZE on each of those queries, and post the results.
See http://wiki.postgresql.org/wiki/SlowQueryQuestions for a guide on
how to give the necessary information for others to help.
Regards,
Jeff Davis
the underlying
table had one as well? (Of course the root block will be a hot-spot,
but certainly not 90% of all requests)
Cheers,
Jeff
to pure read tests. It looks like I just need multiple
threads so I can have multiple reads/writes in flight at the same
time.
Yep - you need multiple threads to get max throughput of your io.
--
Jeff Trout j...@jefftrout.com
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/
On Thu, May 12, 2011 at 8:53 AM, Prodan, Andrei
andrei.pro...@awinta.com wrote:
@Jeff: thank you for the clear plan interpretation - but I'm afraid I
don't really understand the second bit:
1) I provided the GOOD plan, so we already know what postgres thinks,
right? (Later edit: guess
,
Jeff
(rather than firing up a bunch of pgiosims in parallel) and
see how things scale up.
will probably need to add manual locking to get around this problem.
Cheers,
Jeff
On Sat, Apr 30, 2011 at 3:29 PM, Joel Reymont joe...@gmail.com wrote:
On Apr 30, 2011, at 11:11 PM, Jeff Janes wrote:
But what exactly are you inserting? The queries you reported below
are not the same as the ones you originally described.
I posted the wrong query initially. The only
difficult, and I don't know how the system will do it.
However, when I create and populate simple tables based on your
description, I get the index scan being the lower estimated cost. So
the tables I built are not sufficient to study the matter in detail.
Cheers,
Jeff
at svctime
from sar. We may be implementing that in the near future to detect
when this creeps up again.
On Mar 29, 2011, at 10:16 AM, Jeff wrote:
Now that all sounds awful and horrible until you get to overall
performance, especially with reads - you are looking at 20k random
reads per second with a few disks. Adding in writes does kick it
down a notch, but you're still looking at 10k+ iops
on the 24 disk beast?
Access Strategy. By design, bulk inserts generally write out
their own buffers.
Cheers,
Jeff
anyway.)
Seems like maybe an Assert would be called for.
Cheers,
Jeff
postgres slot_getattr
11509
aside from j2date (probably coming up due to that Join filter I'd
wager) nothing unexpected.
is indexed, and is used in the outer nestloop. Thinking
more about what is going on cache thrashing is certainly a possibility.
the amazing explain analyze overhead is also very curious - we all
know it adds overhead, but 85ms? Yow.
for specific purposes
will probably be rejected. I am in no way implying that you are
approaching it this way -- I am just trying to characterize an approach
that won't make progress.
Regards,
Jeff Davis
things.
Cheers,
Jeff
this from Perl using DBI, DBD::Oracle, and DBD::Pg. As perl
is written in C, that should be a good sign for you.
Cheers,
Jeff
for shared_buffers as well,
rather than starting at only 32MB.
Cheers,
Jeff
that as
the default.
Cheers,
Jeff
On 12/25/10, Mladen Gogala mladen.gog...@vmsinfo.com wrote:
Jeff Janes wrote:
If the background writer cannot keep up, then the individual backends
start doing writes as well, so it isn't really serialized.
Is there any parameter governing that behavior?
No, it is automatic
serialized by the backend.
If the background writer cannot keep up, then the individual backends
start doing writes as well, so it isn't really serialized.
Cheers,
Jeff