On Fri, 18 Apr 2008, Tom Lane wrote:
Yeah, it's starting to be obvious that we'd better not ignore sysbench
as not our problem. Do you have any roadmap on what needs to be done
to it?
Just dug into this code again for a minute and it goes something like
this:
1) Wrap the write statements
On Friday, 18 April 2008, Francisco Reyes wrote:
| I am trying to get a distinct set of rows from 2 tables.
| After looking at someone else's query I noticed they were doing a group by
| to obtain the unique list.
|
| After comparing on multiple machines with several tables, it seems using
|
Francisco Reyes [EMAIL PROTECTED] writes:
Is there any disadvantage of using group by to obtain a unique list?
On a small dataset the difference was about 20 percent.
Group by
HashAggregate (cost=369.61..381.12 rows=1151 width=8) (actual
time=76.641..85.167 rows=2890 loops=1)
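The comparison in this thread can be sketched in a self-contained way. This uses Python's built-in sqlite3 rather than PostgreSQL so it runs anywhere; the table name and data are made up for illustration, not from the thread. In PostgreSQL you would compare the EXPLAIN ANALYZE output of the two forms, as the posts above do.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(v,) for v in [1, 2, 2, 3, 3, 3]])

# Two equivalent ways to get the unique list of values:
distinct = conn.execute("SELECT DISTINCT val FROM t ORDER BY val").fetchall()
grouped = conn.execute("SELECT val FROM t GROUP BY val ORDER BY val").fetchall()

print(distinct)  # [(1,), (2,), (3,)]
print(grouped)   # same rows: the two forms are semantically equivalent
```

The semantics are identical; any speed difference comes from which plan the optimizer picks (e.g. HashAggregate versus sort-based Unique), which is the 20 percent gap being discussed.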
On Thu, 17 Apr 2008, I wrote:
There is only one
central tunable (you have to switch on CONFIG_SCHED_DEBUG):
/proc/sys/kernel/sched_granularity_ns
which can be used to tune the scheduler from 'desktop' (low
latencies) to 'server'
On Fri, 18 Apr 2008 11:36:02 +0200, Gregory Stark [EMAIL PROTECTED]
wrote:
Francisco Reyes [EMAIL PROTECTED] writes:
Is there any disadvantage of using group by to obtain a unique list?
On a small dataset the difference was about 20 percent.
Group by
HashAggregate (cost=369.61..381.12
Hi,
On 11 Apr 2008, at 20:03, Craig Ringer wrote:
Speaking of I/O performance with PostgreSQL, has anybody here done
any testing to compare results with LVM to results with the same
filesystem on a conventionally partitioned or raw volume? I'd
probably use LVM even at a
This autovacuum has been hammering my server with purely random i/o
for half a week. The table is only 20GB and the i/o subsystem is good
for 250 MB/s sequential and a solid 5k IOPS. When should I expect it to
end (if ever)?
current_query: VACUUM reuters.value
query_start: 2008-04-15
Jeffrey Baker [EMAIL PROTECTED] writes:
This autovacuum has been hammering my server with purely random i/o
for half a week. The table is only 20GB and the i/o subsystem is good
for 250 MB/s sequential and a solid 5k IOPS. When should I expect it to
end (if ever)?
What have you got
On Fri, 18 Apr 2008, Matthew wrote:
You may also be seeing processes forced to switch between CPUs, which
breaks the caches even more. So what happens if you run pgbench on a
separate machine from the server? Does the problem still exist in that
case?
I haven't run that test yet but will
On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane [EMAIL PROTECTED] wrote:
Jeffrey Baker [EMAIL PROTECTED] writes:
This autovacuum has been hammering my server with purely random i/o
for half a week. The table is only 20GB and the i/o subsystem is good
for 250MB/s sequential and a solid
On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:
On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane [EMAIL PROTECTED] wrote:
Jeffrey Baker [EMAIL PROTECTED] writes:
This autovacuum has been hammering my server with purely random i/o
for half a week. The table is
Hi.
I have this message queue table.. currently with 8m+ records. Picking
the top priority messages seems to take quite long.. it is just a matter
of searching the index.. (just as explain analyze tells me it does).
Can anyone digest further optimizations out of this output? (All records
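The access pattern in question can be sketched as follows, again with Python's built-in sqlite3 so it is runnable as-is. The table, column names, and data are hypothetical (the thread's actual schema is not shown in this excerpt); the point is that an index matching the ORDER BY lets the planner read one index entry instead of sorting millions of rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE queue (id INTEGER PRIMARY KEY, priority INTEGER, payload TEXT)"
)
# Index ordered the same way as the query below, so LIMIT 1 can stop
# after the first index entry rather than scanning the whole table.
conn.execute("CREATE INDEX queue_prio_idx ON queue (priority, id)")
conn.executemany(
    "INSERT INTO queue (priority, payload) VALUES (?, ?)",
    [(5, "low"), (1, "urgent"), (3, "normal")],
)

row = conn.execute(
    "SELECT id, priority, payload FROM queue ORDER BY priority, id LIMIT 1"
).fetchone()
print(row)  # (2, 1, 'urgent') -- lowest priority value wins
```

With 8m+ rows the same shape of query stays fast only while the index leading edge matches the sort order, which is what the EXPLAIN ANALYZE output in the thread is being checked for.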
On Fri, Apr 18, 2008 at 10:34 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:
On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:
On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane [EMAIL PROTECTED] wrote:
Jeffrey Baker [EMAIL PROTECTED] writes:
This autovacuum has been
Jesper Krogh wrote:
Hi.
I have this message queue table.. currently with 8m+ records. Picking
the top priority messages seems to take quite long.. it is just a matter
of searching the index.. (just as explain analyze tells me it does).
Can anyone digest further optimizations out of this
Jeffrey Baker wrote:
That's rather more like it. I guess I always imagined that VACUUM was
a sort of linear process, not random, and that it should proceed at
sequential scan speeds.
It's linear for the table, but there are passes for indexes which are
random in 8.1. That code was
Jeffrey Baker [EMAIL PROTECTED] writes:
I increased it to 1GB, restarted the vacuum, and system performance
seems the same. The root of the problem, that an entire CPU is in the
iowait state and the storage device is doing random i/o, is unchanged:
Yeah, but you just reduced the number of
Craig Ringer wrote:
Jesper Krogh wrote:
Hi.
I have this message queue table.. currently with 8m+ records.
Picking the top priority messages seems to take quite long.. it is just
a matter of searching the index.. (just as explain analyze tells me it
does).
Can anyone digest further
Jesper Krogh [EMAIL PROTECTED] writes:
I have this message queue table.. currently with 8m+ records. Picking
the top priority messages seems to take quite long.. it is just a matter
of searching the index.. (just as explain analyze tells me it does).
Limit (cost=0.00..0.09 rows=1
Has there ever been any analysis regarding the redundant write overhead
of full page writes?
I'm wondering if one could regard an 8k page as being 64 × 128-byte
paragraphs or 32 × 256-byte paragraphs, each represented by a bit in a
word. And, when a page is dirtied
by changes some record
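The paragraph-bitmap idea above can be sketched concretely: an 8 kB page split into 64 paragraphs of 128 bytes, with one bit per paragraph recording which parts a write touched, so that only dirty paragraphs would need to be logged instead of the full page. The numbers follow the post; the function name and interface are mine for illustration.

```python
PAGE_SIZE = 8192
PARA_SIZE = 128                    # 8192 / 128 = 64 paragraphs -> one 64-bit mask
N_PARAS = PAGE_SIZE // PARA_SIZE   # 64

def dirty_mask(offset: int, length: int) -> int:
    """Set one bit per 128-byte paragraph overlapped by a write of
    `length` bytes starting at `offset` within the page."""
    first = offset // PARA_SIZE
    last = (offset + length - 1) // PARA_SIZE
    mask = 0
    for p in range(first, last + 1):
        mask |= 1 << p
    return mask

# A 10-byte write at offset 130 touches only paragraph 1:
print(bin(dirty_mask(130, 10)))   # 0b10
# A write spanning offsets 120..260 touches paragraphs 0, 1 and 2:
print(bin(dirty_mask(120, 141)))  # 0b111
```

Under this scheme a small heap update would set only a bit or two, and only those paragraphs would be written redundantly rather than the whole 8 kB page.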
[EMAIL PROTECTED] (Jesper Krogh) writes:
I have this message queue table.. currently with 8m+
records. Picking the top priority messages seems to take quite
long.. it is just a matter of searching the index.. (just as explain
analyze tells me it does).
Can anyone digest further optimizations
Jeffrey Baker wrote:
On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:
# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 16384
That appears to be the default. I will try increasing this. Can I
increase it globally from a single backend,