fka...@googlemail.com:
I'll try to execute these tests on a SSD
and/or Raid system.
FYI:
On a sata raid-0 (mid range hardware) and recent 2x 1.5 TB
disks with a write performance of 100 MB/s (worst, to 200
MB/s max), I get a performance of 18.2 MB/s. Before, with
another disk: 43 MB/s (worst, to
Scott Carey:
You are CPU bound.
30% of 4 cores is greater than 25%. 25% is one core fully
used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
The postgres compression of data in TOAST is
probably the problem. I'm assuming it's doing Gzip, and
Pierre Frédéric Caillaud:
I don't remember if you used TOAST compression or not.
I use 'bytea' and SET STORAGE EXTERNAL for this column.
AFAIK this switches off the compression.
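For readers following along, the setting being discussed looks like this (the table and column names are placeholders, not from the thread):

```sql
-- EXTERNAL = out-of-line TOAST storage without compression.
-- (EXTENDED, the default for bytea, allows both compression and
-- out-of-line storage.)
ALTER TABLE images ALTER COLUMN data SET STORAGE EXTERNAL;

-- Verify: attstorage 'e' means EXTERNAL.
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'images'::regclass AND attname = 'data';
```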
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On 01/19/10 11:16, fka...@googlemail.com wrote:
fka...@googlemail.com:
I'll try to execute these tests on a SSD
and/or Raid system.
FYI:
On a sata raid-0 (mid range hardware) and recent 2x 1.5 TB
disks with a write performance of 100 MB/s (worst, to 200
MB/s max), I get a performance of
On 19/01/10 10:50, fka...@googlemail.com wrote:
However, the deeper question is (sounds ridiculous): Why am
I I/O bound *this much* here. To recall: The write
performance in pg is about 20-25% of the worst case serial
write performance of the disk (and only about 8-10% of the
best disk perf)
Hi,
I have a query that runs for about 16 hours, it should run at least weekly.
There are also clients connecting via a website, we don't want to keep them
waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the process really lowers the load (in top),
though i
Ivan Voras:
[I just skimmed this thread - did you increase the number of WAL logs to
something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL logs will have a
positive effect or not: AFAIK when
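For context, the 8.4-era WAL knob Ivan mentions lives in postgresql.conf; a sketch with the value from the thread (the other two lines are related settings of that era, shown for illustration, not recommendations):

```
checkpoint_segments = 128          # WAL segments between checkpoints (16 MB each, ~2 GB)
checkpoint_completion_target = 0.9 # spread checkpoint writes over the interval
wal_buffers = 16MB
```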
On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark gsst...@mit.edu wrote:
Looking at this patch for the commitfest I have a few questions.
So I've touched this patch up a bit:
1) moved the posix_fadvise call to a new fd.c function
pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark gsst...@mit.edu wrote:
Barring any objections shall I commit it like this?
Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same speedup?
--
greg
On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark gsst...@mit.edu wrote:
Looking at this patch for the commitfest I have a few questions.
So I've touched this patch up a bit:
1) moved the posix_fadvise call to a new fd.c function
fka...@googlemail.com fka...@googlemail.com wrote:
Scott Carey:
You are CPU bound.
30% of 4 cores is greater than 25%. 25% is one core fully
used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
If you have numbers like that when running one
Greg Stark gsst...@mit.edu writes:
1) moved the posix_fadvise call to a new fd.c function
pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
waiting on it. Currently it's only implemented with
posix_fadvise(DONT_NEED) but I want to look into using sync_file_range
in the future
On 01/19/10 14:36, fka...@googlemail.com wrote:
Ivan Voras:
[I just skimmed this thread - did you increase the number of WAL logs to
something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL
On 19/01/2010 13:59, Willy-Bas Loos wrote:
Hi,
I have a query that runs for about 16 hours, it should run at least
weekly.
There are also clients connecting via a website, we don't want to keep
them waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the
On Tue, Jan 19, 2010 at 3:09 PM, Jean-David Beyer jeandav...@verizon.net wrote:
Willy-Bas Loos wrote:
On Tue, Jan 19, 2010 at 2:28 PM, Jean-David Beyer
jeandav...@verizon.net wrote:
It could make sense.
I once had a job populating a database. It was
On 19-1-2010 13:59 Willy-Bas Loos wrote:
Hi,
I have a query that runs for about 16 hours, it should run at least weekly.
There are also clients connecting via a website, we don't want to keep
them waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the process
Hello,
We are running into some performance issues with running VACUUM FULL on the
pg_largeobject table in Postgres (8.4.2 under Linux), and I'm wondering if
anybody here might be able to suggest anything to help address the issue.
Specifically, when running VACUUM FULL on the pg_largeobject
On Jan 19, 2010, at 2:50 AM, fka...@googlemail.com wrote:
Scott Carey:
You are CPU bound.
30% of 4 cores is greater than 25%. 25% is one core fully
used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
Measuring the cores isn't enough.
Hi Scott,
Sorry for the very late reply on this post, but I'd like to follow up. The
reason that I took so long to reply was due to this suggestion:
Run vacuum verbose to see if you're
overrunning the max_fsm_pages settings or the max_fsm_relations.
My first thought was, does he mean
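For anyone following along, the check Scott suggested looks roughly like this (it applies to 8.3 and earlier; the free-space-map settings were removed in 8.4):

```sql
-- The tail of a database-wide VACUUM VERBOSE reports FSM usage, e.g.:
--   INFO:  free space map contains N pages in M relations
-- Compare those numbers against the configured limits:
VACUUM VERBOSE;
SHOW max_fsm_pages;
SHOW max_fsm_relations;
```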
On Tue, 2010-01-19 at 12:19 -0800, PG User 2010 wrote:
Hello,
We are running into some performance issues with running VACUUM FULL
on the pg_largeobject table in Postgres (8.4.2 under Linux), and I'm
wondering if anybody here might be able to suggest anything to help
address the issue.
Are
On Tue, Jan 19, 2010 at 2:09 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Hi Scott,
Sorry for the very late reply on this post, but I'd like to follow up. The
reason that I took so long to reply was due to this suggestion:
Run vacuum verbose to see if you're
overrunning the
On Fri, 2010-01-15 at 22:05 -0500, Greg Smith wrote:
A few months ago the worst of the bugs in the ext4 fsync code started
clearing up, with
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5f3481e9a80c240f169b36ea886e2325b9aeb745
as a particularly painful one.
On 20/01/2010 4:16 AM, Arjen van der Meijden wrote:
Another command to look at, if you're I/O-bound, is the 'ionice'
command, which is similar to nice, but obviously intended for I/O.
For some I/O-bound background job, one of the 'idle' classes can be a
nice level. But for a (single)
Hi Greg,
On Monday 18 January 2010 17:35:59 Greg Stark wrote:
2) Why does the second pass to do the fsyncs read through fromdir to
find all the filenames. I find that odd and counterintuitive. It would
be much more natural to just loop through the files in the new
directory. But I suppose it
Hi Greg,
On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark gsst...@mit.edu wrote:
Looking at this patch for the commitfest I have a few questions.
So I've touched this patch up a bit:
1) moved the posix_fadvise call to a new fd.c function
On Tuesday 19 January 2010 15:57:14 Greg Stark wrote:
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark gsst...@mit.edu wrote:
Barring any objections shall I commit it like this?
Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same
Greg Stark wrote:
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark gsst...@mit.edu wrote:
Barring any objections shall I commit it like this?
Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same speedup?
I think the final