Greg Stark wrote:
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark wrote:
Barring any objections shall I commit it like this?
Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same speedup?
I think the final version of this
On Tuesday 19 January 2010 15:57:14 Greg Stark wrote:
> On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark wrote:
> > Barring any objections shall I commit it like this?
>
> Actually before we get there could someone who demonstrated the
> speedup verify that this patch still gets that same speedup?
At
Hi Greg,
On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
> On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark wrote:
> > Looking at this patch for the commitfest I have a few questions.
>
> So I've touched this patch up a bit:
>
> 1) moved the posix_fadvise call to a new fd.c function
> pg_fsync
Hi Greg,
On Monday 18 January 2010 17:35:59 Greg Stark wrote:
> 2) Why does the second pass to do the fsyncs read through fromdir to
> find all the filenames. I find that odd and counterintuitive. It would
> be much more natural to just loop through the files in the new
> directory. But I suppose
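[Editorial aside, not part of the patch or the thread: the suggestion above -- loop over the files that are actually in the new directory and fsync each one -- might look roughly like this C sketch, using only POSIX dirent and fd calls; names and error handling are illustrative.]

/*
 * Illustration only (not the patch's code): fsync every entry in the
 * newly copied database directory by walking it with readdir(),
 * rather than re-reading the source directory to discover the names.
 */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
fsync_dir_contents(const char *dirpath)
{
    DIR *dir = opendir(dirpath);
    struct dirent *de;
    char path[4096];

    if (dir == NULL)
        return -1;

    while ((de = readdir(dir)) != NULL)
    {
        int fd;

        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;

        snprintf(path, sizeof(path), "%s/%s", dirpath, de->d_name);

        fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;           /* skip anything we cannot open */
        if (fsync(fd) != 0)
            perror(path);
        close(fd);
    }
    closedir(dir);
    return 0;
}

int
main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }
    return fsync_dir_contents(argv[1]) == 0 ? 0 : 1;
}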
On 20/01/2010 4:16 AM, Arjen van der Meijden wrote:
Another command to look at, if you're I/O-bound, is the 'ionice'
command, which is similar to nice, but obviously intended for I/O.
For some I/O-bound background job, one of the 'idle' classes can be a
nice level. But for a (single) postgres-pr
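[Editorial aside, not from the thread: on Linux the effect of the ionice command can also be obtained programmatically through the ioprio_set system call. The sketch below assumes a Linux system where SYS_ioprio_set is defined; the priority constants are spelled out by hand because older glibc versions provide no wrapper or header for them.]

/*
 * Hedged sketch: move a process (e.g. a long-running background job)
 * into the "idle" I/O scheduling class, the programmatic equivalent of
 * "ionice -c 3 -p <pid>".  Constants follow the kernel's ioprio
 * documentation; syscall(2) is used because there is no glibc wrapper.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS  1
#define IOPRIO_CLASS_IDLE   3
#define IOPRIO_CLASS_SHIFT  13

int
main(int argc, char **argv)
{
    pid_t pid = (argc > 1) ? (pid_t) atoi(argv[1]) : getpid();
    int   ioprio = IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT;

    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, pid, ioprio) != 0)
    {
        perror("ioprio_set");
        return 1;
    }
    printf("pid %d moved to the idle I/O class\n", (int) pid);
    return 0;
}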
On Fri, 2010-01-15 at 22:05 -0500, Greg Smith wrote:
> A few months ago the worst of the bugs in the ext4 fsync code started
> clearing up, with
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5f3481e9a80c240f169b36ea886e2325b9aeb745
>
> as a particularly painful o
On Tue, Jan 19, 2010 at 2:09 PM, Carlo Stonebanks wrote:
> Hi Scott,
>
> Sorry for the very late reply on this post, but I'd like to follow up. The
> reason that I took so long to reply was due to this suggestion:
>
> < overrunning the max_fsm_pages settings or the max_fsm_relations.
>>>
>
> My fi
On Tue, 2010-01-19 at 12:19 -0800, PG User 2010 wrote:
> Hello,
>
> We are running into some performance issues with running VACUUM FULL
> on the pg_largeobject table in Postgres (8.4.2 under Linux), and I'm
> wondering if anybody here might be able to suggest anything to help
> address the issue.
Hi Scott,
Sorry for the very late reply on this post, but I'd like to follow up. The
reason that I took so long to reply was due to this suggestion:
<
My first thought was, does he mean against the entire DB? That would take a
week! But, since it was recommended, I decided to see what woul
On Jan 19, 2010, at 2:50 AM, fka...@googlemail.com wrote:
> Scott Carey:
>
>> You are CPU bound.
>>
>> 30% of 4 cores is greater than 25%. 25% is one core fully
>> used.
>
> I have measured the cores separately. Some of them reached
> 30%. I am not CPU bound here.
>
Measuring the cores isn'
Hello,
We are running into some performance issues with running VACUUM FULL on the
pg_largeobject table in Postgres (8.4.2 under Linux), and I'm wondering if
anybody here might be able to suggest anything to help address the issue.
Specifically, when running VACUUM FULL on the pg_largeobject table
On 19-1-2010 13:59 Willy-Bas Loos wrote:
Hi,
I have a query that runs for about 16 hours, it should run at least weekly.
There are also clients connecting via a website, we don't want to keep
them waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the process r
On Tue, Jan 19, 2010 at 3:09 PM, Jean-David Beyer wrote:
> Willy-Bas Loos wrote:
>
>> On Tue, Jan 19, 2010 at 2:28 PM, Jean-David Beyer <jeandav...@verizon.net> wrote:
>>
>> It could make sense.
>>
>> I once had a job populating a database. It was I/O bound and ran for a
>> cou
On 19/01/2010 13:59, Willy-Bas Loos wrote:
Hi,
I have a query that runs for about 16 hours, it should run at least
weekly.
There are also clients connecting via a website, we don't want to keep
them waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the p
On 01/19/10 14:36, fka...@googlemail.com wrote:
Ivan Voras:
[I just skimmed this thread - did you increase the number of WAL logs to
something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL logs
Greg Stark writes:
> 1) moved the posix_fadvise call to a new fd.c function
> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
> waiting on it. Currently it's only implemented with
> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range
> in the future -- it looks
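[Editorial aside, a hedged sketch rather than the committed code: the mechanism described above -- initiating a flush of a file range without waiting for it -- can be expressed on Linux with sync_file_range() and the WRITE flag alone, falling back to posix_fadvise(POSIX_FADV_DONTNEED), which is what the patch in this thread currently uses. The function name below is illustrative, not the patch's pg_fsync_start.]

/*
 * Sketch only -- not the PostgreSQL patch itself.  Start flushing a file
 * range to disk without blocking on completion.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/types.h>

int
start_fsync(int fd, off_t offset, off_t nbytes)
{
#if defined(SYNC_FILE_RANGE_WRITE)
    /* Queue the dirty pages for write-out; do not wait for them. */
    return sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE);
#elif defined(POSIX_FADV_DONTNEED)
    /* Hint that we are done with the range; the kernel starts writeback. */
    return posix_fadvise(fd, offset, nbytes, POSIX_FADV_DONTNEED);
#else
    (void) fd; (void) offset; (void) nbytes;
    return 0;                   /* nothing asynchronous available */
#endif
}

[A later fsync(), or a sync_file_range() call that also passes the WAIT_BEFORE/WAIT_AFTER flags, would then wait for the write-out started here to complete.]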
"fka...@googlemail.com" wrote:
> Scott Carey:
>
>> You are CPU bound.
>>
>> 30% of 4 cores is greater than 25%. 25% is one core fully
>> used.
>
> I have measured the cores separately. Some of them reached
> 30%. I am not CPU bound here.
If you have numbers like that when running one big qu
On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:
> On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark wrote:
> > Looking at this patch for the commitfest I have a few questions.
>
> So I've touched this patch up a bit:
>
> 1) moved the posix_fadvise call to a new fd.c function
> pg_fsync_start(fd,
On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark wrote:
> Barring any objections shall I commit it like this?
Actually before we get there could someone who demonstrated the
speedup verify that this patch still gets that same speedup?
--
greg
On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark wrote:
> Looking at this patch for the commitfest I have a few questions.
So I've touched this patch up a bit:
1) moved the posix_fadvise call to a new fd.c function
pg_fsync_start(fd,offset,nbytes) which initiates an fsync without
waiting on it. Curre
Ivan Voras:
> [I just skimmed this thread - did you increase the number of WAL logs to
> something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL logs will have a
positive effect or not: AFAIK when
Hi,
I have a query that runs for about 16 hours, it should run at least weekly.
There are also clients connecting via a website, we don't want to keep them
waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the process really lowers the load (in "top"),
though i
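[Editorial aside, not from the thread: renicing a single backend from a program rather than from the shell comes down to setpriority(2); the backend PID would be looked up beforehand, e.g. with SELECT pg_backend_pid() or from pg_stat_activity. Note this only lowers CPU priority -- whether it helps an I/O-bound query is exactly the question under discussion.]

/*
 * Minimal sketch: lower the CPU scheduling priority of one PostgreSQL
 * backend, the programmatic equivalent of "renice +10 -p <pid>".
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>

int
main(int argc, char **argv)
{
    pid_t pid;

    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <backend-pid>\n", argv[0]);
        return 1;
    }
    pid = (pid_t) atoi(argv[1]);

    /* Nice value 10: noticeably lower priority, but not fully idle. */
    if (setpriority(PRIO_PROCESS, pid, 10) != 0)
    {
        perror("setpriority");
        return 1;
    }
    return 0;
}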
On 19/01/10 10:50, fka...@googlemail.com wrote:
However, the deeper question is (sounds ridiculous): Why am
I I/O bound *this much* here. To recall: The write
performance in pg is about 20-25% of the worst case serial
write performance of the disk (and only about 8-10% of the
best disk perf) even
On 01/19/10 11:16, fka...@googlemail.com wrote:
fka...@googlemail.com:
I'll try to execute these tests on a SSD
and/or Raid system.
FYI:
On a sata raid-0 (mid range hardware) and recent 2x 1.5 TB
disks with a write performance of 100 MB/s (worst, to 200
MB/s max), I get a performance of 18.2
Pierre Frédéric Caillaud:
> I don't remember if you used TOAST compression or not.
I use 'bytea' and SET STORAGE EXTERNAL for this column.
AFAIK this switches off the compression.
Scott Carey:
> You are CPU bound.
>
> 30% of 4 cores is greater than 25%. 25% is one core fully
> used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
> The postgres compression of data in TOAST is
> probably the problem. I'm assuming its doing Gzip,
fka...@googlemail.com:
> I'll try to execute these tests on a SSD
> and/or Raid system.
FYI:
On a sata raid-0 (mid range hardware) and recent 2x 1.5 TB
disks with a write performance of 100 MB/s (worst, to 200
MB/s max), I get a performance of 18.2 MB/s. Before, with
other disk 43 MB/s (worst to