Jeff,
Ask me sometime about my replacement for GNU sort. It uses the same
sorting algorithm, but it's an order of magnitude faster due to better
I/O strategy. Someday, in my infinite spare time, I hope to demonstrate
that kind of improvement with a patch to pg.
Since we desperately need
At 03:45 PM 8/25/2005, Josh Berkus wrote:
Jeff,
Ask me sometime about my replacement for GNU sort. It uses the same
sorting algorithm, but it's an order of magnitude faster due to better
I/O strategy. Someday, in my infinite spare time, I hope to demonstrate
that kind of improvement
[EMAIL PROTECTED] (Ron) writes:
At 03:45 PM 8/25/2005, Josh Berkus wrote:
Ask me sometime about my replacement for GNU sort. It uses the
same sorting algorithm, but it's an order of magnitude faster due
to better I/O strategy. Someday, in my infinite spare time, I
hope to demonstrate
Jeffrey W. Baker [EMAIL PROTECTED] writes:
On Wed, 2005-08-24 at 17:20 +1200, Guy Thornley wrote:
Don't forget that already in postgres, you have a process per connection, and
all the processes take care of their own I/O.
That's the problem. Instead you want 1 or 4 or 10 I/O slaves
On Wed, 2005-08-24 at 01:56 -0400, Tom Lane wrote:
Jeffrey W. Baker [EMAIL PROTECTED] writes:
On Wed, 2005-08-24 at 17:20 +1200, Guy Thornley wrote:
Don't forget that already in postgres, you have a process per connection, and
all the processes take care of their own I/O.
That's the
of effort reinventing the wheel ... but our time will be repaid much
more if we work at levels that the OS cannot have knowledge of, such as
join planning and data statistics.
Considering a global budget of man-hours, which is the best?
1- Spend it on reimplementing half of VFS in
This thread covers several performance ideas. First is the idea that
more parameters should be configurable. While this seems like a noble
goal, we try to make parameters auto-tuning; if users must configure a
parameter, it should be useful for a significant number of
users.
In the
[EMAIL PROTECTED] (Steve Poe) writes:
Chris,
Unless I am wrong, you're making the assumption that the amount of time spent
and ROI is known. Maybe those who've been down this path know how to get
that additional 2-4% in 30 minutes or less?
While each person and business' performance gains (or
Tom, Gavin,
To get decent I/O you need 1MB fundamental units all the way down the
stack.
It would also be a good idea to have an application that isn't likely
to change a single bit in a 1MB range and then expect you to record
that change. This pretty much lets Postgres out of the
Agreed!!!
But the knowledge to auto-tune your application comes from years of
understanding how users are using the so-called knobs. But if the knobs
are not there in the first place, how do you know what people are using?
The so-called big boys are also using their knowledge base of what
On Wed, Aug 24, 2005 at 12:12:22PM -0400, Chris Browne wrote:
Everyone involved in development seems to me to have a reasonably keen
understanding as to what the potential benefits of threading are; the
value is that there fall out plenty of opportunities to parallelize
the evaluation of
Since Bruce referred to the corporate software world I'll chime in...
It has been a while since adding knobs and dials has been considered a good
idea. Customers are almost always bad at tuning their systems, which decreases
customer satisfaction. While many people assume the corporate types
Does that include increasing the size of read/write blocks? I've
noticed that with a large enough table it takes a while to do a
sequential scan,
even if it's cached; I wonder if the fact that it takes a million
read(2) calls to get through an 8G table is part of that.
Actually some of
On Tue, Aug 23, 2005 at 05:29:01PM -0400, Jignesh Shah wrote:
Actually some of that readahead, etc., the OS does already if it does
some sort of throttling/clubbing of reads/writes.
Note that I specified the fully cached case--even with the workload in
RAM the system still has to process a
On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:
What we have been finding, as RAID controllers get smarter, is that it
is getting increasingly futile to try to attach knobs to 'disk stuff;'
it is *way* more effective to add a few more spindles to an array than
it is to fiddle with
On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:
[EMAIL PROTECTED] (Jignesh Shah) writes:
Does that include increasing the size of read/write blocks? I've
noticed that with a large enough table it takes a while to do a
sequential scan, even if it's cached; I wonder if the fact
Chris,
Unless I am wrong, you're making the assumption that the amount of time spent
and ROI is known. Maybe those who've been down this path know how to get
that additional 2-4% in 30 minutes or less?
While each person and business' performance gains (or not) could vary,
someone spending the
On Tue, 2005-08-23 at 19:12 -0400, Michael Stone wrote:
On Tue, Aug 23, 2005 at 05:29:01PM -0400, Jignesh Shah wrote:
Actually some of that readahead, etc., the OS does already if it does
some sort of throttling/clubbing of reads/writes.
Note that I specified the fully cached case--even with
Hi Jim,
| How many of these things are currently easy to change with a recompile?
| I should be able to start testing some of these ideas in the near
| future, if they only require minor code or configure changes.
The following
* Data File Size 1GB
* WAL File Size of 16MB
* Block Size of 8K
Steve,
I would assume that dbt2 with STP helps minimize the amount of hours
someone has to invest to determine performance gains with configurable
options?
Actually, these I/O operation issues show up mainly with DW workloads, so the
STP isn't much use there. If I can ever get some of
Josh Berkus wrote:
Steve,
I would assume that dbt2 with STP helps minimize the amount of hours
someone has to invest to determine performance gains with configurable
options?
Actually, these I/O operation issues show up mainly with DW workloads, so the
STP isn't much use there.
On Tue, 2005-08-23 at 19:31 -0700, Josh Berkus wrote:
Steve,
I would assume that dbt2 with STP helps minimize the amount of hours
someone has to invest to determine performance gains with configurable
options?
Actually, these I/O operation issues show up mainly with DW workloads, so
Jeffrey W. Baker [EMAIL PROTECTED] writes:
To get decent I/O you need 1MB fundamental units all the way down the
stack.
It would also be a good idea to have an application that isn't likely
to change a single bit in a 1MB range and then expect you to record
that change. This pretty much lets
Unfortunately I'm really afraid that this conversation is about trees
when the forest is the problem. PostgreSQL doesn't even have an async
reader, which is the sort of thing that could double or triple its
performance. You're talking about block sizes and such, but the kinds
of
On Wed, 2005-08-24 at 17:20 +1200, Guy Thornley wrote:
As for the async IO, sure you might think 'oh async IO would be so cool!!'
and I did, once, too. But then I sat down and _thought_ about it, and
decided well, no, actually, there are _very_ few areas it could actually help,
and in most cases