Tom Lane wrote:
I increased the size of the test case by 10x (basically s/10/100/)
which is enough to push it into the external-sort regime. I get
amazingly stable runtimes now --- I didn't have the patience to run 100
trials, but in 30 trials I have slowest 11538 msec and fastest
On Wed, Feb 15, 2006 at 11:30:54PM -0500, Ron wrote:
Even better (and more easily scaled as the number of GPR's in the CPU
changes) is to use
the set {L; L+1; L+2; t1; R-2; R-1; R}
This means that instead of 7 random memory accesses, we have 3; two
of which result in a
burst access for
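For concreteness, a minimal C sketch (my illustration, not the qsort.c change under discussion) of picking the pivot as the median of that seven-element sample, with "t1" assumed to mean the midpoint element: the three samples at each end sit at adjacent addresses, so gathering them costs three memory accesses, two of which are burst reads, instead of seven scattered ones.

    /* Illustrative only: median-of-7 pivot from the sample set
     * {L, L+1, L+2, mid, R-2, R-1, R}; assumes R - L >= 6. */
    static int
    choose_pivot(const int *a, int L, int R)
    {
        int sample[7] = {
            a[L], a[L + 1], a[L + 2],
            a[L + (R - L) / 2],
            a[R - 2], a[R - 1], a[R]
        };

        /* insertion-sort the 7 samples and take the middle one */
        for (int i = 1; i < 7; i++)
        {
            int v = sample[i];
            int j = i - 1;

            while (j >= 0 && sample[j] > v)
            {
                sample[j + 1] = sample[j];
                j--;
            }
            sample[j + 1] = v;
        }
        return sample[3];
    }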
* Neil Conway:
On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:
It seems clear that our qsort.c is doing a pretty awful job of picking
qsort pivots, while glibc is mostly managing not to make that mistake.
I haven't looked at the glibc code yet to see what they are doing
differently.
On Thu, Feb 16, 2006 at 01:10:48PM +0100, Florian Weimer wrote:
* Neil Conway:
On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:
It seems clear that our qsort.c is doing a pretty awful job of picking
qsort pivots, while glibc is mostly managing not to make that mistake.
I haven't
Martijn van Oosterhout schrieb:
Last time around there were a number of different algorithms tested.
Did anyone run those tests while getting it to count the number of
actual comparisons (which could easily swamp the time taken to do the
actual sort in some cases)?
The last time I did such
At 06:35 AM 2/16/2006, Steinar H. Gunderson wrote:
On Wed, Feb 15, 2006 at 11:30:54PM -0500, Ron wrote:
Even better (and more easily scaled as the number of GPR's in the CPU
changes) is to use
the set {L; L+1; L+2; t1; R-2; R-1; R}
This means that instead of 7 random memory accesses, we have
On 15/02/06, Jay Greenfield [EMAIL PROTECTED] wrote:
I've been vacuuming between each test run. Not vacuuming results in times all the way up to 121 minutes. For a direct comparison with Access, the vacuuming time with Postgres should really be included as this is not required with Access.
Hmm but
At 07:10 AM 2/16/2006, Florian Weimer wrote:
* Neil Conway:
On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:
It seems clear that our qsort.c is doing a pretty awful job of picking
qsort pivots, while glibc is mostly managing not to make that mistake.
I haven't looked at the glibc code yet
Hi, Ron,
Ron wrote:
...and of course if you know enough about the data to be sorted so as to
constrain it appropriately, one should use a non comparison based O(N)
sorting algorithm rather than any of the general comparison based
O(NlgN) methods.
Sounds interesting, could you give us some
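As a rough illustration of the non-comparison O(N) approach Ron is alluding to (my sketch, not anything proposed in the thread): a counting sort works when the keys are known to lie in a small range [0, K), and performs no comparisons at all.

    #include <stdlib.h>
    #include <string.h>

    /* Counting sort: O(N + K) for N values in [0, K).
     * Error handling omitted for brevity. */
    void
    counting_sort(unsigned *v, size_t n, unsigned k)
    {
        size_t   *count = calloc(k, sizeof(size_t));
        unsigned *out = malloc(n * sizeof(unsigned));

        for (size_t i = 0; i < n; i++)
            count[v[i]]++;
        /* turn counts into starting offsets */
        for (size_t i = 1; i < k; i++)
            count[i] += count[i - 1];
        /* place each value at its final position (stable, back to front) */
        for (size_t i = n; i-- > 0; )
            out[--count[v[i]]] = v[i];

        memcpy(v, out, n * sizeof(unsigned));
        free(out);
        free(count);
    }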
On Thu, Feb 16, 2006 at 08:22:55AM -0500, Ron wrote:
3= Especially in modern systems where the gap between internal CPU
bandwidth and memory bandwidth is so great, the overhead of memory
accesses for comparisons and moves is the majority of the overhead
for both the pivot choosing and the
Gary Doades [EMAIL PROTECTED] writes:
I think the reason I wasn't seeing performance issues with normal sort
operations is because they use work_mem not maintenance_work_mem which
was
only set to 2048 anyway. Does that sound right?
Very probable. Do you want to test the theory by jacking
At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:
On Thu, Feb 16, 2006 at 08:22:55AM -0500, Ron wrote:
3= Especially in modern systems where the gap between internal CPU
bandwidth and memory bandwidth is so great, the overhead of memory
accesses for comparisons and moves is the majority of
Ron [EMAIL PROTECTED] writes:
Your cost comment basically agrees with mine regarding the cost of
random memory accesses. The good news is that the number of datums
to be examined during the pivot choosing process is small enough that
the datums can fit into CPU cache while the pointers to
Markus Schaber wrote:
Ron wrote:
...and of course if you know enough about the data to be sorted so as to
constrain it appropriately, one should use a non comparison based O(N)
sorting algorithm rather than any of the general comparison based
O(NlgN) methods.
Sounds interesting, could you
At 10:52 AM 2/16/2006, Ron wrote:
At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:
Where this does become interesting is where we can convert a datum to
an integer such that if f(A) < f(B) then A < B. Then we can sort on
f(X) first with just integer comparisons and then do a full tuple
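A hedged sketch of that idea (hypothetical names, not PostgreSQL's actual sort code): carry a 32-bit key f(datum) alongside each pointer, compare keys first, and pay for the expensive full comparison only when the keys tie.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct
    {
        int32_t key;        /* f(datum), with f(A) < f(B) => A < B */
        void   *datum;      /* pointer to the full value / tuple */
    } SortEntry;

    extern int full_compare(const void *a, const void *b);  /* assumed to exist */

    static int
    entry_cmp(const void *pa, const void *pb)
    {
        const SortEntry *a = pa;
        const SortEntry *b = pb;

        if (a->key < b->key)
            return -1;
        if (a->key > b->key)
            return 1;
        /* keys equal: only now touch the full datum */
        return full_compare(a->datum, b->datum);
    }

    /* usage: qsort(entries, nentries, sizeof(SortEntry), entry_cmp); */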
On Thu, Feb 16, 2006 at 11:32:55AM -0500, Ron wrote:
At 10:52 AM 2/16/2006, Ron wrote:
In fact we can do better.
Using hash codes or what-not to map datums to keys and then sorting
just the keys and the pointers to those datums followed by an
optional final pass where we do the actual data
HI ALL,
I have query for a report. Explain analyze
result is below. The execution plan tells that it would use
"t_koltuk_islem_pkey" index on table "t_koltuk_islem" to scan. However,
there is another index on table "t_koltuk_islem" on column "islem_tarihi"
that can be combined on plan. Why
Craig A. James [EMAIL PROTECTED] writes:
You can also use this trick when the optimizer is asked for fastest first
result. Say you have a cursor on a column of numbers with good
distribution. If you do a bucket sort on the first two or three digits only,
you know the first page of results
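Something like the following captures the bucket-on-leading-digits trick (purely illustrative, with made-up helpers and assuming non-negative values): distribute the values by their first two digits, then fully sort only the lowest non-empty bucket to get the first page of results early.

    #include <stdlib.h>

    #define NBUCKETS 100

    typedef struct
    {
        long   *vals;
        size_t  n, cap;
    } Bucket;

    static void
    bucket_add(Bucket *b, long v)
    {
        if (b->n == b->cap)
        {
            b->cap = b->cap ? b->cap * 2 : 16;
            b->vals = realloc(b->vals, b->cap * sizeof(long));
        }
        b->vals[b->n++] = v;
    }

    static int
    cmp_long(const void *a, const void *b)
    {
        long x = *(const long *) a, y = *(const long *) b;
        return (x > y) - (x < y);
    }

    /* out[] must be zero-initialized by the caller; divisor picks which
     * leading digits form the bucket index. */
    void
    bucket_first_page(const long *v, size_t n, long divisor, Bucket out[NBUCKETS])
    {
        for (size_t i = 0; i < n; i++)
        {
            long idx = v[i] / divisor;
            if (idx >= NBUCKETS)
                idx = NBUCKETS - 1;
            bucket_add(&out[idx], v[i]);
        }
        for (int i = 0; i < NBUCKETS; i++)
            if (out[i].n)
            {
                qsort(out[i].vals, out[i].n, sizeof(long), cmp_long);
                break;  /* lowest non-empty bucket = first page of results */
            }
    }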
Adnan DURSUN [EMAIL PROTECTED] writes:
I have query for a report. Explain analyze result is below. The
execution plan tells that it would use t_koltuk_islem_pkey index on
table t_koltuk_islem to scan. However, there is another index on table
t_koltuk_islem on column islem_tarihi that
Gary Doades [EMAIL PROTECTED] writes:
I think the reason I wasn't seeing performance issues with normal sort
operations is because they use work_mem not maintenance_work_mem which was
only set to 2048 anyway. Does that sound right?
Very probable. Do you want to test the theory by jacking that
On Feb 16, 2006, at 8:32 AM, Ron wrote:
Let's pretend that we have the typical DB table where rows are
~2-4KB apiece. 1TB of storage will let us have 256M-512M rows in
such a table.
A 32b hash code can be assigned to each row value such that only
exactly equal rows will have the same
At 12:19 PM 2/16/2006, Scott Lamb wrote:
On Feb 16, 2006, at 8:32 AM, Ron wrote:
Let's pretend that we have the typical DB table where rows are
~2-4KB apiece. 1TB of storage will let us have 256M-512M rows in
such a table.
A 32b hash code can be assigned to each row value such that only
On Thu, 2006-02-16 at 12:35 +0100, Steinar H. Gunderson wrote:
glibc-2.3.5/stdlib/qsort.c:
/* Order size using quicksort. This implementation incorporates
four optimizations discussed in Sedgewick:
I can't see any references to merge sort in there at all.
stdlib/qsort.c defines
From: Tom Lane
Date: 02/16/06 19:29:21
To: Adnan DURSUN
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Why does not perform index combination
"Adnan DURSUN" [EMAIL PROTECTED] writes:
I have query for a report. Explain analyze result is below. The
execution plan tells that
On Thu, 2006-02-16 at 12:15 -0500, Tom Lane wrote:
Once or twice we've kicked around the idea of having some
datatype-specific sorting code paths alongside the general-purpose one,
but I can't honestly see this as being workable from a code maintenance
standpoint.
Hi, Mark,
Mark Lewis schrieb:
It seems that instead of maintaining a different sorting code path for
each data type, you could get away with one generic path and one
(hopefully faster) path if you allowed data types to optionally support
a 'sortKey' interface by providing a function f which
On Thu, Feb 16, 2006 at 02:17:36PM -0800, Mark Lewis wrote:
It seems that instead of maintaining a different sorting code path for
each data type, you could get away with one generic path and one
(hopefully faster) path if you allowed data types to optionally support
a 'sortKey' interface by
Markus Schaber [EMAIL PROTECTED] writes:
Hmm, to remove redundancy, I'd change the <= to a < and define:
if a==b then f(a)==f(b)
if a<b then f(a)<=f(b)
Data types which could probably provide a useful function for f would be
int2, int4, oid, and possibly int8 and text (at least for
It seems that instead of maintaining a different sorting code path for
each data type, you could get away with one generic path and one
(hopefully faster) path if you allowed data types to optionally support
a 'sortKey' interface by providing a function f which maps inputs to
32-bit int
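To make that contract concrete, here is a speculative sketch (hypothetical names, compared as unsigned keys) of what such sortKey functions might look like for int4 and for single-byte text such as SQL_ASCII, preserving a < b => f(a) <= f(b) and a == b => f(a) == f(b); equal keys still require the full comparison.

    #include <stdint.h>

    /* int4: shift into unsigned space so INT32_MIN maps to 0 */
    static uint32_t
    sortkey_int4(int32_t v)
    {
        return (uint32_t) v ^ 0x80000000u;
    }

    /* text (single-byte encodings whose byte order matches the collation):
     * pack the first four bytes big-endian, so byte-wise string order
     * matches unsigned integer order; shorter strings pad with zeros. */
    static uint32_t
    sortkey_text(const char *s)
    {
        uint32_t k = 0;

        for (int i = 0; i < 4; i++)
        {
            k <<= 8;
            if (*s)
                k |= (unsigned char) *s++;
        }
        return k;
    }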
On Thu, 2006-02-16 at 17:51 -0500, Greg Stark wrote:
Data types which could probably provide a useful function for f would be
int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).
How exactly do you imagine doing this for text?
I could see doing it for
Hi, PFC,
PFC schrieb:
By the way, I'd like to declare my zipcode columns as SQL_ASCII
while the rest of my database is in UNICODE, so they are faster to
index and sort. Come on, MySQL does it...
Another use case for parametric column definitions - charset definitions
- and the first
On Fri, Feb 17, 2006 at 12:05:23AM +0100, PFC wrote:
I would have said a 64 bit int, but it's the same idea. However it
won't work for floats, which is a pity, because floats fit in 64 bits.
Actually, you can compare IEEE floats directly as ints, as long as they're
positive. (If
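A small sketch of that bit trick; the handling of negatives is the usual extension and my addition here, not something claimed in the message above.

    #include <stdint.h>
    #include <string.h>

    /* Map an IEEE-754 float to a uint32 that sorts in value order:
     * non-negative floats already sort by their raw bits, so shift them
     * above the negatives by setting the sign bit; negatives sort in
     * reverse bit order, so flip all their bits.  NaNs excluded;
     * -0.0 and +0.0 get different keys. */
    static uint32_t
    float_sortkey(float f)
    {
        uint32_t u;

        memcpy(&u, &f, sizeof(u));   /* avoid strict-aliasing issues */
        return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
    }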
On Thu, 16 Feb 2006, Mark Lewis wrote:
On Thu, 2006-02-16 at 17:51 -0500, Greg Stark wrote:
Data types which could probably provide a useful function for f would be
int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).
How exactly do you imagine doing this for text?
I could
At 01:47 PM 2/16/2006, Ron wrote:
At 12:19 PM 2/16/2006, Scott Lamb wrote:
On Feb 16, 2006, at 8:32 AM, Ron wrote:
Let's pretend that we have the typical DB table where rows are
~2-4KB apiece. 1TB of storage will let us have 256M-512M rows in
such a table.
A 32b hash code can be assigned to