> On Aug 25, 2017, at 17:07, Tom Lane <t...@sss.pgh.pa.us> wrote:
> 
> =?utf-8?Q?Felix_Geisend=C3=B6rfer?= <fe...@felixge.de> writes:
>> I recently came across a performance difference between two machines that 
>> surprised me:
>> ...
>> As you can see, Machine A spends 5889ms on the Sort Node vs 609ms on Machine 
>> B when looking at the "Exclusive" time with explain.depesz.com [3][4]. I.e. 
>> Machine B is ~10x faster at sorting than Machine A (for this particular 
>> query).
> 
> I doubt this is a hardware issue, it's more likely that you're comparing
> apples and oranges.  The first theory that springs to mind is that the
> sort keys are strings and you're using C locale on the faster machine but
> some non-C locale on the slower.  strcoll() is pretty darn expensive
> compared to strcmp() :-(

You're right, that seems to be it.

Machine A was using strcoll() (lc_collate=en_US.UTF-8)
Machine B was using strcmp() (lc_collate=C)

After switching Machine A to use lc_collate=C, I get:

CTE Scan on zulu  (cost=40673.620..40742.300 rows=3434 width=56) (actual time=1368.610..1368.698 rows=58 loops=1)
  CTE zulu
    ->  HashAggregate  (cost=40639.280..40673.620 rows=3434 width=56) (actual time=1368.607..1368.659 rows=58 loops=1)
          Group Key: mike.two, ((mike.golf)::text)
          ->  Unique  (cost=37656.690..40038.310 rows=34341 width=104) (actual time=958.493..1168.128 rows=298104 loops=1)
                ->  Sort  (cost=37656.690..38450.560 rows=317549 width=104) (actual time=958.491..1055.635 rows=316982 loops=1)
                      Sort Key: mike.two, ((mike.lima)::text) COLLATE "papa", mike.echo DESC, mike.quebec
                      Sort Method: quicksort  Memory: 56834kB
                      ->  Seq Scan on mike  (cost=0.000..8638.080 rows=317549 width=104) (actual time=0.043..172.496 rows=316982 loops=1)
                            Filter: (golf five NOT NULL)
                            Rows Removed by Filter: 26426

So Machine A now needs 883ms [1] for the sort vs 609ms [2] for Machine B. That
leaves Machine B only ~1.4x faster, which seems reasonable :).
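As an aside, for anyone finding this thread later: changing lc_collate cluster-wide isn't the only option. PostgreSQL also accepts a collation on an individual expression or column, so a single hot sort can use byte-wise "C" ordering without affecting the rest of the database (the table and column names below are hypothetical; note that "C" ordering is byte order, not linguistic order):

```sql
-- Force byte-wise ("C") collation for one sort only:
SELECT name
FROM users
ORDER BY name COLLATE "C";

-- Or bake it into the column definition:
CREATE TABLE users (
    name text COLLATE "C"
);
```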

Sorry for the delayed response; I didn't have access to Machine B to confirm 
this right away.

>                       regards, tom lane

This is my first post to a PostgreSQL mailing list, but I've been lurking
for a while. Thank you for taking the time to reply to e-mails such as
mine, and for all the work you've put into PostgreSQL over the years.
I'm deeply grateful.

> On Aug 25, 2017, at 17:43, Peter Geoghegan <p...@bowt.ie> wrote:
> 
> On Fri, Aug 25, 2017 at 8:07 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I doubt this is a hardware issue, it's more likely that you're comparing
>> apples and oranges.  The first theory that springs to mind is that the
>> sort keys are strings and you're using C locale on the faster machine but
>> some non-C locale on the slower.  strcoll() is pretty darn expensive
>> compared to strcmp() :-(
> 
> strcoll() is very noticeably slower on macOS, too.
> 

Thanks. This immediately explains what I saw when testing this query on a Linux 
machine that was also using lc_collate=en_US.UTF-8 but was not slowed down by 
it nearly as much as the macOS machine.

[1] https://explain.depesz.com/s/LOqa
[2] https://explain.depesz.com/s/zVe

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance