Humair Mohammed <huma...@hotmail.com> writes:
> Yes strange indeed, I did rerun ANALYZE and VACUUM. Took 70 seconds to rerun
> the query. Results from EXPLAIN ANALYZE below:
> "Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual 
> time=43200.223..49502.874 rows=3163 loops=1)""  Hash Cond: (((pb.id)::text = 
> (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))""  Join 
> Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> 
> (COALESCE(pg.response, 'MISSING'::character varying))::text)""  ->  Seq Scan 
> on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134) (actual 
> time=0.009..48.200 rows=93496 loops=1)""  ->  Hash  (cost=7537.12..7537.12 
> rows=251212 width=134) (actual time=42919.453..42919.453 rows=251212 
> loops=1)""        Buckets: 1024  Batches: 64  Memory Usage: 650kB""        -> 
>  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134) (actual 
> time=0.119..173.019 rows=251212 loops=1)""Total runtime: 49503.450 ms"

I have no idea how much memory SQL Server thinks it can use, but
Postgres is limiting itself to work_mem which you've apparently left at
the default 1MB.  You might get a fairer comparison by bumping that up
some --- try 32MB or so.  You want it high enough so that the Hash
output doesn't say there are multiple batches.
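Something along these lines (a sketch only; the join query below is reconstructed
from the quoted plan, so the exact column list and names are assumptions):

    -- Raise work_mem for this session only; 32MB is the value suggested above,
    -- adjust upward if the Hash node still reports multiple batches.
    SET work_mem = '32MB';

    -- Re-run the comparison query under the new setting.
    EXPLAIN ANALYZE
    SELECT pb.id, pb.question, pb.response, pg.response
    FROM pivotbad pb
    JOIN pivotgood pg
      ON pb.id = pg.id
     AND pb.question = pg.question
    WHERE COALESCE(pb.response, 'MISSING') <> COALESCE(pg.response, 'MISSING');

    -- In the new plan, look for "Batches: 1" on the Hash line; that means the
    -- whole inner table fit in memory and no spill to disk was needed.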

                        regards, tom lane
