Hi,

I have implemented an FDW module designed to utilize GPU devices to execute
the qualifiers of sequential scans on foreign tables managed by this module.

It is named PG-Strom, and the following wikipage gives a brief
overview of the module:
    http://wiki.postgresql.org/wiki/PGStrom
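
Roughly speaking, it plugs into the regular FDW interface. The following is
only a simplified sketch (the extension and server names are simplified for
illustration); the wikipage documents the exact steps.

    CREATE EXTENSION pg_strom;   -- extension name shown for illustration
    CREATE SERVER gpu_server FOREIGN DATA WRAPPER pg_strom;
    CREATE FOREIGN TABLE ftbl (
        id  int,
        x   float8,
        y   float8
    ) SERVER gpu_server;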

In our measurements, it achieves roughly a 10x speedup on sequential scans
with complex qualifiers, although of course this depends heavily on the
type of workload.

Example)
A query counts the number of records with (x,y) located within a particular range.
A regular table 'rtbl' and a foreign table 'ftbl' contain the same
contents: 10 million records each.
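
(The tables were populated roughly as follows; the exact script is in the
"How to use" section of the wikipage, and the value ranges here are only
illustrative.)

    CREATE TABLE rtbl (id int, x float8, y float8);
    INSERT INTO rtbl
        SELECT i, random() * 100.0, random() * 100.0
          FROM generate_series(1, 10000000) AS i;
    -- ftbl is a PG-Strom foreign table loaded with the same rows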

postgres=# SELECT count(*) FROM rtbl WHERE sqrt((x-25.6)^2 + (y-12.8)^2) < 51.2;
 count
-------
 43134
(1 row)

Time: 10537.069 ms

postgres=# SELECT count(*) FROM ftbl WHERE sqrt((x-25.6)^2 + (y-12.8)^2) < 51.2;
 count
-------
 43134
(1 row)

Time: 744.252 ms

(*) See the "How to use" section of the wikipage to reproduce my test case.

It seems to me a quite good result. However, I am not sure whether the
sequential scan on the regular table was tuned appropriately.
Could you give me some hints on tuning sequential scans on large tables?
All I did in this test case was increase shared_buffers to 1024MB, which is
enough to hold the whole of the example tables in memory.
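
For reference, one way to confirm that the whole table actually fits in
(and is resident in) shared_buffers is the following, assuming
contrib/pg_buffercache is installed:

    -- total on-disk size of the regular table
    SELECT pg_size_pretty(pg_total_relation_size('rtbl'));

    -- number of its pages currently resident in shared_buffers
    SELECT count(*)
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
     WHERE c.relname = 'rtbl';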

Thanks,
-- 
KaiGai Kohei <kai...@kaigai.gr.jp>
