On 19 September 2014 13:51, Björn Wittich <bjoern_witt...@gmx.de> wrote:

> Hi mailing list,
>
> I am relatively new to postgres. I have a table with 500 columns and about
> 40 million rows. I call this a cache table, where one column is a unique key
> (indexed) and the 499 columns (type integer) are some values belonging to
> this key.
>
> Now I have a second (temporary) table (only 2 columns, one is the key of my
> cache table) and I want to do an inner join between my temporary table and
> the large cache table and export all matching rows. I found out that the
> performance increases when I split the join into lots of small parts.
> But it seems that the database needs a lot of disk I/O to gather all 499
> data columns.
> Is there a possibility to tell the database that all these columns are
> always treated as tuples and I always want to get the whole row? Perhaps
> the disk organization could then be optimized?
>
>
Hi,
do you have indexes on the columns you use for joins?
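For example (table and column names here are hypothetical, chosen to match the setup described above), indexing the temporary table's key column and refreshing its statistics gives the planner the option of an index-based join, and EXPLAIN ANALYZE shows which plan it actually picks:

```sql
-- Hypothetical schema: cache(key, val1 .. val499) with a unique index on key,
-- and a temp table temp_keys(key, ...) holding the keys to look up.
CREATE INDEX ON temp_keys (key);
ANALYZE temp_keys;  -- temp tables are not auto-analyzed; refresh stats

EXPLAIN ANALYZE
SELECT c.*
FROM temp_keys t
JOIN cache c ON c.key = t.key;
```

If the plan shows a sequential scan over the large cache table where an index scan would be cheaper, that points at missing indexes or stale statistics rather than at row layout.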

Szymon
