2014-09-19 13:51 GMT+02:00 Björn Wittich <bjoern_witt...@gmx.de>:
> Hi mailing list,
>
> I am relatively new to Postgres. I have a table with 500 columns and about
> 40 million rows. I call this the cache table; one column is a unique key
> (indexed) and the other 499 columns (type integer) are values belonging to
> this key.
>
> Now I have a second (temporary) table (only 2 columns, one of which is the
> key of my cache table), and I want to do an inner join between my temporary
> table and the large cache table and export all matching rows. I found out
> that performance increases when I split the join into lots of small parts.
> But it seems that the database needs a lot of disk I/O to gather all 499
> data columns.
> Is there a possibility to tell the database that all these columns are
> always treated as tuples and that I always want the whole row? Perhaps the
> disk organization could then be optimized?
>
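For reference, the query you describe would look roughly like the sketch below.
The table and column names (cache_table, temp_keys, id) are guesses, since the
post does not include the actual schema:

    -- Hypothetical names; cache_table: ~40 million rows, one indexed unique
    -- key plus 499 integer columns; temp_keys: temporary table with the keys
    -- to look up.
    SELECT c.*
    FROM   temp_keys t
    JOIN   cache_table c USING (id);

    -- The "lots of small parts" variant you mention: run the join per key
    -- range, so each pass touches a smaller slice of cache_table.
    SELECT c.*
    FROM   temp_keys t
    JOIN   cache_table c USING (id)
    WHERE  t.id BETWEEN 1 AND 100000;
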
Sorry for the off-topic suggestion, but an array database may be a better fit
for your purpose:

http://rasdaman.com/
http://www.scidb.org/

>
> Thank you for your feedback and ideas
> Best
> Neo