Hello, I have an interesting problem relating to SQL and performance, and I am looking for ways to get better performance out of Postgres.
Currently I have a view created from two tables, and all of the selects are done against the view. Normally that does not take much time, but when my web app applies filters such as symbol ~ '^.*$', side, date, etc., the select from the view takes a long time (about 7000 ms according to EXPLAIN ANALYZE). Both the primary and secondary tables have about 400,000 rows. I noticed that the plan does a sequential scan on the primary table, which is joined to the secondary table in the view query. I just read that when I use filters like these, Postgres will do a seq scan on the table.

My question is: how can I fix this? Would it be better to create a temporary table for just the daily data and keep the view for the more extended queries? Any other design ideas?

Thanks,
Radhika

--
It is all a matter of perspective. You choose your view by choosing where to stand.  -- Larry Wall
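P.S. To illustrate the shape of the setup, here is a minimal sketch; the table, column, and view names and the literal filter values are made-up placeholders, not my real schema:

    -- view joining the primary table to the secondary table
    CREATE VIEW trade_view AS
    SELECT p.symbol, p.side, p.trade_date, s.extra_info
    FROM   primary_table   p
    JOIN   secondary_table s ON s.primary_id = p.id;

    -- the kind of filtered select the web app issues; EXPLAIN ANALYZE
    -- on a query like this is where I see the seq scan on the primary
    -- table and the ~7000 ms runtime
    EXPLAIN ANALYZE
    SELECT *
    FROM   trade_view
    WHERE  symbol ~ '^.*$'
      AND  side = 'B'
      AND  trade_date = CURRENT_DATE;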