On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:
> I am seeing a performance problem with postgresql v 11.7 on views, and I am
> wondering if anyone can tell me why or has any suggestions.
> 
> A table is created as:
> 
> CREATE TABLE "FBNK_CUSTOMER" (RECID VARCHAR(255) NOT NULL PRIMARY KEY, 
> XMLRECORD VARCHAR)
> 
> And contains only 180 rows.
> 
> Doing an explain plan on the view created over this gives:
> 
> EXPLAIN ANALYZE
> select RECID from "V_FBNK_CUSTOMER"
> 
> 
> Subquery Scan on "V_FBNK_CUSTOMER"  (cost=0.00..19014.60 rows=180 width=7) 
> (actual time=459.601..78642.189 rows=180 loops=1)
>   ->  Seq Scan on "FBNK_CUSTOMER" a  (cost=0.00..19012.80 rows=180 
> width=14575) (actual time=459.600..78641.950 rows=180 loops=1)
> 
> Yet an Explain plan on the underlying table (on select RECID from
> "FBNK_CUSTOMER") gives:
> 
> Seq Scan on "FBNK_CUSTOMER"  (cost=0.00..22.80 rows=180 width=7) (actual 
> time=0.004..0.272 rows=180 loops=1)

It still did a seq scan on the table, so I'm not sure what this has to do with
index scans?

> The following query takes an extremely long time for only 180 rows, which
> means we would have to index anything appearing in the WHERE clause of every
> table in order to use views, because the views seem not to consider the
> SELECT clause.  Why is that, and does anyone know a way around this?

Is there a reason why you don't store the extracted value in its own column?
You could keep it up to date using an insert/update trigger on the xmlrecord
column.
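
Something like this, as a rough sketch -- the column name, trigger name, and
the extraction expression below are made up, since I don't know what your view
actually pulls out of xmlrecord:

ALTER TABLE "FBNK_CUSTOMER" ADD COLUMN extracted_value text;

CREATE OR REPLACE FUNCTION fbnk_customer_extract() RETURNS trigger AS $$
BEGIN
    -- Placeholder: use whatever expression the view currently applies
    -- to xmlrecord.
    NEW.extracted_value := substring(NEW.xmlrecord from '<SomeTag>([^<]*)</SomeTag>');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fbnk_customer_extract_trg
    BEFORE INSERT OR UPDATE OF xmlrecord ON "FBNK_CUSTOMER"
    FOR EACH ROW EXECUTE FUNCTION fbnk_customer_extract();

-- Backfill the 180 existing rows (listing xmlrecord in SET fires the trigger),
-- then index the new column if it shows up in WHERE clauses.
UPDATE "FBNK_CUSTOMER" SET xmlrecord = xmlrecord;
CREATE INDEX ON "FBNK_CUSTOMER" (extracted_value);

Then a WHERE clause on extracted_value can use the index instead of re-parsing
xmlrecord for every row, and the view could select the plain column rather than
computing the value on the fly.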

-- 
Justin

