On Wed, Feb 4, 2026 at 2:32 AM Peter J. Holzer <[email protected]> wrote:

> On 2026-02-04 00:20:20 +0530, yudhi s wrote:
> >
> >
> > On Tue, 3 Feb, 2026, 9:37 pm Adrian Klaver, <[email protected]>
> wrote:
> >
> >     On 2/3/26 07:59, Ron Johnson wrote:
> >
> >     >
> >     >
> >     > There is no VARCHAR or CHAR; there is only TEXT.  Thus, this is
> 100%
> >     > expected and normal.
> >
> >     What Ron is saying is that there are varchar and char types, but they
> >     boil down to text per:
> >
> >     https://www.postgresql.org/docs/current/datatype-character.html
> >
> >     "text is PostgreSQL's native string data type, in that most built-in
> >     functions operating on strings are declared to take or return text
> not
> >     character varying. For many purposes, character varying acts as
> though
> >     it were a domain over text."
> >
> >     As to performance see:
> >
> >     "
> >     Tip
> >
> >     There is no performance difference among these three types, apart
> from
> >     increased storage space when using the blank-padded type, and a few
> >     extra CPU cycles to check the length when storing into a
> >     length-constrained column. While character(n) has performance
> advantages
> >     in some other database systems, there is no such advantage in
> >     PostgreSQL; in fact character(n) is usually the slowest of the three
> >     because of its additional storage costs. In most situations text or
> >     character varying should be used instead.
> >     "
> >
> >
> > Thank you. I was looking at those casts (::text) in the explain plan
> > output in a similar way (as was happening for the int8-to-numeric join
> > scenario) and was wondering whether it's spending some CPU cycles doing
> > these ::text casts behind the scenes for that column, and whether there
> > is some way (a data type change) to stop them. But from your explanation,
> > it looks like that representation in the query plan is normal and has no
> > performance overhead as such. Thanks again.
> >
> > In regards to the below "nested loop" having a response time of ~100ms:
> > I understand that the casting function is now removed after changing the
> > data type of the columns to match on both sides of the join.
> >
> > So, is it expected that a nested loop over 500k rows takes 100ms?
> >
> > ->  Nested Loop  (cost=262.77..1342550.91 rows=579149 width=20) (actual
> time=
> > 6.406..107.946 rows=1049 loops=1)
> >               Join Filter: (df.ent_id = m.ent_id)
> >               Rows Removed by Join Filter: 514648
> >               Buffers: shared hit=1972
> >
>
> Take a closer look at what that nested loop does:
>
>         ->  Nested Loop  (cost=266.53..1548099.38 rows=411215 width=20)
> (actual time=6.009..147.695 rows=1049 loops=1)
>               Join Filter: ((df.ent_id)::numeric = m.ent_id)
>               Rows Removed by Join Filter: 513436
>               Buffers: shared hit=1939
>               ->  Index Scan Backward using txn_tbl_due_dt_idx on txn_tbl
> df  (cost=0.43..115471.09 rows=1417983 width=20) (actual time=0.047..20.155
> rows=43626 loops=1)
>                     Filter: ((txn_tbl_type_nm)::text = ANY
> ('{.......}'::text[]))
>                     Rows Removed by Filter: 17
>                     Buffers: shared hit=1816
>               ->  Materialize  (cost=266.10..328.09 rows=58 width=16)
> (actual time=0.000..0.001 rows=12 loops=43626)
>                   [lots of stuff]
>
> It scans backwards through txn_tbl_due_dt_idx which returns 43626 rows
> and takes 20 milliseconds.
>
> For each of these rows it performs the "Materialize" node, which in turn
> does lots of stuff, but whatever it is, it's fast and probably not worth
> optimizing. The problem is that it's done 43626 times, which takes
> another 120ms.
>
> So the most promising way to proceed is to try to reduce those 43626
> rows. Since the query is already scanning txn_tbl_due_dt_idx from newest
> to oldest, is there a cutoff date where it is safe to ignore everything
> older? If you can get it to scan only 2000 rows that would be 20 times
> faster ...
>
> (I'm a bit confused by your naming. I'm guessing that the "Index Scan
> Backward using txn_tbl_due_dt_idx" is there because of the "order by
> df.tran_date desc", but the name of the index and the column don't
> match.)
>
>
Got it. Thank you.

Yes. As folks here suggested, I created the new index on "tran_date", which
is used in the "order by ... desc" to show only the newest 1000 rows with a
"limit" clause. This backward index scan is being used and has helped to a
large extent in dropping the response time compared with the earlier table
sequential scan.
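For context, the index and query shape described above look roughly like
this (the DDL and the elided SELECT list are assumptions reconstructed from
the plan output, not the actual schema):

```sql
-- Sketch only: names taken from the EXPLAIN output; the DDL is assumed.
CREATE INDEX txn_tbl_due_dt_idx ON txn_tbl (tran_date);

-- The ORDER BY ... DESC plus LIMIT is what lets the planner use a
-- backward index scan instead of a sequential scan and sort.
SELECT ...
FROM txn_tbl df
JOIN ... m ON df.ent_id = m.ent_id
ORDER BY df.tran_date DESC
LIMIT 1000;
```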

Now, in this query, as you said, we need to see if we can further put a
filter on tran_date so as to minimize the rows read from txn_tbl, which
would minimize the number of loops over the Materialize node. I need to
check whether that is possible without impacting business functionality.
However, is there any way this Materialize operation can happen just once,
i.e. in a "hash join" fashion (where the inner side is scanned only once),
rather than in the nested-loop fashion that is currently happening ~43K
times?
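One way to see whether a hash join would actually help is to disable nested
loops for a single session and re-run EXPLAIN. This is a diagnostic sketch
only (a session-level setting for testing, not something to leave on in
production), and the cutoff interval below is a made-up placeholder:

```sql
-- Diagnostic: steer the planner away from nested loops for this session.
SET enable_nestloop = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;          -- the original query
RESET enable_nestloop;

-- Possible cutoff filter to shrink the index scan, if the business rules
-- allow ignoring old rows ('30 days' is only a placeholder):
-- WHERE df.tran_date >= now() - interval '30 days'
```

Note that with ORDER BY ... DESC LIMIT 1000, the planner may still prefer
the ordered index scan even with nested loops disabled, so the EXPLAIN
output is what tells you whether a hash join is really cheaper here.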

Another question I had in mind: since there is the filter "Filter:
((txn_tbl_type_nm)::text = ANY ('{.......}'::text[]))", would including
this column in the index, i.e. making it a composite index on (tran_date,
txn_tbl_type_nm), be a good idea? Mainly in scenarios where
txn_tbl_type_nm filters out many rows, i.e. ~100-500K+ rows?
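To make the composite-index question concrete, a sketch of the DDL being
asked about might be (index name hypothetical, column names taken from the
plan output):

```sql
-- Hypothetical composite index: tran_date first so that
-- ORDER BY tran_date DESC can still be served by a backward index scan,
-- with txn_tbl_type_nm available in the index so entries failing the
-- type-name filter can be skipped without visiting the heap.
CREATE INDEX txn_tbl_due_dt_type_idx
    ON txn_tbl (tran_date DESC, txn_tbl_type_nm);
```

Whether this pays off depends on selectivity: if the type-name filter
removes only a handful of rows (as "Rows Removed by Filter: 17" above
suggests for this run), the extra column mostly adds index size; it helps
when the filter discards a large share of the scanned entries.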

Regards
Yudhi
