Dear Tom,
This is very helpful, thank you.
You make a very useful point that the limitation is basically on PL/pgSQL
and other PL languages. And someone on SO already pointed out that an
inline SQL function with an enormously sized TABLE return value also doesn't
have any buffering problems. So that
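A minimal sketch of the kind of inline SQL function being contrasted with PL/pgSQL here; all_rows and big_table are made-up names. A LANGUAGE sql function returning a set can be inlined and its rows streamed to the caller, whereas a PL/pgSQL function materializes its whole result set before returning anything:

CREATE TABLE big_table (id bigint, payload text);

CREATE OR REPLACE FUNCTION all_rows()
RETURNS SETOF big_table
LANGUAGE sql STABLE
AS $$
    SELECT * FROM big_table;
$$;

-- The rows can then be consumed incrementally, e.g. through a cursor:
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM all_rows();
FETCH 100 FROM c;
CLOSE c;
COMMIT;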
Thanks, you guys are correct... the size of the table caused the optimizer
to do a seq scan instead of using the index. I tried it on a 24 MB and 1
GB table and the expected index was used.
On Fri, Feb 17, 2017 at 7:04 PM, Tom Lane wrote:
> "David G. Johnston" writes:
> > On Fri, Feb 17, 2017
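For anyone re-checking this, a quick illustrative way to confirm the table size and re-test the plan (tab and ID are the names used in the thread; on a very small table the planner can legitimately prefer a sequential scan even though the index exists):

SELECT pg_size_pretty(pg_total_relation_size('tab'));  -- e.g. 24 MB vs 1 GB

ANALYZE tab;  -- refresh planner statistics

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM tab WHERE ID = '01625cfa-2bf8-45cf';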
"David G. Johnston" writes:
> On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra
> wrote:
>> That may seem a bit strange, but I'd bet it finds the short value in some
>> statistic (MCV, histogram) and so can provide a very accurate estimate.
> I'm not seeing how any of the statistic columns would capt
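One way to see what statistics the planner actually has for the column (table and column names as used elsewhere in the thread):

SELECT n_distinct, most_common_vals, most_common_freqs, histogram_bounds
FROM pg_stats
WHERE tablename = 'tab' AND attname = 'id';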
On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra
wrote:
> That may seem a bit strange, but I'd bet it finds the short value in some
> statistic (MCV, histogram) and so can provide a very accurate estimate.
-> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1 width=0) (actual time=0
Yes, both queries are the same, I just shortened the parameter value to see
what would happen. The database that I inherited has a column that
stores GUID/UUIDs in a varchar(255) and a select on that table on that
column is doing a FULL TABLE SCAN (seq scan). All the values in the column
are 3
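A quick illustrative check of what value lengths actually occur in the column (assuming the table and column names from the thread):

SELECT length(ID) AS value_length, count(*)
FROM tab
GROUP BY value_length
ORDER BY value_length;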
On 02/17/2017 11:42 PM, David G. Johnston wrote:
On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <hustler...@gmail.com> wrote:
my_db=# create index tab_idx1 on tab(ID);
CREATE INDEX
my_db=# explain (analyze, buffers) select count(*) from tab where ID
= '01625cfa-2bf8-45cf' ;
Hustler DBA wrote:
> I am seeing this strange behavior; I don't know if this is by design in
> Postgres.
>
> I have an index on a column which is defined as "character varying(255)".
> When the value I am searching for is of a certain length, the optimizer
> uses the index but when the value is long, the optimizer doesn't use the index.
Hi,
On 02/17/2017 11:19 PM, Hustler DBA wrote:
I am seeing this strange behavior; I don't know if this is by design in
Postgres.
I have an index on a column which is defined as "character
varying(255)". When the value I am searching for is of a certain length,
the optimizer uses the index but when the value is long, the optimizer doesn't use the index.
On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA wrote:
>
> my_db=# create index tab_idx1 on tab(ID);
>
> CREATE INDEX
> my_db=# explain (analyze, buffers) select count(*) from tab where ID = '01625cfa-2bf8-45cf' ;
> QUERY PLAN
>
>
I am seeing this strange behavior; I don't know if this is by design in
Postgres.
I have an index on a column which is defined as "character varying(255)".
When the value I am searching for is of a certain length, the optimizer
uses the index but when the value is long, the optimizer doesn't use the index.
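A self-contained sketch that reproduces the setup described above; the table layout follows the thread, but the generated values are made up (md5() gives 32-character hex strings rather than real UUID text, which is close enough to exercise the same plans):

CREATE TABLE tab (ID character varying(255));
CREATE INDEX tab_idx1 ON tab (ID);

INSERT INTO tab
SELECT md5(random()::text)
FROM generate_series(1, 1000000);

ANALYZE tab;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM tab WHERE ID = '01625cfa-2bf8-45cf';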
Hi,
sorry for the late reply, I have checked but from what I see this is on the
application side.
I'm not able to log in to Postgres from the command line.
Does anyone have any other ideas on this problem?
Br,
vuko
- Original Message -
From: "Vitalii Tymchyshyn"
To: "Vucomir Ia
Mike Beaton writes:
> [ generally accurate information ]
> **WARNINGS**
> It would *never* make sense to do `FETCH ALL FROM cursor` for
> astronomically large data, if your client side code (including your data
> access layer) has any bottleneck at all, which means that all the data
> from a c
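An illustrative alternative to FETCH ALL, along the lines of this warning: drain the same cursor in bounded batches so the client never holds the whole result set at once (big_cur, big_table and the batch size are made up):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM big_cur;  -- repeat until fewer than 10000 rows come back
FETCH 10000 FROM big_cur;
CLOSE big_cur;
COMMIT;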
I asked the same question at the same time on Stack Overflow (sincere
apologies if this is a breach of etiquette - I really needed an answer, and
I thought the two communities might not overlap).
Stack Overflow now has an answer, by me:
http://stackoverflow.com/q/42292341/#42297234 - which is based
**Short version of my question:**
If I hold a cursor reference to an astronomically huge result set in my
client code, would it be ridiculous (i.e. completely defeat the point of
cursors) to issue "FETCH ALL FROM cursorname" as my next command? Or would
this slowly stream the data back to me as I
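For context, a sketch of the setup the question describes: a server-side function hands back a cursor reference and the client then issues FETCH against it (get_big_resultset, big_cur and big_table are made-up names):

CREATE OR REPLACE FUNCTION get_big_resultset()
RETURNS refcursor
LANGUAGE plpgsql
AS $$
DECLARE
    c refcursor := 'big_cur';
BEGIN
    OPEN c FOR SELECT * FROM big_table;
    RETURN c;
END;
$$;

BEGIN;
SELECT get_big_resultset();  -- returns the cursor name 'big_cur'
FETCH ALL FROM big_cur;      -- the command the question asks about
COMMIT;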