[SQL] Unable to identify a right operator '>=' for type 'bpchar'

2000-05-24 Thread Michael Ma

Hi,

While running the following SQL statement in postgresql-7.0-1 via
JDBC,

   select a.cus_code, a.dlv_code, b.cus_abbr, a.dlv_abbr,
  a.address, a.tel, a.fax, a.contact
 from dlv_point a, customer b
where ((a.cus_code >= ? and a.cus_code <= ?)
  or (b.cus_abbr >= ? and b.cus_abbr <= ?))
  and trim(b.cus_code) = trim(a.cus_code)
order by 1 ;

I got the following error message:

   Unable to identify a right operator '>=' for type 'bpchar'
   You may need to add parentheses or an explicit cast.

I have never got this kind of message from Informix or Oracle.

The data types of the fields in the above statement are all char(n).

Can anyone help?

The system I am using is Red Hat Linux 6.2 with kernel 2.2.14-6.1.1 on
an i586.

Thanks in advance.

Michael



Re: [SQL] Unable to identify a right operator '>=' for type 'bpchar'

2000-05-24 Thread Tom Lane

Michael Ma <[EMAIL PROTECTED]> writes:
> While running the following SQL statement in postgresql-7.0-1 via
> JDBC,

>select a.cus_code, a.dlv_code, b.cus_abbr, a.dlv_abbr,
>   a.address, a.tel, a.fax, a.contact
>  from dlv_point a, customer b
> where ((a.cus_code >= ? and a.cus_code <= ?)
>   or (b.cus_abbr >= ? and b.cus_abbr <= ?))
>   and trim(b.cus_code) = trim(a.cus_code)
> order by 1 ;

> I got the following error message:

>Unable to identify a right operator '>=' for type 'bpchar'
>You may need to add parentheses or an explicit cast.

It looks to me like the parser is thinking that "a.cus_code >= ?"
must represent application of a suffix operator '>=' followed by
application of another suffix operator '?'.  Which is indeed about
the only way it could make any sense of the expression.

I assume you were expecting something to get substituted for the '?'
marks before the query is submitted, but it evidently didn't happen ...
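For what it's worth, the statement itself is fine once real values stand in
for the markers.  A sketch with made-up literals (the 'C100'/'C199' and
abbreviation bounds are invented, just to show what the backend should have
received after the driver bound the parameters):

```sql
-- hypothetical literals in place of the ? parameter markers
select a.cus_code, a.dlv_code, b.cus_abbr, a.dlv_abbr,
       a.address, a.tel, a.fax, a.contact
  from dlv_point a, customer b
 where ((a.cus_code >= 'C100' and a.cus_code <= 'C199')
     or (b.cus_abbr >= 'AAA' and b.cus_abbr <= 'ZZZ'))
   and trim(b.cus_code) = trim(a.cus_code)
 order by 1;
```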

regards, tom lane



[SQL] Why is PostgreSQL 7.0 SQL semantics different from Oracle's?

2000-05-24 Thread Thomas Holmgren

Hello everyone! :)

This little problem is bothering me a lot! It seems that PostgreSQL 7.0
uses different semantics than Oracle when evaluating SQL?!

I have two relations, A and B, both containing the attributes "number" 
(int) and "amount" (int). There's no primary key, and the two relations
can contain multiple identical tuples.

I would like to query for a table containing the total amount for each
different number in B, subtracted from the total amount for each
different number in A. In other words, sum A by grouping on "number", sum B
by grouping on "number", and finally calculate the difference between the
sums for each "number".

I have defined two views, viewA and viewB. They are defined as follows:

CREATE VIEW viewA AS SELECT number, sum(amount) AS amount
FROM A GROUP BY number;

CREATE VIEW viewB AS SELECT number, sum(amount) AS amount
FROM B GROUP BY number;

This query then gives me the desired result (when I'm using Oracle):

SELECT viewA.number, viewA.amount - viewB.amount AS difference
FROM viewA, viewB
WHERE viewA.number = viewB.number

BUT WHEN I'm doing the EXACT SAME THING in PostgreSQL 7.0 I get a
different result! It seems that Postgres executes the natural join in the
query BEFORE performing the sum() in the definition of the views thus
giving me a wrong result.

How can I fix that??
How come PostgreSQL uses different semantics when evaluating SQL
expressions than other DBMSs?

Thank you! :))



Mvh.
Thomas Holmgren
Institut for Datalogi
Aalborg Universitet




Re: [SQL] Unable to identify a right operator '>=' for type 'bpchar' (solved)

2000-05-24 Thread Michael Ma

It is solved and closed now.

Thanks



[SQL] Re: Automatic index numbers

2000-05-24 Thread Kees Kuip

Daniel Mendyke wrote:
> 
> How can I automatically create a unique index
> number when I add new data to a table?
> 

Try 

CREATE TABLE test
(
   id serial
)
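A short usage sketch extending that example (the name column and values are
invented; a serial column is backed by an implicit sequence, here
test_id_seq):

```sql
CREATE TABLE test
(
   id   serial,
   name text
);

-- id is filled in automatically from the implicit sequence
INSERT INTO test (name) VALUES ('first');
INSERT INTO test (name) VALUES ('second');

-- last id assigned in this session
SELECT currval('test_id_seq');
```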



[SQL]

2000-05-24 Thread gomathi raju

Hi friends,

I want to get the system timestamp from the PostgreSQL database.
But I don't have a DUAL table to select it from.
Can someone tell me from which (system) table I can get it?
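In case it helps: PostgreSQL doesn't need a DUAL table, since a SELECT with
no FROM clause is allowed.  A minimal sketch:

```sql
SELECT now();              -- current date and time
SELECT current_timestamp;  -- SQL-standard spelling
```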

Regards,
gomathi





[SQL] possible bug with group by?

2000-05-24 Thread Joseph Shraibman

Is this a bug or am I just misunderstanding something?

playpen=> create table tablea ( a int,b int , c int );
CREATE
playpen=> insert into tablea(a, b) values (1 ,2);
INSERT 28299 1
playpen=> insert into tablea(a, b, c) values (2 ,3, 4);
INSERT 28300 1
playpen=> select a, b, case when c is null then 'not set' else 'set' end
as z from tablea;
a|b|z
-+-+---
1|2|not set
2|3|set
(2 rows)


playpen=> select a, b, case when c is null then 'not set' else 'set' end
as z from tablea group by a, b, z;
ERROR:  Unable to identify an operator '<' for types 'unknown' and
'unknown'
You will have to retype this query using an explicit cast
playpen=>




[SQL] Clarified Question

2000-05-24 Thread Kyle Bateman



 
How can I create a function that will take in two variables and return
an integer, when one of the variables is the table name?!
I have tried:

create function tst_func(text, varchar(16))
as
'BEGIN
    result = select max(history_id)+1 from $1
        where client_id = $2;
    return result;
END;'

This kind of thing does work in pltcl but AFAIK you can't specify a table
name with a parameter in plpgsql.  This may give you the idea:

create function myfunc(text,text) returns text as '
    spi_exec "select f1 as res from $1 where f2 = \'$2\'"
    return $res
' language 'pltcl';




Re: [SQL] Why is PostgreSQL 7.0 SQL semantics different from Oracle's?

2000-05-24 Thread Jan Wieck

Thomas Holmgren wrote:
>
> Hello everyone! :)
>
> This little problem is bothering me a lot! It seems that PostgreSQL 7.0
> uses different semantics than Oracle when evaluating SQL?!

Not that much, but ...

> [...]
>
> I have defined two views, viewA and viewB. They are defined as follow:
>
> CREATE VIEW viewA AS SELECT number, sum(amount) AS amount
> FROM A GROUP BY number;
>
> CREATE VIEW viewB AS SELECT number, sum(amount) AS amount
> FROM B GROUP BY number;

here the problems start.  PostgreSQL has (since epoch) problems with
aggregates, GROUP BY clauses and some other things when used in views.
We know exactly what causes these problems, but fixing them requires
some huge changes across the entire backend.  This work is scheduled
for the 7.2 release.

> BUT WHEN I'm doing the EXACT SAME THING in PostgreSQL 7.0 I get a
> different result! It seems that Postgres executes the natural join in the
> query BEFORE performing the sum() in the definition of the views thus
> giving me a wrong result.

Close: after applying the rewrite rules for the views, the entire thing
becomes one join, but with only one (and thus wrong) grouping step at
the top level.  The groupings must be done at deeper levels, per view,
but there's no way to express that in the querytree coming from the
rewriter.

> How can I fix that??
> How come PostgreSQL uses different semantics when evaluating SQL
> expressions than other DBMSs?

You can help us make these huge changes over the next couple of months.
Even if you cannot help with the coding, you might exercise what we do
with all those complicated schemas.

Stay tuned.
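In the meantime, one possible workaround (a sketch, not from the original
thread: materialize each aggregate into a temporary table, so the grouping
happens before the join):

```sql
SELECT number, sum(amount) AS amount INTO TEMP tmpa FROM A GROUP BY number;
SELECT number, sum(amount) AS amount INTO TEMP tmpb FROM B GROUP BY number;

SELECT tmpa.number, tmpa.amount - tmpb.amount AS difference
  FROM tmpa, tmpb
 WHERE tmpa.number = tmpb.number;
```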


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





Re: [SQL] possible bug with group by?

2000-05-24 Thread Julie Hunt



Joseph Shraibman wrote:

>
>
> playpen=> select a, b, case when c is null then 'not set' else 'set' end
> as z from tablea group by a, b, z;
> ERROR:  Unable to identify an operator '<' for types 'unknown' and
> 'unknown'
> You will have to retype this query using an explicit cast
> playpen=>

Hi,

should your group by be a, b, c not a, b, z?

Regards
Julie




Re: [SQL] possible bug with group by?

2000-05-24 Thread Ross J. Reedstrom

On Wed, May 24, 2000 at 06:30:49PM -0400, Joseph Shraibman wrote:
> Is this a bug or am I just misunderstanding something?
> 

Not a bug; pgsql is just less willing to cast things willy-nilly
in 7.0 than it was in 6.x.  In this case, the system doesn't know what
'not set' and 'set' are supposed to be, so it can't decide what operator
to use to compare them for grouping. Try something like:

 select a, b, case when c is null then 'not set'::text else 'set'::text end
   as z from tablea group by a, b, z;

Or, SQL92-compliant:

select a, b,
  case when c is null then CAST('not set' AS TEXT) else CAST('set' AS TEXT) end
  as z from tablea group by a, b, z;


> 
> playpen=> select a, b, case when c is null then 'not set' else 'set' end
> as z from tablea group by a, b, z;
> ERROR:  Unable to identify an operator '<' for types 'unknown' and
> 'unknown'
> You will have to retype this query using an explicit cast
> playpen=>
> 

Ross
-- 
Ross J. Reedstrom, Ph.D., <[EMAIL PROTECTED]> 
NSBRI Research Scientist/Programmer
Computer and Information Technology Institute
Rice University, 6100 S. Main St.,  Houston, TX 77005



[SQL] Use of index in 7.0 vs 6.5

2000-05-24 Thread Ryan Bradetich

Tom (Or anyone else who is good with PostgreSQL statistics),

I am in the process of transitioning from postgreSQL 6.5.3 to
postgreSQL 7.0.  I ran into an issue where a sequential scan
is being choosen on postgreSQL 7.0 where an index scan was
choosen on postgreSQL 6.5.3.

Note: All tables have been freshly vacuum'd and analyzed.

procman=# select version();
                              version
-------------------------------------------------------------------
 PostgreSQL 7.0.0 on hppa2.0w-hp-hpux11.00, compiled by gcc 2.95.2
(1 row)

procman=# explain select count(catagory) from medusa where host_id = 404
and catagory like 'A%';
NOTICE:  QUERY PLAN:
Aggregate  (cost=189546.19..189546.19 rows=1 width=12)
  ->  Seq Scan on medusa  (cost=0.00..189529.43 rows=6704 width=12)
EXPLAIN

Note: The above query produces an index scan on postgreSQL 6.5.3.

procman=# set enable_seqscan = off;
SET VARIABLE

procman=# explain select count(catagory) from medusa where host_id = 404
and catagory like 'A%';
NOTICE:  QUERY PLAN:
Aggregate  (cost=207347.36..207347.36 rows=1 width=12)
  ->  Index Scan using medusa_host_id_key on medusa
(cost=0.00..207330.60 rows=6704 width=12)
EXPLAIN

Here are the statistics:

procman=# select attname,attdisbursion,s.*
procman-# from pg_statistic s, pg_attribute a, pg_class c
procman-# where starelid = c.oid and attrelid = c.oid and staattnum =
attnum
procman-# and relname = 'medusa';
  attname | attdisbursion | starelid | staattnum | staop | stanullfrac | stacommonfrac | stacommonval | staloval | stahival
 ---------+---------------+----------+-----------+-------+-------------+---------------+--------------+----------+----------
  host_id |    0.00621312 | 30874288 |         1 |    97 |           0 |     0.0279425 |          446 |        0 |    11011
 (1 row)


Here is my analysis of the statistics (based on the examples in the
archive).

The most common value of host_id in the table is 446, with a row
fraction of ~2.8%.  The estimated number of rows from the index scan is
6704, and the table has 4,630,229 rows.

Hopefully this analysis is correct; if not, please correct me :)

I do not understand why the planner would choose a seqscan over the
index scan, because 6704/4,630,229 is ~0.15%.

Thanks for your time,

Ryan








Re: [SQL] possible bug with group by?

2000-05-24 Thread Stephan Szabo

> Is this a bug or am I just misunderstanding something?
>
> playpen=> create table tablea ( a int,b int , c int );
> CREATE
> playpen=> insert into tablea(a, b) values (1 ,2);
> INSERT 28299 1
> playpen=> insert into tablea(a, b, c) values (2 ,3, 4);
> INSERT 28300 1
> playpen=> select a, b, case when c is null then 'not set' else 'set' end
> as z from tablea;
> a|b|z
> -+-+---
> 1|2|not set
> 2|3|set
> (2 rows)
>
>
> playpen=> select a, b, case when c is null then 'not set' else 'set' end
> as z from tablea group by a, b, z;
> ERROR:  Unable to identify an operator '<' for types 'unknown' and
> 'unknown'
> You will have to retype this query using an explicit cast
> playpen=>

I'm not 100% sure, but my guess would be that it's not certain what
type 'not set' and 'set' are going to be (hence type 'unknown'), and
when it tries to group them, it's unable to determine how to tell
whether one value is greater than another.

As a workaround, you should be able to do something like the following:
select a, b,
  case when c is null then cast('not set' as text) else cast('set' as text) end
  as z from tablea group by a, b, z;






Re: [SQL] possible bug with group by?

2000-05-24 Thread Tom Lane

Joseph Shraibman <[EMAIL PROTECTED]> writes:
> playpen=> select a, b, case when c is null then 'not set' else 'set' end
> as z from tablea group by a, b, z;
> ERROR:  Unable to identify an operator '<' for types 'unknown' and 'unknown'
> You will have to retype this query using an explicit cast

It's not GROUP BY's fault, it's just the oft-repeated issue about
quoted string literals not having any definite type in Postgres.
The parser postpones assigning a type until it sees the literal used
in a context where a type can be determined --- and in a case like
this, it never can.  You need to force the issue with a cast, eg

select a, b,
 case when c is null then 'not set'::text else 'set'::text end as z
from tablea group by a, b, z;

regards, tom lane



Re: [SQL] possible bug with group by?

2000-05-24 Thread Joseph Shraibman

Julie Hunt wrote:
> 
> Joseph Shraibman wrote:
> 
> >
> >
> > playpen=> select a, b, case when c is null then 'not set' else 'set' end
> > as z from tablea group by a, b, z;
> > ERROR:  Unable to identify an operator '<' for types 'unknown' and
> > 'unknown'
> > You will have to retype this query using an explicit cast
> > playpen=>
> 
> Hi,
> 
> should your group by be a, b, c not a, b, z?
> 

That would work for this example.  But shouldn't it work for z also?

I discovered that if I replace z with text I don't get that error, but
then I can't name my columns.



Re: [SQL] possible bug with group by?

2000-05-24 Thread Joseph Shraibman

Stephan Szabo wrote:
> 
> > Is this a bug or am I just misunderstanding something?
> >
> > playpen=> create table tablea ( a int,b int , c int );
> > CREATE
> > playpen=> insert into tablea(a, b) values (1 ,2);
> > INSERT 28299 1
> > playpen=> insert into tablea(a, b, c) values (2 ,3, 4);
> > INSERT 28300 1
> > playpen=> select a, b, case when c is null then 'not set' else 'set' end
> > as z from tablea;
> > a|b|z
> > -+-+---
> > 1|2|not set
> > 2|3|set
> > (2 rows)
> >
> >
> > playpen=> select a, b, case when c is null then 'not set' else 'set' end
> > as z from tablea group by a, b, z;
> > ERROR:  Unable to identify an operator '<' for types 'unknown' and
> > 'unknown'
> > You will have to retype this query using an explicit cast
> > playpen=>
> 
> I'm not 100% sure, but my guess would be that it's not certain what
> type 'not set' and 'set' are going to be (hence type 'unknown') and when
> it tries to group it, it's unable to determine how to tell what's greater
> than
> something else.

But why would group by need to sort it?  To insert it into a tree to
make lookups of distinct values faster?
> 
> As a workaround, you should be able to do something like the following:
> select a,b, case when c is null then cast('not set' as text) else cast('set'
> as text)
> end as z from tablea group by a, b, z;

That does work. Thanks.



Re: [SQL] possible bug with group by?

2000-05-24 Thread Tom Lane

Joseph Shraibman <[EMAIL PROTECTED]> writes:
> But why would group by need to sort it?  To insert it into a tree to
> make lookups of distinct values faster?

No, to bring identical values together.  GROUP BY and DISTINCT are both
implemented as basically a "sort | uniq" pipeline.

regards, tom lane



Re: [SQL] Use of index in 7.0 vs 6.5

2000-05-24 Thread Tom Lane

Ryan Bradetich <[EMAIL PROTECTED]> writes:
> I am in the process of transitioning from postgreSQL 6.5.3 to
> postgreSQL 7.0.  I ran into an issue where a sequential scan
> is being choosen on postgreSQL 7.0 where an index scan was
> choosen on postgreSQL 6.5.3.

Since you're complaining, I assume the seqscan is slower ;-).
But you didn't say how much slower --- what are the actual timings?

Basically what's going on here is that we need to tune the fudge-factor
constants in the cost model so that they have something to do with
reality on as wide a variety of systems as possible.  You did an
excellent job of showing the estimates the planner computed --- but
what we really need here is to see how those relate to reality.

> I do not understand why the planner would choose a seqscan over the
> index scan because 6704/4,630,229 is ~ 0.15%.

I'm a bit surprised too.  What is the average tuple width on this table?
(Actually, probably a better question is how many pages and tuples
are in the relation according to its pg_class entry.  Try "select * from
pg_class where relname = 'medusa'".)

regards, tom lane



Re: [SQL] Use of index in 7.0 vs 6.5

2000-05-24 Thread Ryan Bradetich

Tom Lane wrote:

> Ryan Bradetich <[EMAIL PROTECTED]> writes:
> > I am in the process of transitioning from postgreSQL 6.5.3 to
> > postgreSQL 7.0.  I ran into an issue where a sequential scan
> > is being choosen on postgreSQL 7.0 where an index scan was
> > choosen on postgreSQL 6.5.3.
>
> Since you're complaining, I assume the seqscan is slower ;-).
> But you didn't say how much slower --- what are the actual timings?

Oops... had them written down, just forgot to include them in the email :)

with enable_seqscan = on:
real  18.05
sys    0.01
user   0.02

with enable_seqscan = off:
real   0.08
sys    0.01
user   0.02

I stopped and restarted the postmaster daemon between these timings to
flush the cache.


> Basically what's going on here is that we need to tune the fudge-factor
> constants in the cost model so that they have something to do with
> reality on as wide a variety of systems as possible.  You did an
> excellent job of showing the estimates the planner computed --- but
> what we really need here is to see how those relate to reality.
>
> > I do not understand why the planner would choose a seqscan over the
> > index scan because 6704/4,630,229 is ~ 0.15%.
>
> I'm a bit surprised too.  What is the average tuple width on this table?
> (Actually, probably a better question is how many pages and tuples
> are in the relation according to its pg_class entry.  Try "select * from
> pgclass where relname = 'medusa'".)
>
> regards, tom lane

procman=# select * from pg_class where relname = 'medusa';
 relname      | medusa
 reltype      | 0
 relowner     | 36000
 relam        | 0
 relpages     | 120076
 reltuples    | 4630229
 rellongrelid | 0
 relhasindex  | t
 relisshared  | f
 relkind      | r
 relnatts     | 6
 relchecks    | 0
 reltriggers  | 0
 relukeys     | 0
 relfkeys     | 0
 relrefs      | 0
 relhaspkey   | f
 relhasrules  | f
 relacl       |
(1 row)

procman=# \d medusa
  Table "medusa"
 Attribute |   Type| Modifier
---+---+--
 host_id   | integer   |
 timestamp | timestamp |
 current   | integer   |
 catagory  | text  |
 cat_desc  | text  |
 anomaly   | text  |

This table has two fairly large text fields, the cat_desc and the anomaly.
The catagory field is very short and in the format: [ABC][0-9][0-9].

Thanks for the help,

- Ryan




Re: [SQL] Use of index in 7.0 vs 6.5

2000-05-24 Thread Tom Lane

Ryan Bradetich <[EMAIL PROTECTED]> writes:
> procman=# explain select count(catagory) from medusa where host_id = 404
> and catagory like 'A%';

> Here is my analysis of the statistics (based on the examples in the
> archive).

> The most common value host_id in the table is 446 with row fraction of
> ~ 2.8%.  The estimated number of rows in the index is 6704.  This
> table has 4,630,229 entries in the table.
> I do not understand why the planner would choose a seqscan over the
> index scan because 6704/4,630,229 is ~ 0.15%.

I see at least part of the answer.  The 'rows=' number in the EXPLAIN
output is the planner's estimate of the net number of rows out after
applying all available WHERE restrictions.  In this case we've got a
clause "host_id = 404" which can be used with the index on host_id,
and then we have another clause "catagory like 'A%'" which will be
applied on-the-fly to the tuples returned by the indexscan.  The
rows number tells you about the estimated number of rows out after
that second filter step.  However, the cost of the indexscan depends
mainly on the number of tuples that have to be fetched, and that is
determined by the selectivity of just the "host_id = 404" clause.

I made a dummy table with the schema you showed and then inserted
the statistics you reported into the system tables (who's afraid
of "update pg_class ..." ;-)).  If I didn't mess up, you should
be able to reproduce these EXPLAIN results:

set enable_seqscan = off;
explain select count(catagory) from medusa where host_id = 404;
NOTICE:  QUERY PLAN:

Aggregate  (cost=206943.69..206943.69 rows=1 width=12)
  ->  Index Scan using medusa_host_id_key on medusa
        (cost=0.00..206781.97 rows=64690 width=12)

set enable_seqscan = on;
explain select count(catagory) from medusa where host_id = 404;
NOTICE:  QUERY PLAN:

Aggregate  (cost=178115.59..178115.59 rows=1 width=12)
  ->  Seq Scan on medusa  (cost=0.00..177953.86 rows=64690 width=12)


This shows that the planner is actually estimating that the indexscan
will fetch about 64690 rows (of which it guesses only 6704 will remain
after applying the catagory clause, but that's not really relevant to
the cost estimate).  Since there are 120076 pages in the table, that
would mean pretty nearly one separate page fetch for each retrieved
tuple, if the matching tuples are randomly distributed --- and that
very probably *would* take more time than reading the whole table
sequentially.  So the planner's chain of logic holds up if all these
assumptions are correct.

Since you find that in reality the indexscan method is very quick,
I'm guessing that there are actually fairly few tuples matching
host_id = 404.  Could you run a quick "select count(*)" to check?
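That check is just (same table and constant as above):

```sql
SELECT count(*) FROM medusa WHERE host_id = 404;
```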

This seems to point up (once again) the deficiency of assuming that
the most-common value in the table is a good guide to the frequency
of typical values.  You showed that host_id = 446 occurs in 2.8% of
the rows in this table; a search for 446 very probably would be faster
as a seqscan than as an indexscan (you might care to try it and see).
But that's probably a statistical outlier that's not got much to do
with the frequency of typical values in the table.

The only really good answer to this problem is to collect more-detailed
statistics in VACUUM ANALYZE, which I hope to see us doing in a release
or so.  In the meantime I am wondering about deliberately skewing the
cost model in favor of indexscans, because I sure haven't heard many
complaints about erroneous selection of indexscans...

One way to put a thumb on the scales is to reduce the value of the SET
variable random_page_cost.  The default value is 4.0, which seems to
correspond more or less to reality, but reducing it to 3 or so would
shift the planner pretty nicely in the direction of indexscans.
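For example, using the query from earlier in the thread:

```sql
SET random_page_cost = 3;
EXPLAIN SELECT count(catagory) FROM medusa
 WHERE host_id = 404 AND catagory LIKE 'A%';
```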

regards, tom lane