Tim Bunce wrote:
>
> > - av_store(av, i, newSViv(ora2sql_type(imp_sth->fbh[i].dbtype)));
> > + av_store(av, i, newSViv(ora2sql_type(imp_sth->fbh+i).dbtype));
>
> Umm, why change from subscript to pointer arithmetic?
>
ora2sql_type needs more than dbtype, so it's called with imp_fbh_t*
as the argument type. O.k., imp_fbh_t is not very big, but normally I
do not copy structs on the stack. On the other hand, ora2sql_type
returns sql_fbh_t by value, because it's a local variable. Of course,
there are ways to avoid this too.
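To illustrate the point (a sketch only; the field layouts of imp_fbh_t
and sql_fbh_t here are invented, the real structs have more members):

    /* Sketch only: why the call takes a pointer instead of a copy. */
    typedef struct { int dbtype; int prec; int scale; } imp_fbh_t;
    typedef struct { int dbtype; int prec; int scale; } sql_fbh_t;

    /* Pointer argument: no imp_fbh_t is copied on the call; only the
       small sql_fbh_t result is returned by value. */
    static sql_fbh_t ora2sql_type(imp_fbh_t *fbh)
    {
        sql_fbh_t s;              /* local, so it must be returned by value */
        s.dbtype = fbh->dbtype;   /* the real mapping is more involved */
        s.prec   = fbh->prec;
        s.scale  = fbh->scale;
        return s;
    }

    /* Call site: imp_sth->fbh + i is the same address as &imp_sth->fbh[i],
       so the array element is not copied:
         av_store(av, i, newSViv(ora2sql_type(imp_sth->fbh + i).dbtype));  */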
> Is it possible and useful to distinguish cases where the precision is
> small enough to fit in a perl integer? I suspect not because (I think)
> declaring a field 'INTEGER' or even 'SMALLINT' in Oracle still defines
> it as 'NUMBER(38,0)'. But then people who care could always manually
> declare a field as 'NUMBER(n,0)' where n is small, so we could usefully
> return SQL_INTEGER then, yes? (If so, want to update the patch? :)
>
Yes, INTEGER is the same as NUMBER(38,0) in Oracle (see my test case).
SQL_INTEGER is (conceptually) only a specialization of SQL_DECIMAL(p,s)
with s = 0. Because this special case is very common, SQL defines a
type of its own for it. I.e. the type is redundant and we don't really
need it! DBI has the option to reflect this SQL feature or not. It's
your decision, we have no preference.
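If DBI did want to make that distinction, a minimal sketch of the
mapping could look like this (the function name and the 9-digit
cutoff, chosen so the value surely fits a 32-bit integer, are my
assumptions, not part of the patch):

    /* Sketch only: map Oracle NUMBER(p,s) to a SQL type code. */
    #define SQL_DECIMAL 3   /* standard ODBC type codes */
    #define SQL_INTEGER 4

    static int number_to_sql_type(int precision, int scale)
    {
        /* NUMBER(n,0) with small n is safe to report as SQL_INTEGER;
           the 9-digit cutoff is an assumption (fits in 32 bits). */
        if (scale == 0 && precision > 0 && precision <= 9)
            return SQL_INTEGER;
        return SQL_DECIMAL;   /* the general case */
    }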
BTW: SQL says nothing about the precision of SQL_INTEGER (it's
'implementation defined')! So there is no guarantee that a SQL_INTEGER
fits into an integer in another type system. That may not sound very
useful for an application written in Perl or C, but think for instance
about schema replication ...
>
> Does anyone know the practical difference between SQL_DECIMAL and
> SQL_NUMERIC? The only info I can find seems to relate to their
> use in COBOL ('PACKED DECIMAL' vs 'DISPLAY SIGN LEADING SEPARATE').
>
Practical difference? Don't know. NUMERIC(p,s) is more precisely
defined, i.e. it has exactly the precision p. DECIMAL(p,s) has an
'implementation-defined decimal precision equal to or greater than the
value of the specified <precision>'. I think some vendors preferred
this vague definition in the standard specs. Maybe the practical
difference is the possibility that you get a value from such a system
that is bigger than expected :-(
Steffen